Posts Tagged ‘analytics’

Syncsort Now Precisely After Pitney Bowes Acquisition

May 29, 2020

After announcing its acquisition of Pitney Bowes last August and completing the deal in December, Syncsort earlier this month rebranded itself as Precisely. The company, a long-established mainframe ISV, is positioning Precisely as a major player among enterprises seeking to handle large quantities of data in various ways.

Precisely has combined and updated the Syncsort and Pitney Bowes product lines to span what the rebranded operation now describes as “the breadth of the data integrity spectrum,” offering data integration, data quality, and location intelligence tools.

The rebranded company’s solution portfolio spans five areas based on use case:

  • Integrate, its data integration line, features Precisely Connect, Ironstream, Assure, and Syncsort.
  • Verify, its data quality unit, includes Precisely Spectrum Quality, Spectrum Context, and Trillium.
  • Locate, its location intelligence line, touts Precisely Spectrum Spatial, Spectrum Geocoding, MapInfo, and Confirm.
  • Enrich features Precisely Streets, Boundaries, Points of Interest, Addresses, and Demographics.
  • Engage aims to create seamless, personalized, omnichannel communications on any medium, anytime.

Adds Josh Rogers, CEO of Syncsort, now Precisely: “With the combination of Syncsort and Pitney Bowes software and data, we are creating in Precisely a new company that is focused on helping enterprises advance their use of data through expertise across data domains, disciplines and platforms.”

Rogers continued: “Advancements in storage, compute, analytics, and machine learning have opened up a world of possibilities for enhanced decision-making, but inaccuracies and inconsistencies in data have held back innovation and stifled value creation. Achieving data integrity is the next business imperative. Put simply, better data means better decisions, and Precisely offers the industry’s most complete portfolio of data integrity products, providing the link between data sources and analytics that helps companies realize the value of their data and investments.”

Precisely may again be onto something by emphasizing the quality of data for decision making, an amplification of the old GIGO principle (Garbage In, Garbage Out), especially now as the volume, variety, and availability of data skyrocket. When edge devices begin generating new and different data, these challenges will only compound. Making data-driven decisions has already become increasingly complex for even the largest enterprises.

Despite the proliferation of cloud-based analytics tools, published studies in Forbes, Harvard Business Review, and elsewhere report that 84 percent of CEOs do not trust the data they are basing decisions on, and with good reason: another study found that almost half of newly created data records contain at least one critical error. Meanwhile, the cost of noncompliance with new governmental regulations, including GDPR and CCPA, has created an even greater urgency for trusted data.

Out of the gate, Precisely has more than 2,000 employees and 12,000 customers in more than 100 countries, including 90 of the Fortune 100. The company boasts annual revenue of over $600 million.

Prior to its acquisition, Pitney Bowes delivered solutions, analytics, and APIs in the areas of ecommerce fulfillment, shipping and returns; cross-border ecommerce; office mailing and shipping; presort services; and financing.

Syncsort provides data integration and optimization software alongside location intelligence, data enrichment, customer information management, and engagement solutions. Together, the two companies serve more than 11,000 enterprises and hundreds of channel partners worldwide.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/

Supercomputers Battle COVID-19

May 22, 2020

When the world finally defeats the coronavirus and COVID-19, a small part of the victory will go to massive computer power. As Dario Gil, Director of IBM Research, noted, the firepower includes 16 systems with more than 400 petaflops, 775,000 CPU cores, and 34,000 GPUs, and counting.

Back in March DancingDinosaur reported here that IBM’s Summit, which the company touts as the world’s most powerful supercomputer, was able to simulate 8,000 chemical compounds in a matter of days in a hunt for something that could impact the COVID-19 infection.

Writing this today, late in May, we already know that teams of medical researchers, scientists, technology experts, and a vast array of talents are working almost non-stop to find, develop, test, and mass produce a cure, with luck in the form of a vaccine. We should also note the countless nurses, doctors, aides, assistants, and hospital, food service, and logistics staff of all types, along with outside support roles, who are involved in keeping things working, feeding staff, wheeling patients around, and otherwise helping to save lives.

As Gil explains, high-performance computing systems allow researchers to run very large numbers of calculations in epidemiology, bioinformatics, and molecular modeling, all science disciplines that need to be involved in whatever success is ultimately achieved. You can probably throw in chemistry and a few other areas of electronics and engineering as well. Without massive computer horsepower these experiments would take years to complete if worked by hand, or months if handled on slower, traditional computing platforms.

These machines—more than 25 U.S.-based supercomputers with more than 400 petaflops of computing power—are now available for free to those working toward a vaccine or treatment against the virus, through the COVID-19 High Performance Computing Consortium.

It was created with government, academia, and industry, including competitors, working side by side. IBM is co-leading the effort with the U.S. Department of Energy, which operates the National Laboratories of the United States. Google, Microsoft, Amazon, and Hewlett Packard Enterprise have joined, as well as NASA, the National Science Foundation, the Pittsburgh Supercomputing Center, and six National Labs: Lawrence Livermore, Lawrence Berkeley, Argonne, Los Alamos, Oak Ridge, and Sandia. Then there are academic institutions, including MIT, Rensselaer Polytechnic Institute, the University of Texas at Austin, and the University of California, San Diego.

The White House has been getting deservedly bashed for its slowness, incompetence, and narrow-minded bungling. However, Gil reports that the White House’s Office of Science and Technology Policy has taken up an effort that can make a real difference. He adds: “I want to offer this promise: IBM will continue to explore everything in our power to use our technology and expertise to drive meaningful progress in this global fight.”

The first thing Gil’s team did was to spread the word to people who might be working on this on any or all fronts—from drug discovery and development with AI-led simulations to genomics, epidemiology and health systems.

He goes on: “We need to understand the whole life cycle of this virus, all the gearboxes that drive it—how it encounters and infects the host cell and replicates inside it, preventing it from producing viral particles. We need to know the molecular components, the proteins involved in the virus’ biochemistry, and then to use computational modeling to see how we can interrupt the cycle. That’s the standard scientific methodology of drug discovery, but we want to amplify it and speed it up.”

The virus has been exploding in humans for months, providing an abundance of samples for computer modeling and analysis, Gil continued. Scientists already are depositing samples into public data sources such as GenBank and Protein Data Bank. There are many unknowns and assumptions but a lot of proposals involve using the available protein structures to come up with potential molecular compounds that could lead to a therapeutic treatment or a vaccine. Let’s hope they have great success, the sooner the better.


5G Joins Edge Technology and Hybrid Multicloud

May 11, 2020

At IBM’s virtual Think Conference the first week in May, the company made a big play for edge computing and 5G together.

From connected vehicles to intelligent manufacturing equipment, devices are generating unprecedented volumes of data at the edge. IBM is convinced those volumes will compound as 5G networks increase the number of connected mobile devices.

The z15 T02 and the LinuxONE III LT2

Edge computing and 5G networks promise to reduce latency while improving speed, reliability, and processing. This will deliver faster and more comprehensive data analysis, deeper insights, faster response times, and improved experiences for employees, customers, and their customers.

First gaining prominence with the Internet of Things (IoT) a few years back, edge computing is defined by IBM as a distributed computing framework that brings enterprise applications closer to where data is created and often remains, and where it can be processed. This is where decisions are made and actions taken.

5G stands for the Fifth Generation of cellular wireless technology. Beyond higher speed and reduced latency, 5G standards will have a much higher connection density, allowing networks to handle greater numbers of connected devices combined with network slicing to isolate and protect designated applications.

Today, 10% of data is processed at the edge, an amount IBM expects to grow to 75% by 2025. Specifically, edge computing enables:

  • Better data control and lower costs by minimizing data transport to central hubs and reducing vulnerabilities
  • Faster insights and actions by tapping into more sources of data and processing that data there, at the edge
  • Continuous operations by enabling systems that run autonomously, reducing disruption and lowering costs, because data can be processed by the devices themselves on the spot, where decisions can be made
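The first two bullets come down to computing where the data is born and shipping only results. Here is a minimal sketch of that pattern, with a hypothetical device, readings, and threshold:

```python
import statistics

def edge_summarize(readings, threshold):
    """Process raw sensor readings locally; forward only a compact
    summary and any anomalous values to the central hub."""
    anomalies = [r for r in readings if r > threshold]
    return {
        "count": len(readings),
        "mean": statistics.mean(readings),
        "max": max(readings),
        "anomalies": anomalies,  # only the interesting values travel
    }

# One minute of hypothetical temperature samples from one device
raw = [21.0, 21.2, 20.9, 35.5, 21.1, 21.0]
print(edge_summarize(raw, threshold=30.0))
```

Instead of streaming every sample to a central hub, the device sends a handful of numbers, which is the data-transport saving the bullet describes.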

In short, the growing number of increasingly capable devices and faster 5G processing are driving the edge computing market beyond what the initial IoT proponents, who didn’t yet have 5G, envisioned. They also weren’t in a position to imagine the growth in the processing capabilities of edge devices in just the past year or two.

But that is starting to happen now, according to IDC: By 2023, half of the newly deployed on-premises infrastructure will be in critical edge locations rather than corporate datacenters, up from less than 10% today.

Also unimagined was the emergence of the hybrid multicloud, which IBM has only recently started to tout. The convergence of 5G, edge computing, and hybrid multicloud, according to the company, is redefining how businesses operate. As more businesses embrace 5G and edge, modernizing networks to take advantage of the edge opportunity is only now becoming feasible.

And all of this could play very well with the new z machines, the z15 T02 and LinuxONE III LT2. These appear sufficiently capable to handle the scale of business edge strategies and hybrid cloud requirements for now. Or choose the enterprise-class z15 if you need more horsepower.

By moving to a hybrid multicloud model, telcos can process data at both the core and network edge across multiple clouds, perform cognitive operations and make it easier to introduce and manage differentiated digital services. As 5G matures it will become the network technology that underpins the delivery of these services. 

For enterprises, adopting a hybrid multicloud model that extends from corporate data centers (or public and private clouds) to the edge is critical to unlocking new connected experiences. By extending cloud computing to the edge, enterprises can perform AI/analytics faster, run enterprise apps at the edge to reduce the impact of intermittent connectivity, and minimize data transport to central hubs for cost efficiency.

It’s time to start thinking about making edge part of your computing strategy.


Power9 Summit Fights COVID-19

March 16, 2020

IBM has unleashed its currently top-rated supercomputer, Summit, to simulate 8,000 chemical compounds in a matter of days in a hunt for something that will impact the COVID-19 infection process by binding to the virus’s spike, a key early step in coming up with an effective vaccine or cure. In the first few days Summit already identified 77 small-molecule compounds, such as medications and natural compounds, that have shown the potential to impair COVID-19’s ability to dock with and infect host cells.

POWER9 Summit Supercomputer battles COVID-19

 

The US Department of Energy turned to the IBM Summit supercomputer to help in the fight against COVID-19, which appears almost unstoppable, having swept through 84 countries on every continent except Antarctica, according to IBM. The hope is that by quickly culling the most likely initial chemical candidates, lab researchers can get an early jump on the search for an effective cure.

As IBM explains it, viruses infect cells by binding to them and using a ‘spike’ to inject their genetic material into the host cell. When trying to understand new biological compounds, like viruses, researchers in wet labs grow the micro-organism and see how it reacts in real life to the introduction of new compounds, but this can be a slow process without computers that can perform fast digital simulations to narrow down the range of potential variables. And even then there are challenges.

Computer simulations can examine how different variables react with different viruses, but when each variable can comprise millions or even billions of unique pieces of data, and multiple simulations must be run, the task is anything but trivial. It can quickly become a very time-intensive process, especially on commodity hardware.

But, IBM continued, by using Summit, researchers were able to simulate 8,000 compounds in a matter of days to model which ones might impact the infection process by binding to the virus’s spike. As of last week, they had identified dozens of small-molecule compounds, such as medications and natural compounds, that have shown the potential to impair COVID-19’s ability to dock with and infect host cells.
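The shape of that screening workflow, scoring thousands of candidates independently and keeping only the best few, can be sketched in a few lines. The scoring function here is a deliberate stand-in; a real run evaluates molecular interactions with the viral spike, not a pseudo-random number:

```python
import random

def binding_score(compound_id):
    """Stand-in for an expensive docking simulation; deterministic
    per compound so the toy example is reproducible."""
    return random.Random(compound_id).random()

def screen(compounds, keep=77):
    """Score every candidate independently (trivially parallelizable
    across a supercomputer's nodes), then keep the most promising."""
    return sorted(compounds, key=binding_score, reverse=True)[:keep]

candidates = range(8000)        # 8,000 hypothetical compounds
shortlist = screen(candidates)  # the 77 best-scoring ones
print(len(shortlist))
```

Because every candidate is scored independently, the work spreads naturally across thousands of nodes, which is exactly why a machine like Summit collapses months of computation into days.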

“Summit was needed to rapidly get the simulation results we needed. It took us a day or two whereas it would have taken months on a normal computer,” said Jeremy Smith, Governor’s Chair at the University of Tennessee, director of the UT/ORNL Center for Molecular Biophysics, and principal researcher in the study. “Our results don’t mean that we have found a cure or treatment for COVID-19. But we are very hopeful that our computational findings will both inform future studies and provide a framework that subsequent researchers can use to further investigate these compounds. Only then will we know whether any of them exhibit the characteristics needed to mitigate this virus.”

After the researchers turn over the most likely possibilities to the medical scientists, they are still a long way from finding a cure. The medical folks will take them into the physical wet lab and do whatever they do to determine whether a compound might work or not.

Eventually, if they are lucky, they will end up with something promising, which then has to be tested against the coronavirus and COVID-19. Published experts suggest this can take a year or two or more.

Summit gave the researchers a jump start with its massive data processing capability, enabled through its 4,608 IBM Power Systems AC922 server nodes, each equipped with two IBM POWER9 CPUs and six NVIDIA Tensor Core V100 GPUs, giving it a peak performance of 200 petaflops, in effect more powerful than one million high-end laptops.
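That peak figure can be roughly sanity-checked from the node counts alone. The per-GPU number below is an assumption (about 7.8 teraflops of double-precision peak per V100; exact figures vary by clock and precision, and the POWER9 CPUs contribute additional flops not counted here):

```python
# Rough sanity check of Summit's quoted 200-petaflop peak
nodes = 4608
gpus_per_node = 6
v100_fp64_tflops = 7.8  # assumed per-GPU FP64 peak

gpu_petaflops = nodes * gpus_per_node * v100_fp64_tflops / 1000
print(f"{gpu_petaflops:.0f} petaflops from GPUs alone")
```

The result lands in the same ballpark as the quoted 200-petaflop peak, which is about all a back-of-envelope check like this can claim.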

Might quantum computing have sped up the process even more? IBM didn’t report throwing one of its quantum machines at the problem, relying instead on Summit, which has already been acclaimed as the world’s fastest supercomputer.

Nothing stays the same in the high-performance computing world. HEXUS reports that when time is of the essence and lives are at stake, the value of supercomputers is highly evident. Now a new machine, touted as the world’s first 2+ exaflops supercomputer, is set to begin operations in 2023. This AMD-powered giant, HEXUS notes, is claimed to be about 10x faster than Summit. That’s good to know, but let’s hope the medical researchers have already beaten the coronavirus and COVID-19 by then.


Red Hat OpenShift Container Platform on z

February 20, 2020

IBM is finally starting to capitalize on last year’s $34 billion acquisition of Red Hat for z shops. If you had a new z and it ran Linux, you would have no problem running Red Hat products, or so the company line went. Well, in mid-February IBM announced that Red Hat’s OpenShift Container Platform is now available on the z and LinuxONE, a z with built-in Linux optimized for the underlying hardware.

OpenShift comes to z and LinuxONE

As the company puts it, the availability of OpenShift for z and LinuxONE is a major milestone for both hybrid multicloud and enterprise computing. OpenShift, a form of middleware for use with DevOps, supports cloud-native applications being built once and deployed anywhere, including to on-premises enterprise servers, especially the z and LinuxONE. This new release results from the collaboration between IBM and Red Hat development teams, and from discussions with early adopter clients.

Working with its Hybrid Cloud team, the company has created a roadmap for bringing the ecosystem of enterprise software to the OpenShift platform. IBM Cloud Paks containerize key IBM and open source software components to help enable faster enterprise application development and delivery. In addition to the availability of OpenShift for z, IBM announced that IBM Cloud Pak for Applications is available for the z and LinuxONE. In effect, it supports the modernization of existing apps and the building of new cloud-native apps. In addition, as announced last August, it is the company’s intention to deliver additional Cloud Paks for the z and LinuxONE.

Red Hat is a leader in hybrid cloud and enterprise Kubernetes, with more than 1,000 customers already using Red Hat OpenShift Container Platform. With the availability of OpenShift for the z and LinuxONE, the agile cloud-native world of containers and Kubernetes, which has become the de facto open global standard for containers and orchestration, is now reinforced by the security features, scalability, and reliability of IBM’s enterprise servers.

“Containers are the next generation of software-defined compute that enterprises will leverage to accelerate their digital transformation initiatives,” says Gary Chen, Research Director at IDC, in a published report. “IDC estimates that 71% of organizations are in the process of implementing containers and orchestration or are already using them regularly. IDC forecasts that the worldwide container infrastructure software opportunity is growing at a 63.9% five-year CAGR and is predicted to reach over $1.5B by 2022.”
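IDC’s forecast can be sanity-checked with simple compound-growth arithmetic: working backward from roughly $1.5B in 2022 at a 63.9% five-year CAGR gives the implied size of the market five years earlier.

```python
# Back out the implied base-year market size from the IDC forecast
cagr = 0.639           # 63.9% five-year CAGR
target_2022 = 1.5e9    # ~$1.5B by 2022

implied_base = target_2022 / (1 + cagr) ** 5
print(f"implied 2017 market: ${implied_base / 1e6:.0f}M")
```

A 63.9% CAGR means the market multiplies by roughly 12x over five years, so the implied starting point is a little under $130M, which shows just how aggressive that growth rate is.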

By combining the agility and portability of Red Hat OpenShift and IBM Cloud Paks with the security features, scalability, and reliability of z and LinuxONE, enterprises will have the tools to build new cloud-native applications while also modernizing existing applications. Deploying Red Hat OpenShift and IBM Cloud Paks on z and LinuxONE reinforces key strengths and offers additional benefits:

  • Vertical scalability enables existing large monolithic applications to be containerized, and horizontal scalability enables support for large numbers of containers in a single z or LinuxONE enterprise server
  • Protection of data from external attacks and insider threats, with pervasive encryption and tamper-responsive protection of encryption keys
  • Availability of 99.999% to meet service levels and customer expectations
  • Integration and co-location of cloud-native applications on the same system as the data, ensuring the fastest response times
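The 99.999% availability figure above, the classic “five nines,” translates into a concrete downtime budget that can be computed directly:

```python
def annual_downtime_minutes(availability_pct):
    """Minutes of allowed downtime per year at a given availability."""
    minutes_per_year = 365.25 * 24 * 60
    return minutes_per_year * (1 - availability_pct / 100)

# Five nines leaves only about five minutes of downtime a year
print(f"{annual_downtime_minutes(99.999):.1f} minutes/year")
```

For comparison, 99.9% availability allows nearly nine hours of downtime a year, which is why each additional nine matters so much for service-level commitments.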

IBM z/OS Cloud Broker enables OpenShift applications to interact with data and applications on IBM Z, and is the first software product to provide access to z/OS services to the broader development community.

To more easily manage the resulting infrastructure, organizations can license the IBM Cloud Infrastructure Center, an Infrastructure-as-a-Service offering that provides simplified infrastructure management in support of z/VM-based Linux virtual machines on the z and LinuxONE.


2020 IBM Quantum Gains

January 13, 2020

IBM returned from the holidays announcing a flurry of activity around quantum computing. Specifically, it has expanded its set of Q Network partners, including a range of commercial, academic, startup, government, and research entities.  

IBM Qiskit screen

The Q Network now includes over 100 organizations across multiple industries, including airlines, automotive, banking and finance, energy, insurance, materials, and electronics. Anthem, Delta Air Lines, Goldman Sachs, Wells Fargo, and Woodside Energy are among the latest organizations to begin exploring practical applications of quantum computing.

In addition to these industry leaders, a number of academic institutions, government research labs, and startups have also joined the IBM Q Network, including the Georgia Institute of Technology (Georgia Tech), Stanford University, Los Alamos National Laboratory, AIQTech, Beit, Quantum Machines, Tradeteq, and Zurich Instruments.

These organizations join over 200,000 users, who have run hundreds of billions of executions on IBM’s quantum systems and simulators through the IBM Cloud. This has led to the publication of more than 200 third-party research papers on practical quantum applications.

More quantum: IBM also recently announced the planned installation of the first two IBM Q System One commercial universal quantum computers outside the US, one with Europe’s leading organization for applied research, Fraunhofer-Gesellschaft, in Germany; another with the University of Tokyo. Both are designed to advance country-wide research and provide an education framework program to engage universities, industry, and government to grow a quantum computing community and foster new economic opportunities.

Growing a quantum computing community should quickly become a critical need and, more likely, a major headache. My own cursory search of employment sites revealed no quantum computing openings listed. Just a few casual inquiries suggest curiosity about quantum computing but not much insight, readiness, actual skills, or openings to generate action.

Still, even at this early stage things already are happening.

Anthem, Inc., a leading health benefits company, is expanding its research and development efforts to explore how quantum computing may further enhance the consumer healthcare experience. For Anthem, quantum computing offers the potential to analyze vast amounts of data inaccessible to classical computing while also enhancing privacy and security. It also brings the potential to help individuals through the development of more accurate and personalized treatment options while improving the prediction of health conditions.

Delta Air Lines joined the IBM Q Hub at North Carolina State University to embark on a multi-year collaborative effort with IBM to explore the potential capabilities of quantum computing in transforming experiences for customers and employees as they encounter challenges throughout the travel day.

Quantum Machines (QM), a provider of control and operating systems for quantum computers, brings customers among the leading players in the field, including multinational corporations, academic institutions, start-ups, and national research labs. As part of the IBM and QM collaboration, a compiler between IBM’s quantum computing programming languages, like Qiskit (see graphic above), and those of QM is being developed for use by QM’s customers. Such development should lead to increased adoption of IBM’s open-sourced programming languages across the industry.
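In Qiskit itself, a two-qubit entangling circuit is only a few calls (`QuantumCircuit(2)`, then `h(0)` and `cx(0, 1)`). To keep this sketch dependency-free, here is the same two-gate Bell-state circuit simulated with plain state-vector arithmetic:

```python
import math

# State vector of two qubits, basis order |00>, |01>, |10>, |11>,
# with the leftmost bit taken as qubit 0. Start in |00>.
state = [1.0, 0.0, 0.0, 0.0]

def apply_h_on_qubit0(s):
    """Hadamard on qubit 0: mixes amplitude pairs differing in that bit."""
    h = 1 / math.sqrt(2)
    return [h * (s[0] + s[2]), h * (s[1] + s[3]),
            h * (s[0] - s[2]), h * (s[1] - s[3])]

def apply_cnot(s):
    """CNOT, qubit 0 as control: flips qubit 1 when qubit 0 is 1,
    i.e. swaps the |10> and |11> amplitudes."""
    return [s[0], s[1], s[3], s[2]]

state = apply_cnot(apply_h_on_qubit0(state))
print([round(a, 3) for a in state])  # [0.707, 0.0, 0.0, 0.707]
```

The result is the Bell state (|00> + |11>)/√2, the canonical first entanglement example; a compiler like the QM one described above takes a gate-level description of this sort and translates it into control-hardware instructions.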

The Los Alamos National Laboratory also has joined as an IBM Q Hub to greatly help the lab’s research efforts, including developing and testing near-term quantum algorithms and formulating strategies for mitigating errors on quantum computers. A 53-qubit system will also allow Los Alamos to benchmark the abilities to perform quantum simulations on real quantum hardware and perhaps to finally push beyond the limits of classical computing. 


A Blockchain Feast

January 6, 2020

Hope everybody had wonderful holidays and feasted to your heart’s content.  Did you, by chance, think about the data involved in that feast? No, not the calories you consumed but the data that tracked the food you consumed from the farm or ranch through numerous processing and shipping steps to finally arrive at your table.  Well, maybe next time.

Apple Pie: Courtesy of IBM

The IBM Food Trust, which is built on blockchain, enables sellers and consumers to trace their food from farm to warehouse to kitchen, explains IBM. For more eco- and safety-conscious diners, IBM continues, this information is crucial for ensuring a safer, smarter, more transparent and sustainable food ecosystem. The company, unfortunately, hasn’t yet said anything about Food Trust counting calories consumed.

As IBM describes it, the Food Trust is a collaborative network of growers, processors, wholesalers, distributors, manufacturers, retailers, and others, enhancing visibility and accountability across the food supply chain. Built on IBM Blockchain, this solution connects participants through a permissioned, immutable, and shared record of food provenance, transaction data, processing details, and more.
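The “permissioned, immutable, and shared record” idea can be illustrated with a toy hash-chained ledger. This is emphatically not Food Trust’s implementation (IBM Blockchain is built on Hyperledger Fabric, with far more machinery); it is a minimal stdlib sketch of why a tampered provenance record is detectable. All field names below are invented:

```python
import hashlib
import json

def add_entry(chain, record):
    """Append a provenance record, linked to the previous entry by
    hash so any later tampering breaks the chain."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    chain.append({"record": record, "prev": prev_hash,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(chain):
    """Recompute every hash; a single altered field is detectable."""
    prev_hash = "0" * 64
    for entry in chain:
        body = json.dumps({"record": entry["record"], "prev": entry["prev"]},
                          sort_keys=True)
        if (entry["prev"] != prev_hash
                or entry["hash"] != hashlib.sha256(body.encode()).hexdigest()):
            return False
        prev_hash = entry["hash"]
    return True

# Hypothetical farm-to-store journey for one chicken
ledger = []
add_entry(ledger, {"step": "farm", "item": "chicken", "born": "2019-11-02"})
add_entry(ledger, {"step": "processor", "packed": "2019-12-05"})
add_entry(ledger, {"step": "store", "shelved": "2019-12-07"})
print(verify(ledger))  # True
```

Quietly rewriting any earlier record changes its hash, which no longer matches the link stored in the next entry, so every participant sharing the chain can detect the alteration.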

To date, IBM reports more than 200 companies participate in Food Trust, the first network of its kind to connect participants across the food supply chain through a permanent and shared record of data. The result, according to the company, is a suite of solutions that improve food safety and freshness, unlock supply chain efficiencies, minimize waste, and empower consumers who care about where their food comes from.

Take chicken, for example. If you shop at the European grocery chain Carrefour, chicken is being tracked by IBM Food Trust alongside a mix of other foods, like eggs, milk, oranges, pork, and cheese. This selection of foods will grow by more than 100 items over the next year, says the company. So popular is the blockchain-tracked chicken, claims IBM, that the grocer reports sales growth exceeding that of non-blockchain poultry.

Carrefour shoppers just use their smartphones to scan QR codes on the chicken’s packaging. What they will find is information on the livestock’s date of birth, nutrition information and packing date. Sounds interesting until my wife feels obligated to send the chicken a birthday gift.  Customers also learn about the food’s journey from farm to store, providing additional transparency about the life and times of this chicken. It said nothing, however, about whether it lived a wild youth.

Maybe you wonder if your seafood is correctly labeled and sustainably caught. IBM is turning to blockchain to bring more trust and transparency to the supply chain of the fish and seafood we consume. Specifically, the Sustainable Shrimp Partnership now uses blockchain to trace the journey of Ecuadorian farmed shrimp and attest to the highest social and environmental standards.

Similarly, the seafood industry in Massachusetts is tracing the provenance of fresh scallops. It also allows consumers in restaurants to use a QR code to learn about the seafood’s quality and origin. That’s something I might actually do. Finally, the National Fisheries Institute has joined the Food Trust Network in an effort to trace multiple seafood species.

IBM is trying to do the same with coffee, pasta, mashed potatoes, and more. This is something that I might actually grow to rely on if it were readily available and dead simple. One question is how accessible this information will be when a shopper or diner really needs it. OK, we can all use QR codes as long as they are right in front of us. But beyond that, as a diner I’m too impatient to bother to do much more.

This blog has periodically been following blockchain for years, always expecting the technology to take off imminently.  Maybe with Food Trust the technology will finally pick up some traction.


Syncsort Acquires Pitney Bowes Software & Data

December 10, 2019

It is easy to forget that there are other ISVs that work with the z. A recent list of z ISVs ran to over a dozen, including Rocket Software, Compuware, GT Software, and Syncsort, among others.

Syncsort has grabbed some attention of late by announcing the completion of its agreement with Pitney Bowes, the postal metering company, to take over the latter’s software and data operations. As a result, Syncsort claims a position as one of the leading data management software companies in the world, serving more than 11,000 customers, primarily z shops.

The combined portfolio brings together capabilities in location intelligence, data enrichment, customer information management, and engagement solutions with powerful data integration and optimization software. About the only thing they haven’t listed is AI.

Over the coming months, teams will be working to combine the Syncsort-Pitney Bowes organizations and portfolios. While there may be some changes within the Syncsort organization, not much will change for its customers immediately. They can still expect to receive the same level of service they have received to support their everyday needs.

Syncsort’s acquisition of the Pitney Bowes software and data business creates a data management software company with more than 11,000 enterprise customers, $600 million in revenue, and 2,000 employees worldwide. Although modest in comparison with today’s Internet tech giants and even IBM, the resulting company brings sufficient scale, agility, and breadth of portfolio to enable leading enterprises to gain a competitive advantage from their data, Syncsort noted in its announcement.

“Enterprises everywhere are striving to increase their competitiveness through the strategic use of data…”  As a result, “organizations must invest in next-generation technologies like cloud, streaming, and machine learning, while simultaneously leveraging and modernizing decades of investment in traditional data infrastructure,” said Josh Rogers, CEO, Syncsort. Now “our increased scale allows us to expand the scope of partnerships with customers so that they can maximize the value of all their data,” he added.

According to Paige Bartley of 451 Research accompanying Syncsort’s announcement:  “The ability to derive actionable human intelligence from data requires ensuring that it has been integrated from all relevant sources, is representative and high quality, and has been enriched with additional context and information. Syncsort, as a longtime player in the data management space, is further addressing these issues with the acquisition of Pitney Bowes Software Solutions’ assets – technology that complements existing data-quality capabilities to provide additional context and enrichment for data, as well as leverage customer data and preferences to drive business outcomes.” 

These end-to-end capabilities, Syncsort adds, will empower organizations to overcome ever-increasing challenges around the integrity of their data, so their IT and business operations can easily integrate, enrich, and improve data assets to maximize insights.


IBM Suggests Astounding Productivity with Cloud Pak for Automation

November 25, 2019

DancingDinosaur thought IBM would not introduce another Cloud Pak until after the holidays, but I was wrong. Last week IBM launched Cloud Pak for Security. According to IBM, it helps an organization uncover threats, make more informed risk-based decisions, and prioritize its team's time.

More specifically, it connects the organization's existing data sources to generate deeper insights. In the process you can use IBM and third-party tools to search for threats across any cloud or on-premises location, and quickly orchestrate actions and responses to those threats while leaving your data where it is.

DancingDinosaur’s only disappointment with IBM’s new security Cloud Pak, as with the other IBM Cloud Paks, is that it runs only on Linux. That means it doesn’t work with RACF, the legendary IBM access control tool for z/OS. IBM’s Cloud Paks reportedly run on z systems, but only those running Linux. Not sure how IBM can finesse this particular issue.

Of the five original IBM Cloud Paks (application, data, integration, multicloud management, and automation), only one offers the kind of payback that will wow top C-level execs: automation. Find Cloud Pak for Automation here.

To date, IBM reports over 5,000 customers have used IBM Digital Business Automation to run their digital business. At the same time, IBM claims successful digitization has increased organizational scale and fueled the growth of knowledge work.

McKinsey & Company notes that such workers spend up to 28 hours each week on low value work. IBM’s goal with digital business automation is to bring digital scale to knowledge work and free these workers to work on high value tasks.

Such tasks include collaborating and using creativity to come up with new ideas, meeting and building relationships with clients, or resolving issues and exceptions. By automating the low value work instead, says IBM, the payoff from intelligent automation can be staggering.

“We can reclaim 120 billion hours a year spent by knowledge workers on low value work by using intelligent automation,” declares IBM. So what value could you reclaim over the course of a year for your own operation with, say, 100 knowledge workers earning maybe $22 per hour, or 1,000 workers earning $35 per hour? You can do the math.
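To put rough numbers on that claim, here is a quick back-of-the-envelope calculation in Python. It uses the 28 low-value hours per week from the McKinsey figure cited above and assumes, optimistically, that all of those hours could be reclaimed; the worker counts and wages are just the illustrative figures from this post:

```python
def reclaimable_value(workers, hourly_rate,
                      low_value_hours_per_week=28, weeks_per_year=52):
    """Rough annual ceiling on hours and dollars automation might reclaim."""
    hours = workers * low_value_hours_per_week * weeks_per_year
    return hours, hours * hourly_rate

# 100 knowledge workers at $22/hr
hours, value = reclaimable_value(100, 22)
print(f"{hours:,} hours, ${value:,}")    # 145,600 hours, $3,203,200

# 1,000 workers at $35/hr
hours, value = reclaimable_value(1000, 35)
print(f"{hours:,} hours, ${value:,}")    # 1,456,000 hours, $50,960,000
```

Even for a modest 100-person operation the theoretical ceiling runs into the millions of dollars per year, which is why IBM leads with the productivity pitch; actual reclaimed value will of course be some fraction of this.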

As you would expect, automation is the critical component of this particular Cloud Pak. The main targets for enhancement or assistance among the rather broad category of knowledge workers are administrative/departmental work and expert work, which includes cross-enterprise work. IBM offers vendor management as one example.

The goal is to digitize core services by automating at scale and building low code/no code apps for your knowledge workers. For what IBM refers to as digital workers, who are key to this plan, the company wants to free them for higher value work. IBM’s example of such an expert worker would be a loan officer. 

Central to IBM’s Cloud Pak for Automation is what IBM calls its Intelligent Automation Platform. Some of this is here now, according to the company, with more coming in the future. Here now is the ability to create apps using low code tooling, reuse assets from business automation workflow, and create new UI assets.

Coming up in some unspecified timeframe is the ability to enable digital workers to automate job roles, to define and create content services for intelligent capture and extraction, and finally to envision and create decision services that offload and automate routine decisions.

Are your current and would-be knowledge workers ready to contribute to or participate in this scheme? Maybe for some; it depends for others. To capture those billions of hours of increased productivity, however, they will have to step up to it. But you can be pretty sure IBM will do it for you if you ask.


 

