Author Archive

8 Steps to Bolster Your Supply Chain

November 12, 2021

By the time you read this, it will be near the end of the work week. But supply chain issues aren't likely to go away anytime soon, even after the holiday season.

Writer Olivia Staub, writing for IBM on the widely reported current supply chain upheaval: “By now we’ve all heard the rumblings (and the roars!) about how myriad challenges — backlogged ports, factory shutdowns, and shipping delays — are going to impact holiday shopping this year (and probably beyond, I’d bet).”

IBM supply chain components

You've heard it. I've heard it too. Still, Staub offers a timely holiday piece, 8 Steps to Bolster Your Supply Chain, which is now being widely replicated. Her steps follow, with some of my comments added.

According to Business Insider, the cargo ships waiting off the coast of California are partly responsible for this record backlog.

She continues: “We don’t want to minimize the serious impact of these substantial roadblocks — especially as the retail industry enters the eight-week holiday shopping window that accounts for a big portion of many businesses’ annual sales — but we also want to acknowledge that there are steps you can take to work around similar issues anytime.”

So, here are her 8 steps to survive this holiday season (or any similar crunch time):

1. Be up-front with your customers

Chances are your customers have heard about many of these issues and have concerns that they might not be able to buy the gifts and goods they want for the holidays. Address their concerns directly. For instance, encourage customers to shop early and to sign up to be notified when an out-of-stock item is back in stock.

2. Kick-off the holidays earlier than usual

This piece is probably already too late for that. But she notes: early holiday shopping is anticipated this year. Be ready by taking steps like adding extra staff, if you can find any to hire, to handle increased sales volume.

3. Rethink promotions

As the prices for goods, and for shipping and delivering goods, go up, there is less room for discounts. This year consider promotions that encourage loyalty or offer bundles and low-cost freebies with purchase.

4. Prepare your organization to be ready for more in-person shoppers

Of course, staff shortages are being widely reported; some retailers are even sending recruitment emails to me, an old retired guy. But, Staub continues, well-designed visual merchandising and signage will entice shoppers to stop in. And, she adds, give shoppers more time during the busiest season, especially the procrastinators and last-minute shoppers, by extending your store hours.

5. Hone in on your most popular items

Good point, if you can find and get them. With so much in flux, she notes, some companies are finding solace (and sales) in the classics. In addition to stocking up on your customers' traditional favorites, consider offering less variety than you normally would.

6. Consider supplier alternatives 

If the clogged ports and shipping delays have you rethinking some of your vendors, and your company is in a position to get your products from other sources, consider finding local or US-based suppliers. In general it is always good to have alternative providers in your Rolodex. Just don't be surprised if suppliers you have never contacted before don't have time to return your call. But, she continues, this can reduce delivery time and allow you to market the products as "local".

7. Find shipping alternatives

As national shipping carriers face staffing issues and major backlogs, look for local and regional alternatives. If you can, she says, somewhat unrealistically, consider building up your own delivery services, even at a local level, to help relieve some of the shipping pressures. But it is probably not feasible to set up and manage your own shipping on short notice, even locally. Better to find someone local who may have some excess capacity.

8. Embrace technology

DancingDinosaur always encourages embracing technology, but not at the last moment. Technology requires planning, preparation, and training. Global e-commerce sales are projected to reach 5.4 trillion dollars in 2022. That is serious money. If you can use any of Staub's suggestions to grab even a sliver of that, what do you have to lose? Go for it.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghostwriter. Follow DancingDinosaur on Twitter, @mainframeblog.

Nedbank Teams with BMC Compuware

November 5, 2021

Been awhile since DancingDinosaur dug into a mainframe customer case study, but here is one from South Africa: Nedbank, one of the largest banks in the country, serving both the commercial and retail banking sectors in addition to the insurance market. The case study reports how the bank accelerates IBM Z innovation with BMC Compuware ISPW. The bank runs the z15 as its core banking platform. "While we did assess our options, the reality is that nothing processes as fast as the IBM Z platform," notes Devi Moodley, Executive, Enterprise DevOps and Release Management.

Nedbank adopted Agile development methodologies. "We could see unicorns (high-valuation startups) and fintechs coming in which were offering fewer traditional solutions, far more agile than us, and closer to their clients' needs," said Moodley.

"This meant a key part of the wider digital transformation journey required modernizing on the Z platform too, along with its distributed systems," she continued. Compuware, now part of BMC, joined in to empower Nedbank's next generation of developers to mainstream its Z platform. The goal: bring apps that run on the Z to market faster with increased quality.

The bank also realized that its legacy Source Code Management (SCM) tool could not support its push to embrace Agile development, and that it needed a new set of tooling. "The current review process took too long, and it couldn't handle parallel development or manage contention. Branching, merging, and versioning also proved difficult, and errors often weren't identified until the end of the day," Moodley observed.

Nedbank identified two teams, Core Banking and Payments, as these teams had the largest number of IBM Z components. It then evaluated three potential solutions: BMC Compuware on its own, BMC Compuware with an analysis tool layered on top, and a BMC Compuware competitor.

The teams gathered quantitative and qualitative feedback. The solutions were compared through a cost lens along with other factors, such as ease of ownership, how many sets of tools would be needed, the size of the team needed to implement and maintain the solutions, and what kind of support would be needed from each supplier. 

The bank identified BMC Compuware ISPW as the best solution for Continuous Integration and Continuous Delivery (CI/CD) on the Z platform. BMC Compuware ISPW ensures that Z code pipelines are secure, stable, and streamlined throughout the DevOps lifecycle.

This gave Nedbank's Z team the ability to better manage processes: to see the status of all programs throughout the development lifecycle, perform concurrent development, confidently merge code updates, and readily integrate with modern DevOps toolchains using Jenkins plugins, REST APIs, and command line interfaces (CLIs).
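
To make that integration point concrete, the sketch below shows, in Python, roughly what driving such a mainframe pipeline over REST can look like. The host, token, and endpoint shape are illustrative assumptions rather than a transcription of BMC's documented API; check the ISPW/CES REST documentation for the real contract.

    import requests

    CES_HOST = "https://ces.example.com"   # hypothetical Compuware Enterprise Services host
    TOKEN = "personal-access-token"        # hypothetical credential

    def promote_assignment(srid: str, assignment_id: str, level: str) -> None:
        """Ask the mainframe SCM server to promote an assignment's tasks to the next level."""
        resp = requests.post(
            f"{CES_HOST}/ispw/{srid}/assignments/{assignment_id}/tasks/promote",
            params={"level": level},
            headers={"Authorization": TOKEN},
            timeout=30,
        )
        resp.raise_for_status()
        print(f"Promotion of {assignment_id} to {level} accepted")

    promote_assignment("host-port", "PLAY000123", "QA")

The same call could just as easily come from a Jenkins stage, which is the point: the Z pipeline becomes one more stop in a standard toolchain.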

"BMC Compuware's DevOps engineers and technical specialists sat side-by-side with our teams for a week or more and were on hand to answer any questions," said Moodley. "They helped with conducting practical tests and guiding specific story points through BMC Compuware ISPW."

The bank was also challenged to prioritize what aspect of the solution should be adopted and when. Having strong senior executive application owners who fully supported the journey and adoption of DevOps made a huge difference. It was about much more than just adopting new tools; the ‘left shift’ needed to be embraced into the Nedbank culture.

While other platform teams adopted DevOps earlier, adoption on the IBM Z platform proved faster. The bank also benefitted in many ways, particularly through increased operational efficiency: it reduced the elapsed time of administrative processes by 95 percent, contributing to operational cost savings of more than ZAR 3 million ($176k) per annum. Sounds good to me.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghostwriter. Follow DancingDinosaur on Twitter, @mainframeblog.

IBM, Raytheon Collaborate

October 26, 2021

How far afield should IBM range in search of growth and revenue? Probably as far as it can to attract opportunities, advance its technological prowess, and grow the business. IBM hasn't been a PC company for decades, and it hasn't been a $100-billion-plus powerhouse for years. But with this latest announcement, who knows?

In a recent October announcement, IBM is ranging into AI, cryptography, and quantum, areas the company has long been familiar with. But it is the partner that makes it interesting. IBM announced that it and Raytheon Technologies (NYSE: RTX) will jointly develop advanced artificial intelligence, cryptographic, and quantum solutions for the aerospace, defense, and intelligence industries, including the federal government, as part of a strategic collaboration agreement.

IBM Quantum Computer

Artificial intelligence and quantum technologies, IBM explains, give aerospace and government customers the ability to design systems more quickly, better secure their communications networks, and improve decision-making processes. By combining IBM’s breakthrough commercial research with Raytheon Technologies’ own research, plus aerospace and defense expertise, the companies hope to be able to crack currently unsolvable challenges.

“The rapid advancement of quantum computing and its exponential capabilities has spawned one of the greatest technological races in recent history – one that demands unprecedented agility and speed,” said Dario Gil, IBM senior vice president and director of Research. “Our new collaboration with Raytheon Technologies will be a catalyst in advancing these state-of-the-art technologies – combining their expertise in aerospace, defense and intelligence with IBM’s next-generation technologies to make discovery faster, and the scope of that discovery larger than ever.”

Collaboration or not, IBM is moving ahead with quantum computing. Its development roadmap specifies plans to achieve frictionless quantum computing on a 1000+ qubit system by the end of 2023. The company’s guiding principle as it advances its quantum computing systems is how it can increase the amount of useful work that these systems can accomplish, which it refers to as quantum computing performance.

Quantum computing performance is determined by three key metrics, and you're probably already familiar with two of them: the number of qubits and Quantum Volume. As quantum computing evolves, more importance is placed on the useful work these systems can do in a reasonable amount of time, so IBM is introducing a third, likely time-based, metric.
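
For readers who haven't met the second metric: IBM defines Quantum Volume as 2 to the power n, where n is the size of the largest "square" model circuit (n qubits, n layers) the machine runs reliably. A minimal sketch of the arithmetic, with invented benchmark results:

    def quantum_volume(largest_passing_square_circuit: int) -> int:
        # QV = 2**n for the largest n-qubit, n-layer model circuit that
        # passes the heavy-output benchmark test
        return 2 ** largest_passing_square_circuit

    # Suppose a 27-qubit device passes the benchmark only up to 6x6 circuits:
    print(f"qubits: 27, Quantum Volume: {quantum_volume(6)}")  # prints 64

Note how the qubit count and QV diverge: more qubits help only if they are good enough to run deeper circuits.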

Bringing quantum computers into organizations’ computing workflows to solve the world’s most pressing problems requires pushing forward on all three of these metrics all the time. “We expect that only superconducting qubits will be able to achieve performance gains across all three metrics in the near term,” according to IBM.

In addition to artificial intelligence and quantum, the companies will jointly research and develop advanced cryptographic technologies that lie at the heart of some of the toughest problems faced by the aerospace industry and government agencies.

“Take something as fundamental as encrypted communications,” said Mark E. Russell, Raytheon Technologies chief technology officer. “As computing and quantum technologies advance, existing cybersecurity and cryptography methods are at risk of becoming vulnerable. IBM and Raytheon Technologies will now be able to collaboratively help customers maintain secure communications and defend their networks better than previously possible.”

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghostwriter. Follow DancingDinosaur on Twitter, @mainframeblog.

Enterprise Server 3.0

October 8, 2021

Have you heard of Enterprise Server 3.0? It is being promoted by a small number of companies, but the one that comes most immediately to mind is Bloor Research.

Z15 beats Enterprise Server 3.0

Bloor defines Enterprise Server 3.0 as a server that can "add or hot swap system capacity without disruption… handle very high volume input and output (I/O) and emphasize throughput computing… replace dozens or even hundreds of smaller servers."

In short, what Bloor is talking about are servers that can readily adapt to changes in system usage. He sees the future of business as mutable: businesses will be in a constant state of evolutionary change in response to rapidly evolving business environments.

So, what can Enterprise Server 3.0 do about it? Bloor suggests that a properly managed enterprise server should have capabilities in the areas of security, resilience, performance, and reliability that are hard to match with clusters of x86 servers running commodity operating systems. 

Moreover, he continues, there may well be no obvious and holistic business benefit from the migration. The bottom line is that a company has to stay in business while modernizing its systems and any modernization must have a clearly documented business case and properly managed risk. Sounds more like Business 101 to me, something we all took a decade or more ago in grad school.

Some companies take this into account from the start. IBM has provided some version of Enterprise Server 3.0 in the z15. It offers a variety of pay-per-use options under Tailored Fit Pricing for IBM Z, which now delivers a hardware consumption pricing solution.

Furthermore, the company also offers cross-stack pricing, storage capacity on demand, elastic capacity of IBM Power, and more. Combined, these consumption-based solutions offer some measure of control and flexibility to scale IT as needed while balancing expenses to achieve whatever value you feel you get from IBM Z.

Of course, you are not limited to IBM and Z only. Dell EMC PowerEdge servers powered by 3rd Generation AMD EPYC processors push workload boundaries with tailored IT and business solutions. They promise faster and more competitive outcomes, efficient data center manageability, and end-to-end infrastructure security.

If you are going to leave Z, your other choices include whatever x86 vendors offer, including Dell, Oracle RAC, and Microsoft. Are those going to be comparable to a Z? Not likely. But you probably could piece together something workable if you invest the time, expertise, testing, and patience. You might save some money but spend a fortune trying to reproduce the security, availability, capacity, and flexibility of any Z.

So here are Bloor’s options for modernising existing business systems running on Enterprise Server 3.0 in more detail. He starts by examining the sources of assistance that are available. He doesn’t see much in the way of discrete technology platforms.

The Enterprise Server 3.0 platform needs extremely powerful database services, capable of processing huge volumes of data and servicing tens of thousands (even, in special cases, over a million) transactions per second. This is a huge subject, he notes, deserving a paper in its own right. No kidding.

Similarly, Bloor sees all the traditional technology platforms in a well-managed organisation as an integrated whole, with currents of data flowing between the different areas of the technology ocean, each with its own local characteristics. He is not just talking about the DB2 relational database but also the IMS hierarchical database, the Adabas inverted-list database and the Natural 4GL, the IDMS network database, and others. Then he notes: this is a huge subject deserving a paper in its own right. Amen.

IDC may say it best: Businesses that remain on what are sometimes referred to as legacy platforms and that take advantage of the plethora of hardware and software innovations that have been made available for those platforms have an overall better outcome, quantitatively and qualitatively, than those that move off them. Z enthusiasts couldn’t say it better.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghostwriter. Follow DancingDinosaur on Twitter, @mainframeblog.

5G Networks Are Arriving

October 5, 2021

On Sept. 23  IBM announced a strategic multi-year agreement with Telefónica to use IBM intelligent automation software and services to implement a product dubbed  UNICA Next, Telefónica’s first cloud-native, 5G core network platform. With UNICA Next, Telefónica aims to deliver the agility, reliability, and efficiency to continuously optimize its services, now and in the future. 

IBM Maximo works with 5G to improve operations

The industry has been drooling over the potential of 5G for some time. Now Telefónica and other Communications Service Providers (CSPs) around the world are preparing for the demand for core network functions that 5G and the edge are expected to bring. At some point soon, Telefónica hopes to have an open, secure, intelligent, and highly automated network that can power transformation for both consumer and enterprise customers across all industries.

To bolster its 5G effort, the company intends to modernize its 5G core network platform, built with IBM Cloud Pak for Network Automation, Red Hat OpenShift, and Juniper networking. Telefónica expects its 5G network to deliver low latency, high bandwidth, and advanced network slicing, effectively enabling Telefónica to assist with business transformation across industries.

The 5G networking market, however, is only just taking shape. An Amazon search suggests the early players offer a single plan that’s only available in parts of a handful of American cities.

One standard you want to keep in mind is ETSI. ETSI, an EU-based standardization group, focuses on a variety of areas including Content Delivery, Networks, Wireless Systems, Transportation, Connecting Things, Interoperability, Public Safety, Security, and, of course, 4G alongside upcoming 5G. 

With its UNICA Next offering, Telefónica seems to be counting on its partnership with IBM to acquire the agility, reliability, and efficiency to continuously optimize its services now and in the future. For Telefónica, along with other emerging CSPs, 5G promises to be the hot upcoming technology. It will bring core network functions, such as open, secure, intelligent, and highly automated networking that can facilitate transformation for consumer and enterprise customers across all industries.

To make all of this happen, Telefónica has engaged IBM Global Business Services, a leading systems integrator and the digital transformation services and consultancy arm of IBM, along with Red Hat and Juniper networking, to deploy a cloud-native platform. The offering is planned to feature a new open-standard, open-networking, technology-compliant platform deployed across multiple central, regional, and distributed data centers, each offering low latency and high bandwidth while delivering services in an agile manner. Telefónica expects UNICA Next data centers to be deployed starting in October 2021; that's this month! The scalable architecture is designed to address ETSI and other relevant industry standards.

“Building out the UNICA Next platform with its next-generation network architecture shows how important it is to build the infrastructure now to support the deployment of 5G, which has the potential to support thousands of use cases and applications for consumers and enterprises in all industries. Our collaboration will not only help us harness the potential of 5G, but also prepare for the future through a hybrid-cloud led technology and business transformation. With IBM, Telefónica is combining the latency and bandwidth advancements of 5G with the customization and intelligence of the cloud: we anticipate the results will be transformative,” said Javier Gutierrez, director of strategy, network, and IT development for Telefónica.

"This implementation of Telefónica's cloud-native, 5G core network platform reflects IBM's significant investments in AI-powered automation software and the prime systems integration expertise required to deploy complete telecommunication networks: core, access, and edge," adds Gutierrez.

IBM Global Telco Solutions Lab in Coppell, Texas, working with Telefónica's Network Cloud Lab in Madrid, will help accelerate UNICA Next's evolution by building new, fully integrated releases using CI/CD methodology for ongoing life-cycle upgrades to the existing UNICA Next platform. By working with IBM in this way, Telefónica will be able to increase agility and data security and continue to innovate and transform, drawing on IBM's large network function ecosystem, Red Hat's vast set of certified partners, and Juniper's relationships with network function and hardware vendors.

5G may arrive as soon as this month. Are you ready?

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghostwriter. Follow DancingDinosaur on Twitter, @mainframeblog.

Mainframe to Grow Through 2029

September 28, 2021

Can we finally put the mainframe-is-dead nonsense to bed once and for all? I named DancingDinosaur 10 years ago or more in an effort to counter those analysts who insisted the mainframe was a dinosaur marching to extinction. OK, maybe it wasn't so obvious back then, but it should be by now. The mainframe ain't going away.

As BMC Software and IBM note, that dying mainframe is poised to witness huge growth by 2029. Does that sound dead to you?

The global mainframe market is projected to grow at a CAGR of 3.2% through 2029

A recent Global Mainframe market report provides growth estimates for 2021 and forecasts through 2029 in terms of both value (US$ MN) and volume (MT). The study, conducted by BMC Software and IBM, applies both top-down and bottom-up approaches, plus further iterative methods, to validate and size market estimates and trends for the Global Mainframe market.

The global mainframe market is projected to witness substantial growth, growing at a CAGR of 3.2% during the forecast period from 2021 to 2029. Mainframe computers, created in the early 1940s, were designed for managing high volumes of data processing and business transactions. Mainframes have been evolving for more than 40 years to meet the numerous challenges of data processing, and they are still evolving.
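
To put that 3.2% CAGR in concrete terms, here is a quick back-of-the-envelope calculation; the 2021 base is indexed to 100 because the report's dollar figure isn't quoted here.

    base = 100.0                 # index the 2021 market at 100 (placeholder)
    cagr = 0.032
    years = 2029 - 2021          # eight compounding years
    final = base * (1 + cagr) ** years
    print(f"2029 index: {final:.1f}, about {final - base:.0f}% cumulative growth")
    # prints: 2029 index: 128.7, about 29% cumulative growth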

Mainframes normally have considerable memory capacity for processing several computing tasks at once, along with great storage capacity. Mainframes are mainly used by large-scale enterprises for mission-critical applications. No computing platform handles such a diversity of workloads better than a mainframe. As a result, the rising need for high-powered processing is one of the important factors driving the growth of the overall mainframe market. This is expected to continue through 2029 and probably beyond.

Mainframe computers were designed from the start to manage high volumes of data processing and business transactions, and never to fail. Today they are used by midsize to large-scale enterprises, mainly for mission-critical applications.

Today's mainframe offers a significant upturn in system scalability over preceding mainframe servers. With greater total system capacity and increased performance, organizations can continue to consolidate diverse applications on this single platform. In the years to come, mainframes are likely to become more flexible and faster while shrinking in physical size. This, in turn, is expected to facilitate more market growth.

Mainframes occupy a desirable place in fields like healthcare, banking, insurance, finance, government, and a plethora of other public and private market segments. In 2020, the banking, financial services, and insurance (BFSI) sector accounted for the largest share of the global mainframe market. Mainframes remain most widely used in the banking sector: more than 70% of banking corporate data still resides on the mainframe, because the mainframe alone is reliable and robust enough to deliver the rapid processing power financial institutions require to do all major computing functions in one place.

Based on geography, North America led the overall mainframe market, accounting for the largest market share in 2020. The region is likely to remain foremost throughout the forecast period through 2029. The presence of major players like IBM, BMC Software, Dell, Hitachi Vantara, and others supports the market growth and is expected to provide more lucrative opportunities in the years to come. In North America, the U.S. dominated the regional mainframe market in 2020. Several players in the market have their production bases mainly in the U.S. As a result, the country is projected to dominate throughout the forecast period from 2021 to 2029.

Meanwhile, mainframe manufacturers are adopting strategies like new product development and partnerships that cater to the needs of their customers and gain competitive advantage over other players. So remember: the mainframe is nowhere near dead.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghostwriter. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at http://www.technologywriter.com

IBM and Sumo Accelerate Hybrid Cloud

September 20, 2021

Sumo Logic, a player in continuous intelligence, and IBM announced the availability of Sumo Logic's Continuous Intelligence Platform on Red Hat Marketplace, an open cloud marketplace for buying and deploying certified container-based software. Sumo's platform can help companies make data-driven decisions and reduce the time needed to investigate security and operational issues.

Its cloud-native SIEM, real-time security analytics, and compliance functions are designed to deliver cloud-native security and observability for companies running on the Red Hat OpenShift platform. The payback: those companies can deploy faster and achieve insights into their cloud and hybrid infrastructures, applications, and services sooner.

Ramin Sayar, President and CEO at Sumo Logic, said in the announcement: "Companies want to streamline the procurement, deployment, and management of their container applications." Sumo's Continuous Intelligence Platform allows organizations to get all their data into one place for observability, security, and business intelligence, reducing administrative labor in the process.

Sayar continues: "IBM helps us to offer our platform with Red Hat OpenShift on Red Hat Marketplace. This makes it easier for organizations to take advantage of our platform as they modernize and/or migrate their applications across hybrid cloud environments."

Sumo Logic achieved Red Hat OpenShift Operator Certification status based on the Continuous Intelligence Platform's support for cloud-native and hybrid cloud environments and its compatibility with Red Hat OpenShift. As part of the certification, Sumo Logic will extend the company's existing Kubernetes collection agents to Red Hat's OpenShift Operator model, designed to make it easier to deploy and manage data from customers' OpenShift Kubernetes clusters.
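
For readers unfamiliar with the Operator model the certification refers to, here is a minimal, purely illustrative sketch in Python using the open-source kopf framework. The resource group and kind are hypothetical, and this is not Sumo Logic's actual operator; it only shows the pattern of reacting to a custom resource that describes a collector. Run it with: kopf run operator.py

    import kopf

    @kopf.on.create('example.com', 'v1', 'collectors')
    def create_collector(spec, name, namespace, logger, **kwargs):
        # React to a new Collector custom resource; a real operator would
        # create a Deployment of collection agents here.
        endpoint = spec.get('endpoint', 'https://collectors.example.com')
        logger.info(f"Deploying collector {name} in {namespace} -> {endpoint}")
        return {'endpoint': endpoint}   # kopf records this in the resource's status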

An IDC study sponsored by Red Hat noted that organizations can achieve an average of 49% higher revenue for software products that, like Sumo's, have been certified by Red Hat. Such customers increasingly either require or prefer certified solutions.

For companies building cloud-native infrastructure and applications, Red Hat Marketplace is an essential destination for unlocking the value of cloud investments, designed to minimize the barriers facing global organizations as they accelerate innovation. 

Bob Lord, Senior Vice President, Worldwide Ecosystems at IBM adds: “Red Hat Marketplace is a one stop shop for enterprises to seamlessly deploy and manage software across hybrid cloud environments, spanning multiple clouds and on-premises. Sumo Logic’s certification for Red Hat OpenShift, along with its availability on Red Hat Marketplace, can help deliver innovation and value for clients via containerized workloads and the simplified deployment and management of data.”

"We believe Red Hat Marketplace is an essential destination to unlock the value of cloud investments," said Lars Herrmann, vice president, Partner Ecosystems, Product & Technologies, Red Hat. "Our goal with the marketplace is to make it faster and easier for companies to implement the tools and technologies that can help them succeed in this hybrid multicloud world." Red Hat has been instrumental in recent quarters in bolstering IBM's revenues with hybrid cloud customer projects.

Now for the standard corporate boilerplate: Any information regarding offerings, updates, functionality, or other modifications, including release dates, is subject to change without notice. The development, release, and timing of any offering, update, functionality, or modification described herein remains at the sole discretion of Sumo Logic, and should not be relied upon in making a purchase decision, nor as a representation, warranty, or commitment to deliver specific offerings, updates, functionalities, or modifications in the future.

Similarly, Red Hat and OpenShift are trademarks or registered trademarks of Red Hat, Inc. or its subsidiaries in the U.S. and other countries. Red Hat itself is owned by IBM.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghostwriter. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at http://www.technologywriter.com

Next Generation of IBM Power Servers

September 10, 2021

IBM is not relenting on its push into hybrid clouds. Its success to date has been encouraging, but now it is going further. The company is rolling out a new line of servers built on the Power10 processor, which it says is designed specifically for hybrid cloud environments.

IBM E1080, Power10 processor

The new IBM Power E1080 server, the first in a new family of servers based on the IBM Power10 processor, was designed specifically for hybrid cloud environments. The company adds that the Power10-equipped E1080 is engineered to be one of the most secure server platforms and is designed to help companies operate a secure, frictionless hybrid cloud experience.

It is intended to offer a platform to meet what IBM perceives as the unique needs of enterprise hybrid cloud computing. Specifically, the Power10 processor focuses on energy efficiency and performance in a 7nm form factor, with an expected improvement of up to 3x greater processor energy efficiency, workload capacity, and container density than the previous IBM POWER9 processor.

To IBM, the POWER10 processor is an important evolution in IBM’s roadmap for POWER. It is expected to be available in the second half of 2021, which is not very far away.

Among the processor innovations highlighted back in August: it is the company's first commercial 7nm processor, expected to deliver up to a 3x improvement in capacity and processor energy efficiency within the same power envelope as POWER9, allowing for greater performance.

It supports multi-petabyte memory clusters with a breakthrough new technology IBM calls Memory Inception. This is intended to improve cloud capacity and economics for memory-intensive workloads from ISVs like SAP and the SAS Institute, as well as for large-model AI inference.

And there is more: new hardware-enabled security capabilities include transparent memory encryption, designed to support end-to-end security. New processor core architectures in the Power10, with an embedded Matrix Math Accelerator, are extrapolated to provide 10x, 15x, and 20x faster AI inference for FP32, BFloat16, and INT8 calculations per socket, respectively.
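
To see why INT8 arithmetic buys such large inference gains, here is a generic NumPy sketch of linear quantization; it illustrates the general technique, not IBM's Matrix Math Accelerator. Eight-bit values shrink data 4x versus FP32, letting matrix hardware process far more values per cycle at a small cost in accuracy.

    import numpy as np

    rng = np.random.default_rng(0)
    w = rng.standard_normal((4, 4)).astype(np.float32)   # FP32 weights
    x = rng.standard_normal(4).astype(np.float32)        # FP32 activations

    # Symmetric linear quantization to INT8
    w_scale = np.abs(w).max() / 127.0
    x_scale = np.abs(x).max() / 127.0
    w_q = np.round(w / w_scale).astype(np.int8)
    x_q = np.round(x / x_scale).astype(np.int8)

    # Integer matrix multiply, then rescale back to float
    y_int8 = (w_q.astype(np.int32) @ x_q.astype(np.int32)) * (w_scale * x_scale)
    y_fp32 = w @ x
    print("max quantization error:", np.abs(y_int8 - y_fp32).max())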

There is far more packed into the Power10 processor, more than I can squeeze into one piece. But IBM points out another consideration worth keeping in mind: not all hybrid cloud models are designed equally. You might say DancingDinosaur taps the hybrid cloud because he draws on multiple cloud providers, but that is not anywhere near what the E1080 is intended to address.

Reports Dylan Boday, IBM VP of Product Management for AI and Hybrid Cloud: “When we were designing the E1080, we had to be cognizant of how the pandemic was changing not only consumer behavior, but also our customer’s behavior and needs from their IT infrastructure. The resulting E1080 is IBM’s first system designed from the silicon up for hybrid cloud environments, a system tailor-built to serve as the foundation for our vision of a dynamic and secure, frictionless hybrid cloud experience.”

By any measure, the E1080 is not a trivial machine. It offers 240 cores, 64 TB of memory, 1.6 TB/sec of memory bandwidth, and 576 GB/sec of I/O bandwidth. Need more? The E1080's hybrid cloud capabilities include a planned industry first: tight metering of Red Hat software, including Red Hat OpenShift and Red Hat Enterprise Linux. It also brings 4.1x greater OpenShift containerized throughput per core vs. x86. C'mon IBM, with a machine like this do you still need CPU comparisons to x86?

Klaus Fehiker, a longtime Power user from Finanz Informatik, is ready to board the E1080 train: "The new server addresses our demands to continue delivering our services at scale with high resiliency requirements, including new levels of security and improved energy efficiency. We are also keen to see how the new features can accelerate our journey to cloud and the infusion of AI into our business applications."

The IBM Power E1080 server helps deliver on customer demand for a frictionless hybrid cloud experience, with architectural consistency across the entire hybrid cloud estate to simplify management. At the same time, it can seamlessly scale applications to meet the dynamic needs of today's world.

There is no doubt that the E1080 is an incredible achievement along almost any dimension you want to look at. After spending my entire career writing about amazing technology, I find the E1080 as impressive as any system I have covered.

And after spending over two hours painstakingly picking my way through IBM's spec sheet for it, please don't ask me how much it might cost you. It's apparent IBM is putting a lot of effort into making the E1080 flexible, with a dazzling array of configuration options and promises of flexible pricing. You'll just have to figure it out on your own or hope an IBM rep can.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghostwriter. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at http://www.technologywriter.com

Hi Telum

September 3, 2021

Last week IBM unveiled details of its upcoming new IBM Telum processor, the next-gen-microprocessor for IBM Z and IBM LinuxONE.

The IBM Telum processor was designed to bring deep learning inference to enterprise workloads. Its mission: address fraud in real time. To do that, Telum is IBM's first processor to contain on-chip acceleration for AI inferencing while a transaction is taking place. In short, you could catch a bad guy doing a bad thing as it happens. Of course, you would need to put properly trained people and processes in place to respond effectively, but that is probably an entirely different conversation.
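
To make the idea concrete, here is a toy sketch of what in-transaction scoring looks like. The model, features, and threshold are invented for illustration; a real deployment would run a trained model on Telum's on-chip accelerator rather than this toy linear score.

    def score_transaction(features, weights, bias):
        # Toy linear fraud score; a real model would run on the AI accelerator
        return sum(f * w for f, w in zip(features, weights)) + bias

    def process_payment(txn):
        features = [txn["amount"], txn["merchant_risk"], txn["velocity"]]
        weights, bias = [0.004, 2.0, 1.5], -3.0   # hypothetical trained values
        if score_transaction(features, weights, bias) > 0.0:
            return "HOLD_FOR_REVIEW"   # flagged before the transaction completes
        return "APPROVE"

    print(process_payment({"amount": 250.0, "merchant_risk": 0.9, "velocity": 1.2}))

The key point is that the scoring call sits synchronously inside the payment path rather than in an after-the-fact batch job.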

The breakthrough of this new on-chip hardware acceleration is intended to help customers achieve business insights at scale across banking, finance, trading, insurance applications and customer interactions. Don’t expect a Telum-based system tomorrow. Rather, IBM suggests it is planned for the first half of 2022.

IBM is touting Telum as the next-gen microprocessor for IBM Z and IBM LinuxONE, both Z-based systems. As IBM puts it, Telum is the company's first processor that contains on-chip acceleration for AI inferencing while a transaction is taking place. Three years in development, this new on-chip hardware acceleration is intended to help customers achieve business insights at scale across key industries, and to do it in what amounts to near real time.

Somehow, DancingDinosaur is not convinced most businesses are prepared to go that far that fast. You can imagine any number of complications arising starting with litigation.

Telum is the first IBM chip with a processor and technology created by the IBM Research AI Hardware Center. In addition, Samsung is IBM's technology development partner for the Telum processor, which is developed in a 7nm EUV technology node.

A little more from IBM on Telum specifics: the microprocessor contains 8 processor cores clocked at over 5GHz, each supported by a redesigned 32MB private Level-2 cache. The Level-2 caches interact to form a 256MB virtual Level-3 cache and a 2GB virtual Level-4 cache.
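
The cache arithmetic is easy to verify, as the sketch below shows: the virtual L3 is the eight 32MB private L2s pooled across one chip, while the 2GB virtual L4 assumes the L2s are pooled across eight Telum chips. That chip count is my assumption; the text above doesn't say how many chips contribute.

    cores_per_chip = 8
    l2_per_core_mb = 32
    chips_pooled = 8                                       # assumption, not stated above

    virtual_l3_mb = cores_per_chip * l2_per_core_mb        # 8 * 32 = 256 MB
    virtual_l4_gb = virtual_l3_mb * chips_pooled / 1024    # 2048 MB = 2 GB
    print(f"virtual L3: {virtual_l3_mb} MB, virtual L4: {virtual_l4_gb:.0f} GB")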

Along with improvements to the processor core itself, the 1.5x growth of cache per core over the z15 generation is designed to enable a significant increase in both per-thread performance and total capacity IBM can deliver in the next generation IBM Z system. Telum’s performance improvements are vital for rapid response times in complex transaction systems, especially when augmented with real time AI inference.

Telum also features significant innovation in security, with transparent encryption of main memory. Telum's Secure Execution improvements are designed to provide increased performance and usability for Hyper Protect Virtual Servers and trusted execution environments, making Telum an optimal choice for processing sensitive data in hybrid cloud architectures, a big IBM marketing target.

IBM's standard boilerplate: Statements regarding IBM's future direction and intent are subject to change or withdrawal without notice and represent goals and objectives only.


DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghostwriter. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at http://www.technologywriter.com

IBM Delivers 2Q21 Win

August 26, 2021

IBM's latest quarterly earnings (2Q21) are finally getting interesting now that the company has put years of quarterly revenue declines behind it, at least for now. And it has finally started gaining traction with hybrid clouds. Total cloud revenue in the quarter was $7.0 billion.

IBM and Red Hat announce hybrid cloud software

In fact, the company credits cloud computing for its highest quarterly sales growth in over two years. It almost makes you forget the recent years of incessant quarterly declines. CFO James Kavanaugh prefers to cheer the strong spending by clients in retail, manufacturing, and travel in the US. And who can blame him?

He continues: sales from its cloud computing services jumped 21 percent, while the company experienced a sales decline in global technology services. So much for the excitement around IBM's Kyndryl services restructuring, although it may be too soon to expect to see much from that.

Here's how Reuters explained the Kyndryl move: the plan to separate was announced in 2020, as DancingDinosaur reported at the time. It followed years of IBM trimming its legacy businesses as it increasingly focused on its cloud offerings to counter slowing software sales and seasonal demand for its mainframe servers. DancingDinosaur wouldn't exactly call it seasonal; sales increases invariably followed each upgrade of the Z. BTW, those cloud offerings were specifically tagged for hybrid clouds.

So, how are you going to treat Kyndryl? Will you include it in a proposal? Will you buy from it, grant it credibility? If it delivers the same high-quality IT services you expect, at a competitive price and in a timeframe that meets your needs, why not give it consideration?

DancingDinosaur will be watching to see how other top-tier IT services providers respond. In the meantime, Kyndryl figured respectably in IBM's 2Q statement: the impact of the Kyndryl separation costs for second-quarter 2021 was ($0.15) per share.

Sales from IBM's cloud computing services jumped 21 percent to $6.5 billion in the quarter. The 109-year-old firm is preparing to split itself into two public companies, with the namesake firm narrowing its focus on the so-called hybrid cloud, where it sees a $1 trillion market opportunity. The company did record a sales decline in global technology services, but added that it was largely offset by a rise in revenue in the remaining three units, including surprise growth in the business that hosts mainframe computers.

Mainframes saw strong traction from the financial services industry, where banking clients shopped for more capacity as trading volumes soared during the retail trading frenzy, CFO Kavanaugh added. The pandemic wasn't bad for every business.

"I am glad to see that strategic projects, which are IBM's bread and butter, are coming back," added Patrick Moorhead, analyst at Moor Insights & Strategy. He noted that growth in systems and global business services came as a surprise.

IBM revenue rose nearly 1 percent to $17.73 billion in the quarter, beating analysts’ average estimate of $17.35 billion, according to IBES data from Refinitiv.

Net income fell to $955 million in the quarter, from $1.18 billion a year earlier. Overall, the company earned $1.77 per share, beating the market expectation of $1.63.

Is that good enough? DancingDinosaur isn't an investor, so it won't offer an opinion. However, it is nice not to be writing about incessant declines quarter after quarter.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghostwriter. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com.

