IBM Accelerates Hybrid Cloud

October 15, 2020

IBM has been drooling over the idea of the hybrid cloud for several years at least. Hybrid clouds result when an organization operates more than one cloud, typically one or more on-premises clouds combined with one or more clouds from third-party providers. IBM wants to provide its IBM Cloud to the customer as its on-premises cloud and provide a variety of specialized clouds to the same customers, effectively expanding its hybrid cloud revenues.

So last week (Oct. 8), IBM revealed complex plans to accelerate its hybrid cloud growth strategy to drive digital transformations for its clients. BTW, if you expect to sell complex technology and systems, you had better tie it directly to business transformation, which has emerged as the hot C-suite buzzword.

And, IBM continued, it will separate the Managed Infrastructure Services unit of its Global Technology Services division into a new public company, imaginatively called NewCo for now. This creates two companies, each with strategic focus and flexibility to drive client and shareholder value. This will be achieved as a tax-free spin-off to IBM shareholders, and completed by the end of 2021.

Of course, others have similar ideas. Oracle offers its cloud infrastructure with a free promotion. Microsoft offers its Azure Cloud platform. Even HPE is there with its Pointnext cloud services. So you have choices.

Arvind Krishna, IBM Chief Executive Officer adds: “IBM is laser-focused on the $1 trillion hybrid cloud opportunity.” Client buying needs, he continued, “for application and infrastructure services are diverging, while adoption of our hybrid cloud platform is accelerating.”

IBM will focus on its open hybrid cloud platform and AI capabilities. NewCo will have greater agility to design, run and modernize the infrastructure. Does it tempt you to jump in right now to buy some IBM shares (about $125 a share)?

The company is understandably enthusiastic. As two independent companies, IBM and NewCo might capitalize on their respective strengths. IBM will accelerate clients’ digital transformation journeys, and NewCo will accelerate clients’ infrastructure modernization efforts.

IBM will focus on its open hybrid cloud platform, which, it claims, represents a $1 trillion market opportunity. To build its hybrid cloud foundation, IBM acquired Red Hat for $34 billion to unlock the cross-platform value of the cloud. This platform also promises to facilitate the deployment of powerful AI capabilities to enable the power of data, application modernization services, and systems. The split moves IBM from a company with more than half of its revenue in services to one with a majority in high-value cloud software and solutions, with more than 50% of its portfolio in recurring revenue.

IBM’s open hybrid cloud platform architecture, based on Red Hat OpenShift, works with the entire range of clients’ existing IT infrastructures, regardless of vendor, driving up to 2.5 times more value for clients than a public cloud-only solution, it claims. 

This is a fresh repackaging of what IBM has been moving toward for some years and at considerable expense. Would you as a customer buy it? DancingDinosaur isn’t being asked, but it would wait to see specific pricing, packaging, terms, and conditions. And absolutely shop around. You have choices.

After parsing much of the IBM boilerplate around this announcement, DancingDinosaur was somewhat disappointed that IBM said nothing about where the Z fits in. The Z has been the company’s only profitable product performer in recent quarters. It certainly isn’t going to be a services player.

DancingDinosaur has covered the Z under its various names for 20 years. Guess Z fans will just have to see where it ends up, which very well could be nowhere. Is it likely that IBM would abandon a profitable product line that attracts so much of the Fortune 100? Or will they dump it the way they dumped chip fabrication, by paying somebody to take it? Guess we’ll just have to wait and see. 

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.

BMC 15th Annual Mainframe Survey

October 9, 2020

This month BMC came out with the results of its 15th annual mainframe survey. Survey respondents laid out some ambitious goals, starting with plans to increase adoption of DevOps on the mainframe. Specifically, they want greater stability, security, and scalability.

Respondents also reported a desire to speed up AI adoption as they seek smarter operational data. Forty-six percent of respondents made data recovery a priority. Driving the increased interest was the desire to better predict data recovery times. Even more, 64%, wanted to reduce planned outages, effectively ensuring that high availability continues as a priority.

Similarly respondents expressed increased interest in SIEM. Security-related concerns ranked significantly higher than in last year’s survey, as did vulnerability scanning. It was not too long ago that mainframe shops were quite complacent about mainframe security. No longer.

Overall the mainframe comes out very well, especially compared to years when respondents were reporting plans to deactivate the mainframe. For example, 90% of respondents reported positive sentiments toward the mainframe by management.

Similarly, 68% forecast MIPS growth of 6 percent. Among the largest mainframe shops 67% report more than half of their data resides on the mainframe. 

Mainframe staffing has been an ongoing concern for years. Remember the warnings about experienced mainframe veterans hitting retirement and proving impossible to replace? IBM and the Open Mainframe Project have been working on this and finally appear to be making headway.

To start, IBM has been expanding the capabilities of the mainframe itself. Over 20 years ago, IBM introduced Linux on the mainframe. That provided, at some level, an alternative to z/OS. It was clunky and inelegant at first, but over the last two decades it has been refined. Today there are powerful LinuxONE mainframes that can handle the largest transaction workloads. IBM has made it easier to run Linux as Linux, backed by the power of the z15. Throw in Java on the mainframe and you have a very flexible machine that doesn’t look and feel like a traditional mainframe.

More recently, the Open Mainframe Project introduced Zowe, a new open-source framework. Zowe brings together systems that were not designed to handle global networks of sensors and devices. Now, decades after IBM brought Linux to the mainframe, IBM, CA Technologies, and Rocket Software have introduced Zowe, a new open-source software framework that bridges the divide between modern challenges like IoT and cloud development and the mainframe.

The point of all this is to enable any developer to manage, control, script, and develop on the mainframe as he or she would on any cloud platform. Additionally, Zowe allows teams to use the same familiar, industry-standard, open-source tools they already know and use them to access, call, and integrate mainframe resources and services. So you can stop pining for those retired mainframe veterans. They’re drinking Scotch on the beach.

Ironically the mainframe is probably older than the programmers Zowe will attract. Zowe opens new possibilities for next generation applications from next generation programmers and developers for mainframe shops desperately needing new, mission-critical applications for which customers are clamoring. This should radically reduce the learning curve for the next generation, while making experienced professionals more efficient. BTW, Zowe’s code is made available under the open-source Eclipse Public License 2.0.

Whether it is Zowe or just the general opening up of the mainframe, things are changing in the right direction. The people with 1-10 years of experience on the mainframe have increased from 47% to 63%, while those with more than 20 years of mainframe experience represent 18%. Most encouraging is the growth of women, who now constitute 40% (up from 30% a year before), while men have declined from 70% to 60%.

When DancingDinosaur was young with a full head of dark hair he complained of the dearth of women at IT conferences. Now, as he approaches retirement, he hopes the young men appreciate it. 

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.

Quantum Computing Use Cases

October 2, 2020

Have you been dreaming of all the great things you would do if you just had ready access to a stable and sufficiently powerful quantum computer? Through the IBM Q Network you can access the quantum computers IBM makes available over the Internet. IBM laid out its roadmap for quantum computers just a couple of weeks ago, which DancingDinosaur covered here.

The company reports that substantive quantum work is being attempted using machines available through its Q Network. DancingDinosaur has looked at Qiskit, IBM’s open-source quantum programming framework, and the learning materials that accompany it, but even then, I haven’t experienced that Eureka moment: an idea that could be effectively handled only if I had a sufficiently powerful quantum machine. My best programming ideas, I’m embarrassed to admit, can be handled perfectly well on an x86 box running Visual Basic. Sorry, but I’m just not yearning for the 1000+ qubit machine IBM is promising in 2023 or the million-plus qubit machine after that.
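
For anyone curious what those learning materials actually look like in practice, here is a minimal sketch of the sort of circuit the Qiskit tutorials start with: a two-qubit Bell state, simulated locally. It assumes only that the qiskit package is installed; no Q Network account or quantum hardware is needed.

```python
# Minimal Qiskit sketch: build a two-qubit Bell state and inspect the
# resulting statevector locally. No quantum hardware is required.
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

qc = QuantumCircuit(2)
qc.h(0)      # put qubit 0 into superposition
qc.cx(0, 1)  # entangle qubit 0 with qubit 1

state = Statevector.from_instruction(qc)
print(state.probabilities_dict())  # roughly {'00': 0.5, '11': 0.5}
```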

D-Wave Quantum machine

D-Wave Systems Inc., a Canadian quantum computing company, hired 451 Research to investigate enterprise attitude and appetite with regard to quantum computing. The survey found that quantum computing is emerging as a powerful tool for large-scale businesses, the majority of which generate over $1 billion in revenue.

Among the priorities the researchers found were increasing efficiency and productivity at an organizational level, boosting profitability, and solving large and complex business problems that may not be solvable with current methods, tools, and technology. And the researchers concluded, of course, that  now is the time for executives to take the quantum computing investment seriously because the competition is already exploring how to solve complex problems and gain the coveted first-to-market advantages. 

If that sounds familiar, we have been hearing versions of it for decades. This is the classic way to drive decision makers to invest in the next greatest thing–the fear of being left behind. DancingDinosaur has been writing exactly those kinds of reports arriving at similar conclusions and driving similar results for years.

D-Wave’s Volkswagen quantum story begins with the carmaker launching in Lisbon the world’s first pilot project for traffic optimization using a D-Wave quantum computer. For this purpose, the Group is equipping buses of the city of Lisbon with a traffic management system developed in-house. This system uses a D-Wave quantum computer and calculates the fastest route for each of the nine participating buses individually and almost in real time. The result: passengers’ travel times will be significantly reduced, even during peak traffic periods, and traffic flow will be improved.
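
Volkswagen has not published the Lisbon model in detail, but the general shape of such a problem, assigning buses to candidate routes while penalizing congestion, maps onto a QUBO (quadratic unconstrained binary optimization), the formulation D-Wave machines consume. Here is a deliberately tiny, hypothetical sketch using D-Wave’s open source dimod library, solved exactly on a laptop rather than on an annealer:

```python
# Hypothetical toy QUBO: two buses, two candidate routes each.
# x = 1 means "this bus takes this route". Negative biases reward taking
# a route; positive couplings penalize a bus taking both routes or two
# buses crowding the same route. The real Lisbon model is far richer.
import dimod

linear = {"b1_A": -1.0, "b1_B": -1.0, "b2_A": -1.0, "b2_B": -1.0}
quadratic = {
    ("b1_A", "b1_B"): 4.0,  # each bus should pick only one route
    ("b2_A", "b2_B"): 4.0,
    ("b1_A", "b2_A"): 2.0,  # discourage both buses on route A
    ("b1_B", "b2_B"): 2.0,  # ...or both on route B
}
bqm = dimod.BinaryQuadraticModel(linear, quadratic, 0.0, dimod.BINARY)

# ExactSolver enumerates all 16 assignments on a laptop; on D-Wave
# hardware you would swap in a quantum sampler from the Ocean SDK.
best = dimod.ExactSolver().sample(bqm).first
print(best.sample, best.energy)  # expect the buses on different routes
```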

Global industry leaders across fields – from transportation to pharmaceuticals to financial services – are now looking to quantum computing to rethink business solutions and maintain competitive advantage over their peers.

The survey found that while 39% of surveyed enterprises are already experimenting with quantum computing today, a staggering 81% have a use-case in mind for the next three years. High on the agenda for critical business benefits via quantum are increased efficiency and improved profitability, followed closely by improved processes, productivity, revenue, and a faster time to market for new products. Increased efficiency? Hey, I could have said that about what I did with Visual Basic on x86.

Efficiency is particularly critical to business leaders because enterprises often suffer productivity losses when tackling complex problems. In fact, over a third of enterprises have abandoned complex problems in the last three years due to time constraints, complexity, or a lack of capacity. Yet 97% of enterprises rate solving complex problems as high importance or business-critical. Clearly, today’s computing technology is not adequately meeting large-scale businesses’ needs, and VB on x86 just can’t cut it.

Alan Radding, a veteran information technology analyst, writer, and ghost-writer, is DancingDinosaur. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/.

Z Open Terminal Emulation

September 25, 2020

You can spend a lot of time working with the Z and not find much new in terminal emulation. But there actually are a few new things, mainly because times change and people work differently, using different devices and doing new things. Sure, it all goes back to the mainframe, but it is a new world.

Terminal emulator screen

Rocket Software’s latest wrinkle in terminal emulation is BlueZone Web, which promises to simplify using the mainframe by enabling users to access host-based applications anywhere and on any type of device. It is part of a broader initiative Rocket calls Open AppDev for Z. From DancingDinosaur’s perspective, its strength lies in being compliant with Zowe, an open source development framework from the Open Mainframe Project. This makes IBM Z a valuable open platform for an enterprise DevOps infrastructure.

Zowe is the first open source framework for z/OS. It enables DevOps teams to securely manage, control, script, and develop on the mainframe like any other cloud platform. It was launched in a collaboration of initial contributors IBM, CA Technologies, and Rocket Software, and is supported by the Open Mainframe Project. The goal is to cultivate the next generation of mainframe developers, whether or not they have Z experience. Zowe promotes a faster team on-ramp to productivity, collaboration, knowledge sharing, and communication.

This is the critical thing about Zowe: you don’t need Z platform experience. Open source developers and programmers can use a wide range of popular open source tools, languages, and technologies, meaning the tools they already know. Sure, it’d be nice to find an experienced z/OS developer, but that is increasingly unlikely, making Zowe a much better bet.

According to the Open Mainframe Project, IBM’s initial contribution to Zowe was an extensible z/OS framework that provides REST-based services and APIs that allow even inexperienced developers to rapidly use new technology, tools, languages, and modern workflows with z/OS.
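
What those REST services look like from the developer’s side depends on the shop’s z/OSMF and Zowe API Mediation Layer configuration, but as a rough sketch, assuming a reachable z/OSMF instance (hostname, port, and credentials below are placeholders), listing data sets from ordinary Python might look like this:

```python
# Rough sketch: list data sets via z/OSMF's REST files API, the kind of
# service Zowe builds on. Hostname, port, and credentials are placeholders;
# in a Zowe shop the call is typically routed through the API Mediation Layer.
import requests

ZOSMF = "https://zosmf.example.com:443"       # hypothetical host
resp = requests.get(
    f"{ZOSMF}/zosmf/restfiles/ds",
    params={"dslevel": "MYUSER.*"},           # data set name filter
    headers={"X-CSRF-ZOSMF-HEADER": "true"},  # required by z/OSMF REST APIs
    auth=("MYUSER", "secret"),                # placeholder credentials
)
resp.raise_for_status()
for ds in resp.json().get("items", []):
    print(ds.get("dsname"))
```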

IBM continues to invest in the open source environment through Zowe and other open source initiatives. Zowe also has help from Rocket Software, which provides a web user interface, and CA, which handles the command line interface. You can find more about Zowe here.

IBM introduced Linux, a leading open source technology, to the Z over 20 years ago. Over time it has expanded the range of the Z through open-source tools that can be combined with products developed by different communities. Open source, however, can create unintentional regulatory and security risks. Rocket Open AppDev for Z helps mitigate these risks, offering a solution that provides developers with a package of open tools and languages they want, along with the security, easy management, and support IBM Z customers require.

“We wanted to solve three common customer challenges that have prevented enterprises from leveraging the flexibility and agility of open software within their mainframe environment: user and system programmer experience, security, and version latency,” said Peter Fandel, Rocket’s Product Director of Open Software for Z. “With Rocket Open AppDev for Z, we believe we have provided an innovative secure path forward for our customers,” he adds. “Businesses can now extend the mainframe’s capabilities through the adoption of open source software, making IBM Z a valuable platform for their DevOps infrastructure.”

But there is an even bigger question here that Rocket turned to IDC to answer. The question: whether businesses that run mission-critical workloads on IBM Z or IBMi should remain on these platforms and modernize them by leveraging the innovative tools that exist today or replatform by moving to an alternative on-premises solution, typically x86 or the cloud.

IDC investigated more than 440 businesses that have either modernized the IBM Z or IBMi or replatformed. The results: modernizers incur lower costs for their modernizing initiative than the replatformers.  Modernizers were more satisfied with the new capabilities of their modernized platform than replatformers; and the modernizers achieved a new baseline for which they paid less in hardware, software, and staffing. There is much more of interest in this study, which DancingDinosaur will explore in the weeks or months ahead.

Alan Radding, a veteran information technology analyst, writer, and ghost-writer, is DancingDinosaur. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/.

IBM Roadmap for Quantum Computing

September 18, 2020

IBM started quantum computing in the cloud a few years back by making its small qubit machines available online, and it has added even larger ones since. In mid-September, IBM released its quantum computing roadmap to take it from today to million-plus qubit devices. The first benchmark is a 1,000-plus qubit device, IBM Condor, targeted for the end of 2023. Its latest challenge: going beyond what’s possible on conventional computers by running revolutionary applications across industries.

control lots of qubits for long enough with few errors

The key is making quantum computers stable by keeping them cold. To that end IBM is developing a dilution refrigerator larger than any currently available commercially. Such a refrigerator puts IBM on a course toward a million-plus qubit processor. 

The IBM Quantum team builds quantum processors that rely on the mathematics of elementary particles in order to expand its computational capabilities running quantum circuits. The biggest challenge facing IBM’s team today is figuring out how to control large systems of qubits for long enough, and with minimal errors, to run the complex quantum circuits required by future quantum applications.

IBM has been exploring superconducting qubits since the mid-2000s, increasing coherence times and decreasing errors to enable multi-qubit devices in the early 2010s. Continued refinements allowed it to put the first quantum computer in the cloud in 2016. 

Today, IBM maintains more than two dozen stable systems on the IBM Cloud for clients and the general public to experiment on, including the 5-qubit IBM Quantum Canary processor and its 27-qubit IBM Quantum Falcon processor, on which it recently ran a quantum circuit long enough to declare a Quantum Volume of 64, an IBM-created metric. This achievement also incorporated improvements to the compiler, refined calibration of the two-qubit gates, and upgrades to the noise handling and readout based on tweaks to the microwave pulses.
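
For context, Quantum Volume works out to 2^n, where n is the largest width and depth at which the machine still reliably runs randomized square model circuits; QV 64 corresponds to n = 6. Qiskit ships a generator for these model circuits, so a quick sketch of the kind of circuit that gets run looks like this (assuming only the qiskit package, no hardware access):

```python
# Sketch of the model circuits behind the Quantum Volume metric.
# QV = 2**n, where n is the largest width (= depth) at which the machine
# still produces "heavy" outputs often enough; QV 64 corresponds to n = 6.
from qiskit.circuit.library import QuantumVolume

n = 6
model_circuit = QuantumVolume(n, depth=n, seed=42)  # random SU(4) layers
print(model_circuit.decompose())
print("Passing circuits of this size implies Quantum Volume", 2 ** n)
```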

This month IBM quietly released its 65-qubit IBM Quantum Hummingbird processor to its Q Network members. This device features 8:1 readout multiplexing, meaning it combines readout signals from eight qubits into one, reducing the total amount of wiring and components required for readout and improving its ability to scale.

Next year, IBM intends to debut a 127-qubit IBM Quantum Eagle processor. Eagle features several upgrades in order to surpass the 100-qubit milestone: through-silicon vias, which allow electrical signals to pass through the substrate, enabling smaller devices and a shorter signal path; and multi-level wiring to effectively fan out a high density of conventional control signals while protecting the qubits in a separate layer to maintain high coherence times. The qubit layout will allow IBM to implement the heavy-hexagonal error-correcting code that its team debuted last year, as it scales up the number of physical qubits and error-corrected logical qubits.

These design principles established for its smaller processors will set it on a course to release a 433-qubit IBM Quantum Osprey system in 2022. More efficient and denser controls and cryogenic infrastructure will ensure that scaling up the processors doesn’t sacrifice the performance of the individual qubits, introduce further sources of noise, or take too large a footprint.

In 2023, IBM intends to debut the 1,121-qubit Quantum Condor processor, incorporating the lessons learned from previous processors while continuing to lower the critical two-qubit errors so that it can run longer quantum circuits. IBM presents Condor as the milestone that marks its ability to implement error correction and scale up devices while being complex enough to solve problems more efficiently on a quantum computer than on the world’s best supercomputers, achieving the quantum Holy Grail.

Alan Radding, a veteran information technology analyst, writer, and ghost-writer, is DancingDinosaur. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/.

Pushing Quantum Onto the Cloud

September 4, 2020

Did you ever imagine the cloud would become your quantum computing platform, a place where you would run complex quantum algorithms requiring significant specialized processing across multi-qubit machines available at a click? But that is exactly what is happening.

IBM started it a few years back by making its small qubit machines available in the cloud, with even larger ones available now. Today Xanadu is offering 8-qubit or 12-qubit chips, and even a 24-qubit chip in the next month or so, according to the Toronto-based company.

Xanadu quantum processor

As DancingDinosaur has previously reported, there are even more: Google reports a quantum computer lab with five machines and Honeywell has six quantum machines. D-Wave is another along with more startups, including nQ, Quantum Circuits, and Rigetti Computing.

In September, Xanadu introduced its quantum cloud platform, which allows developers to access its gate-based photonic quantum processors with 8-qubit or 12-qubit chips across the cloud.

Photonics-based quantum machines have certain advantages over other platforms, according to the company. Xanadu’s quantum processors operate at room temperature, not low Kelvin temperatures. They can easily integrate into an existing fiber optic-based telecommunication infrastructure, enabling quantum computers to be networked. It also offers scalability and fault tolerance, owing to error-resistant physical qubits and flexibility in designing error correction codes. Xanadu’s type of qubit is based on squeezed states – a special type of light generated by its own chip-integrated silicon photonic devices, it claims.

DancingDinosaur recommends you check out Xanadu’s documentation and details. It does not have sufficient familiarity with photonics, especially as related to quantum computing, to judge any of the above statements. The company also notes it offers a cross-platform Python library for simulating and executing programs on quantum photonic hardware. Its open source tools are available on GitHub.
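
The Python library Xanadu refers to is presumably its open source Strawberry Fields package. As a minimal sketch, and assuming only that package, here is a squeezed state prepared and simulated locally on the Gaussian backend, no photonic hardware required:

```python
# Minimal Strawberry Fields sketch: prepare a squeezed state of light on one
# optical mode and simulate it locally on the Gaussian backend. Running on
# Xanadu's cloud hardware would swap in a remote engine instead.
import strawberryfields as sf
from strawberryfields import ops

prog = sf.Program(1)          # one optical mode
with prog.context as q:
    ops.Sgate(0.5) | q[0]     # squeeze the vacuum by r = 0.5

eng = sf.Engine("gaussian")   # local simulator backend
state = eng.run(prog).state
print("mean photon number (mean, variance):", state.mean_photon(0))
```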

Late in August IBM unveiled a new milestone on its quantum computing road map, achieving the company’s highest Quantum Volume to date. By following the link, you see that Quantum Volume is a metric conceived by IBM to measure and compare quantum computing power. DancingDinosaur is not aware of any other quantum computing vendors using it, which doesn’t mean anything, of course. Quantum computing is so new and so different, and with many players joining in with different approaches, it will be years before we see which metrics prove most useful.

To come up with its Quantum Volume rating, IBM combined a series of new software and hardware techniques to improve overall performance and upgraded one of its newest 27-qubit systems to achieve the high Quantum Volume rating. The company has made a total of 28 quantum computers available over the last four years through the IBM Quantum Experience, which companies join to gain access to its quantum machines and tools, including its software development toolset.

Do not confuse Quantum Volume with Quantum Advantage, the point where certain information processing tasks can be performed more efficiently or cost effectively on a quantum computer versus a conventional one. Quantum Advantage will require improved quantum circuits, the building blocks of quantum applications. Quantum Volume, notes IBM, measures the length and complexity of circuits – the higher the Quantum Volume, the higher the potential for exploring solutions to real world problems across industry, government, and research.

To achieve its Quantum Volume milestone, the company focused on a new set of techniques and improvements that used knowledge of the hardware to optimally run the Quantum Volume circuits. These hardware-aware methods are extensible and will improve any quantum circuit run on any IBM Quantum system, resulting in improvements to the experiments and applications which users can explore. These techniques will be available in upcoming releases and improvements to the IBM Cloud software services and the cross-platform open source software development kit (SDK) Qiskit. The IBM Quantum team has shared details on the technical improvements made across the full stack to reach Quantum Volume 64 in a preprint released on arXiv.

What is most exciting is that the latest quantum happenings are things you can access over the cloud without having to cool your data center to near-zero Kelvin temperatures. If you try any of these, DancingDinosaur would love to hear how it goes.

Alan Radding, a veteran information technology analyst, writer, and ghost-writer, is DancingDinosaur. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/.

Here Comes POWER10

August 26, 2020

Early in my writing about computers Intel began regularly introducing a series of x86 processors, including one called Pentium. Another IT analyst friend was drooling over his purchase of a laptop built on the new Pentium. “This is a mainframe in a laptop!” he exclaimed. It wasn’t but sounded exciting.

IBM POWER10

IBM’s latest technology announcement now is the new POWER10, expected in the second half of 2021. According to IBM’s announcement, the new processor delivers 3X performance based on pre-silicon engineering analysis of Integer, Enterprise, and Floating Point environments on a POWER10 dual socket server offering with 2×30-core modules vs a POWER9 dual socket server offering with 2×12-core modules. More power for sure, but you won’t find DancingDinosaur, apologies to my old friend, even suggesting this is comparable to a mainframe. 

The IBM POWER10 was designed for enterprise hybrid cloud computing. The POWER10 uses a design focused on energy efficiency and performance in a 7nm form factor, fabricated by Samsung,  with an expected improvement of up to 3x greater processor energy efficiency, workload capacity, and container density than the current POWER9 processor.

This is a processor intended for today’s increasingly complex hybrid cloud workloads. To that end, IBM packed the processor with innovations, including:

    • IBM’s First Commercialized 7nm Processor that is expected to deliver up to a 3x improvement in capacity and processor energy efficiency within the same power envelope as IBM POWER9, allowing for greater performance.
    • Support for Multi-Petabyte Memory Clusters with a new technology called Memory Inception, designed to improve cloud capacity and economics for memory-intensive workloads. Memory Inception enables any of the IBM POWER10 processor-based systems to share memory and drive cost and energy savings.
    • New Hardware-Enabled Security Capabilities including transparent memory encryption designed to support end-to-end security. IBM engineered the POWER10 processor to achieve significantly faster encryption performance with quadruple the number of AES encryption engines per core compared to IBM POWER9 for today’s most demanding standards and anticipated future cryptographic standards like quantum-safe cryptography and fully homomorphic encryption (FHE), which lets you perform computation directly on the data wherever it lands while it remains encrypted (a toy sketch of the homomorphic idea follows this list). Sounds ideal for hybrid clouds. It also brings new enhancements to container security.
    • New Processor Core Architectures in the IBM POWER10 processor with an embedded Matrix Math Accelerator, which is extrapolated to provide 10x, 15x, and 20x faster AI inference for FP32, BFloat16, and INT8 calculations per socket, respectively, than the IBM POWER9 processor, while improving performance for enterprise AI inference workloads.
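
Fully homomorphic encryption itself is far too heavy to demonstrate in a blog post, and POWER10’s contribution is hardware acceleration rather than a new scheme. But the underlying idea, computing on data that never gets decrypted, can be illustrated with the old observation that textbook RSA is multiplicatively homomorphic. A toy sketch with deliberately tiny, insecure numbers:

```python
# Toy illustration of homomorphic computation (NOT FHE and NOT secure):
# textbook RSA multiplies "through" the encryption, so the product of two
# ciphertexts decrypts to the product of the plaintexts. FHE generalizes
# this to arbitrary computation; POWER10 aims to accelerate such workloads.
p, q = 61, 53
n = p * q            # 3233
e, d = 17, 2753      # e * d = 1 mod (p - 1) * (q - 1)

def enc(m):
    return pow(m, e, n)

def dec(c):
    return pow(c, d, n)

c1, c2 = enc(7), enc(6)
product_ciphertext = (c1 * c2) % n  # multiply without ever decrypting
print(dec(product_ciphertext))      # 42, i.e. 7 * 6
```

Real FHE schemes, and the quantum-safe algorithms POWER10 also targets, are vastly more computationally expensive, which is exactly why hardware acceleration matters.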

Designed over five years with hundreds of new and pending patents, the IBM POWER10 processor packs all of these capabilities (multi-petabyte memory clustering via Memory Inception, hardware-enabled security with transparent memory encryption, and the Matrix Math Accelerator cores) into a single chip aimed at infusing AI into business applications.

With the 7nm processor not shipping until the second half of 2021, you have time to think about this. IBM has not mentioned pricing or packaging. As Stephen Leonard, GM of IBM Cognitive Systems, notes, POWER10 advances “our stated goal of making Red Hat OpenShift the default choice for hybrid cloud,” adding that “IBM POWER10 brings hardware-based capacity and security enhancements for containers to the IT infrastructure level.” Translation: POWER10 won’t come cheap.

With only a vague shipping date and no hint of pricing and packaging, you have time to think about fitting POWER10 into your plans.

Alan Radding, a veteran information technology analyst, writer, and ghost-writer, is DancingDinosaur. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/.

5G Will Accelerate a New Wave of IoT Applications and Z

August 10, 2020

Even before the advent of 5G, DancingDinosaur, which had ghostwritten a top book on IoT, believed that IoT and smartphones would lead back to the Z eventually, somehow. Maybe the arrival of 5G and smart edge computing might slow the path to the Z. Or maybe not.

Even transactions and data originating and being processed at the edge will need to be secured, backed up, stored, distributed to the cloud, to other servers and systems, to multiple clouds, on premises, and further  processed and reprocessed in numerous ways. Along the way, they will find their ways back to a Z somehow and somewhere, sooner or later.

an edge architecture

“5G is driving change in the Internet of Things (IoT). It’s a powerful enabling technology for a new generation of use cases that will leverage edge computing to make IoT more effective and efficient,” write Rishi Vaish and Sky Matthews. Rishi Vaish is CTO and VP, IBM AI Applications; Sky Matthews is CTO, Engineering Lifecycle Management at IBM. DancingDinosaur completely agrees, adding only that it won’t just stop there.

Vaish and Matthews continue: “In many ways, the narrative of 5G is the interaction between two inexorable forces: the rise in highly reliable, high-bandwidth communications, and the rapid spread of available computing power throughout the network. The computing power doesn’t just end at the network, though. End-point devices that connect to the network are also getting smarter and more powerful.” 

True enough, the power does not just end there; neither does it start there. There is a long line of powerful systems, the z15 and the generations of Z before it, that handle and enhance everything that happens in whatever ways are desired, either at that moment or, as is often the case, later.

And yes, there will be numerous ways to create comparable services using similarly smart and flexible edge devices. But experience has shown that it takes time to work out the inevitable kinks that invariably will surface, often at the least expected and most inopportune moment. Think of it as just the latest manifestation of Murphy’s Law moved to the edge and 5G.

The increasingly dynamic and powerful computational environment that’s taking shape as telcos begin to redesign their networks for 5G will accelerate the uptake of IoT applications and services throughout industry,  Vaish and Matthews continue. We expect that 5G will enable new use cases in remote monitoring and visual inspection, autonomous operations in large-scale remote environments such as mines, connected vehicles, and more.

This rapidly expanding range of computing options, they add,  requires a much more flexible approach to building and deploying applications and AI models that can take advantage of the most cost-efficient compute resources available.

IBM chimes in: There are many ways that this combination of 5G and edge computing can enable new applications and new innovations in various industries. IBM and Verizon, for example, are developing potential 5G and edge solutions like remote-controlled robotics, near real-time video analysis, and other kinds of factory-floor automation.

The advantage comes from smart 5G edge devices doing the analytics immediately, at the spot where decisions may be best made. Are you sure that decisions made at the edge immediately are always the best? DancingDinosaur would like to see a little more data on that.

In that case, don’t be surprised to discover that there will be other decisions that benefit from being made later, with the addition of other data and analysis. There is too much added value and insight packed into the Z data center to not take advantage of it.

Alan Radding, a veteran information technology analyst, writer, and ghost-writer, is DancingDinosaur. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/.

AI Hardware Accelerators and AI Models

August 3, 2020

At times it seems like IBM is just dallying with AI, but at a late July briefing IBM showed just how seriously it is pursuing AI and how difficult the challenge is. It starts with today’s computers and supercomputers, essentially bit processors. Can we say that’s the easy stuff? At least it’s the stuff we are most familiar with.

Neurons come next in the IBM scheme. Here biology and information are tapped for today’s rudimentary AI systems. Next you throw in qubits, which combine physics and information. Now we’re entering the realm of quantum machines.

DancingDinosaur is mesmerized by quantum computing but only understands it at the level of his 40-year-old physics course.

Where all this is going is not toward some mesmerizing future of quantum systems dazzling us with nearly instant solutions to seemingly impossible problems. No, it seems, at one level, more prosaic than that, according to Jeffrey Burns, IBM’s Director of AI Compute.

As Burns puts it: IBM is building the future of computing. That future, he continues, is a pipeline of innovation for the future of Hybrid Cloud and AI. We should have known that, except various IBMers have been saying it for several years at least, and it just sounded too simple.

Burns breaks it down into four areas: Core Technology, Innovation for AI, Innovation for Hybrid Cloud, and Foundational Scientific Discovery. 

It is tempting to jump right to the Foundational Scientific Discovery stuff; that’s the sexy part. It includes new devices and materials, breakthrough data communications, computational, secured storage, and  persistent memory architectures.

At the other end is what Burns calls core technology. This encompasses semiconductor devices, processor architecture,  novel memory, MRAM, and advanced packaging.

Among the innovations for AI are AI hardware, real-time AI for transaction processing, and hardware and software for federated AI learning to enhance security and privacy.

Finally, there are innovations for hybrid cloud. These include Red Hat RHEL integration,  storage and data recovery,  high speed networking,  security, and heterogeneous system architecture for hybrid cloud.  

But AI and Hybrid Cloud can advance only as far as hardware can take them, notes Burns. The processing demands at even the first two steps are significant. For example, image recognition training with a dataset of 22K requires 4 GPUs, takes 16 days, and consumes 385 kWh. If you want it faster, you can throw 256 GPUs at it for 7 hours, which still consumes 450 kWh. Or think of it another way, he suggests: a single model training run eats the equivalent of about two weeks of home energy consumption.
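
Burns’s home-energy comparison holds up to a quick back-of-the-envelope check; the household figure below (about 30 kWh per day) is DancingDinosaur’s assumption, not IBM’s:

```python
# Back-of-the-envelope check on Burns's training-energy figures. The
# household consumption number is an assumption (about 30 kWh per day,
# a rough U.S. average), not part of IBM's briefing.
train_4gpu_kwh = 385     # 4 GPUs for 16 days
train_256gpu_kwh = 450   # 256 GPUs for 7 hours
home_kwh_per_day = 30    # assumed typical household draw

print(train_4gpu_kwh / home_kwh_per_day)    # ~12.8 days, about two weeks
print(train_256gpu_kwh / home_kwh_per_day)  # ~15 days: faster run, more energy
print(4 * 16 * 24, "GPU-hours vs", 256 * 7, "GPU-hours")  # 1536 vs 1792
```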

And we’ve just been talking about narrow AI. Broad AI, Burns continues, brings even more computational demands and greater functionality requirements at the edge. If you’re thinking of trying this with your data center, none of this is trivial. Last year IBM invested $2B to create an Artificial Intelligence Hardware Center. Twelve organizations have joined it and it continues to grow, Burns reports. You’re welcome to join.

IBM’s idea is to innovate and lead in AI accelerators for training and inferencing, leverage partnerships to drive AI leadership from materials and devices through software, and generate AI application demonstrators with an industry leading roadmap. 

Here is where Burns wants to take it: extend performance by 2.5x per year through 2025 by applying approximate computing principles to digital AI cores with reduced precision, as well as analog AI cores (remember analog? Burns sees it playing a big energy-saving role), which could potentially offer another 100x in energy efficiency.
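
Reduced precision is the same trick mainstream inference already uses: store and multiply weights in a narrower format and accept a tiny approximation error in exchange for much less data movement and energy. A NumPy sketch of simple symmetric INT8 weight quantization, purely illustrative of the principle rather than of IBM’s cores:

```python
# Illustrative reduced-precision (INT8) weight quantization with NumPy.
# Real digital and analog AI cores do this in hardware; the point here is
# that the approximation error is usually tiny for inference workloads.
import numpy as np

rng = np.random.default_rng(0)
w = rng.standard_normal((256, 256)).astype(np.float32)  # FP32 weights
x = rng.standard_normal(256).astype(np.float32)         # activation vector

scale = np.abs(w).max() / 127.0                          # symmetric per-tensor scale
w_int8 = np.clip(np.round(w / scale), -127, 127).astype(np.int8)

y_fp32 = w @ x                                           # full-precision result
y_int8 = (w_int8.astype(np.float32) * scale) @ x         # dequantized INT8 result

rel_err = np.linalg.norm(y_fp32 - y_int8) / np.linalg.norm(y_fp32)
print(f"relative error from INT8 weights: {rel_err:.4%}")  # typically well under 1%
```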

If you want to try your hand at AI at this level, DancingDinosaur would love to know and throw some digital ink your way.

Alan Radding, a veteran information technology analyst, writer, and ghost-writer, is DancingDinosaur. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/

IBM Elastic Storage System 5000–AI Utility Storage Option

July 27, 2020

The newest storage from IBM is the Elastic Storage System 5000. It promises, as you would expect, leading performance, density, and scalability, but that’s not the most interesting part of IBM’s July and August storage offerings. In conjunction with the new storage hardware, IBM is tapping AI through IBM Spectrum Storage, a familiar product, to create a smart utility storage option. This allows you to define your current and future storage needs on day one, deploy it all that day while paying only for the capacity you are actually using, and activate more only when you need it, paying for it only once you have activated it.

AI Elastic Storage Systems via Spectrum 

Will that save you much money? Maybe, but what it mainly will save you is time. That comes from not having to go through the entire process of ordering and installing the extra capacity. DancingDinosaur guesses it will save you money if you usually over-ordered what you needed initially and paid for it then. IBM says some customers actually do that, but certainly there is no longer any good reason to pay for capacity in advance.

Yes, data volumes continue to grow at explosive rates. And yes, a prudent IT manager does not want to suddenly find the company in a position when a lack of sufficient storage is constraining the performance of critical applications. 

But a prudent data center manager should never be in that position. DancingDinosaur was always taught that the selective use of data compression can free up some top-tier storage capacity, even on very short notice. And who doesn’t also have at least a few old storage arrays hanging around that can be put back into use to ease a sudden crunch, even if only briefly? OK, it won’t be the fastest or best storage, but it could work for a short time at least.

Of course, IBM puts a somewhat different spin on the situation. It explains: For the last 30 years, the standard method has been for organizations to calculate their current application capacity needs and then guess the rest—hoping the guess is enough to meet future needs. The organization then works up an RFQ, RFP or some other tortuous procurement method to get the capacity they need as quickly as possible. The organization then invites all the vendors it knows and some it doesn’t to pitch their solution—at which point the organization usually finds it will need much more capacity than originally thought—or budgeted. 

And IBM continues: Then, only a few months later, the organization realizes its needs have changed—and that what it originally requested is no longer adequate. A year past the initial start, your new capacity is finally in place—and the organization hopes it won’t have to go through the same process again next year. 

Does that sound like you? DancingDinosaur hopes not, at least not in 2020. 

Maybe it does, if your storage environment is large, one with more than 250 TB of storage, and growing, IBM notes. And depending on how you specified it initially, additional capacity through its elastic storage program is instantly available by simply provisioning what you need. From a billing standpoint, it allows you to move what would otherwise be high upfront costs to a predictable quarterly charge directly related to your business activity.
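
To make the model concrete, here is a purely hypothetical comparison; the starting size, growth rate, and up-front guess are invented for illustration, since IBM has not published pricing here:

```python
# Purely hypothetical illustration of utility-style storage billing. The
# starting size, growth rate, and up-front guess are invented; IBM has not
# published pricing for the elastic option in this announcement.
start_tb, growth_per_quarter, quarters = 250, 0.15, 12
provisioned_tb = 1000  # what a three-year up-front capacity guess might buy

used_tb, utility_tb_quarters = float(start_tb), 0.0
for _ in range(quarters):
    utility_tb_quarters += used_tb       # utility model: pay only for activated TB
    used_tb *= 1 + growth_per_quarter

upfront_tb_quarters = provisioned_tb * quarters  # owned whether used or not

print(f"paid for up front:      {upfront_tb_quarters:,.0f} TB-quarters")
print(f"paid for as activated:  {utility_tb_quarters:,.0f} TB-quarters")
print(f"actual footprint after three years: {used_tb:,.0f} TB")
```

In this made-up case the up-front guess both overpays in the early quarters and still comes up short by year three, which is exactly the scenario the utility model is pitched against.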

DancingDinosaur has long felt that IBM Spectrum Storage delivered a nifty set of capabilities even before AI became a current fad. If you can take advantage of it to shortcut the storage acquisition and provisioning process while holding onto a few bucks for a little longer, what’s not to like?

Alan Radding, a veteran information technology analyst, writer, and ghost-writer, is DancingDinosaur. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/

