Posts Tagged ‘Power Systems’

Supercomputers Battle COVID-19

May 22, 2020

When the world finally defeats the coronavirus and COVID-19, a small part in the victory will go to massive computer power. As Dario Gil, Director of IBM Research, noted, 16 systems with more than 400 petaflops, 775,000 CPU cores, 34,000 GPUs, and counting are among the firepower.

Back in March DancingDinosaur reported here that IBM’s Summit, which the company was boasting as the world’s most powerful supercomputer, was able to simulate 8,000 chemical compounds in a matter of days in a hunt for something that could impact the COVID-19 infection.

Writing this today, late in May, we already know that teams of medical researchers, scientists, technology experts, and a vast array of talents are working almost non-stop to find, develop, test, and mass produce a cure, with luck in the form of a vaccine. We should also note all the countless nurses, doctors, aides, assistants and various hospital and food and logistics staff of all types and outside support roles who are involved in keeping things working, feeding staff, wheeling patients around, and otherwise helping to save lives.

As Gil explains: high-performance computing systems allow researchers to run very large numbers of calculations in epidemiology, bioinformatics, and molecular modeling–all the required science disciplines that need to be involved in whatever success is ultimately achieved. You can probably throw in chemistry and a few other areas of electronics and engineering as well. Without massive computer horsepower these experiments would take years to complete if worked by hand, or months if handled on slower, traditional computing platforms.

These machines—more than 25 U.S.-based supercomputers with more than 400 petaflops of computing power—are now available for free to those working toward a vaccine or treatment against the virus, through the COVID-19 High Performance Computing Consortium.

It was created with government, academia and industry—including competitors, working side by side. IBM is co-leading the effort with the U.S. Department of Energy, which operates the National Laboratories of the United States. Google, Microsoft, Amazon, and Hewlett Packard Enterprise have joined, as well as NASA, the National Science Foundation, Pittsburgh Supercomputing Center, and six National Labs—Lawrence Livermore, Lawrence Berkeley, Argonne, Los Alamos, Oak Ridge, and Sandia, and others. And then there are academic institutions, including MIT, Rensselaer Polytechnic Institute, the University of Texas, Austin, and the University of California, San Diego.

The White House has been getting deservedly bashed for its slowness, incompetence, and narrow-minded bungling. However, Gil reports the White House’s Office of Science and Technology Policy has taken up an effort that can make a real difference. He adds: “I want to offer this promise: IBM will continue to explore everything in our power to use our technology and expertise to drive meaningful progress in this global fight.”

The first thing Gil’s team did was to spread the word to people who might be working on this on any or all fronts—from drug discovery and development with AI-led simulations to genomics, epidemiology and health systems.

He goes on: We need to understand the whole life cycle of this virus, all the gearboxes that drive it—how it encounters and infects the host cell and replicates inside it—and then prevent it from producing viral particles. We need to know the molecular components, the proteins involved in the virus’ biochemistry, and then to use computational modeling to see how we can interrupt the cycle. That’s the standard scientific methodology of drug discovery, but we want to amplify it and speed it up.

The virus has been exploding in humans for months, providing an abundance of samples for computer modeling and analysis, Gil continued. Scientists already are depositing samples into public data sources such as GenBank and Protein Data Bank. There are many unknowns and assumptions but a lot of proposals involve using the available protein structures to come up with potential molecular compounds that could lead to a therapeutic treatment or a vaccine. Let’s hope they have great success, the sooner the better.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at

IBM Power9 Certified for SAP HANA Enterprise Cloud

April 28, 2020

SAP HANA has again this year been designated a top performer in the cloud-native, multi-tenant business intelligence segment by Gartner. Driving its popularity is the broad interest in its wide base of SAP enterprise applications and the SAP Analytics Cloud, a cloud-native, multi-tenant platform with a broad set of analytic capabilities.

Behind the SAP Cloud, increasingly, are IBM POWER9 servers. The SAP-managed, private cloud environment runs on IBM POWER9 systems, specifically the E980, which brings the industry’s largest virtualized server scalability at 24 TB, more than enough for even the largest SAP HANA database applications to run in memory, where they experience the best performance. In truth, most HANA users don’t require 24 TB, but it can be there if they need it.


IBM Power E980

IBM Power Systems has been certified for the SAP HANA Enterprise Cloud as a critical infrastructure platform provider for large in-memory usage. The goal is to simplify the IT infrastructure for the managed, private cloud environment. The service will run on IBM POWER9-based Power Systems E980 servers, which offer the industry’s largest virtualized server scalability for the HANA database. The E980 server lineup starts as small as 2 sockets and runs up to 16 sockets. 

The IBM POWER9, notes IBM, more than provides the IT infrastructure for this mission-critical managed environment. It is the basis of a scalable and secure service designed to accelerate a user’s evolution on the path to cloud readiness, explains Vicente Moranta, Vice President, Offering Management for IBM’s Enterprise Linux on Power Systems. It provides capabilities that span the software and hardware stack through a comprehensive menu of functional and technical services, with the level of control in the SAP cloud that clients should expect on premises, all in one privately SAP-managed environment.

SAP HANA Enterprise Cloud users can take advantage of the firmware-based virtualization built into the IBM POWER platform as PowerVM, a virtualization engine implemented at the firmware level. PowerVM delivers strong isolation, avoiding the noisy-neighbor problem in which multiple clients on the same box interfere with one another’s performance. It does this through micro-partitions and other advanced features. As a result, it delivers the largest SAP HANA scalability in a scale-up system.

This combination is the result of a three-year collaboration between IBM Power Systems and SAP to provide virtualization on demand via hypervisor-defined features. These features give an SAP HANA LPAR the ability to match what a client wants, effectively avoiding long acquisition cycles and wasteful over-provisioning. In effect, it provides accurately configured virtual systems on demand, with fine granularity, and avoids the need for SAP users to revert to bare metal servers due to virtualization issues. SAP manages this work itself through POWER9 to achieve optimum performance.

The latest 2019 Information Technology Intelligence Consulting (ITIC) Reliability Update polled over 800 corporations from July through early September 2019. The study compared the reliability and availability of over a dozen of the most widely deployed mainstream server platforms. Among the mainstream server distributions, IBM’s Power Systems, led by the POWER9, topped the field, registering a record low of 1.75 minutes of per-server downtime. Each of the mainstream servers studied delivered a solid five nines (99.999%) of inherent hardware reliability.

Not surprisingly, one server beat them all: the IBM Z mainframe delivered what ITIC called true fault tolerance, with six nines (99.9999%) uptime for 89% of enterprise users. That translates into 0.74 second of per-server downtime due to any inherent flaws in the server hardware. Just imagine how much you could accomplish in that 0.74 second.
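For readers who want to see how the nines translate into downtime, here is a quick back-of-the-envelope sketch in Python. Note that ITIC’s figures above are measured per server over the survey period, so they won’t match a simple per-year calculation exactly; the function and numbers below are purely illustrative.

```python
def downtime_minutes(availability, period_minutes=365.25 * 24 * 60):
    """Expected downtime over a period (default: one year) at a given availability."""
    return (1 - availability) * period_minutes

five_nines = downtime_minutes(0.99999)    # five nines over one year
six_nines = downtime_minutes(0.999999)    # six nines over one year

print(f"Five nines: {five_nines:.2f} minutes/year")      # about 5.26 minutes
print(f"Six nines:  {six_nines * 60:.1f} seconds/year")  # about 31.6 seconds
```

Each extra nine cuts the allowable downtime by a factor of ten, which is why the jump from five to six nines matters so much to enterprise shops.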

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at 

Mayflower Autonomous Ship

March 31, 2020

Growing up in Massachusetts, DancingDinosaur was steadily inundated with historical milestones: the Boston Massacre, Pilgrims landing at Plymouth Rock, we even had a special school holiday, Evacuation Day. It is only celebrated in Boston, to commemorate the day the colonials forced the British out of Boston. This year commemorates the 400th anniversary of the Mayflower’s arrival, which evolved into Thanksgiving and subsequently turned into a great day for high school football games.

IBM took the occasion as an opportunity to build a completely autonomous ship and sail it, unmanned, from England to Massachusetts in September 2020 to mark that anniversary. The project, dubbed the Mayflower Autonomous Ship (MAS), was launched as an occasion for IBM to introduce IBM Edge Computing in a dramatic way.

IBM defines edge computing as decentralized data and application processing across hundreds to millions of endpoints residing outside of a traditional datacenter or public cloud. MAS relies on IBM’s Edge Computing.

“You take the human factor out of ships and it allows you to completely reimagine the design. You can focus purely on the mechanics and function of the ship,” writes Brett Phaneuf, Managing Director of MAS. The idea was to create an autonomous and crewless vessel that would cross the Atlantic, tracing the route of the original 1620 Mayflower and performing vital research along the way.

For MAS to survive the voyage, he continued, the team opted for a trimaran design, which is both hydro- and aerodynamic. Using aluminum and composite materials, MAS will be lightweight, about 5 tons and 15 meters in length. That’s half the length and less than 3 percent of the weight of the original Mayflower, which took almost two months for a voyage that the MAS team planned to complete in less than two weeks.

For power, MAS will use solar panels to charge on-board batteries, which will power MAS’s motor – even at night. A single wingsail will allow MAS to harness wind power as well as make it more visible to other ships. MAS will be able to clock speeds of around 20 knots, compared to the original Mayflower’s 2.5 knots.

When it comes to modern technologies, the original Mayflower used a ship’s compass for navigation. To measure speed, it towed a ‘log-line’ – a wooden board attached to a hemp line with knots tied in it at uniform intervals (hence the term ‘knots’ still used to measure a ship’s speed today).

MAS, however, will have a state-of-the-art inertial navigation and precision GNSS positioning system. It will have a full suite of the latest oceanographic and meteorological instruments, a satellite communications system, and 2D LIDAR and RADAR sensors.

With no crew on board, MAS needs to make its own decisions at sea. MAS’ mission control system will be built on IBM Power Systems servers. MAS is currently using real data from Plymouth Sound to train IBM PowerAI Vision technology to recognize ships, debris, whales and other hazards which come into view on MAS’s on-board video cameras.

When a hazard is detected, MAS will use IBM Operational Decision Manager software to decide what to do. It may change course, or, in case of emergencies, speed out of the way by drawing additional power from its on-board back-up generator. Connectivity in the middle of the Atlantic is patchy, so MAS will use edge devices on board to store and process data locally when need be. Every time it gets  a connection, the ship will connect to the IBM Cloud and put the systems back into sync.
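The store-and-forward pattern described here is simple to sketch. The class below is purely illustrative (the names `EdgeBuffer`, `record`, and `sync` are invented for this sketch, not part of any IBM edge computing API): readings are always written locally first, and the backlog is pushed upstream only when a connection happens to be available.

```python
class EdgeBuffer:
    """Minimal store-and-forward sketch: hold readings locally,
    sync to the cloud whenever a connection is available."""

    def __init__(self):
        self.local = []   # on-board storage on the edge device
        self.cloud = []   # stands in for the cloud-side copy

    def record(self, reading):
        self.local.append(reading)   # always store locally first

    def sync(self, connected):
        if connected and self.local:
            self.cloud.extend(self.local)  # push the backlog upstream
            self.local.clear()             # local and cloud are back in sync
```

Mid-ocean, `sync` is a no-op; the moment connectivity returns, the accumulated readings flow to the cloud and the two copies converge, which is the behavior described above.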

MAS will carry three research pods that carry scientific instrumentation to ensure scientists can gather the data they need to understand and protect the ocean, especially in the face of threats from pollution and global warming.

By leveraging AI, machine learning, and other new technologies IBM hopes it will start a new era of marine exploration. Through the University of Birmingham’s Human Interface Technologies Team, MAS plans to open the experience of the mission to millions of other ‘virtual pilgrims’ around the world via a mixed reality experience that uses the latest Virtual and Augmented Reality technologies. Bon Voyage!

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at 

Power9 Summit Fights COVID-19

March 16, 2020

IBM has unleashed its currently top-rated supercomputer, Summit, to simulate 8,000 chemical compounds in a matter of days in a hunt for something that will impact the COVID-19 infection process by binding to the virus’s spike, a key early step in coming up with an effective vaccine or cure. In the first few days Summit already identified 77 small-molecule compounds, such as medications and natural compounds, that have shown the potential to impair COVID-19’s ability to dock with and infect host cells.

POWER9 Summit Supercomputer battles COVID-19


The US Dept of Energy turned to the IBM Summit supercomputer to help in the fight against COVID-19 that appears almost unstoppable as it has swept through 84 countries on every continent except Antarctica, according to IBM. The hope is that by quickly culling the most likely initial chemical candidates, the lab researchers could get an early jump on the search for an effective cure.

As IBM explains it, viruses infect cells by binding to them and using a ‘spike’ to inject their genetic material into the host cell. When trying to understand new biological compounds, like viruses, researchers in wet labs grow the micro-organism and see how it reacts in real life to the introduction of new compounds, but this can be a slow process without computers that can perform fast digital simulations to narrow down the range of potential variables. And even then there are challenges.

Computer simulations can examine how different variables react with different viruses, but when each of those variables can comprise millions or even billions of unique pieces of data, compounded by the need to run multiple simulations, this isn’t trivial. Very quickly it can become a very time-intensive process, especially if you are using commodity hardware.

But, IBM continued, by using Summit, researchers were able to simulate 8,000 compounds in a matter of days to model which ones might impact that infection process by binding to the virus’s spike. As of last week, they have identified dozens of small-molecule compounds, such as medications and natural compounds, that have shown the potential to impair COVID-19’s ability to dock with and infect host cells.

“Summit was needed to rapidly get the simulation results we needed. It took us a day or two whereas it would have taken months on a normal computer,” said Jeremy Smith, Governor’s Chair at the University of Tennessee, director of the UT/ORNL Center for Molecular Biophysics, and principal researcher in the study. “Our results don’t mean that we have found a cure or treatment for COVID-19. But we are very hopeful that our computational findings will both inform future studies and provide a framework that subsequent researchers can use to further investigate these compounds. Only then will we know whether any of them exhibit the characteristics needed to mitigate this virus.”

After the researchers turn over the most likely possibilities to the medical scientists they are still a long way from finding a cure.  The medical folks will take them into the physical wet lab and do whatever they do to determine whether a compound might work or not.  

Eventually, if they are lucky,  they will end up with something promising, which then has to be tested against the coronavirus and COVID-19. Published experts suggest this can take a year or two or more. 

Summit gave the researchers a jump start with its massive data processing capability, enabled through its 4,608 IBM Power Systems AC922 server nodes, each equipped with two IBM POWER9 CPUs and six NVIDIA Tensor Core V100 GPUs, giving it a peak performance of 200 petaflops, in effect more powerful than one million high-end laptops.
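The node counts above imply some impressive totals, which a couple of lines of Python make concrete (the per-GPU figure simply divides the peak evenly across the GPUs, a rough approximation):

```python
nodes = 4608          # AC922 server nodes in Summit
cpus = nodes * 2      # two POWER9 CPUs per node
gpus = nodes * 6      # six V100 GPUs per node
peak_pflops = 200     # peak performance in petaflops

print(cpus)                          # total CPUs: 9216
print(gpus)                          # total GPUs: 27648
print(peak_pflops * 1000 / gpus)     # roughly 7.2 teraflops per GPU
```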

Might quantum computing have sped up the process even more? IBM didn’t report throwing one of its quantum machines at the problem, relying instead on Summit, which has already been acclaimed as the world’s fastest supercomputer.

Nothing stays the same in the high performance computing world. HEXUS reports that when time is of the essence and lives are at stake, the value of supercomputers is highly evident. Now a new one, touted as the world’s first 2-exaflops-plus supercomputer, is set to begin operations in 2023. This AMD-powered giant, HEXUS notes, is claimed to be about 10x faster than Summit. That’s good to know, but let’s hope the medical researchers have already beaten the coronavirus and COVID-19 by then.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at

Syncsort Drives IBMi Security with AI

May 2, 2019

The technology security landscape looks increasingly dangerous. The problem revolves around the impact of AI, which is not yet fully clear. The hope, of course, is that AI will make security more efficient and effective. However, the security bad actors can also jump on AI to advance their own schemes. Like a cyber version of the nuclear arms race, this has been an ongoing battle for decades. The industry has to cooperate, specifically by sharing information, and hope the good guys can stay a step ahead.

In the meantime, vendors like IBM and, most recently, Syncsort have been stepping up to the latest challenges. Syncsort, for example, earlier this month launched its Assure Security to address the increasing sophistication of cyber attacks and expanding data privacy regulations. In surprising ways, it turns out, data privacy and AI are closely related in the AI security battle.

Syncsort, a leader in Big Iron-to-Big Data software, announced Assure Security, which combines access control, data privacy, compliance monitoring, and risk assessment into a single product. Together, these capabilities help security officers, IBMi administrators, and Db2 administrators address critical security challenges and comply with new regulations meant to safeguard and protect the privacy of data.

And it clearly is coming at the right time. According to Privacy Rights Clearinghouse, a non-profit corporation with a mission to advocate for data privacy, there were 828 reported security incidents in 2018, resulting in the exposure of over 1.37 billion records of sensitive data. As regulations to help protect consumer and business data become stricter and more numerous, organizations must build more robust data governance and security programs to keep the data from being exploited by bad security actors for nefarious purposes. The industry already has scrambled to comply with GDPR and the New York Department of Financial Services Cybersecurity regulations, and it now must prepare for the GDPR-like California Consumer Privacy Act, which takes effect January 1, 2020.

In its own survey Syncsort found security is the number one priority among IT pros with IBMi systems. “Given the increasing sophistication of cyber attacks, it’s not surprising 41 percent of respondents reported their company experienced a security breach and 20 percent more were unsure if they even had been breached,” said David Hodgson, CPO, Syncsort. The company’s new Assure Security product leverages the wealth of IBMi security technology and the expertise to help organizations address their highest-priority challenges. This includes protecting against vulnerabilities introduced by new, open-source methods of connecting to IBMi systems, adopting new cloud services, and complying with expanded government regulations.

Of course, IBM hasn’t been sleeping through this. The company continues to push various permutations of Watson to tackle the AI security challenge. For example, IBM leverages AI to gather insights and use reasoning to identify relationships between threats, such as malicious files, suspicious IP addresses,  or even insiders. This analysis takes seconds or minutes, allowing security analysts to respond to threats up to 60 times faster.

It also relies on AI to eliminate time-consuming research tasks and provides curated analysis of risks, which reduces the amount of time security analysts require to make the critical decisions and launch an orchestrated response to counter each threat. The result, which IBM refers to as cognitive security, combines the strengths of artificial intelligence and human intelligence.

Cognitive AI, in effect, learns with each interaction to proactively detect and analyze threats, providing actionable insights that help security analysts make informed decisions. Such cognitive security, let’s hope, combines the strengths of artificial intelligence with human judgment.

Syncsort’s Assure Security specifically brings together best-in-class IBMi security capabilities acquired by Syncsort into an all-in-one solution, with the flexibility for customers to license individual modules. The resulting product includes:

  • Assure Compliance Monitoring quickly identifies security and compliance issues with real-time alerts and reports on IBMi system activity and database changes.
  • Assure Access Control provides control of access to IBMi systems and their data through a varied bundle of capabilities.
  • Assure Data Privacy protects IBMi data at-rest and in-motion from unauthorized access and theft through a combination of NIST-certified encryption, tokenization, masking, and secure file transfer capabilities.
  • Assure Security Risk Assessment examines over a dozen categories of security values, open ports, power users, and more to address vulnerabilities.
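Masking and tokenization, mentioned in the Assure Data Privacy bullet, are easy to illustrate. The snippet below is a generic sketch of the two ideas, not Syncsort code (the names `mask` and `Tokenizer` are invented for this example): masking hides all but the trailing characters of a value, while tokenization swaps the value for a random stand-in and keeps the real value in a protected vault.

```python
import secrets

def mask(value, visible=4, fill="*"):
    """Hide all but the last `visible` characters of a sensitive value."""
    return fill * (len(value) - visible) + value[-visible:]

class Tokenizer:
    """Replace sensitive values with random tokens; real values stay in a vault."""

    def __init__(self):
        self._vault = {}   # token -> original value (would be protected storage)

    def tokenize(self, value):
        token = secrets.token_hex(8)   # random stand-in, no relation to the value
        self._vault[token] = value
        return token

    def detokenize(self, token):
        return self._vault[token]      # only authorized code can reverse a token

print(mask("4111111111111111"))   # ************1111
```

The key difference: a masked value is irreversibly redacted for display, while a token can be exchanged back for the original by systems with access to the vault.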

It probably won’t surprise anyone, but the AI security situation is not going to be cleared up soon. Expect to see a steady stream of headlines around security hits and misses over the next few years. Just hope it will get easier to separate the good guys from the bad actors and that the lessons will be clear.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at

Meet SUSE Enterprise Linux Server 12

February 25, 2019

A surprising amount of competition has emerged lately for Linux on the mainframe, but SUSE continues to be at the top of the heap. Its newest release, SUSE Linux Enterprise 12, shipped last fall and should secure its position for some time to come.

SUSE touts SLE 12 as the latest version of its reliable, scalable and secure platform for efficiently deploying and managing highly available enterprise-class IT services in physical, virtual, or cloud environments. New products based on SLE 12 feature enhancements should allow for better system uptime, improved operational efficiency, and accelerated innovation. As the foundation for all SUSE data center operating systems and extensions, according to the company, SUSE Linux Enterprise meets the performance requirements of data centers with mixed IT environments while reducing the risk of technological obsolescence and vendor lock-in.

With SLE 12 the company also introduces an updated customer portal, SUSE Customer Center, to make it easier for customers to manage their subscriptions, access patches and updates, and communicate with SUSE customer support. It promises a new way to manage a SUSE account and subscriptions via one interface, anytime, anywhere.

Al Gillen, program vice president for servers and system software at IDC, said, “The industry is seeing growing movement of mission-critical workloads to Linux, with that trend expected to continue well into the future.” For Gillen, the modular design of SLE 12, as well as other mission-critical features like full system rollback and live kernel patching, helps address some of the key reservations customers express, and should help accelerate the adoption of Linux on z.

It’s about time. Linux has been available on the z for 20 years, yet only with the introduction of IBM LinuxONE a couple of years ago did IBM get serious about Linux on z.

And it didn’t stop there. Around that time IBM also ported the Go programming language to LinuxONE. Go was developed by Google and is designed for building simple, reliable, and efficient software, making it easier for developers to combine the software tools they know with the speed, security, and scale offered by LinuxONE. As expected, IBM has contributed code to the Go community.

Then IBM brought Apple’s Swift programming to the party, first in the IBM Watson iOS SDK, which gives developers a Swift API to simplify integration with many of the Watson Developer Cloud services, all of which are available today and can be integrated with just a few lines of code. Soon after Apple introduced Swift as the new language for OS X and iOS application development, IBM began partnering with Apple to bring the power of Swift open source programming to the z. This was closely tied to Canonical’s Ubuntu port to the z, which has already been released.

With SUSE Linux Enterprise Server 12 for x86_64, IBM Power Systems, and IBM System z, SUSE has boosted the platform’s versatility, able to deliver business-critical IT services in a variety of physical, virtual, and cloud environments. New features like full system rollback, live kernel patching, and software modules increase data center uptime, improve operational efficiency, and accelerate the adoption of open source innovation. SLES 12 further builds on SUSE’s leadership with Linux Containers technology and adds the Docker framework, which is now included as an integral part of the operating system.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at

IBM Enhances Storage for 2019

February 14, 2019

It has been a while since DancingDinosaur last looked closely at IBM’s storage efforts. The latest 4Q18 storage briefing actually was held on Feb. 5, 2019, followed by more storage announcements on 2/11 and 2/12. For your sake, this blog will not delve into each of these many announcements. You can, however, find them at the previous link.

Sacramento-San Joaquin River Delta–IBM RESEARCH

As IBM likes to say whenever it is trying to convey the value of data: “data is more valuable than oil.”  Maybe it is time to update this to say data is more valuable than fresh, clean water, which is quickly heading toward becoming the most precious commodity on earth.

IBM CEO Ginni Rometty says it yet another way: “80% of the world’s data, whether it’s decades of underwriting, pricing, customer experience, risk in loans… That is all with our clients. You don’t want to share it. That is gold,” maybe even more valuable than, say, fresh water. But whatever metaphor you choose to use—gold, clean water, oil, or something else you perceive as priceless—this represents to IBM the value of data. To preserve that value, this data must be economically stored, protected, made accessible, analyzed, and selectively shared. That’s where IBM storage comes in.

And IBM storage has been on a modest multi-year growth trend. Since 2016, IBM reports shipping 700 new NVMe systems, 850 VersaStack systems, 3,000 DS8880 systems, and 5,500 PB of capacity; attracting 6,800 new IBM Spectrum (virtualized) storage customers; and selling 3,000 Storwize all-flash systems, along with 12,000 all-flash arrays shipped.

The bulk of the 2/5 storage announcements fell into 4 areas:

  1. IBM storage for containers and cloud
  2. AI storage
  3. Modern data protection
  4. Cyber resiliency

Except for modern data protection, much of this may be new to Z and Power data centers. However, some of the new announcements will interest Z shops. In particular, 219-135, a statement of direction: IBM intends to deliver Managed-from-Z, a new feature of IBM Cloud Private for Linux on IBM Z. This will enable organizations to run and manage IBM Cloud Private applications from IBM Linux on Z or LinuxONE platforms. The new capability furthers IBM’s commitment to deliver multi-cloud and multi-architecture cloud-native technologies on the platform of the customer’s choice. Watson, too, will now be available on more platforms through Watson Anywhere, newly announced at Think: a version of IBM’s cognitive platform that can run Watson on premises, in IBM’s cloud, or any other cloud, be it private or public.

Another interesting addition to the IBM storage line is the FlashSystem 9100. IBM FlashSystem 9100, as IBM explains it, combines the performance of flash and Non-Volatile Memory Express (NVMe) end-to-end with the reliability and innovation of IBM FlashCore technology and the rich features of IBM Spectrum Virtualize, all packed into a 2U enterprise-class storage system. Providing intensive, data-driven, multi-cloud storage capacity, FlashSystem 9100 is deeply integrated with the software-defined (virtualized) capabilities of IBM Spectrum Storage, allowing organizations to easily add multi-cloud solutions that best support their business.

Finally, 219-029: IBM Spectrum Protect V8.1.7 and IBM Spectrum Protect Plus V10.1.3 deliver new application support and optimization for long-term data retention. Think of it this way: as the value of data increases, you will want to retain and protect more data in more ways for longer and longer. For this you will want the kind of flexible and cost-efficient storage available through Spectrum Protect.


DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at

Are Quantum Computers Even Feasible?

November 29, 2018

IBM has toned down its enthusiasm for quantum computing. Even last spring it already was backing off a bit at Think 2018. Now the company believes that quantum computing will augment classical computing to potentially open doors that it once thought would remain locked indefinitely.

First IBM Q computation center

With its Bristlecone announcement, Google trumped IBM with 72 qubits. Debating a few dozen qubits more or less, however, may prove irrelevant. A number of quantum physics researchers have recently been publishing papers that suggest useful quantum computing may be decades away.

Mikhail Dyakonov makes the case in a piece titled The Case Against Quantum Computing, which appeared last month in IEEE Spectrum. Dyakonov does research in theoretical physics at the Charles Coulomb Laboratory at the University of Montpellier, in France.

As Dyakonov explains: In quantum computing, the classical two-state circuit element (the transistor) is replaced by a quantum element called a quantum bit, or qubit. Like the conventional bit, it also has two basic states. But you already know this because DancingDinosaur covered it here and several times since.

But this is what you might not know: With the quantum bit, those two states aren’t the only ones possible. That’s because the spin state of an electron is described as a quantum-mechanical wave function. And that function involves two complex numbers, α and β (called quantum amplitudes), which, being complex numbers, have real parts and imaginary parts. Those complex numbers, α and β, each have a certain magnitude, and, according to the rules of quantum mechanics, their squared magnitudes must add up to 1.
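That normalization rule is easy to see in a few lines of code. Here is a minimal sketch, using example amplitude values of my own choosing (an equal superposition, not anything from Dyakonov's article):

```python
import math

# A qubit's state is described by two complex amplitudes, alpha and beta.
# The illustrative values below put the qubit in an equal superposition
# of its two basis states.
alpha = complex(1 / math.sqrt(2), 0)
beta = complex(0, 1 / math.sqrt(2))

# Quantum mechanics requires the squared magnitudes to sum to 1.
norm = abs(alpha) ** 2 + abs(beta) ** 2
print(round(norm, 10))  # 1.0
```

Any pair of complex numbers whose squared magnitudes sum to 1 is a valid qubit state, which is exactly why the state space is a continuum rather than two discrete values.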

Dyakonov continues: In contrast to a classical bit a qubit can be in any of a continuum of possible states, as defined by the values of the quantum amplitudes α and β. This property is often described by the statement that a qubit can exist simultaneously in both of its ↑ and ↓ states. Yes, quantum mechanics often defies intuition.

So while IBM, Google, and other providers quibble over 50 qubits or 72 or even 500, to Dyakonov this is ridiculous; the real issue is the number of continuous parameters, which will be astronomical. As he explains: Experts estimate that the number of qubits needed for a useful quantum computer, one that could compete with your laptop in solving certain kinds of interesting problems, is between 1,000 and 100,000. So the number of continuous parameters describing the state of such a useful quantum computer at any given moment must be at least 2^1,000, which is to say about 10^300. That's a very big number indeed; much greater than the number of subatomic particles in the observable universe.

Just in case you missed the math, he repeats: A useful quantum computer [will] need to process a set of continuous parameters that is larger than the number of subatomic particles in the observable universe.
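The arithmetic behind that claim is quick to check. A short sketch, taking the 1,000-qubit figure at the low end of the estimate Dyakonov quotes:

```python
import math

qubits = 1000  # low end of the estimate Dyakonov cites
# The state vector of n qubits has 2**n complex amplitudes, so the
# number of continuous parameters grows as 2**n. In base-10 terms:
exponent = qubits * math.log10(2)
print(round(exponent))  # 301 -- i.e., 2**1000 is roughly 10**300

# Estimates of subatomic particles in the observable universe run
# around 10**80, so the parameter count dwarfs it.
print(exponent > 80)  # True
```

Each extra qubit doubles the count, which is why a few dozen qubits one way or the other barely moves the argument.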

Before you run out to invest in a quantum computer with the most qubits you can buy, you would be better served joining IBM's Q Experience and experimenting with it on IBM's nickel. Let IBM wrestle with the issues Dyakonov brings up.

Then, Dyakonov concludes: I believe that such experimental research is beneficial and may lead to a better understanding of complicated quantum systems. But I'm skeptical that these efforts will ever result in a practical quantum computer. Such a computer would have to be able to manipulate—on a microscopic level and with enormous precision—a physical system characterized by an unimaginably huge set of parameters, each of which can take on a continuous range of values. Could we ever learn to control the more than 10^300 continuously variable parameters defining the quantum state of such a system? My answer is simple. No, never.

I hope my high school science teacher who enthusiastically introduced me to quantum physics has long since retired or, more likely, passed on. Meanwhile, DancingDinosaur expects to revisit quantum regularly in the coming months or even years.


IBM Takes Red Hat for $34 Billion

November 2, 2018

“The acquisition of Red Hat is a game-changer. It changes everything about the cloud market,” declared Ginni Rometty, IBM Chairman. At a cost of $34 billion, 10x Red Hat’s gross revenue, it had better be a game changer. See IBM’s announcement earlier this week here.

IBM Multicloud Manager Dashboard

IBM has been hot on the tail of the top three cloud hyperscalers—AWS, Google, and Microsoft/Azure. Will this change the game? Your guess is as good as anyone’s.

The hybrid cloud market appears to be IBM’s primary target. As the company put it: “IBM will become the world’s #1 hybrid cloud provider, offering companies the only open cloud solution that will unlock the full value of the cloud for their businesses.” IBM projects the value of the hybrid cloud market at $1 trillion within a few years!

Most companies today are only 20 percent along their cloud journey, renting compute power to cut costs. The next chapter of the cloud, noted Rometty, requires shifting business applications to hybrid cloud, extracting more data, and optimizing every part of the business.

Nobody has a lock on this market yet. Not IBM, not Red Hat, not VMware. But one thing seems clear: whoever wins will do it with open source. Red Hat, with $3 billion in open source revenue, has proven that open source can pay. The only question is how quickly it can pay back IBM's $34 billion bet.

What’s needed is something that promotes data portability and applications across multiple clouds, data security in a multi-cloud environment, and consistent cloud management. This is the Red Hat and IBM party line.  Both believe they will be well positioned to address these issues to accelerate hybrid multi-cloud adoption. To succeed at this, the new entity will have to tap their leadership in Linux, containers, Kubernetes, multi-cloud management, and automation.

IBM first brought Linux to the Z 20 years ago, making IBM an early advocate of open source and a collaborator with Red Hat in growing enterprise-class Linux. More recently the two companies worked to bring Kubernetes and hybrid cloud solutions to the enterprise. These innovations have become core technologies within IBM's $19 billion hybrid cloud business.

The initial announcement made the point that Red Hat will join IBM's Hybrid Cloud team as a distinct unit, as IBM described it, preserving the independence and neutrality of Red Hat's open source development heritage and commitment, current product portfolio, go-to-market strategy, and unique development culture. Red Hat will also continue to be led by Jim Whitehurst and its current management team.

That camaraderie lasted until the Q&A following the announcement, when a couple of disagreements arose over different answers on relatively trivial points. Are you surprised? Let's be clear: nobody spends $34 billion on a $3 billion asset and gives it a completely free hand. You can bet IBM will be calling the shots on everything it feels is important. Would you do less?

Dharmesh Thakker, a contributor to Forbes, focused more on Red Hat’s OpenShift family of development software. These tools make software developers more productive and are helping transform how software is created and implemented across most enterprises today. So “OpenShift is likely the focus of IBM’s interest in Red Hat” he observes.

A few years ago, he continued, the pendulum seemed to shift from companies deploying more-traditional, on-premises datacenter infrastructure to using public cloud vendors, mostly Amazon. In the last few years, though, most mission-critical apps inside companies have continued to run on a private cloud, modernized by agile tools and microservices to speed innovation. Private cloud represents 15-20% of datacenter spend, Thakker reports, but the combo of private plus one or more public clouds—hybrid cloud—is here to stay, especially for enterprises. Red Hat's OpenShift technology enables on-premises, private cloud deployments, giving IBM the ability to play in the hybrid cloud.

IBM isn’t closing this deal until well into 2019; expect to hear more about this in the coming months.





GAO Blames Z for Government Inefficiency

October 19, 2018

Check out the GAO report from May 2016 here. The Feds spent more than 75 percent of the total amount budgeted for information technology (IT) for fiscal year 2015 on operations and maintenance (O&M). In a related report, the IRS reported it used assembly language code and COBOL, both developed in the 1950s, for IMF and IDRS. Unfortunately, the GAO uses the word "mainframe" to lump outdated UNISYS mainframes together with the modern, supported, and actively developed IBM Z, notes Ross Mauri, IBM general manager, Z systems.

Mainframes-mobile in the cloud courtesy of Compuware

The GAO repeatedly used "mainframe" to refer to outdated UNISYS mainframes alongside the latest IBM Z mainframes. COBOL, meanwhile, remains the subject of active skills and training programs at many institutions and continues to receive investment across many industries. In addition to COBOL, the IBM z14 also runs Java, Swift, Go, Python, and other open languages to enable modern application enhancement and development. Does the GAO know that?

In a recent report, the GAO recommends moving to supported modern hardware. IBM agrees. The Z, however, does not expose mainframe investments to a rise in procurement and operating costs, nor to skilled-staff issues, Mauri continued.

Three investments the GAO reviewed in operations and maintenance clearly appear to be legacy investments facing significant risks due to their reliance on obsolete programming languages, outdated hardware, and a shortage of staff with critical skills. For example, the IRS reported that it used assembly language code and COBOL (both developed in the 1950s) for IMF and IDRS. What are these bureaucrats smoking?

The GAO also seems confused over the Z and the cloud. IBM Cloud Private is designed to run on Linux-based Z systems to take full advantage of the cloud through open containers while retaining the inherent benefits of Z hardware—security, availability, scalability, reliability; all the -ilities enterprises have long relied on the Z for. The GAO seems unaware that the Z's automatic pervasive encryption immediately encrypts everything at rest or in transit. Furthermore, the GAO routinely treats COBOL as a deficiency, while ISVs and the other signatories of the Open Letter consider it a modern, optimized, and actively supported programming language.

The GAO apparently isn’t even aware of IBM Cloud Private. IBM Cloud Private is compatible with leading IT systems manufacturers and has been optimized for IBM Z. All that you need to get started with the cloud is the starter kit available for IBM OpenPOWER LC (Linux) servers, enterprise Power Systems, and Hyperconverged Systems powered by Nutanix. You don’t even need a Z; just buy a low cost OpenPOWER LC (Linux) server online and configure it as desired.

Here is part of the letter that Compuware sent to the GAO, Federal CIOs, and members of Congress. It’s endorsed by several dozen members of the IT industry. The full letter is here:

In light of a June 2018 GAO report to the Internal Revenue Service suggesting the agency’s mainframe- and COBOL-based systems present significant risks to tax processing, we the mainframe IT community—developers, scholars, influencers and inventors—urge the IRS and other federal agencies to:

  • Reinvest in and modernize the mainframe platform and the mission-critical applications which many have long relied upon.
  • Prudently consider the financial risks and opportunity costs associated with rewriting and replacing proven, highly dependable mainframe applications, for which no “off-the-shelf” replacement exists.
  • Understand the security and performance requirements of these mainframe applications and data and the risk of migrating to platforms that were never designed to meet such requirements.

The Compuware letter goes on to state: In 2018, the mainframe is still the world's most reliable, performant, and securable platform, providing the lowest-cost, high-transaction system of record. Regarding COBOL, it notes that the IBM z14 has supported COBOL V6.2 since 2017, optimized bi-monthly.

Finally, about attracting new COBOL workers: COBOL is as easy to work with as any other language. In fact, the open source Zowe project has demonstrated appeal to young techies, providing solutions for development and operations teams to securely manage, control, script, and develop on the mainframe like any other cloud platform. What don't they get?

