Posts Tagged ‘IBM’

Z Open Terminal Emulation

September 25, 2020

You can spend a lot of time working with the Z and not find much new in terminal emulation. But there actually are a few new things, mainly because times change and people work differently, using different devices and doing new things. Sure, it all goes back to the mainframe, but it is a new world.

Terminal emulator screen

Rocket Software’s latest wrinkle in terminal emulation is BlueZone Web, which promises to simplify using the mainframe by enabling users to access host-based applications anywhere and on any type of device. It is part of a broader initiative Rocket calls Open AppDev for Z. From DancingDinosaur’s perspective, its strength lies in being compliant with Zowe, the open source development framework from the Open Mainframe Project. This makes IBM Z a valuable open platform for an enterprise DevOps infrastructure.

Zowe is the first open source framework for z/OS. It enables DevOps teams to securely manage, control, script, and develop on the mainframe like any other cloud platform. It was launched as a collaboration of initial contributors IBM, CA Technologies, and Rocket Software, and is supported by the Open Mainframe Project. The goal is to cultivate the next generation of mainframe developers, whether or not they have Z experience. Zowe promotes a faster team on-ramp to productivity, collaboration, knowledge sharing, and communication.

This is the critical thing about Zowe: you don’t need Z platform experience. Open source developers and programmers can use a wide range of popular open source tools, languages, and technologies–the tools they already know. Sure, it’d be nice to find an experienced z/OS developer, but that is increasingly unlikely, making Zowe a much better bet.

According to the Open Mainframe Project, IBM’s initial contribution to Zowe was an extensible z/OS framework that provides REST-based services and APIs that allow even inexperienced developers to rapidly use new technology, tools, languages, and modern workflows with z/OS.
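
To make that concrete, here is a minimal sketch of the kind of REST call that framework exposes, using z/OSMF’s data set listing service, which Zowe builds on. The host, credentials, and high-level qualifier below are hypothetical placeholders; treat it as an illustration of the pattern, not production code.

```python
# Hypothetical sketch: list data sets under a high-level qualifier via
# z/OSMF's REST files API, the sort of service Zowe layers tooling on.
import requests

host = "https://mainframe.example.com"        # placeholder z/OSMF endpoint
resp = requests.get(
    f"{host}/zosmf/restfiles/ds",
    params={"dslevel": "MYUSER.*"},           # data sets under this HLQ
    headers={"X-CSRF-ZOSMF-HEADER": "true"},  # header z/OSMF expects
    auth=("MYUSER", "MYPASS"),                # basic auth for the sketch
)
resp.raise_for_status()
for ds in resp.json().get("items", []):
    print(ds["dsname"])
```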

IBM continues to invest in the open source environment through Zowe and other open source initiatives. Zowe also has help from Rocket Software, which provides a web user interface, and CA, which handles the command line interface. You can find more about Zowe at the Open Mainframe Project.

IBM introduced Linux, a leading open source technology, to the Z over 20 years ago. Over time it has expanded the range of the Z through open source tools that can be combined with products developed by different communities. That mix, however, can create unintentional regulatory and security risks. Rocket Open AppDev for Z helps mitigate these risks, offering a solution that provides developers with a package of the open tools and languages they want, along with the security, easy management, and support IBM Z customers require.

“We wanted to solve three common customer challenges that have prevented enterprises from leveraging the flexibility and agility of open software within their mainframe environment: user and system programmer experience, security, and version latency,” said Peter Fandel, Rocket’s Product Director of Open Software for Z. “With Rocket Open AppDev for Z, we believe we have provided an innovative secure path forward for our customers,” he adds. “Businesses can now extend the mainframe’s capabilities through the adoption of open source software, making IBM Z a valuable platform for their DevOps infrastructure.”

But there is an even bigger question here, one Rocket turned to IDC to answer: should businesses that run mission-critical workloads on IBM Z or IBM i remain on these platforms and modernize them by leveraging the innovative tools that exist today, or replatform by moving to an alternative on-premises solution, typically x86, or to the cloud?

IDC surveyed more than 440 businesses that had either modernized IBM Z or IBM i or replatformed. The results: modernizers incurred lower costs for their initiatives than the replatformers; modernizers were more satisfied with the new capabilities of their modernized platform than replatformers; and the modernizers achieved a new baseline on which they paid less for hardware, software, and staffing. There is much more of interest in this study, which DancingDinosaur will explore in the weeks or months ahead.

Alan Radding, a veteran information technology analyst, writer, and ghost-writer, is DancingDinosaur. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/.

Pushing Quantum Onto the Cloud

September 4, 2020

Did you ever imagine the cloud would become your quantum computing platform, a place where you would run complex quantum algorithms requiring significant specialized processing across multi-qubit machines available at a click? But that is exactly what is happening.

IBM started it a few years back by making its small-qubit machines available in the cloud; it now offers even larger ones. Today Xanadu is offering 8-qubit or 12-qubit chips, with a 24-qubit chip due in the next month or so, according to the Toronto-based company.

Xanadu quantum processor

As DancingDinosaur has previously reported, there are even more: Google reports a quantum computing lab with five machines, and Honeywell has six quantum machines. D-Wave is another, along with more startups, including IonQ, Quantum Circuits, and Rigetti Computing.

In September, Xanadu introduced its quantum cloud platform, which allows developers to access its gate-based photonic quantum processors with 8-qubit or 12-qubit chips over the cloud.

Photonics-based quantum machines have certain advantages over other platforms, according to the company. Xanadu’s quantum processors operate at room temperature, not near-zero Kelvin temperatures. They can easily integrate into an existing fiber-optic-based telecommunication infrastructure, enabling quantum computers to be networked. The approach also offers scalability and fault tolerance, owing to error-resistant physical qubits and flexibility in designing error correction codes. Xanadu’s type of qubit is based on squeezed states–a special type of light generated by its own chip-integrated silicon photonic devices, it claims.

DancingDinosaur recommends you check out Xanadu’s documentation and details. It does not have sufficient familiarity with photonics, especially as related to quantum computing, to judge any of the above statements. The company also notes it offers a cross-platform Python library for simulating and executing programs on quantum photonic hardware. Its open source tools are available on GitHub.
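
The library in question is presumably Strawberry Fields, the Python package Xanadu maintains on GitHub (the post does not name it, so take that attribution as our inference). Here is a toy sketch of its programming model, run on the local simulator rather than Xanadu’s cloud hardware:

```python
# Toy photonic program, assuming Strawberry Fields: squeeze one mode
# (squeezed states are Xanadu's qubit basis), mix two modes on a
# beamsplitter, then count photons. Runs on the local Fock simulator.
import strawberryfields as sf
from strawberryfields.ops import BSgate, MeasureFock, Sgate

prog = sf.Program(2)                 # two photonic modes
with prog.context as q:
    Sgate(0.5) | q[0]                # squeezed light in mode 0
    BSgate() | (q[0], q[1])          # beamsplitter entangles the modes
    MeasureFock() | q                # photon-number measurement

eng = sf.Engine("fock", backend_options={"cutoff_dim": 5})
print(eng.run(prog).samples)         # photon counts per mode
```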

Late in August IBM unveiled a new milestone on its quantum computing road map, achieving the company’s highest Quantum Volume to date. By following the link, you see that Quantum Volume is a metric conceived by IBM to measure and compare quantum computing power. DancingDinosaur is not aware of any other quantum computing vendors using it, which doesn’t mean anything of course. Quantum computing is so new and so different, and with many players joining in with different approaches, it will be years before we see which metrics prove most useful.

To come up with its Quantum Volume rating, IBM combined a series of new software and hardware techniques to improve overall performance and upgraded one of its newest 27-qubit systems to achieve the high rating. The company has made a total of 28 quantum computers available over the last four years through the IBM Quantum Experience, which companies join to gain access to its quantum machines and tools, including its software development toolset.

Do not confuse Quantum Volume with Quantum Advantage, the point where certain information processing tasks can be performed more efficiently or cost effectively on a quantum computer versus a conventional one. Quantum Advantage will require improved quantum circuits, the building blocks of quantum applications. Quantum Volume, notes IBM, measures the length and complexity of circuits – the higher the Quantum Volume, the higher the potential for exploring solutions to real world problems across industry, government, and research.
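
For a concrete sense of the metric: Quantum Volume is defined as 2 to the power n, where n is the width and depth of the largest “square” random circuit a machine can run successfully, so IBM’s 64 corresponds to n = 6. A quick sketch of that relationship (our arithmetic, not IBM’s code):

```python
# Quantum Volume is 2**n, where n is the largest square (n-qubit,
# n-layer) random circuit the machine passes; IBM's QV 64 implies n = 6.
import math

def square_circuit_size(quantum_volume: int) -> int:
    """Width and depth n of the largest square circuit, from QV = 2**n."""
    return int(math.log2(quantum_volume))

print(square_circuit_size(64))  # -> 6
```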

To achieve its Quantum Volume milestone, the company focused on a new set of techniques and improvements that used knowledge of the hardware to optimally run the Quantum Volume circuits. These hardware-aware methods are extensible and will improve any quantum circuit run on any IBM Quantum system, resulting in improvements to the experiments and applications that users can explore. These techniques will be available in upcoming releases and improvements to the IBM Cloud software services and the cross-platform open source software development kit (SDK) Qiskit. The IBM Quantum team has shared details on the technical improvements made across the full stack to reach Quantum Volume 64 in a preprint released on arXiv.
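
Qiskit itself is easy to try. Below is a minimal sketch of a two-qubit circuit using the 2020-era Qiskit API, run on the bundled local simulator; pointing the same circuit at a real IBM Quantum Experience device is a matter of swapping the backend once you have an account.

```python
# Minimal Qiskit sketch: a two-qubit Bell circuit on the local
# simulator; no IBM account needed for this version.
from qiskit import Aer, QuantumCircuit, execute

qc = QuantumCircuit(2, 2)
qc.h(0)                        # put qubit 0 into superposition
qc.cx(0, 1)                    # entangle qubits 0 and 1
qc.measure([0, 1], [0, 1])

backend = Aer.get_backend("qasm_simulator")
counts = execute(qc, backend, shots=1024).result().get_counts()
print(counts)                  # roughly a 50/50 mix of '00' and '11'
```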

What is most exciting is that the latest quantum happenings are things you can access over the cloud without having to cool your data center to near-zero Kelvin temperatures. If you try any of these, DancingDinosaur would love to hear how it goes.

Alan Radding, a veteran information technology analyst, writer, and ghost-writer, is DancingDinosaur. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/.

5G Will Accelerate a New Wave of IoT Applications and Z

August 10, 2020

Even before the advent of 5G, DancingDinosaur, which had ghostwritten a top book on IoT, believed that IoT and smartphones would lead back to the Z eventually, somehow. Maybe the arrival of 5G and smart edge computing will slow the path to the Z. Or maybe not.

Even transactions and data originating and being processed at the edge will need to be secured, backed up, stored, distributed to the cloud, to other servers and systems, to multiple clouds, on premises, and further processed and reprocessed in numerous ways. Along the way, they will find their way back to a Z somehow and somewhere, sooner or later.

an edge architecture

“5G is driving change in the Internet of Things (IoT). It’s a powerful enabling technology for a new generation of use cases that will leverage edge computing to make IoT more effective and efficient,” write Rishi Vaish and Sky Matthews. Rishi Vaish is CTO and VP, IBM AI Applications; Sky Matthews is CTO, Engineering Lifecycle Management at IBM. DancingDinosaur completely agrees, adding only that it won’t just stop there.

Vaish and Matthews continue: “In many ways, the narrative of 5G is the interaction between two inexorable forces: the rise in highly reliable, high-bandwidth communications, and the rapid spread of available computing power throughout the network. The computing power doesn’t just end at the network, though. End-point devices that connect to the network are also getting smarter and more powerful.” 

True enough, the power does not just end there; neither does it start there. There is a long line of powerful systems, the z15 and generations of Z before it that handle and enhance everything that happens in whatever ways are desired at that moment or, as is often the case, later. 

And yes, there will be numerous ways to create comparable services using similarly smart and flexible edge devices. But experience has shown that it takes time to work out the inevitable kinks that invariably will surface, often at the least expected and most inopportune moment. Think of it as just the latest manifestation of Murphy’s Law moved to the edge and 5G.

“The increasingly dynamic and powerful computational environment that’s taking shape as telcos begin to redesign their networks for 5G will accelerate the uptake of IoT applications and services throughout industry,” Vaish and Matthews continue. “We expect that 5G will enable new use cases in remote monitoring and visual inspection, autonomous operations in large-scale remote environments such as mines, connected vehicles, and more.”

This rapidly expanding range of computing options, they add, requires a much more flexible approach to building and deploying applications and AI models that can take advantage of the most cost-efficient compute resources available.

IBM chimes in: There are many ways that this combination of 5G and edge computing can enable new applications and new innovations in various industries. IBM and Verizon, for example, are developing potential 5G and edge solutions like remote-controlled robotics, near real-time video analysis, and other kinds of factory-floor automation.

The advantage comes from smart 5G edge devices doing the analytics immediately, at the spot where decisions may be best made. Are you sure that decisions made at the edge immediately are always the best? DancingDinosaur would like to see a little more data on that.

If not, don’t be surprised to discover that there are other decisions that benefit from being made later, with the addition of other data and analysis. There is too much added value and insight packed into the Z data center not to take advantage of it.

Alan Radding, a veteran information technology analyst, writer, and ghost-writer, is DancingDinosaur. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/.

AI Hardware Accelerators and AI Models

August 3, 2020

At times it seems like IBM is just dallying with AI, but at a late July briefing IBM showed just how seriously it is pursuing AI and how difficult the challenge is. It starts with today’s computers and supercomputers, essentially bit processors. Call that the easy stuff; at least it’s the stuff we are most familiar with.

Neurons come next in the IBM scheme. Here biology and information are tapped for today’s rudimentary AI systems. Next you throw in qubits, which combine physics and information. Now we’re entering the realm of quantum machines.

DancingDinosaur is mesmerized by quantum computing but only understands it at the level of his 40-year-old physics course.

Where all this is going is not toward some mesmerizing future of quantum systems dazzling us with nearly instant solutions to seemingly impossible problems. No, it seems, at one level, more prosaic than that, according to Jeffrey Burns, IBM’s Director, AI Compute.

As Burns puts it: IBM is building the future of computing. That future, he continues, is a pipeline of innovation for the future of Hybrid Cloud and AI. We should have known that, except various IBMers have been saying it for several years at least and it just sounded too simple.

Burns breaks it down into four areas: Core Technology, Innovation for AI, Innovation for Hybrid Cloud, and Foundational Scientific Discovery. 

It is tempting to jump right to the Foundational Scientific Discovery stuff; that’s the sexy part. It includes new devices and materials, breakthrough data communications, computational and secured storage, and persistent memory architectures.

At the other end is what Burns calls core technology. This encompasses semiconductor devices, processor architecture, novel memory such as MRAM, and advanced packaging.

Among the innovations for AI are AI hardware, real-time AI for transaction processing, and hardware and software for federated AI learning to enhance security and privacy.

Finally, there are innovations for hybrid cloud. These include Red Hat RHEL integration, storage and data recovery, high speed networking, security, and heterogeneous system architecture for hybrid cloud.

But AI and Hybrid Cloud can advance only as far as hardware can take them, notes Burns. The processing demands at even the first two steps are significant. For example, image recognition training with a dataset of 22K images requires 4 GPUs, takes 16 days, and consumes 385 kWh. If you want it faster, you can throw 256 GPUs at it for 7 hours, which still consumes 450 kWh. Or think of it another way, he suggests: one model training run eats the equivalent of roughly two weeks of home energy consumption.
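
Burns’s home-energy comparison checks out roughly, assuming a typical US household draws on the order of 30 kWh a day (our assumption, not his):

```python
# Rough check of the home-energy comparison; 30 kWh/day is an assumed
# average US household figure, not from Burns's talk.
training_kwh = 385        # one 4-GPU, 16-day training run
home_kwh_per_day = 30     # assumed household consumption
print(training_kwh / home_kwh_per_day)  # ~12.8 days, about two weeks
```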

And we’ve just been talking about narrow AI. Broad AI, Burns continues, brings even more computational demands and greater functionality requirements at the edge. If you’re thinking of trying this with your data center, none of this is trivial. Last year IBM invested $2B to create an Artificial Intelligence Hardware Center. Twelve organizations have joined it and it continues to grow, Burns reports. You’re welcome to join.

IBM’s idea is to innovate and lead in AI accelerators for training and inferencing, leverage partnerships to drive AI leadership from materials and devices through software, and generate AI application demonstrators with an industry leading roadmap. 

Here is where Burns wants to take it: extending performance by 2.5x per year through 2025 by applying approximate computing principles to digital AI cores with reduced precision, as well as analog AI cores (remember analog? Burns sees it playing a big energy-saving role), which could potentially offer another 100x in energy efficiency.
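
It is worth doing the compounding: 2.5x per year sustained from 2020 through 2025 is itself roughly two orders of magnitude (the five-year horizon is our reading of the roadmap, not IBM’s arithmetic):

```python
# Compounding the stated 2.5x/year improvement over 2020-2025
# (five years is our reading of the horizon, not IBM's figure):
rate, years = 2.5, 5
print(rate ** years)      # ~97.7x, roughly two orders of magnitude
```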

If you want to try your hand at AI at this level, DancingDinosaur would love to know and throw some digital ink your way.

Alan Radding, a veteran information technology analyst, writer, and ghost-writer, is DancingDinosaur. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/

IBM IBV Sees Shift in Pandemic Consumer Attitude

June 25, 2020

Do you wonder how this pandemic is going to end? Or when. Or what the world will be like when it actually does or if it does, and how we will even know.

IBM quantum computing researcher

IBM’s Institute for Business Value (IBV), an IBM research group, was asking similar questions. It polled more than 18,000 U.S. adults in May and early June to understand how COVID-19 has affected their perspectives on topics that include remote work; the return to the workplace; where they want to live; how they want to shop; and more.

IBV’s results are not exactly encouraging. For example, it found that consumers are preparing themselves for more permanent changes in behavior because of the pandemic and their fears about future outbreaks. Two of every three respondents said they were concerned about a second wave of COVID-19 hitting later in 2020. More than 60 percent said they believed there were likely to be more global pandemic events like COVID-19 in the future.

The research also suggests that organizations in every industry must pay attention to their customers’ shifting preferences. And they must respond with agility: by adopting technology, rethinking processes and, most importantly, addressing culture in order to emerge from the pandemic smarter and stronger, say the researchers.

DancingDinosaur is not nearly as methodical as the researchers at IBV. But having spent nearly four months being bombarded with solicitations for almost anything that can be squeezed into Zoom, I have been able to form some opinions. The first is how ingenious and creative a lot of marketers have become in repackaging their previously tedious messages for what has almost overnight emerged as a virtual, Zoom-like world.

For decades DancingDinosaur has dodged meetings like a plague, or maybe a pandemic. But some have managed to tease me into attending a few virtual Zooms, which, surprisingly, were informative, useful, and concise. When the pandemic is finally done and gone, marketers may never get DancingDinosaur into a convention center or seminar venue again. Not when it is so easy to click in and, as importantly, so convenient to click Leave Meeting.

IBV’s research appears to have uncovered some interesting behaviors. For instance, nearly one in five urban residents indicated they would definitely relocate or would consider moving to suburban or rural areas as a result of the pandemic. Fewer than 1 in 10 indicated they now found living in an urban area more appealing. 

That makes sense. If DancingDinosaur were quarantined in a one-bedroom or studio condo for weeks or months, he’d never do that again and hopes you wouldn’t either, no matter how tempting the restaurants might have been when you could actually go into them.

Another set of IBV data points bodes badly for combating climate change. Young climate change activist Greta Thunberg, please forgive them. The researchers found 25 percent of respondents said they would use their personal vehicles exclusively as their mode of transport, and an additional 17 percent said they’d use them more than before. A full 60 percent of those who want to use a personal vehicle but don’t own one said they would buy one. The remainder in this group said they would rent a vehicle until they felt safe using shared mobility.

IBV also looked at work-from-home. Before COVID-19 containment measures went into effect, less than 11% of respondents worked from home. As of June 4, that percentage had grown to more than 45%. What’s more, 81% of respondents—up from 75% in April—indicated they want to continue working remotely at least some of the time.  More than half—61%—would like this to become their primary way of working. 

DancingDinosaur spent his entire career working from home. It can be a great life. Of course,  I didn’t have to educate my children at home or on short notice with minimal guidance. They went to public school and summer camp. When they came home from school each day, it made a great excuse for me to take a cookie break with them. I do miss not having my cookie break partners. They married great guys and, if I set any kind of proper example, they now have cookie breaks with them instead.

Alan Radding is DancingDinosaur, a veteran information technology analyst, writer, and ghost-writer still working from home in the Boston area. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/ 

IBM Wazi Cloud-Native DevOps for Z

June 12, 2020

In this rapidly evolving world of hybrid and multicloud systems, organizations must quickly evolve their processes and tooling to address business needs. Foremost among those needs are the development environments that include IBM Z as part of their hybrid solution, says Sanjay Chandru, Director, IBM Z DevOps.

IBM’s goal, then, is to provide a cloud native developer experience for the IBM Z that is consistent and familiar to all developers. That requires cross-platform consistency in tooling for application programmers on Z, who will need to deliver innovation faster and without the backlogs that have been expected in the past.

Wazi, along with OpenShift, is another dividend from IBM’s purchase of Red Hat. Here is where IBM Wazi for Red Hat CodeReady Workspaces comes in: an add-on to IBM Cloud Pak for Applications. It allows developers to use an industry standard integrated development environment (IDE), such as Microsoft Visual Studio Code (VS Code) or Eclipse, to develop and test IBM z/OS applications in a containerized, virtual z/OS environment on Red Hat OpenShift running on x86 hardware. The container creates a sandbox.

The combination with IBM Cloud Pak for Applications goes beyond what Zowe, the Open Mainframe Project’s open source framework for z/OS, offers, enabling Z development and operations teams to securely manage, control, script, and develop on the mainframe like any other cloud platform. Developers who are not used to z/OS and IBM Z, which is most developers, can now become productive faster in a familiar and accessible working environment, effectively improving DevOps adoption across the enterprise.

As IBM explained: Wazi integrates seamlessly into a standard, Git-based open tool chain to enable continuous integration and continuous delivery (CI/CD) as part of a fully hybrid DevOps process encompassing distributed and Z systems.

IBM continues: Wazi is offered with deployment choices so that organizations can flexibly rebalance entitlement over time based on business needs. In short, the organization can protect and leverage its IBM Z investments with robust and standard development capabilities that encompass IBM Z and multicloud platforms.

The payoff comes as developers who are not used to z/OS and IBM Z, which is most of the developer world, become productive faster in a familiar working environment while the organization protects and leverages its existing IBM Z investments.

As one large IBM customer put it: “We want to make the mainframe accessible. Use whatever tool you are comfortable with – Eclipse / IDz / Visual Studio Code. All of these things we are interested in to accelerate our innovation on the mainframe.”

An IT service provider added in IBM’s Wazi announcement: “Our colleagues in software development have been screaming for years for a dedicated testing environment that can be created and destroyed rapidly.” Well, now they have it in Wazi.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/

BMC Finalizes Compuware Acquisition 

June 4, 2020

On June 1 BMC completed its acquisition of Compuware. Both were leading mainframe independent software vendors (ISVs) and leading providers of mainframe application development, delivery, and performance solutions. The action in the mainframe ISV space has picked up recently. Just a week ago DancingDinosaur was writing about the renaming of Syncsort to Precisely after it completed its acquisition of the software and data business of Pitney Bowes, a company best known for its postage metering.


Given IBM’s lackluster performance as a mainframe software application vendor, albeit somewhat constrained by legalities, a healthy mainframe ISV market is good for everyone who wants to thrive in the mainframe space. And there are others DancingDinosaur hasn’t covered recently, such as DataKinetics, a mainframe performance and optimization provider, and Software Diversified Services (SDS), which specializes in mainframe security.

In some ways DancingDinosaur is saddened that the number of independent mainframe ISVs has dropped by one, but is hopeful that those that remain will be stronger, more innovative, and better for the mainframe space overall. As BMC says in its announcement, customers will benefit from an integrated DevOps toolchain with mainframe operations management and agile application development and delivery. Everybody with a stake in the mainframe space should wish them success.

As BMC puts it: the strategic combination of the two companies builds on the success of BMC’s Automated Mainframe Intelligence (AMI) and Compuware’s Topaz suite, ISPW technology, and classic product portfolios to modernize mainframe environments. BMC with Compuware now enables automation and intelligent operations with agile development and delivery – empowering the next generation of mainframe developers and operations teams to excel when working with mainframe programming languages, applications, data, infrastructure, and security.

And the industry analysts say in the announcement: “Adding Compuware’s Topaz software development environment to the BMC portfolio is another step in the direction of targeting the enterprise developer. With Topaz, developers take a modern approach to building, testing, and deploying mainframe applications. This move should allow BMC to spread the word that modern tools matter for the mainframe engineer,” wrote Christopher Condo, Chris Gardner, and Diego Lo Giudice at Forrester Research.

In addition, fifty percent of respondents in a 2019 Forrester study reported that they plan to grow their use of the mainframe over the next two years, and 93% of respondents in the 2019 BMC Mainframe Survey believe in the long-term and new workload strength of the platform.

For the mainframe shop, the newly unified portfolio will enable enterprises to:

  • Leverage the processing power, stability, security, and agile scalability of the mainframe
  • Scale Agile and DevOps methods with a fully integrated DevOps toolchain, allowing mainframe applications to get to market more quickly and efficiently without compromising quality
  • Combine the self-analyzing, self-healing, and self-optimizing power of the BMC AMI suite of products to increase mainframe availability, efficiency, and security while mitigating risk, along with the Compuware Topaz suite, to empower the next generation of developers to build, analyze, test, deploy, and manage mainframe applications
  • Create a customer experience to meet the business demands of the digital age, jumpstarting their Autonomous Digital Enterprise journey

BMC’s AMI brings an interesting twist. Specifically, it aims to leverage AI, machine learning, and predictive analytics to achieve a self-managing mainframe. On the security side, key elements of such a self-managing mainframe include improved adherence to PCI DSS, HIPAA, SOX, FISMA, GDPR, ISO 27001, IRS Pub. 1075, NERC, and other industry standards for protecting data. Most helpful should be BMC AMI for Security, which executes out-of-the-box scorecards for frequently audited areas.

Similarly, AMI can address areas like capacity management, optimizing mainframe capacity by addressing bottlenecks before they occur, boosting staff productivity, and delivering a right-sized, cost-optimized mainframe environment. There is also AMI DevOps for the mainframe, which works through application orchestration tools to automatically capture database changes and communicate them to the database administrator (DBA) while enforcing DevOps best practices.

ISVs also can ignite a spark under IBM, especially now that it has Red Hat, as in the case of IBM enabling Wazi, a cloud native DevOps tool for the Z. That’s why we want a strong ISV community.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/

Supercomputers Battle COVID-19

May 22, 2020

When the world finally defeats the coronavirus and COVID-19, a small part of the victory will go to massive computer power. As Dario Gil, Director of IBM Research, noted: 16 systems with more than 400 petaflops, 775,000 CPU cores, 34,000 GPUs, and counting are among the firepower.

Back in March DancingDinosaur reported here that IBM’s Summit, which the company was boasting as the world’s most powerful supercomputer, was able to simulate 8,000 chemical compounds in a matter of days in a hunt for something that could impact the COVID-19 infection.

Writing this today, late in May, we already know that teams of medical researchers, scientists, technology experts, and a vast array of talents are working almost non-stop to find, develop, test, and mass produce a cure, with luck in the form of a vaccine. We should also note all the countless nurses, doctors, aides, assistants and various hospital and food and logistics staff of all types and outside support roles who are involved in keeping things working, feeding staff, wheeling patients around, and otherwise helping to save lives.

As Gil explains: high-performance computing systems allow researchers to run very large numbers of calculations in epidemiology, bioinformatics, and molecular modeling–all the required science disciplines that need to be involved in whatever success is ultimately achieved. You can probably throw in chemistry and a few other areas of electronics and engineering as well. Without massive computer horsepower these experiments would take years to complete if worked by hand, or months if handled on slower, traditional computing platforms.

These machines—more than 25 U.S.-based supercomputers with more than 400 petaflops of computing power—are now available for free to those working toward a vaccine or treatment against the virus, through the COVID-19 High Performance Computing Consortium.

It was created with government, academia, and industry—including competitors—working side by side. IBM is co-leading the effort with the U.S. Department of Energy, which operates the National Laboratories of the United States. Google, Microsoft, Amazon, and Hewlett Packard Enterprise have joined, as well as NASA, the National Science Foundation, the Pittsburgh Supercomputing Center, and six National Labs—Lawrence Livermore, Lawrence Berkeley, Argonne, Los Alamos, Oak Ridge, and Sandia—among others. And then there are academic institutions, including MIT, Rensselaer Polytechnic Institute, the University of Texas at Austin, and the University of California, San Diego.

The White House has been getting deservedly bashed for its slowness, incompetence, and narrow-minded bungling. However, Gil reports the White House’s Office of Science and Technology Policy has taken up an effort that can make a real difference. He adds: “I want to offer this promise: IBM will continue to explore everything in our power to use our technology and expertise to drive meaningful progress in this global fight.”

The first thing Gil’s team did was to spread the word to people who might be working on this on any or all fronts—from drug discovery and development with AI-led simulations to genomics, epidemiology and health systems.

He goes on: We need to understand the whole life cycle of this virus, all the gearboxes that drive it–how it encounters and infects the host cell and replicates inside it–so we can prevent it from producing viral particles. We need to know the molecular components, the proteins involved in the virus’ biochemistry, and then use computational modeling to see how we can interrupt the cycle. That’s the standard scientific methodology of drug discovery, but we want to amplify it and speed it up.

The virus has been exploding in humans for months, providing an abundance of samples for computer modeling and analysis, Gil continued. Scientists already are depositing samples into public data sources such as GenBank and Protein Data Bank. There are many unknowns and assumptions but a lot of proposals involve using the available protein structures to come up with potential molecular compounds that could lead to a therapeutic treatment or a vaccine. Let’s hope they have great success, the sooner the better.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/

