AI Hardware Accelerators and AI Models

August 3, 2020

At times it seems like IBM is just dallying with AI, but at a late July briefing IBM showed just how seriously it is pursuing AI and how difficult the challenge is. It starts with today’s computers and supercomputers, essentially bit processors. Call that the easy stuff; at least it’s the stuff we are most familiar with.

Neurons come next in the IBM scheme. Here biology and information are tapped for today’s rudimentary AI systems. Next you throw in qubits, which combine physics and information. Now we’re entering the realm of quantum machines.

DancingDinosaur is mesmerized by quantum computing but only understands it at the level of his 40-year-old physics course.

Where all this is going is not toward some mesmerizing future of quantum systems dazzling us with nearly instant solutions to seemingly impossible problems. No, it seems, at one level, more prosaic than that, according to Jeffrey Burns, IBM’s Director of AI Compute.

As Burns puts it: IBM is building the future of computing. That future, he continues, is a pipeline of innovation for the future of Hybrid Cloud and AI. We should have known that; various IBMers have been saying it for several years at least, but it just sounded too simple.

Burns breaks it down into four areas: Core Technology, Innovation for AI, Innovation for Hybrid Cloud, and Foundational Scientific Discovery. 

It is tempting to jump right to the Foundational Scientific Discovery stuff; that’s the sexy part. It includes new devices and materials, breakthrough data communications, computational and secured storage, and persistent memory architectures.

At the other end is what Burns calls core technology. This encompasses semiconductor devices, processor architecture, novel memory such as MRAM, and advanced packaging.

Among the innovations for AI are AI hardware, real-time AI for transaction processing, and hardware and software for federated AI learning to enhance security and privacy.

Finally, there are innovations for hybrid cloud. These include Red Hat RHEL integration, storage and data recovery, high-speed networking, security, and heterogeneous system architecture for hybrid cloud.

But, AI and Hybrid Cloud can advance only as far as hardware can take them, notes Burns. The processing demands at even the first two steps are significant. For example image recognition training with a dataset of 22K requires 4 GPUs, takes 16 days, and consumes 385 kWh. If you want it faster, you can throw 256 GPUs at it for 7 hours, which still consumes 450 kWh. Or think of it another way, he suggests: 1 model training run eats the equivalent of ~2 weeks of home energy consumption.
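The arithmetic roughly checks out. Here is a minimal back-of-the-envelope sketch in Python; the figure of about 30 kWh per day for an average U.S. home is my own assumption, not an IBM number.

```python
# Back-of-the-envelope check of Burns' training-energy comparison.
# The kWh figures come from the briefing; the household figure is assumed.

runs = {
    "4 GPUs, 16 days": 385,    # kWh, per Burns
    "256 GPUs, 7 hours": 450,  # kWh, per Burns
}

HOUSEHOLD_KWH_PER_DAY = 30  # assumed average U.S. home consumption

for label, kwh in runs.items():
    days_of_home_energy = kwh / HOUSEHOLD_KWH_PER_DAY
    print(f"{label}: {kwh} kWh is roughly {days_of_home_energy:.1f} days of home energy")

# Prints roughly 13 and 15 days -- i.e., about two weeks per training run,
# which matches Burns' comparison.
```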

And we’ve just been talking about narrow AI. Broad AI, Burns continues, brings even more computational demands and greater functionality requirements at the edge. If you’re thinking of trying this with your data center, none of this is trivial. Last year IBM invested $2B to create an Artificial Intelligence Hardware Center. Twelve organizations have joined it and it continues to grow, Burns reports. You’re welcome to join.

IBM’s idea is to innovate and lead in AI accelerators for training and inferencing, leverage partnerships to drive AI leadership from materials and devices through software, and generate AI application demonstrators with an industry-leading roadmap.

Here is where Burns wants to take it: extend performance by 2.5x per year through 2025 by applying approximate computing principles to digital AI cores with reduced precision, as well as to analog AI cores (remember analog? Burns sees it playing a big energy-saving role), which could potentially offer another 100x in energy efficiency.
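For a sense of what 2.5x per year compounds to, here is a quick sketch. It assumes 2020 as the baseline year and simple year-over-year compounding, which the briefing did not spell out.

```python
# Compounding Burns' 2.5x/year performance target through 2025.
baseline_year = 2020
factor_per_year = 2.5

cumulative = 1.0
for year in range(baseline_year + 1, 2026):
    cumulative *= factor_per_year
    print(f"{year}: {cumulative:.0f}x the {baseline_year} baseline")

# By 2025 the cumulative gain is 2.5**5, roughly 98x -- about two orders
# of magnitude, before counting any analog-core energy savings.
```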

If you want to try your hand at AI at this level, DancingDinosaur would love to know and throw some digital ink your way.

Alan Radding, a veteran information technology analyst, writer, and ghost-writer, is DancingDinosaur. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/

IBM Elastic Storage System 5000–AI Utility Storage Option

July 27, 2020

The newest storage from IBM is the Elastic Storage System 5000. It promises, as you would expect, leading performance, density, and scalability, but that’s not the most interesting part of IBM’s July and August storage offerings. In conjunction with the new storage hardware, IBM is tapping AI through IBM Spectrum Storage, a familiar product, to create a smart utility storage option. This lets you define your current and future storage needs on day one and deploy it all that day, paying only for the capacity you actually use and activating more only when you need it, paying for the additional capacity only once you have activated it.

AI Elastic Storage Systems via Spectrum 

Will that save you much money? Maybe, but what it mainly will save you is time. That comes from not having to go through the entire process of ordering and installing the extra capacity. DancingDinosaur guesses it will save you money if you usually over-ordered capacity initially and paid for it then. IBM says some customers actually do that, but there is certainly no longer any good reason to pay for capacity in advance.

Yes, data volumes continue to grow at explosive rates. And yes, a prudent IT manager does not want to suddenly find the company in a position where a lack of sufficient storage is constraining the performance of critical applications.

But a prudent data center manager should never be in that position. DancingDinosaur was always taught that the selective use of data compression can free up some top-tier storage capacity, even on very short notice. And who doesn’t also have at least a few old storage arrays hanging around that can be pressed back into service to ease a sudden crunch, even if only briefly? OK, it won’t be the fastest or best storage, but it could work, at least for a short time.

Of course, IBM puts a somewhat different spin on the situation. It explains: For the last 30 years, the standard method has been for organizations to calculate their current application capacity needs and then guess the rest—hoping the guess is enough to meet future needs. The organization then works up an RFQ, RFP or some other tortuous procurement method to get the capacity they need as quickly as possible. The organization then invites all the vendors it knows and some it doesn’t to pitch their solution—at which point the organization usually finds it will need much more capacity than originally thought—or budgeted. 

And IBM continues: Then, only a few months later, the organization realizes its needs have changed—and that what it originally requested is no longer adequate. A year past the initial start, the new capacity is finally in place—and the organization hopes it won’t have to go through the same process again next year.

Does that sound like you? DancingDinosaur hopes not, at least not in 2020. 

Maybe it makes sense for you if your storage environment is large, with more than 250 TB of storage, and growing, IBM notes. Depending on how you specified it initially, additional capacity through IBM’s elastic storage program is instantly available simply by provisioning what you need. From a billing standpoint, that lets you move what would otherwise be high upfront costs to a predictable quarterly charge directly related to your business activity.
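To make the billing model concrete, here is a minimal sketch of how a capacity-on-demand charge like this might be computed. The per-TB rate, the 250 TB base, and the quarterly granularity are hypothetical illustrations, not IBM’s actual terms.

```python
# Hypothetical utility-storage billing: charge only for capacity actually
# activated, never less than the day-one base. Rates are illustrative.

BASE_TB = 250                # capacity installed and committed on day one (example)
RATE_PER_TB_QUARTER = 25.0   # hypothetical dollars per TB per quarter

def quarterly_charge(activated_tb: float) -> float:
    """Bill for activated capacity, with the base as a floor."""
    billable_tb = max(activated_tb, BASE_TB)
    return billable_tb * RATE_PER_TB_QUARTER

# Activate more capacity only as the business needs it.
for quarter, activated in enumerate([250, 250, 310, 400], start=1):
    print(f"Q{quarter}: {activated} TB activated -> ${quarterly_charge(activated):,.2f}")
```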

DancingDinosaur has long felt that IBM Spectrum Storage delivered a nifty set of capabilities even before AI became a current fad. If you can take advantage of it to shortcut the storage acquisition and provisioning process while holding onto a few bucks for a little longer, what’s not to like?

Alan Radding, a veteran information technology analyst, writer, and ghost-writer, is DancingDinosaur. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/


IBM IBV Sees Shift in Pandemic Consumer Attitude

June 25, 2020

Do you wonder how this pandemic is going to end? Or when? Or what the world will be like when it actually does, or if it does, and how we will even know?

IBM quantum computing researcher

IBM’s Institute for Business Value (IBV), an IBM research group, was asking similar questions. It polled more than 18,000 U.S. adults in May and early June to understand how COVID-19 has affected their perspectives on topics that include remote work; the return to the workplace; where they want to live; how they want to shop; and more.

IBV’s results are not exactly encouraging. For example, it found that consumers are preparing themselves for more permanent changes in behavior because of the pandemic and their fears about future outbreaks. Two of every three respondents said they were concerned about a second wave of COVID-19 hitting later in 2020. More than 60 percent said they believed there were likely to be more global pandemic events like COVID-19 in the future.

The research also suggests that organizations in every industry must pay attention to their customers’ shifting preferences. And they must respond with agility: by adopting technology, rethinking processes and, most importantly, addressing culture in order to emerge from the pandemic smarter and stronger, say the researchers.

DancingDinosaur is not nearly as methodical as the researchers at IBV. But having spent nearly four months being bombarded with solicitations for almost anything that can be squeezed into Zoom, I have been able to form some opinions. The first is how ingenious and creative a lot of marketers have become in repackaging their previously tedious messages for what has almost overnight emerged as a virtual, Zoom-like world.

For decades DancingDinosaur has dodged meetings like a plague, or maybe a pandemic. But some marketers have managed to tease me into attending a few virtual Zooms, which, surprisingly, were informative, useful, and concise. When the pandemic is finally done and gone, marketers may never get DancingDinosaur into a convention center or seminar venue again. Not when it is so easy to click in and, as importantly, so convenient to click Leave Meeting.

IBV’s research appears to have uncovered some interesting behaviors. For instance, nearly one in five urban residents indicated they would definitely relocate or would consider moving to suburban or rural areas as a result of the pandemic. Fewer than 1 in 10 indicated they now found living in an urban area more appealing. 

That makes sense. If DancingDinosaur had been quarantined in a one-bedroom or studio condo for weeks or months, he’d never do that again and hopes you wouldn’t either, no matter how tempting the restaurants might have been when you could actually go into them.

Another set of IBV data points bodes badly for combating climate change. Young climate change activist Greta Thunberg, please forgive them. The researchers found 25 percent of respondents said they would use their personal vehicles exclusively as their mode of transport, and an additional 17 percent said they’d use them more than before. A full 60 percent of those who want to use a personal vehicle but don’t own one said they would buy one. The remainder in this group said they would rent a vehicle until they felt safe using shared mobility.

IBV also looked at work-from-home. Before COVID-19 containment measures went into effect, less than 11% of respondents worked from home. As of June 4, that percentage had grown to more than 45%. What’s more, 81% of respondents—up from 75% in April—indicated they want to continue working remotely at least some of the time.  More than half—61%—would like this to become their primary way of working. 

DancingDinosaur spent his entire career working from home. It can be a great life. Of course,  I didn’t have to educate my children at home or on short notice with minimal guidance. They went to public school and summer camp. When they came home from school each day, it made a great excuse for me to take a cookie break with them. I do miss not having my cookie break partners. They married great guys and, if I set any kind of proper example, they now have cookie breaks with them instead.

Alan Radding is DancingDinosaur, a veteran information technology analyst, writer, and ghost-writer still working from home in the Boston area. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/ 

D-Wave and NEC Advance Quantum Computing

June 22, 2020

IBM boasts of 18 quantum computer models, based on their qubit counts, but it isn’t the only player staking out the quantum market. Last week D-Wave, another early shipper of quantum systems, announced a joint quantum product development and marketing initiative with NEC, which made a $10 million investment in D-Wave.

D-Wave NEC Quantum Leap

The two companies, according to the announcement, will work together on the development of hybrid quantum/classical technologies and services that combine the best features of classical computers and quantum computers; the development of new hybrid applications that make use of those services; and joint marketing and sales go-to-market activities to promote quantum computing. Until quantum matures, expect to see more combinations of quantum and classical computing as companies try to figure out how these seemingly incompatible technologies can work together.

For example, the two companies suggest they will create practical business and scientific quantum applications in fields ranging from transportation to materials science to machine learning, using D-Wave’s Leap quantum cloud service with new joint hybrid services. Or they might apply D-Wave’s collection of over 200 early customer quantum applications to six markets identified by NEC, such as finance, manufacturing, and distribution.
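For a flavor of what a hybrid quantum/classical application looks like in practice, here is a minimal sketch using D-Wave’s Ocean SDK and its Leap hybrid sampler. It assumes the dwave-ocean-sdk is installed and a Leap API token is already configured; the toy problem is purely illustrative and has nothing to do with the NEC work.

```python
# Minimal hybrid quantum/classical sketch with D-Wave's Ocean SDK.
# Assumes dwave-ocean-sdk is installed and Leap credentials are configured.
import dimod
from dwave.system import LeapHybridSampler

# Toy QUBO: reward picking x0 or x1, penalize picking both.
Q = {("x0", "x0"): -1.0,
     ("x1", "x1"): -1.0,
     ("x0", "x1"): 2.0}

bqm = dimod.BinaryQuadraticModel.from_qubo(Q)

# The hybrid sampler splits the work between classical heuristics and the
# quantum processing unit behind the scenes.
sampler = LeapHybridSampler()
sampleset = sampler.sample(bqm)

print(sampleset.first.sample, sampleset.first.energy)
```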

“We are very excited to collaborate with D-Wave. This announcement marks the latest of many examples where NEC has partnered with universities and businesses to jointly develop various applications and technologies. This collaborative agreement aims to leverage the strengths of both companies to fuel quantum application development and business value today,” said Motoo Nishihara, Executive Vice President and CTO, NEC.

The two companies will also explore the possibility of enabling the use of NEC’s supercomputers on D-Wave’s Leap quantum cloud service.

“By combining efforts with NEC, we believe we can bring even more quantum benefit to the entire Japanese market that is building business-critical hybrid quantum applications in both the public and private sectors,” said Alan Baratz, CEO of D-Wave. He adds: “We’re united in the belief that hybrid software and systems are the future of commercial quantum computing. Our joint collaboration will further the adoption of quantum computing in the Japanese market and beyond.”

IBM continues to be the leader in quantum computing, boasting 18 quantum computers of various qubit counts. And they are actually available for use via the Internet, where IBM keeps them running and sufficiently cold–a fraction of a degree above absolute zero–to ensure computational stability. Quantum computers clearly are not something you want to buy for your data center.
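Trying one of those Internet-accessible machines is straightforward with Qiskit. Here is a minimal sketch using the circa-2020 API; it assumes an IBM Quantum Experience account token has already been saved locally, and it simply targets whichever real device happens to be least busy.

```python
# Minimal sketch: run a Bell-state circuit on a cloud-hosted IBM quantum
# machine with Qiskit (circa-2020 API). Assumes IBMQ.save_account() was
# already run once with a valid token.
from qiskit import QuantumCircuit, execute, IBMQ
from qiskit.providers.ibmq import least_busy

# Two-qubit Bell state: the "hello world" of quantum circuits.
qc = QuantumCircuit(2, 2)
qc.h(0)
qc.cx(0, 1)
qc.measure([0, 1], [0, 1])

IBMQ.load_account()
provider = IBMQ.get_provider()

# Pick the least busy real (non-simulator) device with at least 2 qubits.
backend = least_busy(provider.backends(
    simulator=False, operational=True,
    filters=lambda b: b.configuration().n_qubits >= 2))

job = execute(qc, backend, shots=1024)
print(job.result().get_counts())   # expect mostly '00' and '11'
```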

But other companies are rushing into the market. Google operates a quantum computer lab with five machines and Honeywell has six quantum machines, according to published reports. Others include Microsoft and Intel. Plus there are startups: IonQ, Quantum Circuits, and Rigetti Computing. All of these have been referenced in earlier DancingDinosaur posts; this blog just hopes to live long enough to see useful quantum computing come about.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/

IBM Wazi Cloud-Native DevOps for Z

June 12, 2020

In this rapidly evolving world of hybrid and multicloud systems, organizations are required to quickly evolve their processes and tooling to address business needs. Foremost among those are development environments that include IBM Z as part of a hybrid solution, says Sanjay Chandru, Director, IBM Z DevOps.

IBM’s goal, then, is to provide a cloud-native developer experience for IBM Z that is consistent and familiar to all developers. And that requires cross-platform consistency in tooling for application programmers on Z, who will need to deliver innovation faster and without the backlogs that were expected in the past.

Wazi, along with OpenShift, is another dividend from IBM’s purchase of Red Hat. Here is where IBM Wazi for Red Hat CodeReady Workspaces comes in: an add-on to IBM Cloud Pak for Applications. It allows developers to use an industry-standard integrated development environment (IDE), such as Microsoft Visual Studio Code (VS Code) or Eclipse, to develop and test IBM z/OS applications in a containerized, virtual z/OS environment on Red Hat OpenShift running on x86 hardware. The container creates a sandbox.

The combination with IBM Cloud Pak for Applications goes beyond what Zowe, the Open Mainframe Project’s open source framework for z/OS, offers, enabling Z development and operations teams to securely manage, control, script, and develop on the mainframe like any other cloud platform. Developers who are not used to z/OS and IBM Z, which are most developers, now can become productive faster in a familiar and accessible working environment, effectively improving DevOps adoption across the enterprise.

As IBM explained: Wazi integrates seamlessly into a standard, Git-based open toolchain to enable continuous integration and continuous delivery (CI/CD) as part of a fully hybrid DevOps process encompassing distributed and Z systems.

IBM continues: Wazi is offered with deployment choices so that organizations can flexibly rebalance entitlement over time based on their business needs. In short, the organization can protect and leverage its IBM Z investments with robust and standard development capabilities that encompass IBM Z and multicloud platforms.


As one large IBM customer put it: “We want to make the mainframe accessible. Use whatever tool you are comfortable with – Eclipse / IDz / Visual Studio Code. All of these things we are interested in to accelerate our innovation on the mainframe.”

An IT service provider added in IBM’s Wazi announcement: “Our colleagues in software development have been screaming for years for a dedicated testing environment that can be created and destroyed rapidly.” Well, now they have it in Wazi.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/

BMC Finalizes Compuware Acquisition 

June 4, 2020

On June 1 BMC completed its acquisition of Compuware. Both were leading mainframe independent software vendors (ISVs) and leading providers of mainframe application development, delivery, and performance solutions. Recently the action has picked up in the mainframe ISV space. Just a week ago DancingDinosaur was writing about the renaming of Syncsort to Precisely after it completed its acquisition of the Pitney Bowes software and data business; Pitney Bowes is best known for its postage metering.


Given IBM’s lackluster performance as a mainframe software application vendor, albeit somewhat constrained by legalities, a healthy mainframe ISV market is good for everyone that wants to thrive in the mainframe space. And there are others DancingDinosaur hasn’t covered recently, such as DataKinetics, a mainframe performance and optimization provider, and Software Diversified Services (SDS), which specializes in mainframe security.

In some ways DancingDinosaur is saddened that the number of independent mainframe ISVs has dropped by one, but is hopeful that those that remain will be stronger, more innovative, and better for the mainframe space overall. As BMC says in its announcement: customers will benefit from an integrated DevOps toolchain with mainframe operations management and agile application development and delivery. Everybody with a stake in the mainframe space should wish them success.

As BMC puts it: the strategic combination of the two companies builds on the success of BMC’s Automated Mainframe Intelligence (AMI) and Compuware’s Topaz suite, ISPW technology, and classic product portfolios to modernize mainframe environments. BMC with Compuware now enables automation and intelligent operations with agile development and delivery – empowering the next generation of mainframe developers and operations teams to excel when working with mainframe programming languages, applications, data, infrastructure, and security.

And the industry analysts say in the announcement: “Adding Compuware’s Topaz software development environment to the BMC portfolio is another step in the direction of targeting the enterprise developer. With Topaz, developers take a modern approach to building, testing, and deploying mainframe applications. This move should allow BMC to spread the word that modern tools matter for the mainframe engineer,” wrote Christopher Condo, Chris Gardner, and Diego Lo Giudice at Forrester Research.

In addition, 50 percent of respondents in a 2019 Forrester study reported that they plan to grow their use of the mainframe over the next two years, and 93 percent of respondents in the 2019 BMC Mainframe Survey believe in the long-term and new-workload strength of the platform.

For the mainframe shop, the newly unified portfolio will enable enterprises to:

  • Leverage the processing power, stability, security, and agile scalability of the mainframe
  • Scale Agile and DevOps methods with a fully integrated DevOps toolchain, allowing mainframe applications to get to market more quickly and efficiently without compromising quality
  • Combine the self-analyzing, self-healing, and self-optimizing power of the BMC AMI suite of products to increase mainframe availability, efficiency, and security while mitigating risk, along with the Compuware Topaz suite, to empower the next generation of developers to build, analyze, test, deploy, and manage mainframe applications
  • Create a customer experience to meet the business demands of the digital age, jumpstarting their Autonomous Digital Enterprise journey

BMC’s AMI brings an interesting twist. Specifically, it aims to leverage AI, machine learning, and predictive analytics to achieve a self-managing mainframe. Key elements of such a self-managing mainframe include advanced network and system security, with improved adherence to PCI DSS, HIPAA, SOX, FISMA, GDPR, ISO 27001, IRS Pub. 1075, NERC, and other industry standards for protecting data. Most helpful should be BMC AMI for Security, which can execute out-of-the-box scorecards for frequently audited areas.

Similarly, AMI can address areas like capacity management, optimizing mainframe capacity by addressing bottlenecks before they occur, boosting staff productivity, and delivering a right-sized, cost-optimized mainframe environment. Or it can support DevOps for the mainframe through application orchestration tools that automatically capture database changes and communicate them to the database administrator (DBA) while enforcing DevOps best practices.
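To illustrate the predictive side of capacity management, here is a generic sketch, not BMC AMI code: fit a trend to recent utilization and estimate when a threshold will be crossed, so the bottleneck can be addressed before it occurs. The sample data and threshold are made up.

```python
# Generic illustration of predictive capacity management: project when
# utilization will cross an alert threshold from a simple linear trend.
import numpy as np

utilization = np.array([61, 63, 66, 68, 71, 74, 78])  # % busy, one value per week (made up)
weeks = np.arange(len(utilization))
THRESHOLD = 90.0  # alert well before saturation

slope, intercept = np.polyfit(weeks, utilization, 1)  # simple linear trend
if slope > 0:
    weeks_to_threshold = (THRESHOLD - utilization[-1]) / slope
    print(f"Trend: +{slope:.1f}%/week; about {weeks_to_threshold:.0f} weeks "
          f"until {THRESHOLD}% utilization")
else:
    print("Utilization is flat or falling; no capacity action indicated")
```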

ISVs also can ignite a spark under IBM, especially now that it has Red Hat, as is the case with IBM enabling Wazi, a cloud-native DevOps tool for the Z. That’s why we want a strong ISV community.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/

Syncsort Now Precisely After Pitney Bowes Acquisition

May 29, 2020

After announcing its acquisition of the Pitney Bowes software and data business last August and completing the deal in December, Syncsort earlier this month rebranded itself as Precisely. The company, a long-established mainframe ISV, is trying to position Precisely as a major player among enterprises seeking to handle large quantities of data in a variety of ways.

Precisely combined and updated the Syncsort and Pitney Bowes product lines to span what the rebranded operation now describes as “the breadth of the data integrity spectrum” by offering data integration, data quality, and location intelligence tools.

The rebranded company’s solution portfolio spans five areas based on the use case. 

  • Integrate, its data integration line, features Precisely Connect, Ironstream, Assure, and Syncsort.
  • Verify, its data quality unit, includes Precisely Spectrum Quality, Spectrum Context, and Trillium.
  • Locate, its location intelligence line, touts Precisely Spectrum Spatial, Spectrum Geocoding, MapInfo, and Confirm.
  • Enrich features Precisely Streets, Boundaries, Points of Interest, Addresses, and Demographics.
  • Engage aims to create seamless, personalized, omnichannel communications on any medium, anytime.

Adds Josh Rogers, CEO of Syncsort, now Precisely: “With the combination of Syncsort and Pitney Bowes software and data, we are creating in Precisely a new company that is focused on helping enterprises advance their use of data through expertise across data domains, disciplines and platforms.”

Rogers continued: “Advancements in storage, compute, analytics, and machine learning have opened up a world of possibilities for enhanced decision-making, but inaccuracies and inconsistencies in data have held back innovation and stifled value creation. Achieving data integrity is the next business imperative. Put simply, better data means better decisions, and Precisely offers the industry’s most complete portfolio of data integrity products, providing the link between data sources and analytics that helps companies realize the value of their data and investments.”

Precisely may again be onto something by emphasizing the quality of data for decision making, which is just an amplification of the old GIGO (garbage in, garbage out) principle, especially now as the volume, variety, and availability of data skyrockets. When edge devices begin generating new and different data, it will further compound these challenges. Making data-driven decisions has already become increasingly complex for even the largest enterprises.

Despite the proliferation of cloud-based analytics tools, published studies in Forbes, Harvard Business Review, and elsewhere found that 84 percent of CEOs do not trust the data they are basing decisions on, and with good reason: another study found that almost half of newly created data records have at least one critical error. Meanwhile, the cost of noncompliance with new governmental regulations, including GDPR and CCPA, has created an even greater urgency for trusted data.
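A “critical error” in this context is usually something mundane: a missing key or a malformed field. Here is a minimal sketch of that kind of check; the field names and rules are illustrative and have nothing to do with Precisely’s actual products.

```python
# Illustrative data-quality check: flag records with missing or malformed
# key fields, the kind of "critical error" the studies refer to.
import re

records = [
    {"customer_id": "C001", "email": "pat@example.com", "postal_code": "02139"},
    {"customer_id": "",     "email": "not-an-email",    "postal_code": "0213"},
    {"customer_id": "C003", "email": "lee@example.com", "postal_code": None},
]

def critical_errors(rec):
    """Return a list of critical-error descriptions for one record."""
    errors = []
    if not rec.get("customer_id"):
        errors.append("missing customer_id")
    if not re.match(r"^[^@\s]+@[^@\s]+\.[^@\s]+$", rec.get("email") or ""):
        errors.append("malformed email")
    if not re.match(r"^\d{5}$", rec.get("postal_code") or ""):
        errors.append("bad postal code")
    return errors

flagged = [r for r in records if critical_errors(r)]
print(f"{len(flagged)} of {len(records)} records have at least one critical error")
```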

Out of the gate, Precisely has more than 2,000 employees and 12,000 customers in more than 100 countries, including 90 of the Fortune 100. The company boasts annual revenue of over $600 million.

Prior to its acquisition Pitney Bowes delivered solutions, analytics, and APIs in the areas of ecommerce fulfillment, shipping and returns; cross-border ecommerce; office mailing and shipping; presort services; and financing.

Syncsort provides data integration and optimization software alongside location intelligence, data enrichment, customer information management, and engagement solutions. Together, the two companies serve more than 11,000 enterprises and hundreds of channel partners worldwide.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/

Supercomputers Battle COVID-19

May 22, 2020

When the world finally defeats the coronavirus and COVID-19, a small part of the victory will go to massive computer power. As Dario Gil, Director of IBM Research, noted: 16 systems with more than 400 petaflops, 775,000 CPU cores, and 34,000 GPUs, and counting, are among the firepower.

Back in March DancingDinosaur reported here that IBM’s Summit, which the company boasts is the world’s most powerful supercomputer, was able to simulate 8,000 chemical compounds in a matter of days in a hunt for something that could impact the COVID-19 infection.

Writing this today, late in May, we already know that teams of medical researchers, scientists, technology experts, and a vast array of talents are working almost non-stop to find, develop, test, and mass produce a cure, with luck in the form of a vaccine. We should also note all the countless nurses, doctors, aides, assistants and various hospital and food and logistics staff of all types and outside support roles who are involved in keeping things working, feeding staff, wheeling patients around, and otherwise helping to save lives.

As Gil explains: high-performance computing systems allow researchers to run very large numbers of calculations in epidemiology, bioinformatics, and molecular modeling–all the required science disciplines that need to be involved in whatever success is ultimately achieved. You can probably throw in chemistry and a few other areas of electronics and engineering as well. Without massive computer horsepower these experiments would take years to complete if worked by hand, or months if handled on slower, traditional computing platforms.

These machines—more than 25 U.S.-based supercomputers with more than 400 petaflops of computing power—are now available for free to those working toward a vaccine or treatment against the virus, through the COVID-19 High Performance Computing Consortium.

It was created with government, academia and industry—including competitors, working side by side. IBM is co-leading the effort with the U.S. Department of Energy, which operates the National Laboratories of the United States. Google, Microsoft, Amazon, and Hewlett Packard Enterprise have joined, as well as NASA, the National Science Foundation, Pittsburgh Supercomputing Center, and six National Labs—Lawrence Livermore, Lawrence Berkeley, Argonne, Los Alamos, Oak Ridge, and Sandia, and others. And then there are academic institutions, including MIT, Rensselaer Polytechnic Institute, the University of Texas, Austin, and the University of California, San Diego.

The White House has been getting deservedly bashed for its slowness, incompetence, and narrow-minded bungling. However, Gil reports the White House’s Office of Science and Technology Policy has taken up an effort that can make a real difference. He adds: I want to offer this promise: IBM will continue to explore everything in our power to use our technology and expertise to drive meaningful progress in this global fight.

The first thing Gil’s team did was to spread the word to people who might be working on this on any or all fronts—from drug discovery and development with AI-led simulations to genomics, epidemiology and health systems.

He goes on: We need to understand the whole life cycle of this virus, all the gearboxes that drive it: how it encounters and infects the host cell and replicates inside it, and how to prevent it from producing viral particles. We need to know the molecular components, the proteins involved in the virus’ biochemistry, and then to use computational modeling to see how we can interrupt the cycle. That’s the standard scientific methodology of drug discovery, but we want to amplify it and speed it up.

The virus has been exploding in humans for months, providing an abundance of samples for computer modeling and analysis, Gil continued. Scientists already are depositing samples into public data sources such as GenBank and Protein Data Bank. There are many unknowns and assumptions but a lot of proposals involve using the available protein structures to come up with potential molecular compounds that could lead to a therapeutic treatment or a vaccine. Let’s hope they have great success, the sooner the better.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/

