IBM Grows Quantum Ecosystem

April 27, 2018

It is just as well that you aren’t dying to deploy quantum computing soon, because IBM readily admits it is not ready for enterprise production now, nor will it be in several weeks or even several months. IBM, however, continues to assemble the building blocks you will eventually need when you finally feel the urge to deploy a quantum application that addresses a real problem you need to resolve.

cryostat with prototype of quantum processor

IBM is surprisingly frank about the state of quantum today. There is nothing you can do at this point that you can’t simulate on a conventional, or classical, computer system. This situation is unlikely to change anytime soon either. For years to come, we can expect hybrid quantum and conventional compute environments that will somehow work together to solve very demanding problems, although most aren’t sure exactly what those problems will be when the time comes. Still, at Think earlier this year, IBM predicted quantum computing would be mainstream in five years.

Of course, IBM has some ideas of where the likely problems to solve will be found:

  • Chemistry—material design, oil and gas, drug discovery
  • Artificial Intelligence—classification, machine learning, linear algebra
  • Financial Services—portfolio optimization, scenario analysis, pricing

It has been some time since the computer systems industry had to build a radically different kind of compute discipline from scratch. Following the model of the current IT discipline, IBM began by launching the IBM Q Network, a collaboration with leading Fortune 500 companies and research institutions with a shared mission. This will form the foundation of a quantum ecosystem. The Q Network comprises hubs, which are regional centers of quantum computing R&D and ecosystem building; partners, who are pioneers of quantum computing in a specific industry or academic field; and most recently, startups, which are expected to rapidly advance early applications.

The most important of these for driving the growth of quantum are the startups. To date, IBM reports eight startups and is hunting for more. Early startups include QC Ware; Q-Ctrl; Cambridge Quantum Computing (UK), which is working on a compiler for quantum computing; 1Qbit, based in Canada; Zapata Computing, located at Harvard; Strangeworks, an Austin-based tool developer; QxBranch, which is trying to apply classical computing techniques to quantum; and Quantum Benchmark.

Startups get membership in the Q Network and can run experiments and algorithms on IBM quantum computers via cloud-based access; get deeper access to APIs and advanced quantum software tools, libraries, and applications; and have the opportunity to collaborate with IBM researchers and technical SMEs on potential applications, as well as with other IBM Q Network organizations. If it hasn’t become obvious yet, the payoff will come from developing applications that solve recognizable problems. Also check out QISKit, a software development kit for quantum applications, available through GitHub.
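
For a feel of what QISKit code looks like, here is a minimal sketch of a two-qubit Bell-state circuit run on a local simulator rather than on IBM’s cloud hardware. The exact imports and calls have shifted across QISKit releases, so treat it as illustrative, not a definitive recipe.

    # A minimal QISKit sketch: build a Bell state and sample it on a local simulator.
    # Exact APIs vary by QISKit version; this reflects one common form.
    from qiskit import QuantumCircuit, Aer, execute

    qc = QuantumCircuit(2, 2)        # two qubits, two classical bits
    qc.h(0)                          # put qubit 0 into superposition
    qc.cx(0, 1)                      # entangle qubit 0 with qubit 1
    qc.measure([0, 1], [0, 1])       # read both qubits into the classical bits

    backend = Aer.get_backend("qasm_simulator")
    counts = execute(qc, backend, shots=1024).result().get_counts()
    print(counts)                    # roughly half '00' and half '11'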

The last problem to solve is acquiring quantum talent. How many quantum scientists, engineers, or programmers do you have? Do you even know where to find them? The young people excited about computing today are primarily interested in building sexy apps using Node.js, Python, Jupyter, and the like.

To find the people you need to build quantum computing systems, you will need to scour the proverbial halls of MIT, Caltech, and other top schools that produce physicists and quantum scientists. A scan of salaries for these people reveals a range of $135,000-$160,000, if they are available at all.

The best guidance from IBM is to start small. The industry is still at the building-block stage; it is not ready to throw specific applications at real problems. In the meantime, sign up for IBM’s Q Network and get some of your people engaged in the opportunities to get educated in quantum.

When DancingDinosaur first heard about quantum physics he was in a high school science class decades ago. It was intriguing but he never expected to even be alive to see quantum physics becoming real, but now it is. And he’s still here. Not quite ready to sign up for QISKit and take a small qubit machine for a spin in the cloud, but who knows…

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.

IT Security Enters the Cooperative Era

April 20, 2018

Ever hear of the Cybersecurity Tech Accord? It was announced on Tuesday, with Microsoft, Facebook, and 32 other companies signing aboard. Absent from the signing were Apple, Alphabet, and Amazon. Also missing was IBM. Actually, IBM was already at the RSA Conference making its own security announcement: an effort to help cybersecurity teams collaborate by sharing information among themselves, just as the attackers they defend against do via the dark web.

IBM security control center

Tuesday’s Cybersecurity Tech Accord amounted to a promise to work together on cybersecurity issues. Specifically, the companies promise to work against state sponsored cyberattacks. The companies also agreed to collaborate on stronger defense systems and protect against the tampering of their products, according to published reports.

Lending weight to the accord is the financial impact of cybersecurity attacks on businesses and organizations, which is projected to reach $8 trillion by 2022. Other technology leaders, including Cisco, HP, Nokia, and Oracle, also joined the accord.

A few highly visible and costly attacks were enough to galvanize the IT leaders. In May, WannaCry ransomware targeted more than 300,000 computers in 150 countries, including 48 UK medical facilities. In a bid to help, Microsoft issued patches for old Windows systems, even though it no longer supports them, because so many firms run old software that was vulnerable to the attack, according to published reports. The White House attributed the attack to North Korea.

In June, NotPetya ransomware, which initially targeted computers in Ukraine before spreading, infected computers, locked down their hard drives, and demanded a $300 ransom to be paid in bitcoin. Even victims that paid weren’t able to recover their files, according to reports. The British government said Russia was behind the global cyberattack.

The Cybersecurity Tech Accord is modeled after a digital Geneva Convention, with a long-term goal of updating international law to protect people in times of peace from malicious cyberattacks, according to Microsoft president Brad Smith.

GitHub’s chief strategy officer Julio Avalos wrote in a separate blog post that “protecting the Internet is becoming more urgent every day as more fundamental vulnerabilities in infrastructure are discovered—and in some cases used by government organizations for cyberattacks that threaten to make the Internet a theater of war.” He continued: “Reaching industry-wide agreement on security principles and collaborating with global technology companies is a crucial step toward securing our future.”

Added Sridhar Muppidi, Co-CTO of IBM Security, about the company’s efforts to help cybersecurity teams collaborate like the attackers they’re working against, in a recently published interview: The good guys have to collaborate with each other so that we can provide better, more secure, and more robust systems. So we talk about how we share the good intelligence. We also talk about sharing good practices, so that we can then build more robust systems, which are a lot more secure.

It’s the same concept as the open source model, where you provide some level of intellectual capital with an opportunity to bring a bigger community together so that we can take the problem and solve it better and faster. And learn from each other’s mistakes and each other’s advancements so that it can help, individually, each of our offerings. So, at the end of the day, for a topic like AI, the algorithm is going to be an algorithm. It’s the data, it’s the models, it’s the set of things which go around it which make it very robust and reliable, Muppidi continued.

IBM appears to be practicing what it preaches by facilitating the collaboration of people and machines in defense of cyberspace. Last year at RSA, IBM introduced Watson to the cybersecurity industry to augment the skills of analysts in their security investigations. This year’s investments in artificial intelligence (AI), according to IBM, were made with a larger vision in mind: a move toward “automation of response” in cybersecurity.

At RSA, IBM also announced the next-generation IBM Resilient Incident Response Platform (IRP) with Intelligent Orchestration. The new platform promises to accelerate and sharpen incident response by seamlessly combining incident case management, orchestration, automation, AI, and deep two-way partner integrations into a single platform.

Maybe DancingDinosaur, which has spent decades acting as an IT-organization-of-one, can finally turn over some of the security chores to an intelligent system, which hopefully will do it better and faster.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.

IBM Introduces Skinny Z Systems

April 13, 2018

Early this week IBM unveiled two miniaturized mainframe models, dubbed skinny mainframes, it said are easier to deploy in a public or private cloud facility than their more traditional, much bulkier predecessors. Relying on all their design tricks, IBM engineers managed to pack each machine into a standard 19-inch rack with space to spare, which can be used for additional components.

Z14 LinuxONE Rockhopper II, 19-inch rack

The first new mainframe introduced this week, also in a 19-inch rack, is the Z14 model ZR1. You can expect subsequent models to increment the model numbering.  The second new machine is the LinuxONE Rockhopper II, also in a 19-inch rack.

In the past, about a year after IBM introduced a new mainframe, say the z10, it introduced what it called a Business Class (BC) version. The BC machines were less richly configured and less expandable, but they delivered comparable performance with lower capacity and a distinctly lower price.

In a Q&A analyst session, IBM insisted the new machines would be priced noticeably lower, as the BC-class machines of the past were. Still, these are not comparable to the old BC machines. Instead, they are intended to attract a new group of users who face new challenges. As such, they come cloud-ready. The 19-inch industry-standard, single-frame design is intended for easy placement into existing cloud data centers, alongside other components, and into private cloud environments.

The company, said Ross Mauri, General Manager IBM Z, is targeting the new machines toward clients seeking robust security with pervasive encryption, cloud capabilities and powerful analytics through machine learning. Not only, he continued, does this increase security and capability in on-premises and hybrid cloud environments for clients, IBM will also deploy the new systems in IBM public cloud data centers as the company focuses on enhancing security and performance for increasingly intensive data loads.

In terms of security, the new machines will be hard to beat. IBM reports the new machines are capable of processing over 850 million fully encrypted transactions a day on a single system. At the same time, the new mainframes do not require special space, cooling, or energy. They do, however, still provide IBM’s pervasive encryption and Secure Service Container technology, which secures data serving at massive scale.

Ross continued: The new IBM Z and IBM LinuxONE offerings also bring significant increases in capacity, performance, memory and cache across nearly all aspects of the system. A complete system redesign delivers this capacity growth in 40 percent less space and is standardized to be deployed in any data center. The z14 ZR1 can be the foundation for an IBM Cloud Private solution, creating a data-center-in-a-box by co-locating storage, networking and other elements in the same physical frame as the mainframe server.  This is where you can utilize that extra space, which was included in the 19-inch rack.

The LinuxONE Rockhopper II can also accommodate a Docker-certified infrastructure for Docker EE with integrated management and scale tested up to 330,000 Docker containers –allowing developers to build high-performance applications and embrace a micro-services architecture.

The 19-inch rack, however, comes with tradeoffs, notes Timothy Green, writing in The Motley Fool. Yes, it takes up 40% less floor space than the full-size Z14, but it accommodates only 30 processor cores, far below the 170 cores supported by a full-size Z14, which fills a 24-inch rack. Both new systems can handle around 850 million fully encrypted transactions per day, a fraction of the Z14’s full capacity. But not every company needs the full performance and capacity of the traditional mainframe. For companies that don’t need the full power of a Z14 mainframe, notes Green, or that have previously balked at the high price or massive footprint of full mainframe systems, these smaller mainframes may be just what it takes to bring them to the Z. Now IBM needs to come through with the advantageous pricing it insisted it would offer.

The new skinny mainframes are just the latest in IBM’s continuing efforts to keep the mainframe relevant. The effort began over a decade ago with porting Linux to the mainframe. It continued with Hadoop, blockchain, and containers. Machine learning and deep learning are coming right along. The only question for DancingDinosaur is when IBM engineers will figure out how to put quantum computing on the Z and squeeze it into customers’ public or private cloud environments.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.

IBM Shouldn’t Forget Its Server Platforms

April 5, 2018

The word coming out of IBM brings a steady patter about cognitive, Watson, and quantum computing, which IBM predicts will go mainstream within five years. Most DancingDinosaur readers aren’t worrying about what’s coming in 2023, although maybe they should. They have data centers to run now and are wondering where they will get the system horsepower needed to deliver IoT, blockchain, or any number of other business initiatives clamoring for system resources today or tomorrow, and all they’ve got are the z14 and the latest LinuxONE. As powerful as those machines were when first announced, do you think that will be enough tomorrow?

IBM’s latest server, the Z

Timothy Prickett Morgan, analyst at The Next Platform, apparently isn’t so sure. He writes in a recent piece how Google and the other hyperscalers need to add serious power to today’s server options. The solution involves “putting systems based on IBM’s Power9 processor into production.” This shouldn’t take anybody by surprise; almost as soon as IBM set up the OpenPower consortium, Rackspace, Google, and a handful of others started making noises about using OpenPower for a new type of data center server. The most recent announcements around POWER9, covered here back in February, promise some new options with even more coming.

Writes Morgan: “Google now has seven applications that have more than 1 billion users – adding Android, Maps, Chrome, and Play to the mix – and as the company told us years ago, it is looking for any compute, storage, and networking edge that will allow it to beat Moore’s Law.” Notice that this isn’t about using POWER9 to drive down Intel’s server prices; Google faces a more important nemesis, the constraints of Moore’s Law.

Google has not been secretive about this, at least not recently. To its credit, Google is making its frustrations known at appropriate industry events: “With a technology trend slowdown and growing demand and changing demand, we have a pretty challenging situation, what we call a supply-demand gap, which means the supply on the technology side is not keeping up with this phenomenal demand growth,” explained Maire Mahony, systems hardware engineer at Google and its key representative at the OpenPower Foundation that is steering the Power ecosystem. “That makes it hard for us to balance that curve we call performance per TCO dollar. This problem is not unique to Google. This is an industry-wide problem.” True, but the majority of data centers, even the biggest ones, don’t face looming multi-billion-user performance and scalability demands.

Morgan continued: “Google has absolutely no choice but to look for every edge. The benefits of homogeneity, which have been paramount for the first decade of hyperscaling, no longer outweigh the need to have hardware that better supports the software companies like Google use in production.”

This isn’t Intel’s problem alone, although Intel introduced a new generation of systems, dubbed Skylake, to address some of these concerns. As Morgan noted recently, “various ARM chips – especially ThunderX2 from Cavium and Centriq 2400 from Qualcomm – can boost non-X86 numbers.” So can AMD’s Epyc x86 processors. Similarly, the OpenPower consortium offers an alternative in POWER9.

Morgan went on: IBM differentiated the hardware with its NVLink versions and, depending on the workload and the competition, with its most aggressive pricing and a leaner and cheaper microcode and hypervisor stack reserved for the Linux workloads that the company is chasing. IBM very much wants to sell its Power-Linux combo against Intel’s Xeon-Linux and also keep AMD’s Epyc-Linux at bay. Still, it is not apparent to Morgan how POWER9 will compete.

Success may come down to a battle of vendor ecosystems. As Morgan points out: aside from the POWER9 system that Google co-engineered with Rackspace Hosting, the most important contributions that Google has made to the OpenPower effort is to work with IBM to create the OPAL firmware, the OpenKVM hypervisor, and the OpenBMC baseboard management controller, which are all crafted to support little endian Linux, as is common on x86.

Guess this is the time to wade into the endian morass. Endian refers to the byte ordering used; IBM chips and a few others order bytes in the reverse of the x86 and Arm architectures. The Power8 chip and its POWER9 follow-on support either mode, big or little endian. By making all of these changes, IBM has made the Power platform more palatable to the hyperscalers, which is why Google, Tencent, Alibaba, Uber, and PayPal all talk about how they make use of Power machinery, particularly to accelerate machine learning and generic back-end workloads. But as quickly as IBM jumped on the problem recently, after letting it linger for years, it remains one more complication that must be considered. Keep that in mind when a hyperscaler like Google talks about performance per TCO dollar.
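
For readers who have never had to care, a tiny sketch makes the difference concrete: the same 32-bit integer is laid out with its most significant byte first in big endian and last in little endian. The snippet below is purely illustrative.

    # Illustrative only: the same 32-bit integer packed in both byte orders.
    import struct

    value = 0x12345678
    print(struct.pack(">I", value).hex())   # big endian:    12345678
    print(struct.pack("<I", value).hex())   # little endian: 78563412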

Where is all this going? Your guess is as good as any. The hyperscalers and the consortia eventually should resolve this and DancingDinosaur will keep watching. Stay tuned.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.

Mainframe ISVs Advance the Mainframe While IBM Focuses on Think

March 30, 2018

Last week IBM reveled in the attention of upwards of 30,000 visitors to its Think conference, reportedly a record for an IBM conference. Meanwhile, Syncsort and Compuware stayed home pushing new mainframe initiatives. Specifically, Syncsort introduced innovations that deliver mainframe log and application data in real time directly to Elastic for deeper, next-generation analytics through tools like Splunk, Hadoop, and the Elastic Stack.

Syncsort Ironstream for next-gen analytics

Compuware reported that the percentage of organizations running at least half their business-critical applications on the mainframe is expected to increase next year, although the loss of skilled mainframe staff and the failure to subsequently fill those positions pose significant threats to application quality, velocity, and efficiency. Compuware has been taking the lead in modernizing the mainframe developer experience to make it compatible with the familiar x86 experience.

According to David Hodgson, Syncsort’s chief product officer, many organizations are using Elastic’s Kibana to visualize Elasticsearch data and navigate the Elastic Stack. These organizations, like others, are turning to tools like Hadoop and Splunk to get a 360-degree view of their mainframe data enterprise-wide. “In keeping with our proven track record of enabling our customers to quickly extract value from their critical data anytime, anywhere, we are empowering enterprises to make better decisions by making mission-critical mainframe data available in another popular analytics platform,” he adds.
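
To make the destination side of that pipeline concrete, here is a minimal, hypothetical sketch of pushing one mainframe log record into an Elasticsearch index over its REST API. The index name, field names, and message are invented for illustration; a product like Ironstream handles this plumbing at scale.

    # Hypothetical sketch: index one mainframe log record into Elasticsearch.
    # The index, fields, and message below are invented for illustration.
    import json
    import urllib.request

    record = {
        "timestamp": "2018-03-30T12:00:00Z",
        "source": "z/OS SYSLOG",
        "message": "IEF403I MYJOB - STARTED",
    }
    req = urllib.request.Request(
        "http://localhost:9200/mainframe-logs/_doc",
        data=json.dumps(record).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        print(resp.read().decode())   # Elasticsearch acknowledges the document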

For cost management, Syncsort now offers Ironstream with the flexibility of MSU-based (capacity) or ingestion-based pricing.

Compuware took a more global view of the mainframe. The mainframe, the company notes, is becoming more important to large enterprises, as the percentage of organizations running at least half their business-critical applications on the platform is expected to increase next year. However, the loss of skilled mainframe staff and the failure to subsequently fill those positions pose significant threats to application quality, velocity, and efficiency.

These are among the findings of research and analysis conducted by Forrester Consulting on behalf of Compuware.  According to the study, “As mainframe workload increases—driven by modern analytics, blockchain and more mobile activity hitting the platform—customer-obsessed companies should seek to modernize application delivery and remove roadblocks to innovation.”

The survey of mainframe decision-makers and developers in the US and Europe also revealed the mainframe’s growing importance: 64 percent of enterprises will run more than half of their critical applications on the platform within the next year, up from 57 percent this year. And just to ratchet up the pressure a few notches, 72 percent of customer-facing applications at these enterprises are completely or very reliant on mainframe processing.

That means the loss of essential mainframe staff hurts, putting critical business processes at risk. Overall, enterprises reported losing an average of 23 percent of specialized mainframe staff in the last five years while 63 percent of those positions have not been filled.

There is more to the study, but these findings alone suggest that mainframe investments, culture, and management practices need to evolve fast in light of the changing market realities. As Forrester puts it: “IT decision makers cannot afford to treat their mainframe applications as static environments bound by long release cycles, nor can they fail to respond to their critical dependence with a retiring workforce. Instead, firms must implement the modern tools necessary to accelerate not only the quality, but the speed and efficiency of their mainframe, as well as draw [new] people to work on the platform.”

Nobody has 10 years or even three years to cultivate a new mainframer. You need to attract and cultivate talented x86 or ARM people now, equip each of them with the sexiest, most efficient tools, and get them working on the most urgent items at the top of your backlog.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.

 

IBM Boosts AI at Think

March 23, 2018

Enterprise system vendors are racing to AI along with all the others. Writes Jeffrey Burt, an analyst at The Next Platform: “There continues to be an ongoing push among tech vendors to bring artificial intelligence (AI) and its various components – including deep learning and machine learning – to the enterprise. The technologies are being rapidly adopted by hyperscalers and in the HPC space, and enterprises stand to reap significant benefits by also embracing them.” Exactly what those benefits are still needs to be specifically articulated and, if possible, quantified.

IBM Think Conference this week

For enterprise data centers running the Z or Power Systems, the most obvious quick payoff will be faster, deeper, more insightful data analytics, along with more targeted guidance on actions to take in response. After that, there still remains the possibility of more automation of operations, but the Z already is pretty thoroughly automated and optimized. Just give it your operational and performance parameters and it will handle the rest. In addition, vendors like Compuware and Syncsort have been making the mainframe more graphical and intuitive. The days of needing deep mainframe experience or expertise have passed. Even x86 admins can quickly pick up a modern mainframe today.

A late 2016 study by Accenture modeled the impact of AI for 12 developed economies. The research compared the size of each country’s economy in 2035 in a baseline scenario, which shows expected economic growth under current assumptions, and in an AI scenario reflecting expected growth once the impact of AI has been absorbed into the economy. AI was found to yield the highest economic benefits for the United States, increasing its annual growth rate from 2.6 percent to 4.6 percent by 2035, translating to an additional USD $8.3 trillion in gross value added (GVA). In the United Kingdom, AI could add an additional USD $814 billion to the economy by 2035, increasing the annual growth rate of GVA from 2.5 to 3.9 percent. Japan has the potential to more than triple its annual rate of GVA growth by 2035, and Finland, Sweden, the Netherlands, Germany, and Austria could see their growth rates double. You can still find the study here.

Also coming out of Think this week was the announcement of an expanded Apple-IBM partnership around AI and machine learning (ML). The resulting AI service is intended to let corporate developers build apps themselves. The new service, Watson Services for Core ML, links the Core ML developer tools Apple unveiled last year with IBM’s Watson data crunching service. Core ML helps coders build machine learning-powered apps that perform calculations on the smartphone itself instead of processing those calculations in external data centers. It’s similar to other smartphone-based machine learning tools like Google’s TensorFlow Lite.

The goal is to help enterprises reimagine the way they work through a combination of Core ML and Watson Services to stimulate the next generation of intelligent mobile enterprise apps. Take the example of field technicians who inspect power lines or machinery. The new AI field app could feed images of electrical equipment to Watson to train it to recognize the machinery. The result would enable field technicians to scan the electrical equipment they are inspecting on their iPhones or iPads and automatically detect any anomalies. The app would eliminate the need to send that data to IBM’s cloud computing data centers for processing, thus reducing the amount of time it takes to detect equipment issues to near real-time.

Apple’s Core ML toolkit could already be used to connect with competing cloud-based machine learning services from Google, Amazon, and Microsoft; the new partnership adds developer tools that more easily link Core ML with Watson. For example, Coca-Cola is already testing Watson Services for Core ML to see if it helps its field technicians better inspect vending machines. If you want to try it in your shop, the service is free for developers to use now. Eventually, developers will have to pay.

Such new roll-your-own AI services represent a shift for IBM. Previously, you had to work with IBM consulting teams. Now the new Watson developer services are intended to be bought in an “accessible and bite size” way, according to IBM, and sold in a “pay as you go” model without consultants. In a related move at Think, IBM announced it is contributing the core of Watson Studio’s Deep Learning Service as an open source project called Fabric for Deep Learning, which will enable developers and data scientists to work together on furthering the democratization of deep learning.

Ultimately, the democratization of AI is the only way to go. When intelligent systems speak together and share insights everyone’s work will be faster, smarter. Yes, there will need to be ways to compensate distinctively valuable contributions but with over two decades of open source experience, the industry should be able to pretty easily figure that out.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.

IBM Leverages Strategic Imperatives to Win in Cloud

March 16, 2018

Some people may have been ready to count out IBM in the cloud. The company, however, is clawing its way back into contention faster than many imagined. In a recent Forbes Magazine piece, IBM credits 16,000 AI engagements, 400 blockchain engagements, and a couple of quantum computing pilots as driving its return as a serious cloud player.

IBM uses blockchain to win the cloud

According to Fortune, IBM has jumped up to third in cloud revenue with $17 billion, ranking behind Microsoft with $18.6 billion and Amazon with $17.5 billion. Among other big players, Google comes in seventh with $3 billion.

In the esoteric world of quantum computing, IBM is touting live projects underway with JPMorganChase, Daimler, and others. Bob Evans, a respected technology writer and now the principal of Evans Strategic Communications, notes that the latest numbers “underscore not only IBM’s aggressive moves into enterprise IT’s highest-potential markets,” but also the legitimacy of the company’s claims that it has joined the top ranks of the competitive cloud-computing marketplace alongside Microsoft and Amazon.

As reported in the Fortune piece, CEO Ginni Rometty, speaking to a quarterly analyst briefing, declared: “While IBM has a considerable presence in the public-cloud IaaS market because many of its clients require or desire that, it intends to greatly differentiate itself from the big IaaS providers via higher-value technologies such as AI, blockchain, cybersecurity and analytics.” These are the areas that Evans sees as driving IBM into the cloud’s top tier.

Rometty continued: “I think you know that for us the cloud has never been about having Infrastructure-as-a-Service-only as a public cloud, or a low-volume commodity cloud. Frankly, Infrastructure-as-a-Service is almost just a dialtone. For us, it’s always been about a cloud that is going to be enterprise-strong and of which IaaS is only a component.”

In the Fortune piece she then laid out four strategic differentiators for the IBM Cloud, which in 2017 accounted for 22% of IBM’s revenue:

  1. “The IBM Cloud is built for “data and applications anywhere,” Rometty said. “When we say you can do data and apps anywhere, it means you have a public cloud, you have private clouds, you have on-prem environments, and then you have the ability to connect not just those but also to other clouds. That is what we have done—all of those components.”
  2. The IBM Cloud is “infused with AI,” she continued, alluding to how most of the 16,000 AI engagements also involve the cloud. She cited four of the most-popular ways in which customers are using AI: customer service, enhancing white-collar work, risk and compliance, and HR.
  3. For securing the cloud IBM opened more than 50 cybersecurity centers around the world to ensure “the IBM Cloud is secure to the core,” Rometty noted.
  4. “And perhaps this the most important differentiator—you have to be able to extend your cloud into everything that’s going to come down the road, and that could well be more cyber analytics but it is definitely blockchain, and it is definitely quantum because that’s where a lot of new value is going to reside.”

You have to give Rometty credit: She bet big that IBM’s strategic imperatives, especially blockchain and, riskiest of all, quantum computing, would eventually pay off. The company had long realized it couldn’t compete in high-volume, low-margin businesses. She made her bet on what IBM does best—advanced research—and stuck with it. Through 22 consecutive quarters of declining revenue she stayed the course and didn’t publicly question the decision.

As Fortune observed: In quantum, IBM’s leveraging its first-mover status and has moved far beyond theoretical proposals. “We are the only company with a 50-qubit system that is actually working—we’re not publishing pictures of photos of what it might look like, or writings that say if there is quantum, we can do it—rather, we are scaling rapidly and we are the only one working with clients in development working on our quantum,” Rometty said.

IBM’s initial forays into commercial quantum computing are just getting started: JPMorganChase is working on risk optimization and portfolio optimization using IBM quantum computing;  Daimler is using IBM’s quantum technology to explore new approaches to logistics and self-driving car routes; and JSR is doing computational chemistry to create entirely new materials. None of these look like the payback is right around the corner. As DancingDinosaur wrote just last week, progress with quantum has been astounding but much remains to be done to get a functioning commercial ecosystem in place to support the commercialization of quantum computing for business on a large scale.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.

The Rush to Quantum Computing

March 9, 2018

Are you excited about quantum computing? Are you taking steps to get ready for it? Do you have an idea of what you would like to do with quantum computing or a plan for how to do it? Except for the most science-driven organizations, or those with incomprehensibly complex challenges to solve, DancingDinosaur can’t imagine this is the most pressing IT issue you are facing today.

Yet leading IT vendors are making astounding gains, moving quantum computing forward further and faster than the industry projected even a few months ago. This past November, IBM announced a 50-qubit system. Earlier this month, Google announced Bristlecone, which claims to top that. With Bristlecone’s 72 qubits, Google trumps IBM for now. However, qubit count may not be the most important metric to focus on.

Never heard of quantum supremacy? You are going to hear a lot about it in the coming weeks, months, and even years as the vendors battle for the quantum supremacy title. Here is how Wikipedia defines it: Quantum supremacy is the potential ability of quantum computing devices to solve problems that classical computers cannot. In computational complexity-theoretic terms, this generally means providing a super-polynomial speedup over the best known or possible classical algorithm. If this doesn’t send you racing to dig out your old college math book, you were a better student than DancingDinosaur. In short, supremacy means beating the current best conventional algorithms. But you can’t just beat them; you have to do it using less energy, or faster, or in some other way that demonstrates your approach’s advantage.

The issue revolves around the instability of qubits; the hardware needs to be sturdy to run them. Industry sources note that quantum computers need to keep their processors extremely cold (Kelvin levels of cold) and protect them from external shocks. Even accidental sounds can cause the computer to make mistakes. To operate in even remotely real-world settings, quantum processors also need to have an error rate of less than 0.5 percent for every two qubits. Google’s best came in at 0.6 percent using its much smaller 9-qubit hardware. Its latest blog post didn’t state Bristlecone’s error rate, but Google promised to improve on its previous results. To drop the error rate for any qubit processor, engineers must figure out how software, control electronics, and the processor itself can work alongside one another without causing errors.
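
A rough back-of-the-envelope calculation shows why those fractions of a percent matter: the chance that a circuit runs without any error shrinks geometrically with the number of two-qubit gates. The figures below are illustrative, not vendor benchmarks.

    # Illustrative only: probability a circuit survives N two-qubit gates error-free.
    def survival_probability(error_rate, num_gates):
        return (1.0 - error_rate) ** num_gates

    for rate in (0.006, 0.005, 0.001):           # 0.6%, 0.5%, 0.1% per gate
        print(f"{rate:.1%} error rate -> {survival_probability(rate, 500):.1%} "
              f"chance of a clean 500-gate run")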

50 qubits currently is considered the minimum number for serious business work. IBM’s November announcement, however, was quick to point out that this “does not mean quantum computing is ready for common use.” The system IBM developed remains extremely finicky and challenging to use, as are those being built by others. In its 50-qubit system, the quantum state is preserved for 90 microseconds—a record for the industry but still an extremely short period of time.

Nonetheless, 50 qubits have emerged as the minimum number for a (relatively) stable system to perform practical quantum computing. According to IBM, a 50-qubit machine can do things that are extremely difficult to simulate without quantum technology.
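
A quick sketch shows why roughly 50 qubits is the crossover point: a full classical simulation has to track 2^n complex amplitudes, so the memory required doubles with every qubit added.

    # Back-of-the-envelope: a full state-vector simulation of n qubits needs
    # 2**n complex amplitudes at 16 bytes each (complex128).
    for n in (10, 30, 50):
        amplitudes = 2 ** n
        print(f"{n} qubits: {amplitudes:,} amplitudes, {amplitudes * 16:,} bytes")
    # 30 qubits already needs about 16 GiB; 50 qubits needs about 16 PiB,
    # far beyond any single conventional machine.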

The problem touches on one of the attributes of quantum systems.  As IBM explains, where normal computers store information as either a 1 or a 0, quantum computers exploit two phenomena—entanglement and superposition—to process information differently.  Conventional computers store numbers as sequences of 0 and 1 in memory and process the numbers using only the simplest mathematical operations, add and subtract.

Quantum computers can digest 0 and 1 too, but they have a broader array of tricks. That’s where entanglement and superposition come in. For example, contradictory things can exist concurrently. Quantum geeks often cite a riddle dubbed Schrödinger’s cat, in which the cat can be alive and dead at the same time because quantum systems can handle multiple, contradictory states. That can be very helpful if you are trying to solve huge data- and compute-intensive problems like a Monte Carlo simulation. After working at quantum computing for decades, IBM finally has, in the new 50-qubit system, something it can offer to businesses facing complex challenges that can benefit from quantum’s superposition capabilities.

Still, don’t bet on using quantum computing to solve serious business challenges very soon. An entire ecosystem of programmers, vendors, programming models, methodologies, useful tools, and a host of other things has to fall into place first. IBM, Google, and others are making stunningly rapid progress. Maybe DancingDinosaur will actually be alive to see quantum computing become just another tool in a business’s problem-solving toolkit.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.

Dinosaurs Strike Back in IBM Business Value Survey

March 2, 2018

IBM’s Institute for Business Value (IBV) recently completed a massive study based on 12,000 interviews with c-suite executives of legacy companies. Not just CEOs and CIOs, but COOs, CFOs, CMOs, and more, including the CHO, the Chief Happiness Officer. Not sure what a CHO actually does, but if one had been around when DancingDinosaur was looking for a corporate job, he might have stayed on the corporate track instead of pursuing the independent analyst/writer dream.

(unattributed IBM graphic)

IBV actually referred to the study as “Incumbents strike back.” The incumbents being the legacy businesses the c-suite members represent. In a previous c-suite IBV study two years ago, the respondents expressed concern about being overwhelmed and overrun by new upstart companies, the born-on-the-web newcomers. In many ways the execs at that time felt they were under attack.

Spurred by fear, the execs in many cases turned to a new strategy that takes advantage of what has always been their source of strength, although they often lacked the ways and means to exploit it: the huge amounts of data they have gathered and stored, for decades in some cases. With new cognitive systems now able to extract and analyze this legacy data and combine it with new data, they can actually beat some of the upstarts. Finally, they can respond like nimble, agile operations, not the lumbering dinosaurs they were so often portrayed as.

“Incumbents have become smarter about leveraging valuable data, honing their employees’ skills, and in some cases, acquired possible disruptors to compete in today’s digital age,” the study finds, according to CIO Magazine, which published excerpts from the study here. The report reveals 72 percent of surveyed CxOs claimed the next wave of disruptive innovation will be led by the incumbents who pose a significant competitive threat to new entrants and digital players. By comparison, the survey found only 22 percent of respondents believe smaller companies and start-ups are leading disruptive change. This presents a dramatic reversal from a similar but smaller IBV survey two years ago.

Making this reversal possible is not only the growing awareness among c-level execs of the value of their organizations’ data and the need to use it to counter the upstarts, but also new technologies: approaches like DevOps, easier-to-use dev tools, the increasing adoption of Linux, and mainframes like the z13, z14, and LinuxONE, which have been optimized for hybrid and cloud computing. Also driving the reversal is the emergence of platform options as a business strategy.

The platform option may be the most interesting decision right now. To paraphrase Hamlet: to be (a platform for your industry) or not to be. That indeed is a question many legacy businesses will need to confront. When you look at platform business models, which is right for your organization? Will you create a platform for your industry or piggyback on another company’s platform? To decide, you first need to understand the dynamics of building and operating a platform.

The IBV survey team explored that question and found the respondents pretty evenly divided, with 54% reporting they won’t build a platform while the rest expect to build and operate one. This is not a question you can ruminate over endlessly like Hamlet. The advantage goes to those who get there first in their industry segment; as IBV noted, only a few will survive in any one segment. It may come down to how finely you can segment the market for your platform and still maintain a distinct advantage. As CIO reported, the IBV survey found 57 percent of disruptive organizations are adopting a platform business model.

Also rising in importance is the people-talent-skills issue. C-level execs have always given lip service to the importance of people, as in the cliché that people are our greatest asset. Based on the latest survey, it turns out skills are necessary but not sufficient; they must be accompanied by the right culture. As the survey found, companies that have the right culture in place are more successful. In that case, the skills are just an added adrenalin shot. Still, the execs put people skills in the top three. The IBV analysts conclude that people and talent are coming back. Guess we’re not all going to be replaced soon by AI or cognitive computing, at least not yet.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.

IBM Brings NVMe to Revamped Storage

February 23, 2018

The past year has been good for IBM storage, and not only because the company rang up four consecutive quarters of positive storage revenue. Over that period, and starting somewhat earlier, the company embarked on a thorough revamping of its storage lineup, adding all the hot goodies from flash to software-defined storage (Spectrum) to NVMe (Non-Volatile Memory Express) in 2018. NVMe represents a culmination of sorts, allowing the revamped storage products to actually deliver on the low latency and parallelism promises of the latest technology.

Hyper-Scale Manager for IBM FlashSystem (Jared Lazarus/Feature Photo Service for IBM)

The revamp follows changes in the way organizations deploy technology. They are now wrestling with exponential data growth and the need to quickly modernize their traditional IT infrastructures to take advantage of multi-cloud, analytics, and cognitive/AI workloads going forward.

This is not just a revamp of existing products. IBM has added innovations and enhancements across the storage portfolio to expand the range of data types supported, deliver new function, and enable new technology deployment.

This week, IBM Storage—the #2 storage software vendor by revenue market share, according to IDC—announced a wide-ranging set of innovations to its software-defined storage (SDS), data protection, and storage systems portfolio. Continuing IBM’s investments in enhancing its SDS (Spectrum), data protection, and storage systems capabilities, these announcements demonstrate its commitment to IBM storage solutions as the foundation for multi-cloud and cognitive/AI applications and workloads.

With these enhancements, IBM is aiming to transform on-premises infrastructure to meet these new business imperatives. Recent innovations and enhancements across the IBM Storage portfolio expand the range of data types supported, deliver new function, and enable new technology deployment. For example, IBM Spectrum NAS delivers enterprise capabilities and SDS simplicity with cost benefits for common file workloads, including support for Microsoft environments. And IBM Spectrum Protect still addresses data security concerns but has just added General Data Protection Regulation (GDPR) support and automated detection and alerting of ransomware.
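
As a purely hypothetical illustration of the ransomware-detection idea (not IBM’s algorithm), one common heuristic flags a backup client whose daily changed-data rate jumps far above its own baseline, since mass encryption by ransomware tends to rewrite large swaths of files. The numbers below are invented.

    # Hypothetical change-rate anomaly check; not a description of Spectrum Protect.
    from statistics import mean, stdev

    history = [4.1, 3.8, 4.3, 4.0, 4.2]   # GB changed per day, invented baseline
    today = 62.0                           # today's changed data, invented spike

    baseline, spread = mean(history), stdev(history)
    if today > baseline + 5 * max(spread, 0.1):   # guard against a near-zero spread
        print("ALERT: change-rate anomaly on this client, possible ransomware activity")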

Along the same lines, IBM Spectrum Storage Suite brings a complete solution for software-defined storage needs while gaining expanded range and value through the inclusion of IBM Spectrum Protect Plus at no additional charge. Similarly, IBM Spectrum Virtualize promises lower data storage costs through new and better performing data reduction technologies for the IBM Storwize family, IBM SVC, and IBM FlashSystem V9000, as well as for over 440 non-IBM vendor storage systems.

Finally, IBM Spectrum Connect simplifies management of complex server environments by providing a consistent experience when provisioning, monitoring, automating, and orchestrating IBM storage in containerized VMware and Microsoft PowerShell environments. Orchestration is critical in increasingly complex container environments.

The newest part of the IBM storage announcements is NVM Express (NVMe). This is an open logical device interface specification for accessing non-volatile storage media attached via a PCIe bus. The non-volatile memory referred to is flash memory, typically in the form of solid-state drives (SSDs). NVMe provides a logical device interface designed from the ground up to capitalize on the low latency and internal parallelism of flash-based storage devices, essentially mirroring the parallelism of modern CPUs, platforms and applications.
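
To see that multi-queue design from the host side, a small sketch like the one below (assuming a Linux box with NVMe drives; sysfs layouts vary by kernel and distribution) lists each NVMe namespace along with its block-layer queue depth.

    # Sketch for a Linux host with NVMe drives; sysfs paths vary by kernel version.
    from pathlib import Path

    for ctrl in sorted(Path("/sys/class/nvme").glob("nvme*")):
        model = (ctrl / "model").read_text().strip()
        for ns in sorted(ctrl.glob(ctrl.name + "n*")):
            nr = ns / "queue" / "nr_requests"
            depth = nr.read_text().strip() if nr.exists() else "n/a"
            print(f"{ns.name}: {model}, block-layer queue depth {depth}")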

By its design, NVMe allows host hardware and software to fully exploit the levels of parallelism possible in modern SSDs. As a result, NVMe reduces I/O overhead and brings various performance improvements relative to previous logical-device interfaces, including multiple long command queues and reduced latency. (The previous interface protocols were developed for far slower hard disk drives (HDDs), where a lengthy delay between a request and the corresponding data receipt was simply expected, since mechanical access is orders of magnitude slower than RAM.)

NVMe devices exist both as standard PCIe expansion cards and as 2.5-inch form-factor devices that provide a four-lane PCIe interface through the U.2 connector (formerly known as SFF-8639). SATA Express storage devices and the M.2 specification for internally mounted computer expansion cards also support NVMe as the logical device interface.

Maybe NVMe sounds like overkill now, but it won’t be the next time you upgrade your IT infrastructure. Don’t plan on buying more HDDs or going back to IPv3. With IoT, cognitive computing, blockchain, and more, your users will have no tolerance for a slow infrastructure.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.

