Posts Tagged ‘hybrid computing’

Red Hat OpenShift Container Platform on z

February 20, 2020

IBM is finally starting to capitalize on last year’s $34 billion acquisition of Red Hat for z shops. If you had a new z running Linux, you would have no problem running Red Hat products, or so the company line went. Well, in mid-February IBM announced that Red Hat’s OpenShift Container Platform is now available on the z and LinuxONE, a z with built-in Linux optimized for the underlying hardware.

OpenShift comes to z and LinuxONE

As the company puts it: The availability of OpenShift for z and LinuxONE is a major milestone for both hybrid multicloud and enterprise computing. OpenShift, a form of middleware for use with DevOps, supports cloud-native applications being built once and deployed anywhere, including to on-premises enterprise servers, especially the z and LinuxONE. This new release results from the collaboration between IBM and Red Hat development teams, and from discussions with early adopter clients.

As part of its hybrid cloud strategy, the company has created a roadmap for bringing its ecosystem of enterprise software to the OpenShift platform. IBM Cloud Paks containerize key IBM and open source software components to help enable faster enterprise application development and delivery. In addition to the availability of OpenShift for z, IBM announced that IBM Cloud Pak for Applications is available for the z and LinuxONE, supporting the modernization of existing apps and the building of new cloud-native apps. In addition, as announced last August, the company intends to deliver additional Cloud Paks for the z and LinuxONE.
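Cloud Paks and OpenShift consume the same declarative manifests wherever they run, which is what makes "build once, deploy anywhere" work. A minimal sketch in Python of the kind of Kubernetes Deployment manifest involved (the app and registry names here are hypothetical, not an actual Cloud Pak artifact):

```python
# Build a standard Kubernetes/OpenShift Deployment manifest as a plain dict
# (the same structure `oc apply` consumes as YAML). Names are illustrative.

def make_deployment(name, image, replicas=3):
    """Return an apps/v1 Deployment manifest for a containerized app."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": {"app": name}},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {
                    "containers": [{"name": name, "image": image}],
                },
            },
        },
    }

# The same manifest deploys unchanged to OpenShift on x86, on z, or on LinuxONE,
# provided the container image is built for (or is a multi-arch image covering) s390x.
manifest = make_deployment("inventory-api", "registry.example.com/inventory-api:1.0")
print(manifest["spec"]["replicas"])  # 3
```

The portability lives in the manifest and the image, not the host: OpenShift schedules the same declaration onto whatever architecture the cluster runs on.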

Red Hat is a leader in hybrid cloud and enterprise Kubernetes, with more than 1,000 customers already using Red Hat OpenShift Container Platform. With the availability of OpenShift for the z and LinuxONE, the agile cloud-native world of containers and Kubernetes, which has become the de facto open standard for containers and orchestration, is now reinforced by the security features, scalability, and reliability of IBM’s enterprise servers.

“Containers are the next generation of software-defined compute that enterprises will leverage to accelerate their digital transformation initiatives,” says Gary Chen, Research Director at IDC, in a published report. “IDC estimates that 71% of organizations are in the process of implementing containers and orchestration or are already using them regularly. IDC forecasts that the worldwide container infrastructure software opportunity is growing at a 63.9% 5-year CAGR and is predicted to reach over $1.5B by 2022.”
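For context on those IDC numbers, a 63.9% five-year CAGR means the market multiplies by 1.639^5, roughly 11.8x, over the period. A quick sketch of the arithmetic (the 2017 base is back-calculated from the forecast, not a figure from the report):

```python
# Compound growth: a value growing at rate r for n years multiplies by (1 + r) ** n.

def future_value(base, cagr, years):
    return base * (1 + cagr) ** years

# Working backward from IDC's figures (> $1.5B by 2022 at a 63.9% 5-year CAGR),
# the implied 2017 base is about $1.5B / 1.639**5.
implied_2017_base = 1.5e9 / (1 + 0.639) ** 5
print(round(implied_2017_base / 1e6))  # 127  (≈ $127M implied 2017 base)

# Growing that base forward at 63.9% for 5 years recovers ~$1.5B.
print(round(future_value(implied_2017_base, 0.639, 5) / 1e9, 2))  # 1.5
```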

By combining the agility and portability of Red Hat OpenShift and IBM Cloud Paks with the security features, scalability, and reliability of z and LinuxONE, enterprises will have the tools to build new cloud-native applications while also modernizing existing applications. Deploying Red Hat OpenShift and IBM Cloud Paks on z and LinuxONE reinforces key strengths and offers additional benefits:

  • Vertical scalability enables existing large monolithic applications to be containerized, and horizontal scalability enables support for large numbers of containers in a single z or LinuxONE enterprise server
  • Protection of data from external attacks and insider threats, with pervasive encryption and tamper-responsive protection of encryption keys
  • Availability of 99.999% to meet service levels and customer expectations
  • Integration and co-location of cloud-native applications on the same system as the data, ensuring the fastest response times
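To make that 99.999% bullet concrete, availability percentages convert directly into allowed downtime per year (using a 365.25-day year):

```python
# "Five nines" in practice: allowed downtime per year at a given availability level.

MINUTES_PER_YEAR = 365.25 * 24 * 60  # 525,960

def downtime_minutes_per_year(availability):
    return (1.0 - availability) * MINUTES_PER_YEAR

for label, avail in [("three nines", 0.999), ("four nines", 0.9999), ("five nines", 0.99999)]:
    print(f"{label}: {downtime_minutes_per_year(avail):.2f} min/year")
# five nines works out to about 5.26 minutes of downtime per year
```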

IBM z/OS Cloud Broker helps enable OpenShift applications to interact with data and applications on IBM Z. IBM z/OS Cloud Broker is the first software product to provide access to z/OS services to the broader development community.

To more easily manage the resulting infrastructure, organizations can license the IBM Cloud Infrastructure Center, an Infrastructure-as-a-Service offering that provides simplified infrastructure management in support of z/VM-based Linux virtual machines on the z and LinuxONE.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/

IBM Introduces New Flash Storage Family

February 14, 2020

IBM describes this mainly as a simplification move. The company is eliminating two current storage lines, Storwize and FlashSystem A9000, and replacing them with a series of flash storage systems that will scale from entry level to enterprise.

Well, uh, not quite enterprise as DancingDinosaur readers might think of it. No changes are planned for the DS8000 storage systems, which are focused on the mainframe market. “All our existing product lines, not including our mainframe storage, will be replaced by the new FlashSystem family,” said Eric Herzog, IBM’s chief marketing officer and vice president of worldwide storage channel, in a published report earlier this week.

The move will retire two incompatible storage lines from the IBM product lineup and replace them with a line that provides compatible storage software and services from the entry level to the highest enterprise level, mainframe excluded, Herzog explained. The new FlashSystem family promises more functions, more features, and lower prices, he continued.

Central to the new FlashSystem family is NVMe, which comes in multiple flavors. NVM Express (NVMe), or Non-Volatile Memory Host Controller Interface Specification (NVMHCIS), is an open logical device interface specification for accessing non-volatile storage media attached via a PCI Express (PCIe) bus.

At the top of the new family line is the NVMe and multicloud ultra-high throughput storage system, a system tested and validated by IBM. IBM promises unmatched NVMe performance, SCM (Storage Class Memory), and IBM FlashCore technology. In addition, it brings the features of IBM Spectrum Virtualize to support the most demanding workloads.


IBM multi-cloud flash storage family system



Next up are the IBM FlashSystem 9200 and IBM FlashSystem 9200R, IBM-tested and validated rack solutions designed for the most demanding environments. They combine the extreme performance of end-to-end NVMe, IBM FlashCore technology, and the ultra-low latency of Storage Class Memory (SCM) with IBM Spectrum Virtualize and AI predictive storage management with proactive support by Storage Insights. FlashSystem 9200R is delivered assembled, with installation and configuration completed by IBM to ensure a working multicloud solution.



In the middle of the family are the IBM FlashSystem 7200 and FlashSystem 7200H. As IBM puts it, these offer end-to-end NVMe, the innovation of IBM FlashCore technology, the ultra-low latency of Storage Class Memory (SCM), the flexibility of IBM Spectrum Virtualize, and the AI predictive storage management and proactive support of Storage Insights, all in a powerful 2U all-flash or hybrid flash array. The IBM FlashSystem 7200 brings mid-range storage while allowing the organization to add the multicloud technology that best supports the business.

At the bottom of the line is the NVMe entry enterprise all-flash storage solution, which brings NVMe end-to-end capabilities and flash performance to the affordable FlashSystem 5100. As IBM describes it, the FlashSystem 5010 and IBM FlashSystem 5030 (formerly known as IBM Storwize V5010E and Storwize V5030E; they are still there, just renamed) are all-flash or hybrid flash solutions intended to provide enterprise-grade functionality without compromising affordability or performance. Built with the flexibility of IBM Spectrum Virtualize and the AI-powered predictive storage management and proactive support of Storage Insights, the IBM FlashSystem 5000 helps make modern technologies such as artificial intelligence accessible to enterprises of all sizes. In short, these promise entry-level flash storage solutions designed to provide enterprise-grade functionality without compromising affordability or performance.

IBM likes the words affordable and affordability in discussing this new storage family. But, as is typical with IBM, nowhere will you see a price or a reference to cost/TB or cost/IOPS or cost of anything, although these are crucial metrics for evaluating any flash storage system. DancingDinosaur expects this after 20 years of writing about the z. Also, as I wrote at the outset, the z is not even included in this new flash storage family, so we don’t even have to chuckle if they describe z storage as affordable.


Meet IBM’s New CEO

February 6, 2020

Have to admire Ginni Rometty. She survived 19 consecutive losing quarters (one quarter shy of 5 years), which DancingDinosaur and the rest of the world covered with monotonous regularity, and she was not bounced out until this January. Memo to readers: keep that in mind if you start feeling performance heat from top management. Can’t imagine another company that would tolerate it, but what do I know.

Arvind Krishna becomes Chief Executive Officer and a member of the IBM Board of Directors effective April 6, 2020. Krishna is currently IBM Senior Vice President for Cloud and Cognitive Software, and was a principal architect of the company’s acquisition of Red Hat. The cloud/Red Hat strategy has only just started to show signs of payback.

As IBM writes: Under Rometty’s leadership, IBM acquired 65 companies, built out key capabilities in hybrid cloud, security, industry, data, and AI both organically and inorganically, and successfully completed one of the largest technology acquisitions in history (Red Hat). She reinvented more than 50% of IBM’s portfolio, built a $21 billion hybrid cloud business, and established IBM’s leadership in AI, quantum computing, and blockchain, while divesting nearly $9 billion in annual revenue to focus the portfolio on IBM’s high-value, integrated offerings. Part of that was the approximately $34 billion Red Hat acquisition, IBM’s, and possibly the IT industry’s, biggest to date. Rometty isn’t going away all that soon; she continues as executive chairman of the Board.

It is way too early to get IBM 1Q2020 results, which will cover the last quarter of Rometty’s reign. The fourth quarter of 2019, at least, was positive, especially after all those quarters of revenue loss. The company reported $21.8 billion in revenue, up 0.1 percent. Red Hat revenue was up 24 percent. Cloud and cognitive software was up 9 percent, while systems, which includes the z, was up 16 percent.

Total cloud revenue, the new CEO Arvind Krishna’s baby, was up 21 percent. Even with z revenue up more than cloud and cognitive software, it is unlikely IBM will easily find a buyer for the z soon. If IBM dumps it, they will probably have to pay somebody to take it, despite the z’s faithful, profitable blue chip customer base.

Although the losing streak has come to an end, Krishna still faces some serious challenges. For example, although DancingDinosaur has been enthusiastically cheerleading quantum computing as the future, there is no proven business model there yet. Except for some limited adoption by a few early adopters, there is no widespread groundswell of demand for quantum computing, and the technology has not yet proven itself useful. Also, there is no ready pool of skilled quantum talent. If you wanted to try quantum computing, would you even know what to try or where to find skilled people?

Even in the area of cloud computing, where IBM finally is starting to show some progress, the company has yet to penetrate the top tier of players. Those players, Amazon, Google, and Microsoft Azure, are not likely to concede market share.

So here is DancingDinosaur’s advice to Krishna: be prepared to scrap for every point of cloud share and be prepared to spin a compelling case around quantum computing. Finally, don’t give up the z until the accountants and lawyers force you to, which they will undoubtedly try. To the contrary, slash z prices and make it an irresistible bargain.


z15 LinuxONE III for Hybrid Cloud

October 8, 2019

It didn’t take long following the introduction of the z15 for a LinuxONE to arrive. Meet the LinuxONE III, a z15 machine with dedicated built-in Linux. And it comes with the primary goodies the z15 offers: automatic pervasive encryption of everything, along with a closely related privacy capability, Data Privacy Passports.

3-frame LinuxONE III

Z-quality security, privacy, and availability, it turns out, have become central to the mission of the LinuxONE III. The reason is simple: cloud. According to IBM, only 20% of workloads have been moved to the cloud. Why? Companies need assurance that their data privacy and security will not be breached. To many IT pros and business executives, the cloud remains the wild, wild west where bad guys roam looking to steal whatever they can.

IBM is touting the LinuxONE III, which is built on its newly introduced z15, for hybrid clouds. The company has been preaching the gospel of clouds and, particularly, hybrid clouds for several years, which was its primary reason for acquiring Red Hat. Red Hat Linux is built into the LinuxONE III, probably its first formal appearance since IBM closed its acquisition of Red Hat this spring. 

With Red Hat and the z15, IBM is aiming to cash in on what it sees as a big opportunity in hybrid clouds. While the cloud brings the promise of flexibility, agility, and openness, the privacy and security worries noted above have kept the other 80% of workloads out of it. The LinuxONE III also promises cloud-native development.

Integrating the new IBM LinuxONE III as a key element in an organization’s hybrid cloud strategy adds another level of security, stability, and availability to its cloud infrastructure. It gives the organization both agile deployment and unbeatable levels of uptime, reliability, and security. While the cloud already offers appealing flexibility and costs, the last three capabilities, uptime, reliability, and security, are not usually associated with cloud computing. By security, IBM means 100% automatic data encryption, from the moment the data arrives or is created. And it remains encrypted for the rest of its life, at rest or in transit.

Are those capabilities important? You bet. A Harris study commissioned by IBM found that 64 percent of all consumers have opted not to work with a business out of concerns over whether that business could keep their data secure. However, that same study found 76 percent of respondents would be more willing to share personal information if there were a way to fully take back and retrieve that data at any time. Thus the importance of the z15’s pervasive encryption and the new data passports.

IBM has brought out versions of its latest z running dedicated Linux before. Initially it was a way to expand the z market through a reduced-cost z. DancingDinosaur doesn’t know the cost of the LinuxONE III. In the past these machines have been discounted, but given the $34 billion IBM spent to acquire Red Hat, the new machines might not be such a bargain this time.


Syncsort Drives IBMi Security with AI

May 2, 2019

The technology security landscape looks increasingly dangerous. The problem revolves around the possible impact of AI, which is not yet fully clear. The hope, of course, is that AI will make security more efficient and effective. However, the security bad actors can also jump on AI to advance their own schemes. Like a cyber version of the nuclear arms race, this has been an ongoing battle for decades. The industry has to cooperate and, specifically, share information and hope the good guys can stay a step ahead.

In the meantime, vendors like IBM and, most recently, Syncsort have been stepping up to the latest challenges. Syncsort, for example, earlier this month launched its Assure Security to address the increasing sophistication of cyber attacks and expanding data privacy regulations. In surprising ways, it turns out, data privacy and AI are closely related in the AI security battle.

Syncsort, a leader in Big Iron-to-Big Data software, announced Assure Security, which combines access control, data privacy, compliance monitoring, and risk assessment into a single product. Together, these capabilities help security officers, IBMi administrators, and Db2 administrators address critical security challenges and comply with new regulations meant to safeguard and protect the privacy of data.

And it clearly is coming at the right time. According to Privacy Rights Clearinghouse, a non-profit corporation with a mission to advocate for data privacy, there were 828 reported security incidents in 2018, resulting in the exposure of over 1.37 billion records of sensitive data. As regulations to help protect consumer and business data become stricter and more numerous, organizations must build more robust data governance and security programs to keep the data from being exploited by bad security actors for nefarious purposes. The industry already has scrambled to comply with GDPR and the New York Department of Financial Services cybersecurity regulations, and it now must prepare for the GDPR-like California Consumer Privacy Act, which takes effect January 1, 2020.

In its own survey, Syncsort found security is the number one priority among IT pros with IBMi systems. “Given the increasing sophistication of cyber attacks, it’s not surprising 41 percent of respondents reported their company experienced a security breach and 20 percent more were unsure if they even had been breached,” said David Hodgson, CPO, Syncsort. The company’s new Assure Security product leverages the wealth of IBMi security technology and expertise to help organizations address their highest-priority challenges. This includes protecting against vulnerabilities introduced by new, open-source methods of connecting to IBMi systems, adopting new cloud services, and complying with expanded government regulations.

Of course, IBM hasn’t been sleeping through this. The company continues to push various permutations of Watson to tackle the AI security challenge. For example, IBM leverages AI to gather insights and use reasoning to identify relationships between threats, such as malicious files, suspicious IP addresses, or even insiders. This analysis takes seconds or minutes, allowing security analysts to respond to threats up to 60 times faster.

It also relies on AI to eliminate time-consuming research tasks and provides curated analysis of risks, which reduces the amount of time security analysts require to make the critical decisions and launch an orchestrated response to counter each threat. The result, which IBM refers to as cognitive security, combines the strengths of artificial intelligence and human intelligence.

Cognitive AI, in effect, learns with each interaction to proactively detect and analyze threats and provides actionable insights to help security analysts make informed decisions. Such cognitive security, let’s hope, really does combine the strengths of artificial intelligence with human judgment.

Syncsort’s Assure Security specifically brings together best-in-class IBMi security capabilities acquired by Syncsort into an all-in-one solution, with the flexibility for customers to license individual modules. The resulting product includes:

  • Assure Compliance Monitoring quickly identifies security and compliance issues with real-time alerts and reports on IBMi system activity and database changes.
  • Assure Access Control provides control of access to IBMi systems and their data through a varied bundle of capabilities.
  • Assure Data Privacy protects IBMi data at-rest and in-motion from unauthorized access and theft through a combination of NIST-certified encryption, tokenization, masking, and secure file transfer capabilities.
  • Assure Security Risk Assessment examines over a dozen categories of security values, open ports, power users, and more to address vulnerabilities.

It probably won’t surprise anyone, but the AI security situation is not going to be cleared up soon. Expect to see a steady stream of headlines around security hits and misses over the next few years. Just hope it will get easier to separate the good guys from the bad actors and that the lessons will be clear.


Meet SUSE Linux Enterprise Server 12

February 25, 2019

A surprising amount of competition has emerged lately for Linux on the mainframe, but SUSE continues to be at the top of the heap. The newest release, SUSE Linux Enterprise 12, which arrived last fall, should secure its position for some time to come.

SUSE touts SLE 12 as the latest version of its reliable, scalable, and secure platform for efficiently deploying and managing highly available enterprise-class IT services in physical, virtual, or cloud environments. New products based on SLE 12 feature enhancements that should allow for better system uptime, improved operational efficiency, and accelerated innovation. As the foundation for all SUSE data center operating systems and extensions, according to the company, SUSE Linux Enterprise meets the performance requirements of data centers with mixed IT environments while reducing the risk of technological obsolescence and vendor lock-in.

With SLE 12 the company also introduces an updated customer portal, SUSE Customer Center, to make it easier for customers to manage their subscriptions, access patches and updates, and communicate with SUSE customer support. It promises a new way to manage a SUSE account and subscriptions via one interface, anytime, anywhere.

Al Gillen, program vice president for servers and system software at IDC, said, “The industry is seeing growing movement of mission-critical workloads to Linux, with that trend expected to continue well into the future.” For Gillen, the modular design of SLE 12, as well as other mission-critical features like full system rollback and live kernel patching, helps address some of the key reservations customers express, and should help accelerate the adoption of Linux on z.

It’s about time. Linux has been available on the z for 20 years, but only with the introduction of IBM LinuxONE a couple of years ago did IBM get serious about Linux on z.

And it didn’t stop there. IBM ported the Go programming language to LinuxONE. Go was developed by Google and is designed for building simple, reliable, and efficient software, making it easier for developers to combine the software tools they know with the speed, security, and scale offered by LinuxONE. As expected, IBM has contributed code to the Go community.

Then IBM brought Apple’s Swift programming to the party, first via the IBM Watson iOS SDK, which gives developers a Swift API to simplify integration with many of the Watson Developer Cloud services, all of which are available today and can now be integrated with just a few lines of code. As soon as Apple introduced Swift as the new language for OS X and iOS application development, IBM began partnering with Apple to bring the power of Swift open source programming to the z. This was closely tied to Canonical’s Ubuntu port to the z, which has already been released.

With SUSE Linux Enterprise Server for x86_64, IBM Power Systems, and IBM System z, SLE 12 has boosted its versatility, able to deliver business-critical IT services in a variety of physical, virtual, and cloud environments. New features like full system rollback, live kernel patching, and software modules increase data center uptime, improve operational efficiency, and accelerate the adoption of open source innovation. SLE 12 further builds on SUSE’s leadership with Linux Containers technology and adds the Docker framework, which is now included as an integral part of the operating system.


Meet IBM Q System One

February 1, 2019

A couple of weeks ago, IBM slipped in a new quantum machine at CES. The new machine, dubbed IBM Q System One, is designed for both scientific and commercial computing. IBM described it as the first integrated universal approximate quantum computing system.

Courtesy of IBM

Approximate refers to the short coherence time of the qubits, explains Michael Houston, manager, Analyst Relations. Or, to put it another way: how long the qubits remain stable enough to run reliable and repeatable calculations. IBM Q systems report an industry-best average of 100 microseconds. That’s not enough time for a round of golf, but probably long enough to start running some serious quantum analytics.
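To put 100 microseconds in perspective, here is a rough gate-count budget implied by that coherence time. The gate durations below are illustrative assumptions at typical published superconducting-qubit scales, not IBM specs:

```python
# Rough number of quantum gate operations that fit inside the coherence window.
# The 100-microsecond coherence figure is IBM's reported average; both gate
# durations are assumed, order-of-magnitude values for superconducting qubits.

COHERENCE_S = 100e-6          # 100 microseconds (IBM's reported average)
SINGLE_QUBIT_GATE_S = 50e-9   # assumed ~50 ns per single-qubit gate
TWO_QUBIT_GATE_S = 300e-9     # assumed ~300 ns per two-qubit gate

print(round(COHERENCE_S / SINGLE_QUBIT_GATE_S))  # 2000
print(round(COHERENCE_S / TWO_QUBIT_GATE_S))     # 333
```

Under those assumptions, a circuit gets on the order of a few hundred to a couple thousand gates before decoherence washes out the result, which is why coherence time is the headline metric.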

As described by IBM, the new machine family, the Q systems, is designed to one day tackle problems that are currently seen as too complex or too exponential in scale for classical (conventional) systems to handle. Such Q systems may use quantum computing to find new ways to model financial data, to isolate key global risk factors to make better investments, to find optimal paths across global systems for ultra-efficient logistics, or to optimize fleet operations for improved deliveries.

The design of IBM Q System One includes a 9x9x9-foot cube case constructed of half-inch-thick borosilicate glass to form a sealed, airtight enclosure that opens effortlessly using roto-translation, a motor-driven rotation around two displaced axes engineered to simplify the system’s maintenance and upgrade process while minimizing downtime. Overall, the entire system was intended to enable the most stable qubits, which allows the machine to deliver reliable commercial use.

A series of independent aluminum and steel frames not only unify, but also decouple the system’s cryostat, control electronics, and exterior casing, helping to avoid potential vibration interference that leads to phase jitter and qubit decoherence.

The object of all of this, Houston explains, is to deliver a sophisticated, modular, and compact design optimized for stability, reliability, and continuous commercial use. For the first time ever, IBM Q System One enables universal approximate superconducting quantum computers to operate beyond the confines of the research lab.

In effect, think of the Q System One as bringing the quantum machine to the data center, starting with a design that squeezes all the quantum computing electronics, controllers, and other components into a 9x9x9-foot cube made of half-inch-thick glass to create a sealed, airtight enclosure that allows the system to cool the qubits to low Kelvin temperatures and keep them cold enough and undisturbed by interference long enough to perform meaningful work. All the Q System One’s components and control mechanisms are intended to keep the qubits at 10 mK (about -459.65°F) to operate.
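For reference, 10 millikelvin sits just a hair above absolute zero; the standard Kelvin-to-Fahrenheit conversion shows how cold that is:

```python
# Kelvin to Fahrenheit: F = (K - 273.15) * 9/5 + 32

def kelvin_to_fahrenheit(k):
    return (k - 273.15) * 9 / 5 + 32

print(round(kelvin_to_fahrenheit(0.010), 2))  # -459.65  (10 millikelvin)
print(round(kelvin_to_fahrenheit(0.0), 2))    # -459.67  (absolute zero)
```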

This machine, notes IBM, should look familiar to conventional computer data center managers. Maybe, if you think a 9x9x9-foot, half-inch-thick borosilicate glass cube is a regular feature of any data center you have worked in.

In effect, IBM is applying the same approach to quantum computing that it has followed for decades with its conventional computers–providing everything you need to get it operating in your data center. Just plan to bring in some trained quantum technicians, specialists, and, don’t forget, a handful of people who can program such a machine.

Other than that, the IBM Q System One consists of a number of custom components that work together (remember, they said integrated). Specifically, the new machine includes:

  • Quantum hardware designed to be stable and auto-calibrated to give repeatable and predictable high-quality qubits;
  • Cryogenic engineering that delivers a continuous cold and isolated quantum environment;
  • High precision electronics in compact form factors to tightly control large numbers of qubits;
  • Quantum firmware to manage the system health and enable system upgrades without downtime for users.

Are you up for it? Maybe you’d prefer to try before you buy. The IBM Q Quantum Computation Center, opening later this year in Poughkeepsie, extends the IBM Q Network to commercial quantum computing programs.


Factsheets for AI

December 21, 2018

Depending on when you check in on the IBM website, the primary technology trend for 2019 is quantum computing or hybrid clouds or blockchain or artificial intelligence or any of a handful of others. Maybe IBM does have enough talented people, resources, and time to do it all well now. But somehow DancingDinosaur is dubious.

There is an old tech industry saying: you can have it right, fast, or cheap; pick two. When it comes to AI, depending on your choices and patience, you could win an attractive share of the projected $83 billion AI industry by 2021 or a share of the estimated $200 billion AI market by 2025, according to VentureBeat.

IBM sees the technology industry at a pivotal moment in the path to mass adoption of artificial intelligence (AI). Google subsidiary DeepMind is leveraging AI to determine how to refer optometry patients. Haven Life is using AI to extend life insurance policies to people who wouldn’t traditionally be eligible, such as people with chronic illnesses and non-U.S. citizens. And Google self-driving car spinoff Waymo is tapping it to provide mobility to elderly and disabled people.

But despite the good AI is clearly capable of doing, doubts abound over its safety, transparency, and bias. IBM believes part of the problem is a lack of standard practices.

As a result, there’s no consistent, agreed-upon way AI services should be created, tested, trained, deployed, and evaluated, observes Aleksandra Mojsilovic, head of AI foundations at IBM Research and co-director of the AI Science for Social Good program. To clear up the ambiguity surrounding AI, Mojsilovic and colleagues propose voluntary factsheets, more formally called a Supplier’s Declaration of Conformity (DoC). The goal: to increase the transparency of particular AI services and engender trust in them.

Such factsheets alone could give AI offerings a competitive advantage in the marketplace. They could also provide explainability around susceptibility to adversarial attacks, an issue that, along with fairness and robustness, must be addressed for AI services to be trusted, Mojsilovic continued. Factsheets take away the black-box perception of AI and render an AI system understandable to both researchers and developers.

Three core pillars form the basis for trust in AI systems: fairness, robustness, and explainability. Later in her piece, Mojsilovic introduces a fourth pillar, lineage, which concerns an AI system’s history. Factsheets would answer questions ranging from system operation and training data to underlying algorithms, test setups and results, performance benchmarks, fairness and robustness checks, intended uses, maintenance, and retraining. More granular topics might include the governance strategies used to track the AI service’s data workflow, the methodologies used in testing, and bias mitigations performed on the dataset. But in Mojsilovic’s view, documents detailing the ins and outs of systems would go a long way to maintaining the public’s faith in AI.
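To make the idea concrete, here is a toy sketch of what a factsheet covering those questions might look like as a structured document. This is illustrative only: IBM has not published a fixed schema, and every field name below is hypothetical.

```python
# A hypothetical AI service factsheet as a plain Python dict.
# Field names are illustrative; the Supplier's Declaration of Conformity
# does not prescribe this exact schema.
factsheet = {
    "service": "loan-approval-scoring",
    "intended_use": "Rank consumer loan applications for manual review",
    "training_data": {
        "source": "internal application records, 2015-2017",
        "known_gaps": ["sparse coverage of applicants under 21"],
    },
    "algorithms": ["gradient-boosted trees"],
    "test_setup": "20% holdout set, stratified by region",
    "performance": {"auc": 0.87},
    "fairness_checks": ["demographic parity difference by age band"],
    "robustness_checks": ["perturbation test on income field"],
    "lineage": ["v1.0 trained 2018-06-01", "retrained 2018-11-15"],
    "maintenance": "quarterly retraining with drift monitoring",
}

def missing_sections(fs, required=("intended_use", "training_data",
                                   "fairness_checks", "lineage")):
    """Return the required factsheet sections that are absent."""
    return [key for key in required if key not in fs]

print(missing_sections(factsheet))  # prints [] -- all sections present
```

Even a simple completeness check like `missing_sections` hints at how a factsheet standard could be enforced mechanically before an AI service ships.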

For natural language processing algorithms specifically, the researchers propose data statements that would show how an algorithm might be generalized, how it might be deployed, and what biases it might contain.

Natural language processing systems aren’t as fraught with controversy as, say, facial recognition, but they’ve come under fire for their susceptibility to bias. IBM, Microsoft, Accenture, Facebook, and others are actively working on automated tools that detect and minimize bias, and companies like Speechmatics and Nuance have developed solutions specifically aimed at minimizing the so-called accent gap, the tendency of voice recognition models to skew toward speakers from certain regions.
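One of the simplest measures such bias-detection tools report (this is a generic metric, not any particular vendor’s implementation) is the demographic parity gap: the difference in positive-outcome rates between two groups. A minimal sketch:

```python
# Demographic parity gap: the difference in positive-prediction rates
# between two groups. A gap near 0 suggests similar treatment on this
# one coarse measure; it is not a complete fairness audit by itself.
def selection_rate(predictions):
    """Fraction of positive (1) predictions in a list of 0/1 outcomes."""
    return sum(predictions) / len(predictions)

def parity_gap(group_a_preds, group_b_preds):
    """Absolute difference in selection rates between two groups."""
    return abs(selection_rate(group_a_preds) - selection_rate(group_b_preds))

# Toy data: 1 = approved, 0 = denied
group_a = [1, 1, 0, 1]  # 75% approved
group_b = [1, 0, 0, 1]  # 50% approved
print(parity_gap(group_a, group_b))  # prints 0.25
```

Real toolkits compute dozens of such metrics and pair them with mitigation algorithms, but the underlying idea is this simple comparison of outcomes across groups.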

Fairness, safety, reliability, explainability, robustness, accountability: everyone agrees these are critical. Yet to achieve trust in AI, progress on these issues alone will not be enough; it must be accompanied by the ability to measure and communicate the performance levels of a system on each of these dimensions, she wrote. Understanding and evaluating AI systems is an issue of utmost importance for the AI community, one IBM believes industry, academia, and AI practitioners should be working on together.


IBM’s Multicloud Manager for 2nd Gen Hybrid Clouds

November 15, 2018

A sign that IBM is serious about hybrid cloud is its mid-October announcement of its new Multicloud Manager, which promises an operations console for companies as they increasingly incorporate public and private cloud capabilities with existing on-premises business systems. Meanwhile, research from Ovum suggests that 80 percent of mission-critical workloads and sensitive data are still running on business systems located on-premises.

$1 Trillion or more hybrid cloud market by 2020

Still, the potential of the hybrid cloud market is huge: $1 trillion or more within just a few years, IBM projects. If IBM found itself crowded out by the big hyperscalers (AWS, Google, Microsoft) in the initial rush to the cloud, it is hoping to leapfrog into the top ranks with the next generation of cloud, hybrid clouds.

And this is exactly what Red Hat and IBM hope to gain together. Both believe they will be well positioned to accelerate hybrid multicloud adoption by tapping each company’s leadership in Linux, containers, Kubernetes, multicloud management, and automation, and by leveraging IBM’s core of large enterprise customers, bringing them into the hybrid cloud.

The result should be a mixture of on-premises, off-premises, and hybrid clouds. It also promises to be based on open standards, flexible modern security, and consistent hybrid management across environments.

The company’s new Multicloud Manager runs on its IBM Cloud Private platform, which is based on the Kubernetes container-orchestration technology, an open source approach for ‘wrapping’ apps in containers that makes them easier and cheaper to manage across different cloud environments, from on-premises systems to the public cloud. With Multicloud Manager, IBM extends those capabilities to interconnect various clouds, even from different providers, creating unified systems designed for increased consistency, automation, and predictability. At the heart of the new solution is a first-of-a-kind dashboard interface for managing thousands of Kubernetes applications and huge volumes of data regardless of where in the organization they are located.
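The unified-dashboard idea can be sketched in a few lines: merge per-cluster application inventories into one view keyed by application. This is a toy illustration of the concept only; the cluster and app names are invented, and it bears no relation to Multicloud Manager’s actual API.

```python
# Toy sketch of the "single pane of glass" idea: merge per-cluster
# application inventories into one unified view, keyed by app name.
# Cluster names, apps, and versions below are all hypothetical.
clusters = {
    "on-prem-z": {"bookings": "v2.1", "finance": "v1.4"},
    "public-cloud-a": {"ai-services": "v3.0"},
    "public-cloud-b": {"bookings": "v2.1"},
}

def unified_view(clusters):
    """Map each app to the clusters (and versions) where it runs."""
    view = {}
    for cluster, apps in clusters.items():
        for app, version in apps.items():
            view.setdefault(app, {})[cluster] = version
    return view

for app, where in sorted(unified_view(clusters).items()):
    print(app, "->", where)
# Spots at a glance that "bookings" runs in two clouds while
# "finance" stays on-premises.
```

The real product layers policy, compliance checks, and automation on top, but the core value is exactly this: one consistent view over workloads scattered across clouds.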

Adds Arvind Krishna, Senior Vice President, IBM Hybrid Cloud: with its “open source approach to managing data and apps across multiple clouds,” an enterprise can move beyond the productivity economics of renting computing power to fully leveraging the cloud to invent new business processes and enter new markets.

This new solution should become a driver for modernizing businesses. As IBM explains: if a car rental company uses one cloud for its AI services, another for its bookings system, and continues to run its financial processes on on-premises computers at offices around the world, IBM Multicloud Manager can span the company’s multiple computing infrastructures, enabling customers to book a car more easily and quickly using the company’s mobile app.

Notes IDC’s Stephen Elliot, Program Vice President:  “The old idea that everything would move to the public cloud never happened.” Instead, you need multicloud capabilities that reduce the risks and deliver more automation throughout these cloud journeys.

Just last month IBM announced a number of companies are starting down the hybrid cloud path by adopting IBM Cloud Private. These include:

New Zealand Police (NZP) is exploring how IBM Cloud Private and Kubernetes containers can help modernize its existing systems as well as quickly launch new services.

Aflac Insurance is adopting IBM Cloud Private to enhance the efficiency of its operations and speed up the development of new products and services.

Kredi Kayıt Bürosu (KKB) provides the national cloud infrastructure for Turkey’s finance industry. Using IBM Cloud Private, KKB expects to drive innovation across its financial services ecosystem.

Operating in a multicloud environment is becoming the new reality for most organizations, and vendors are rushing to sell multicloud tools: not just IBM’s Multicloud Manager but also HPE OneSphere, RightScale Multi-Cloud Platform, Datadog cloud monitoring, Ormuco Stack, and more.


IBM Takes Red Hat for $34 Billion

November 2, 2018

“The acquisition of Red Hat is a game-changer. It changes everything about the cloud market,” declared Ginni Rometty, IBM Chairman. At a cost of $34 billion, 10x Red Hat’s gross revenue, it had better be a game changer. See IBM’s announcement earlier this week here.

IBM Multicloud Manager Dashboard

IBM has been hot on the tail of the top three cloud hyperscalers—AWS, Google, and Microsoft/Azure. Will this change the game? Your guess is as good as anyone’s.

The hybrid cloud market appears to be IBM’s primary target. As the company put it: “IBM will become the world’s #1 hybrid cloud provider, offering companies the only open cloud solution that will unlock the full value of the cloud for their businesses.” IBM projects the value of the hybrid cloud market at $1 trillion within a few years!

Most companies today are only 20 percent along their cloud journey, renting compute power to cut costs. The next chapter of the cloud, noted Rometty, requires shifting business applications to hybrid cloud, extracting more data, and optimizing every part of the business.

Nobody has a lock on this market yet. Not IBM, not Red Hat, not VMware. But one thing seems clear: whoever wins will involve open source. Red Hat, with $3 billion in open source revenue, has proven that open source can pay. The only question is how quickly it can pay back IBM’s $34 billion bet.

What’s needed is something that promotes data portability and applications across multiple clouds, data security in a multi-cloud environment, and consistent cloud management. This is the Red Hat and IBM party line.  Both believe they will be well positioned to address these issues to accelerate hybrid multi-cloud adoption. To succeed at this, the new entity will have to tap their leadership in Linux, containers, Kubernetes, multi-cloud management, and automation.

IBM first brought Linux to the Z 20 years ago, making IBM an early advocate of open source, collaborating with Red Hat to help grow enterprise-class Linux.  More recently the two companies worked to bring enterprise Kubernetes and hybrid cloud solutions to the enterprise. These innovations have become core technologies within IBM’s $19 billion hybrid cloud business.

The initial announcement made the point Red Hat will join IBM’s Hybrid Cloud team as a distinct unit, as IBM described, preserving the independence and neutrality of Red Hat’s open source development heritage and commitment, current product portfolio, go-to-market strategy, and unique development culture. Also Red Hat will continue to be led by Jim Whitehurst and Red Hat’s current management team.

That camaraderie lasted until the Q&A following the announcement, when a couple of disagreements arose from differing answers on relatively trivial points. Are you surprised? Let’s be clear: nobody spends $34 billion on a $3 billion asset and gives it a completely free hand. You can bet IBM will be calling the shots on everything it feels is important. Would you do less?

Dharmesh Thakker, a contributor to Forbes, focused more on Red Hat’s OpenShift family of development software. These tools make software developers more productive and are helping transform how software is created and implemented across most enterprises today. So “OpenShift is likely the focus of IBM’s interest in Red Hat,” he observes.

A few years ago, he continued, the pendulum seemed to swing from companies deploying more traditional, on-premises datacenter infrastructure to using public cloud vendors, mostly Amazon. In the last few years, most mission-critical apps inside companies have continued to run on a private cloud, but modernized with agile tools and microservices to speed innovation. Private cloud represents 15-20% of datacenter spend, Thakker reports, but the combination of private plus one or more public clouds (hybrid cloud) is here to stay, especially for enterprises. Red Hat’s OpenShift technology enables on-premises, private cloud deployments, giving IBM the ability to play in the hybrid cloud.

IBM isn’t closing this deal until well into 2019; expect to hear more about this in the coming months.


