Posts Tagged ‘Cloud’

Pushing Quantum Onto the Cloud

September 4, 2020

Did you ever imagine the cloud would become your quantum computing platform, a place where you would run complex quantum algorithms requiring significant specialized processing across multi-qubit machines available at a click? But that is exactly what is happening.

IBM started it a few years back by making its small qubit machines available in the cloud; it now offers even larger ones. Today Xanadu is offering 8-qubit and 12-qubit chips, with a 24-qubit chip coming in the next month or so, according to the Toronto-based company.

Xanadu quantum processor

As DancingDinosaur has previously reported, there are even more: Google reports a quantum computer lab with five machines and Honeywell has six quantum machines. D-Wave is another, along with more startups, including IonQ, Quantum Circuits, and Rigetti Computing.

In September, Xanadu introduced its quantum cloud platform, which allows developers to access its gate-based photonic quantum processors with 8-qubit or 12-qubit chips across the cloud.

Photonics-based quantum machines have certain advantages over other platforms, according to the company. Xanadu’s quantum processors operate at room temperature, not low Kelvin temperatures. They can easily integrate into an existing fiber optic-based telecommunication infrastructure, enabling quantum computers to be networked. It also offers scalability and fault tolerance, owing to error-resistant physical qubits and flexibility in designing error correction codes. Xanadu’s type of qubit is based on squeezed states – a special type of light generated by its own chip-integrated silicon photonic devices, it claims.

DancingDinosaur recommends you check out Xanadu’s documentation and details. It does not have sufficient familiarity with photonics, especially as related to quantum computing, to judge any of the above statements. The company also notes it offers a cross-platform Python library for simulating and executing programs on quantum photonic hardware. Its open source tools are available on GitHub.
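For readers who want a feel for programming a photonic machine, here is a minimal sketch using Strawberry Fields, the open source Python library Xanadu posts on GitHub. The gate choices and the local simulator backend are DancingDinosaur’s illustrative assumptions, not anything Xanadu prescribes:

```python
# Minimal photonic circuit with Xanadu's open source Strawberry Fields library
# (pip install strawberryfields). Gates and backend are illustrative choices.
import strawberryfields as sf
from strawberryfields import ops

prog = sf.Program(2)                # two photonic modes ("qumodes")
with prog.context as q:
    ops.Sgate(0.5) | q[0]           # squeeze light in mode 0: the squeezed-state resource
    ops.BSgate() | (q[0], q[1])     # a beamsplitter entangles the two modes
    ops.MeasureFock() | q           # photon-number (Fock) measurement on both modes

eng = sf.Engine("fock", backend_options={"cutoff_dim": 5})
result = eng.run(prog)
print(result.samples)               # photon counts per mode, e.g. [[1 0]]
```

The squeezing gate (Sgate) is the photonic primitive behind the squeezed states described above; on Xanadu’s cloud the same program would be submitted to a remote engine rather than the local simulator.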

Late in August IBM unveiled a new milestone on its quantum computing road map, achieving the company’s highest Quantum Volume to date. By following the link, you see that Quantum Volume is a metric conceived by IBM to measure and compare quantum computing power. DancingDinosaur is not aware of any other quantum computing vendors using it, which doesn’t mean anything, of course. Quantum computing is so new and so different, and with many players joining in with different approaches, it will be years before we see which metrics prove most useful.

To come up with its Quantum Volume rating, IBM combined a series of new software and hardware techniques to improve overall performance, upgrading one of its newest 27-qubit systems to achieve the high Quantum Volume rating. The company has made a total of 28 quantum computers available over the last four years through the IBM Quantum Experience, which companies join to gain access to its quantum machines and tools, including its software development toolset.

Do not confuse Quantum Volume with Quantum Advantage, the point where certain information processing tasks can be performed more efficiently or cost effectively on a quantum computer versus a conventional one. Quantum Advantage will require improved quantum circuits, the building blocks of quantum applications. Quantum Volume, notes IBM, measures the length and complexity of circuits – the higher the Quantum Volume, the higher the potential for exploring solutions to real world problems across industry, government, and research.

To achieve its Quantum Volume milestone, the company focused on a new set of techniques and improvements that used knowledge of the hardware to optimally run the Quantum Volume circuits. These hardware-aware methods are extensible and will improve any quantum circuit run on any IBM Quantum system, resulting in improvements to the experiments and applications that users can explore. These techniques will be available in upcoming releases and improvements to the IBM Cloud software services and the cross-platform open source software development kit (SDK) Qiskit. The IBM Quantum team has shared details on the technical improvements made across the full stack to reach Quantum Volume 64 in a preprint released today on arXiv.
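For the record, Quantum Volume works out to 2^n, where n is the width and depth of the largest random square circuit a machine can run successfully, so Quantum Volume 64 corresponds to circuits six qubits wide and six layers deep. Here is a minimal sketch of one such model circuit using IBM’s open source Qiskit SDK on a local simulator; note that the full benchmark runs many such circuits on real hardware and checks heavy-output probabilities, so this only illustrates the building block:

```python
# One Quantum Volume model circuit in Qiskit (pip install qiskit).
# QV 64 corresponds to width = depth = 6, since QV = 2^n.
from qiskit import Aer, execute
from qiskit.circuit.library import QuantumVolume

qc = QuantumVolume(6, seed=42)      # random square circuit: 6 qubits, depth 6
qc.measure_all()

backend = Aer.get_backend("qasm_simulator")
counts = execute(qc, backend, shots=1024).result().get_counts()
print(counts)
# The actual benchmark passes only if, across many such circuits, the
# heavy-output probability exceeds 2/3 with high statistical confidence.
```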

What is most exciting is that you can access the latest quantum happenings over the cloud without having to cool your data center to near-zero Kelvin temperatures. If you try any of these, DancingDinosaur would love to hear how it goes.

Alan Radding, a veteran information technology analyst, writer, and ghost-writer, is DancingDinosaur. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/.

5G Will Accelerate a New Wave of IoT Applications and Z

August 10, 2020

Even before the advent of 5G, DancingDinosaur, which had ghostwritten a top book on IoT, believed that IoT and smartphones would eventually lead back to the Z, somehow. Maybe the arrival of 5G and smart edge computing will slow the path to the Z. Or maybe not.

Even transactions and data originating and being processed at the edge will need to be secured, backed up, stored, distributed to the cloud, to other servers and systems, to multiple clouds, on premises, and further processed and reprocessed in numerous ways. Along the way, they will find their way back to a Z somehow and somewhere, sooner or later.

an edge architecture

“5G is driving change in the Internet of Things (IoT). It’s a powerful enabling technology for a new generation of use cases that will leverage edge computing to make IoT more effective and efficient,” write Rishi Vaish and Sky Matthews. Rishi Vaish is CTO and VP, IBM AI Applications; Sky Matthews is CTO, Engineering Lifecycle Management at IBM. DancingDinosaur completely agrees, adding only that it won’t just stop there.

Vaish and Matthews continue: “In many ways, the narrative of 5G is the interaction between two inexorable forces: the rise in highly reliable, high-bandwidth communications, and the rapid spread of available computing power throughout the network. The computing power doesn’t just end at the network, though. End-point devices that connect to the network are also getting smarter and more powerful.” 

True enough, the power does not just end there; neither does it start there. There is a long line of powerful systems, the z15 and the generations of Z before it, that handle and enhance everything that happens in whatever ways are desired at that moment or, as is often the case, later.

And yes, there will be numerous ways to create comparable services using similarly smart and flexible edge devices. But experience has shown that it takes time to work out the inevitable kinks that invariably will surface, often at the least expected and most inopportune moment. Think of it as just the latest manifestation of Murphy’s Law moved to the edge and 5G.

The increasingly dynamic and powerful computational environment that’s taking shape as telcos begin to redesign their networks for 5G will accelerate the uptake of IoT applications and services throughout industry, Vaish and Matthews continue. They expect that 5G will enable new use cases in remote monitoring and visual inspection, autonomous operations in large-scale remote environments such as mines, connected vehicles, and more.

This rapidly expanding range of computing options, they add, requires a much more flexible approach to building and deploying applications and AI models that can take advantage of the most cost-efficient compute resources available.

IBM chimes in: There are many ways that this combination of 5G and edge computing can enable new applications and new innovations in various industries. IBM and Verizon, for example, are developing potential 5G and edge solutions like remote-controlled robotics, near real-time video analysis, and other kinds of factory-floor automation.

The advantage comes from smart 5G edge devices doing the analytics immediately, at the spot where decisions may be best made. Are you sure that decisions made at the edge immediately are always the best? DancingDinosaur would like to see a little more data on that.

Either way, don’t be surprised to discover that there will be other decisions that benefit from being made later, with the addition of other data and analysis. There is too much added value and insight packed into the Z data center not to take advantage of it.

Alan Radding, a veteran information technology analyst, writer, and ghost-writer, is DancingDinosaur. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/.


IBM Wazi cloud-native devops for Z

June 12, 2020

In this rapidly evolving world of hybrid and multicloud systems, organizations are required to quickly evolve their processes and tooling to address business needs. Foremost among these are development environments that include IBM Z as part of their hybrid solution, says Sanjay Chandru, Director, IBM Z DevOps.

IBM’s goal, then, is to provide a cloud-native developer experience for the IBM Z that is consistent and familiar to all developers. And that requires cross-platform consistency in tooling for application programmers on Z, who will need to deliver innovation faster and without the backlogs that have been expected in the past.

Wazi, along with OpenShift, is another dividend from IBM’s purchase of Red Hat. Here is where IBM Wazi for Red Hat CodeReady Workspaces comes in: an add-on to IBM Cloud Pak for Applications. It allows developers to use an industry-standard integrated development environment (IDE), such as Microsoft Visual Studio Code (VS Code) or Eclipse, to develop and test IBM z/OS applications in a containerized, virtual z/OS environment on Red Hat OpenShift running on x86 hardware. The container creates a sandbox.

The combination with IBM Cloud Pak for Applications goes beyond what Zowe, the open source framework for z/OS from the Open Mainframe Project, offers, enabling Z development and operations teams to securely manage, control, script, and develop on the mainframe like any other cloud platform. Developers who are not used to z/OS and IBM Z, which is most developers, can now become productive faster in a familiar and accessible working environment, effectively improving DevOps adoption across the enterprise.
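To give a flavor of that scripting, here is a small hedged sketch: driving z/OS from a workstation through the open source Zowe CLI, wrapped in Python. It assumes the Zowe CLI is installed with a z/OSMF connection profile already configured; the data set names are hypothetical placeholders:

```python
# Hedged sketch: scripting z/OS through the open source Zowe CLI from Python.
# Assumes `zowe` is on the PATH with a z/OSMF profile configured; the data set
# names below are hypothetical placeholders.
import subprocess

def zowe(*args: str) -> str:
    """Run a Zowe CLI command and return its stdout."""
    return subprocess.run(["zowe", *args], capture_output=True,
                          text=True, check=True).stdout

# List data sets under a high-level qualifier, then submit a batch job.
print(zowe("zos-files", "list", "data-set", "IBMUSER.*"))
print(zowe("zos-jobs", "submit", "data-set", "IBMUSER.JCL(IEFBR14)"))
```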

As IBM explained: Wazi integrates seamlessly into a standard, Git-based open tool chain to enable continuous integration and continuous delivery (CI/CD) as part of a fully hybrid devops process encompassing distributed and z systems.

IBM continues: Wazi is offered with deployment choices so that organizations can flexibly rebalance entitlement over time based on business needs. In short, the organization can protect and leverage its IBM Z investments with robust and standard development capabilities that encompass IBM Z and multicloud platforms.


As one large IBM customer put it: “We want to make the mainframe accessible. Use whatever tool you are comfortable with – Eclipse / IDz / Visual Studio Code. All of these things we are interested in to accelerate our innovation on the mainframe.”

An IT service provider added in IBM’s Wazi announcement: “Our colleagues in software development have been screaming for years for a dedicated testing environment that can be created and destroyed rapidly.” Well, now they have it in Wazi.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/.

Apps and Ecosystem Critical for 5G Edge Success

May 18, 2020

According to the gospel of IBM, edge computing with 5G creates opportunities in every industry. It brings computation and data storage closer to where data is generated, enabling better data control, reduced costs, faster insights and actions, and continuous operations.

Edge computing IBM Cloud Architecture

By 2025, 75% of enterprise data will be processed more efficiently on devices at the edge, compared to only 10% today. That will eliminate the need to relay data acquired in the field, and often used for decision making there, back to a data center for processing and storage.

In short, the combination of 5G and smart devices on the edge aids this growing flow of data and processing through the proliferation of a variety of clouds: private, public, multi, and hybrid. But more is needed.

To get things rolling, IBM announced a handful of applications and tools and an edge ecosystem. As IBM notes: organizations across industries can now fully realize the benefits of edge computing, including running AI and analytics at the edge to achieve insights closer to where the work is done and the results applied. These new solutions include:

  • IBM Edge Application Manager – an autonomous management tool to enable AI, analytics, and IoT enterprise workloads to be deployed and remotely managed, delivering real-time analysis and insight at scale. It aims to enable a single administrator to manage up to 10,000 edge nodes simultaneously. It is the first product to be powered by Open Horizon, which has been folded into the Linux Foundation.
  • IBM Telco Network Cloud Manager – runs on Red Hat OpenShift and Red Hat OpenStack, a cloud computing platform that virtualizes resources from industry-standard hardware, organizes them into clouds, and manages them to provide new services now and as 5G adoption expands.
  • A portfolio of edge-enabled applications and services, including IBM Visual Insights, IBM Production Optimization, IBM Connected Manufacturing, IBM Asset Optimization, IBM Maximo Worker Insights and IBM Visual Inspector. All aim to deliver the flexibility to deploy AI and cognitive applications and services at the edge and at scale. 
  • Red Hat OpenShift, which manages containers with automated installation, upgrades, and lifecycle management throughout the container stack—the operating system, Kubernetes cluster services, and applications—on any cloud.
  • Dedicated IBM Services teams for edge computing and telco network clouds that draw on IBM’s expertise to deliver 5G and edge-enabled capabilities across all industries.

In addition, IBM is announcing the IBM Edge Ecosystem, through which an increasingly broad set of ISVs, GSIs and more will be helping enterprises capture the opportunities of edge computing with a variety of solutions built upon IBM’s technology. IBM is also creating the IBM Telco Network Cloud Ecosystem, bringing together a set of partners across the telecommunications industry that offer a breadth of network functionality that helps providers deploy their network cloud platforms. 

These open ecosystems of equipment manufacturers, networking and IT providers, and software providers include Cisco, Dell Technologies, Juniper Networks, Intel, NVIDIA, Samsung, Packet (an Equinix company), Hazelcast, Sysdig, Turbonomic, Portworx, Humio, Indra Minsait, Eurotech, Arrow Electronics, ADLINK, Acromove, Geniatech, SmartCone, CloudHedge, Altiostar, Metaswitch, F5 Networks, and ADVA as members.

Making the promise of edge computing a reality requires an open ecosystem with diverse participants. It also requires open standards-based, cloud native solutions that can be deployed and autonomously managed at massive scale throughout the edge and can move data and applications seamlessly between private data centers, hybrid multiclouds, and the edge. IBM has already enlisted dozens of organizations in what it describes as its open edge ecosystem.  You can try to join the IBM ecosystem or start organizing your own.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/

5G Joins Edge Technology and Hybrid Multicloud

May 11, 2020

At IBM’s virtual Think Conference the first week in May the company made a big play for edge computing and 5G together. 

From connected vehicles to intelligent manufacturing equipment, the amount of data from devices has resulted in unprecedented volumes of data at the edge. IBM is convinced the data volumes will compound as 5G networks increase the number of connected mobile devices.

z15 T02 and the LinuxONE III LT2

Edge computing and 5G networks promise to reduce latency while improving speed, reliability, and processing. This will deliver faster and more comprehensive data analysis, deeper insights, faster response times, and improved experiences for employees, customers, and their customers.

First gaining prominence with the Internet of Things (IoT) a few years back, edge computing is defined by IBM as a distributed computing framework that brings enterprise applications closer to where data is created, and often remains, where it can be processed. This is where decisions are made and actions taken.

5G stands for the Fifth Generation of cellular wireless technology. Beyond higher speed and reduced latency, 5G standards will have a much higher connection density, allowing networks to handle greater numbers of connected devices combined with network slicing to isolate and protect designated applications.

Today, 10% of data is processed at the edge, an amount IBM expects to grow to 75% by 2025. Specifically, edge computing enables:

  • Better data control and lower costs by minimizing data transport to central hubs and reducing vulnerabilities and costs
  • Faster insights and actions by tapping into more sources of data and processing that data there, at the edge
  • Continuous operations by enabling systems that run autonomously, reduce disruption, and lower costs because data can be processed by the devices themselves on the spot and where decisions can be made

In short: the growing number of increasingly capable devices and faster 5G processing are pushing the edge computing market beyond what the initial IoT proponents, who didn’t have 5G yet, envisioned. They also weren’t in a position to imagine the growth in the processing capabilities of edge devices in just the past year or two.

But that is starting to happen now, according to IDC: By 2023, half of the newly deployed on-premises infrastructure will be in critical edge locations rather than corporate datacenters, up from less than 10% today.

Also unimagined was the emergence of the hybrid multicloud, which IBM has only recently started to tout. The convergence of 5G, edge computing, and hybrid multicloud, according to the company, is redefining how businesses operate. As more businesses embrace 5G and edge, modernizing networks to take advantage of the edge opportunity finally becomes feasible.

And all of this could play very well with the new z machines, the z15 T02 and LinuxONE III LT2. These appear sufficiently capable to handle the scale of business edge strategies and hybrid cloud requirements for now. Or choose the enterprise-class z15 if you need more horsepower.

By moving to a hybrid multicloud model, telcos can process data at both the core and network edge across multiple clouds, perform cognitive operations and make it easier to introduce and manage differentiated digital services. As 5G matures it will become the network technology that underpins the delivery of these services. 

For enterprises, adopting a hybrid multicloud model that extends from corporate data centers (or public and private clouds) to the edge is critical to unlocking new connected experiences. By extending cloud computing to the edge, enterprises can perform AI/analytics faster, run enterprise apps to reduce impacts from intermittent connectivity, and minimize data transport to central hubs for cost efficiency.

It’s time to start thinking about making edge part of your computing strategy.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/ 

Mayflower Autonomous Ship

March 31, 2020

Growing up in Massachusetts, DancingDinosaur was steadily inundated with historical milestones: the Boston Massacre, the Pilgrims landing at Plymouth Rock; we even had a special school holiday, Evacuation Day, celebrated only in Boston to commemorate the day the colonials forced the British out. This year marks the 400th anniversary of the Mayflower’s arrival, the event that evolved into Thanksgiving and subsequently into a great day for high school football games.

IBM took the occasion as an opportunity to build a completely autonomous ship and sail it, unmanned, from England to Massachusetts in September 2020 to mark that anniversary. The project, dubbed the Mayflower Autonomous Ship (MAS), was launched as an occasion for IBM to introduce IBM Edge Computing in a dramatic way.

IBM defines edge computing as decentralized data and application processing across hundreds to millions of endpoints residing outside of a traditional datacenter or public cloud. MAS relies on IBM’s Edge Computing.

“You take the human factor out of ships and it allows you to completely reimagine the design. You can focus purely on the mechanics and function of the ship,” writes Brett Phaneuf, Managing Director of MAS. The idea was to create an autonomous and crewless vessel that would cross the Atlantic, tracing the route of the original 1620 Mayflower and performing vital research along the way.

For MAS to survive the voyage, he continued, the team opted for a trimaran design, which is both hydro- and aerodynamic. Using aluminum and composite materials, MAS will be lightweight, about 5 tons and 15 meters in length. That’s half the length and less than 3 percent of the weight of the original Mayflower, which took almost two months for a voyage the MAS team planned to complete in less than two weeks.

For power, MAS will use solar panels to charge on-board batteries, which will power MAS’s motor – even at night. A single wingsail will allow MAS to harness wind power as well as make it more visible to other ships. MAS will be able to clock speeds of around 20 knots, compared to the original Mayflower’s 2.5 knots.

When it comes to modern technologies, the original Mayflower used a ship’s compass for navigation. To measure speed, it towed a ‘log-line’ – a wooden board attached to a hemp line with knots tied in it at uniform intervals (hence the term ‘knots’ still used to measure a ship’s speed today).

MAS, however, will have a state-of-the-art inertial navigation and precision GNSS positioning system. It will have a full suite of the latest oceanographic and meteorological instruments, a satellite communications system, and 2D LIDAR and RADAR sensors.

With no crew on board, MAS needs to make its own decisions at sea. MAS’ mission control system will be built on IBM Power Systems servers. MAS is currently using real data from Plymouth Sound to train IBM PowerAI Vision technology to recognize ships, debris, whales and other hazards which come into view on MAS’s on-board video cameras.

When a hazard is detected, MAS will use IBM Operational Decision Manager software to decide what to do. It may change course or, in case of emergencies, speed out of the way by drawing additional power from its on-board back-up generator. Connectivity in the middle of the Atlantic is patchy, so MAS will use edge devices on board to store and process data locally when need be. Every time it gets a connection, the ship will connect to the IBM Cloud and bring its systems back into sync.
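That store-and-forward pattern is worth a closer look. Below is a generic Python sketch of the idea, buffer readings locally and drain the buffer whenever a connection appears; it is DancingDinosaur’s illustration of the pattern, not IBM’s MAS code, and the ingest URL is a placeholder:

```python
# Generic store-and-forward sketch: buffer sensor readings locally, then sync
# to the cloud when connectivity returns. Illustrative only; not IBM's MAS
# code, and the endpoint URL is a placeholder.
import json, sqlite3, urllib.request

db = sqlite3.connect("edge_buffer.db")
db.execute("CREATE TABLE IF NOT EXISTS readings (id INTEGER PRIMARY KEY, payload TEXT)")

def record(reading: dict) -> None:
    """Always store locally first; a ship mid-Atlantic cannot assume a connection."""
    db.execute("INSERT INTO readings (payload) VALUES (?)", (json.dumps(reading),))
    db.commit()

def sync(endpoint: str = "https://example.com/ingest") -> None:
    """On reconnect, upload buffered readings, deleting each one acknowledged."""
    for row_id, payload in db.execute("SELECT id, payload FROM readings").fetchall():
        req = urllib.request.Request(endpoint, data=payload.encode(),
                                     headers={"Content-Type": "application/json"})
        try:
            urllib.request.urlopen(req, timeout=10)
        except OSError:
            break                   # connectivity dropped again; retry later
        db.execute("DELETE FROM readings WHERE id = ?", (row_id,))
        db.commit()

record({"lat": 50.37, "lon": -4.14, "hazard": "none"})
sync()  # a no-op until the endpoint is reachable
```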

MAS will carry three research pods that carry scientific instrumentation to ensure scientists can gather the data they need to understand and protect the ocean, especially in the face of threats from pollution and global warming.

By leveraging AI, machine learning, and other new technologies IBM hopes it will start a new era of marine exploration. Through the University of Birmingham’s Human Interface Technologies Team, MAS plans to open the experience of the mission to millions of other ‘virtual pilgrims’ around the world via a mixed reality experience that uses the latest Virtual and Augmented Reality technologies. Bon Voyage!

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/ 

IBM Brings Red Hat Ansible to Z

March 23, 2020

From the day IBM announced its $34 billion acquisition of Red Hat in October 2018, DancingDinosaur had two questions: 1) how could the company recoup its investment in the open source software company, and 2) what did it imply for the future of the z?


With about a billion dollars in open source revenue, Red Hat was the leading open source software player, but getting from a billion dollars to $34 billion is a big leap. In February, IBM announced Red Hat’s OpenShift middleware would work with the z and LinuxONE. OpenShift is a DevOps play for hybrid cloud environments, a big interest of IBM.

Along with the availability of OpenShift for z, IBM also announced that Cloud Pak for Applications is available for the z and LinuxONE. In effect, this supports the modernization of existing apps and the building of new cloud-native apps. This will be further enhanced by the delivery of new Cloud Paks for the z and LinuxONE, announced by IBM last summer. Clearly the z is not being abandoned now.

Last week, IBM announced the availability of Red Hat Ansible Certified Content for IBM Z, enabling Ansible users to automate IBM Z applications and IT infrastructure. This means that no matter what mix of infrastructure or clients you are working with, IBM is bringing automation for the z, helping you manage and integrate it across the hybrid environment through a single control panel.

Ansible functionality for z/OS, according to IBM, will empower z clients to simplify configuration and access to resources, leverage existing automation, and streamline automation of operations using the same technology stack they can use across their entire enterprise. Delivered as a fully supported enterprise-grade solution via Content Collections, Red Hat Ansible Certified Content for z provides easy automation building blocks to accelerate the automation of z/OS and z/OS-based software. These initial core collections include connection plugins, action plugin modules, and a sample playbook to automate tasks for z/OS such as creating data sets, retrieving job output, and submitting jobs.

For those not familiar with Ansible: as Wikipedia explains, Ansible is an open source software provisioning, configuration management, and application-deployment tool. For more on Ansible, click https://en.wikipedia.org/wiki/Ansible_(software).
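To make that concrete, here is a hedged sketch driving two of those collection modules from Python with the open source ansible-runner library. The module names come from IBM’s published ibm_zos_core collection; the host, data set names, and exact parameters are illustrative and may vary by collection version:

```python
# Hedged sketch: invoking Red Hat Ansible Certified Content for IBM Z (the
# ibm.ibm_zos_core collection) via ansible-runner (pip install ansible-runner).
# Assumes the collection is installed and "zos_host" exists in inventory;
# data set names and module parameters are illustrative.
import ansible_runner

playbook = [{
    "hosts": "zos_host",
    "collections": ["ibm.ibm_zos_core"],
    "tasks": [
        {"name": "Create a partitioned data set",
         "zos_data_set": {"name": "IBMUSER.DEMO.PDS", "type": "pds"}},
        {"name": "Submit a job from a data set",
         "zos_job_submit": {"src": "IBMUSER.DEMO.JCL(HELLO)", "location": "DATA_SET"}},
    ],
}]

result = ansible_runner.run(private_data_dir="/tmp/zos-demo", playbook=playbook)
print(result.status)   # "successful" or "failed"
```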

IBM needed to modify Ansible to work with the z and hybrid clouds. The Certified Content will be available in Automation Hub, with an upstream open source version offered on Ansible Galaxy.


Over the last several months, IBM improved the z developer experience by bringing DevOps and industry-standard tools like Git and Jenkins to the z. For instance, it announced IBM Z Open Editor, IBM Developer for z/OS V14.2.1, and Zowe, an open source framework for z/OS, which DancingDinosaur covered in August 2018. In February IBM announced the availability of Red Hat OpenShift on IBM Z, which enables developers to run, build, manage, and modernize cloud-native workloads on their choice of architecture.

Now, Ansible allows developers and operations to break down traditional internal and historical technology silos to centralize automation — while leveraging the performance, scale, control and security provided by the z. 

What more goodies for z will IBM pull from its Red Hat acquisition?  Stockholders should hope it is at least $34 billion worth or more.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/

Red Hat OpenShift Container Platform on z

February 20, 2020

IBM is finally starting to capitalize on last year’s $34 billion acquisition of Red Hat for z shops. If you had a new z and it ran Linux, you would have no problem running Red Hat products, or so the company line went. Well, in mid-February IBM announced Red Hat’s OpenShift Container Platform is now available on the z and LinuxONE, a z with built-in Linux optimized for the underlying hardware.

OpenShift comes to z and LinuxONE

As the company puts it: The availability of OpenShift for z and LinuxONE is a major milestone for both hybrid multicloud and enterprise computing. OpenShift, a form of middleware for use with DevOps, supports cloud-native applications being built once and deployed anywhere, including to on-premises enterprise servers, especially the z and LinuxONE. This new release results from the collaboration between IBM and Red Hat development teams, and discussions with early adopter clients.

Working with its hybrid cloud team, the company has created a roadmap for bringing the ecosystem of enterprise software to the OpenShift platform. IBM Cloud Paks containerize key IBM and open source software components to help enable faster enterprise application development and delivery. In addition to the availability of OpenShift for z, IBM also announced that IBM Cloud Pak for Applications is available for the z and LinuxONE, supporting the modernization of existing apps and the building of new cloud-native apps. And, as announced last August, it is the company’s intention to deliver additional Cloud Paks for the z and LinuxONE.

Red Hat is a leader in hybrid cloud and enterprise Kubernetes, with more than 1,000 customers already using Red Hat OpenShift Container Platform. With the availability of OpenShift for the z and LinuxONE, the agile cloud-native world of containers and Kubernetes, which has become the de facto open global standard for containers and orchestration, is now reinforced by the security features, scalability, and reliability of IBM’s enterprise servers.

“Containers are the next generation of software-defined compute that enterprises will leverage to accelerate their digital transformation initiatives,” says Gary Chen, Research Director at IDC, in a published report. “IDC estimates that 71% of organizations are in the process of implementing containers and orchestration or are already using them regularly. IDC forecasts that the worldwide container infrastructure software opportunity is growing at a 63.9% 5-year CAGR and is predicted to reach over $1.5B by 2022.”

By combining the agility and portability of Red Hat OpenShift and IBM Cloud Paks with the security features, scalability, and reliability of z and LinuxONE, enterprises will have the tools to build new cloud-native applications while also modernizing existing applications. Deploying Red Hat OpenShift and IBM Cloud Paks on z and LinuxONE reinforces key strengths and offers additional benefits:

  • Vertical scalability enables existing large monolithic applications to be containerized, and horizontal scalability enables support for large numbers of containers in a single z or LinuxONE enterprise server
  • Protection of data from external attacks and insider threats, with pervasive encryption and tamper-responsive protection of encryption keys
  • Availability of 99.999% to meet service levels and customer expectations
  • Integration and co-location of cloud-native applications on the same system as the data, ensuring the fastest response times
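To make the build-once, deploy-anywhere point concrete, here is a hedged sketch using the standard Kubernetes Python client to pin a multi-architecture container image to an s390x (IBM Z or LinuxONE) worker node; the image and namespace are placeholders, and on OpenShift you might equally do this with oc and a YAML manifest:

```python
# Hedged sketch: schedule a multi-arch image onto an s390x (IBM Z / LinuxONE)
# node with the standard Kubernetes Python client (pip install kubernetes).
# The image and namespace are placeholders.
from kubernetes import client, config

config.load_kube_config()  # assumes a valid kubeconfig for the cluster

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="demo-app"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "demo"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "demo"}),
            spec=client.V1PodSpec(
                node_selector={"kubernetes.io/arch": "s390x"},  # pin to Z hardware
                containers=[client.V1Container(
                    name="demo", image="quay.io/example/demo:latest")],
            ),
        ),
    ),
)
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```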

IBM z/OS Cloud Broker helps enable OpenShift applications to interact with data and applications on IBM Z. IBM z/OS Cloud Broker is the first software product to provide access to z/OS services to the broader development community.

To more easily manage the resulting infrastructure, organizations can license the IBM Cloud Infrastructure Center, an Infrastructure-as-a-Service offering that provides simplified infrastructure management in support of z/VM-based Linux virtual machines on the z and LinuxONE.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/

IBM Introduces New Flash Storage Family

February 14, 2020

IBM describes this mainly as a simplification move. The company is eliminating two current storage lines, Storwize and FlashSystem A9000, and replacing them with a series of flash storage systems that scale from entry to enterprise.

Well, uh, not quite enterprise as DancingDinosaur readers might think of it. No changes are planned for the DS8000 storage systems, which are focused on the mainframe market. “All our existing product lines, not including our mainframe storage, will be replaced by the new FlashSystem family,” said Eric Herzog, IBM’s chief marketing officer and vice president of worldwide storage channels, in a published report earlier this week.

The move removes two incompatible storage lines from the IBM product lineup and replaces them with a line that provides compatible storage software and services from entry level to the highest enterprise tier, mainframe excluded, Herzog explained. The new FlashSystem family promises more functions, more features, and lower prices, he continued.

Central to the new Flash Storage Family is NVMe, which comes in multiple flavors.  NVM Express (NVMe) or Non-Volatile Memory Host Controller Interface Specification (NVMHCIS) is an open logical device interface specification for accessing non-volatile storage media attached via a PCI Express (PCIe) bus.

At the top of the new family line is the NVMe and multicloud ultra-high-throughput storage system, a validated system implemented by IBM. IBM promises unmatched NVMe performance, storage-class memory (SCM), and IBM FlashCore technology, plus the features of IBM Spectrum Virtualize to support the most demanding workloads.


IBM multi-cloud flash storage family system



Next up are the IBM FlashSystem 9200 and IBM FlashSystem 9200R, IBM-tested and validated rack solutions designed for the most demanding environments. They combine the extreme performance of end-to-end NVMe, IBM FlashCore technology, and the ultra-low latency of Storage Class Memory (SCM) with IBM Spectrum Virtualize and AI predictive storage management with proactive support from Storage Insights. FlashSystem 9200R is delivered assembled, with installation and configuration completed by IBM to ensure a working multicloud solution.



In the middle of the family are the IBM FlashSystem 7200 and FlashSystem 7200H. As IBM puts it, these offer end-to-end NVMe, IBM FlashCore technology, the ultra-low latency of Storage Class Memory (SCM), the flexibility of IBM Spectrum Virtualize, and the AI predictive storage management and proactive support of Storage Insights, all in a powerful 2U all-flash or hybrid flash array. The FlashSystem 7200 brings mid-range storage while allowing the organization to add the multicloud technology that best supports the business.

At the bottom of the line is the entry-level enterprise all-flash storage solution, which brings end-to-end NVMe capabilities and flash performance to the affordable FlashSystem 5100. As IBM describes them, the FlashSystem 5010 and FlashSystem 5030 (formerly the IBM Storwize V5010E and Storwize V5030E; they are still there, just renamed) are all-flash or hybrid flash solutions intended to provide enterprise-grade functionality without compromising affordability or performance. Built with the flexibility of IBM Spectrum Virtualize and the AI-powered predictive storage management and proactive support of Storage Insights, the FlashSystem 5000 family helps make modern technologies such as artificial intelligence accessible to enterprises of all sizes.

IBM likes the words affordable and affordability in discussing this new storage family. But, as is typical with IBM, nowhere will you see a price or a reference to cost/TB or cost/IOPS or cost of anything although these are crucial metrics for evaluating any flash storage system. DancingDinosaur expects this after 20 years of writing about the z. Also, as I wrote at the outset, the z is not even included in this new flash storage family so we don’t even have to chuckle if they describe z storage as affordable.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/

