June 11, 2021

 

Modernize IT With Z

Should you modernize your IT with 50-year-old technology?

The system we refer to today as the IBM Z is a direct, lineal descendant of System/360, announced in 1964, and the System/370 of the 1970s. Many applications written for those systems can still run unmodified on the newest IBM Z more than five decades later, as Wikipedia notes. And it gets better: you can run some of the most modern, containerized Python code on a Z today.

So, for starters, let’s not think of the Z as an old legacy machine. It can handle the kind of modern code you need today. And if you have a piece of System/360 code you still need to run occasionally, somebody can probably make that happen too.

Z and Linux–a great combination

IBM added Linux to the Z 20 years ago. DancingDinosaur had modest skill with open source distributed Linux, but that first Linux on Z was nothing a Linux newbie could handle. It wasn’t until LinuxONE that the Z offered something resembling open source Linux. And it took the acquisition of Red Hat and the porting of OpenShift to the Z before Z shops had something that delivered open, flexible software that works with lots of different code.

So, how can enterprises protect their investment of time and resources in IT?

Ask Suchitra Joshi, the IBM Alliance Manager, and the answer is: partner with Independent Software Vendors (ISVs) who offer pre-built applications that can accelerate a company’s transformation, and tap ISV ecosystems in tandem with a company’s own developers.

ISVs with well-supported, industry-standard solutions running on LinuxONE and IBM Z, she continues, might be able to fast-track a company’s ability to run mission-critical applications in the hybrid cloud, taking the enterprise to a higher level of data privacy, security, resiliency, and scalability. This sounds like something you probably have heard from IBM frequently but not always convincingly.

Joshi continues: IBM is working to expand the Linux on Z ecosystem…and work on the containerization of ISV applications to run on Red Hat OpenShift in conjunction with the launch of Red Hat Marketplace, an open cloud marketplace for enterprise companies.

Joshi concludes with a couple of apps:

  • Temenos Transact on IBM Z and LinuxONE enables financial institutions to modernize core operations to increase efficiency while remaining compliant with new and existing regulatory demands.
  • MongoDB on IBM Z and LinuxONE is a NoSQL database that supports high-performance data serving needs of enterprise businesses with better consistency and scalability, and reduced overhead.

Finally, Joshi notes: We facilitate the containerization of ISV applications to run on Red Hat OpenShift, as well as the launch of Red Hat Marketplace, an open cloud marketplace for enterprise customers to discover, try, purchase, deploy, and manage certified container-based software across public cloud, private cloud, and on-premises environments.

Yes, she is hired to promote IBM’s Z software relationships, but for now let’s give her the benefit of the doubt. Still, feel welcome to contact me with any experiences you have had with the Z ISV world. Of course, it will remain anonymous at this end.

But she has a good story to tell. According to IBM, 75% of the top 20 global banks are running the newest z15 mainframe, and the IBM Systems Group reported a 68% gain in Q2 IBM Z mainframe revenue year over year.

At the heart of its current vitality is Linux—primarily in the form of Big Iron-based Red Hat OpenShift—and a variety of software such as IBM Cloud Paks and open source applications. The Linux-mainframe marriage is celebrating 20 years this month, and the incongruous mashup—certainly at the beginning anyway—has been a boon for the mainframe that, by most accounts, still has plenty of good years ahead of it.

DancingDinosaur applauded IBM’s initial announcement of Linux for Z but got a little impatient in the intervening years. OpenShift on Z has finally convinced me.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghostwriter. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com.

Power Systems Enhance Hybrid Cloud with Red Hat

June 4, 2021

DancingDinosaur has been writing considerably about IBM and hybrid cloud lately, most recently about the Z, AI, and hybrid cloud. And for good reason: that has been IBM’s best system revenue maker these days.

IBM adds Linux Hybrid Cloud to Power

But Z isn’t IBM’s only system that can play in a hybrid cloud. Power Systems has joined the party too. The company’s new pre-configured private cloud platform, an innovative cloud-consumption payment model, and more Red Hat software are now supported on IBM Power Virtual Servers. Especially important is the availability of Red Hat software to help organizations modernize by developing cloud-native applications and deploying them into hybrid cloud environments.

“IBM’s latest expanded support of hybrid cloud and application modernization initiatives will help enable our customers to easily attain the efficiencies and flexibility of combining on-premises and cloud solutions using the latest open source software and tooling,” said Jim Dixon, Vice President, Software & IBM Power Systems at Mainline Information Systems, an IBM partner. “Availability of hybrid cloud credits along with new appliance-like options of hardware and Red Hat software, including Red Hat OpenShift to provide consistency between on-premises IBM Power Systems and off-premises clouds, can offer ease of entry,” he added.

Making it all the more attractive, he added, is the availability of hybrid cloud credits. Plus there are new appliance-like options of hardware and Red Hat software, especially Red Hat OpenShift, to provide consistency between on-premises IBM Power Systems and off-premises clouds. These facilitate ease of entry. Just make sure to ask about any and all of the extra goodies.

Among IBM’s recent announcements:

  • Expanded Red Hat Capabilities on IBM Power Systems – now featuring Red Hat OpenShift on IBM Power Virtual Server leveraging OpenShift’s baremetal installer, Red Hat Runtimes, and newly certified Red Hat Ansible Content Collections.
  • New IBM Power Private Cloud Rack Solution – an optimized, production-level OpenShift platform to modernize traditional environments with cloud-native applications. The IBM Power Private Cloud Rack combines on-premises hardware, a complete software stack of IBM and Red Hat technology, and installation from IBM Systems Lab Services to deliver 49% lower cost per request compared with similarly equipped x86-based platforms.

Additional announcements:

  • Extended Dynamic Capacity – Enhancements to IBM Power Systems’ dynamic capacity to quickly scale compute capacity across the hybrid cloud on Linux, IBM i, and AIX.

As noted above, availability of hybrid cloud credits along with new appliance-like options of hardware and Red Hat software, including Red Hat OpenShift to provide consistency between on-premises IBM Power Systems and off-premises clouds, can offer ease of entry into this new hybrid cloud world.

It is noteworthy how IBM continues to be obsessed with x86, rarely passing up an opportunity to tout its advantages over x86 (on everything but price). The x86 was first introduced in 1978. It is now 2021; if I do the arithmetic correctly, that’s 43 years. The x86 is not going away, but neither is the Z or Power Systems. The Z and Power Systems keep getting better: faster, more capable, more energy efficient, more in every way but one. They will never be cheaper unless IBM experiences some mystical, magical accounting revelation.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghostwriter. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com.

Easy IBM Ramps to Quantum

May 28, 2021

Are you interested in test driving a quantum computer for free? IBM is eager to seed the future quantum computing market, even if it means giving some of it away for free. Along the same lines, the company wants to make the quantum computing experience as similar as possible to the classical computing experience we all know and take advantage of every day.

molecular structure of beryllium hydride (BeH2)

And selling the quantum computing experience at this point follows the same model as buying classical computing in the cloud: 99.9% availability, sufficient redundancy, infinite backup capacity, and matched performance. “We’re not quite there yet,” concedes Rajeev Malik, IBM Program Director, Quantum Systems Deployment & Engineering.

But the quantum team feels it is getting closer with every rev of the technology. The idea is to make the transition from classical computing as smooth as possible. Want quality? You’ll get increasingly lower error rates, though not close to fault tolerant by a long shot, he admits. Want fast performance? How fast can circuits run? Complex workloads once took days. “We’re not near the point of common classical computing,” Malik notes, but they are making progress toward real-time computing. Initially, error rates ran about 6 percent.

The latest runtime results show a 10x improvement, and Malik expects at least another 10x improvement with each rev. “Our goal is to double the quality and performance.” Already he reports people doing more sophisticated things. In fact, the most recent report showed IBM’s quantum cloud running 810 billion circuits, the quantum equivalent of classical computing’s lines of code.

Despite steady progress, IBM is thinking in terms of years. Quantum systems today typically comprise analog and digital components, which makes 24×7 operation at high fidelity nontrivial to maintain. Significant learning and experience have been incorporated over time in an effort to deliver stable systems with extremely high uptime. That requires automated calibration and fidelity tracking to address temporal drift in qubit performance, and, for now, a combination of automated and manual intervention to ensure the highest system quality.

Real workloads sometimes are purely quantum. More likely, IBM suggests, workloads will fall between quantum and classical systems. This isn’t simple; it requires interaction in real time within the coherence time of the qubits. Keeping the qubits stable is a challenge in itself. The solution involves extreme refrigeration, using helium to bring them to temperatures near absolute zero. Obviously, at this point the only place to run a quantum system is on a vendor’s cloud.

So, where are you going to try a quantum system? You’ll probably try it first on the IBM cloud, most likely in New York, IBM’s first. There is now one in Germany, another in Japan at the University of Tokyo, and one being installed at the Cleveland Clinic.

Altogether, the IBM cloud hosts two dozen quantum devices along with another five quantum simulators tailored for specific problems. But IBM realized that it has to do much more to make this technology usable. I’ll bet you don’t have a quantum computing developer on staff. Probably the science teacher who first introduced me to quantum mechanics in high school is no longer alive six or seven decades later.

So IBM is offering onramps. The easiest is probably the first, Composer, which has evolved considerably. It now enables you to build circuits graphically. It provides a step debugger and a button you can push to run it. Even I can probably do that.

Another easy onramp should be Qiskit, IBM’s quantum SDK. Qiskit supports Python 3.6 or later. However, both Python and Qiskit are evolving ecosystems, and sometimes when new releases occur in one or the other, there can be problems with compatibility. (Some technology things never change.) You can find more information on Qiskit here.

Apparently, IBM also recommends installing Anaconda, a cross-platform Python distribution for scientific computing. Jupyter, included in Anaconda, is recommended for interacting with Qiskit.

Qiskit is tested and supported on the following 64-bit systems:

  • Ubuntu 16.04 or later
  • macOS 10.12.6 or later
  • Windows 7 or later

Finally, IBM recommends using Python virtual environments to cleanly separate Qiskit from other applications and improve your experience. As a mediocre former developer, that sounds smart to me.
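That recommendation boils down to something like the following, sketched here with Python’s standard-library venv module (the environment name qiskit-env is just an illustration):

```python
import os
import sys
import venv

# Create an isolated environment so Qiskit and its dependencies
# don't collide with other Python applications on the machine.
env_dir = "qiskit-env"
venv.EnvBuilder(with_pip=False).create(env_dir)

# The environment gets its own interpreter directory:
# "bin" on Linux/macOS, "Scripts" on Windows.
bin_dir = "Scripts" if sys.platform == "win32" else "bin"
print(os.path.isdir(os.path.join(env_dir, bin_dir)))  # True
```

From a shell you would more typically run python3 -m venv qiskit-env, activate it, and pip install qiskit inside it; with_pip=False above just keeps the sketch fast.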

But there is even more. The company offers the IBM Quantum Lab, a preinstalled environment. You can dive straight in and be programming in minutes, or so they claim. If you try any of these approaches, please let DancingDinosaur know how it went.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghostwriter. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com.

Z Roadmap to Hybrid Cloud and AI

May 18, 2021

Last week was the second annual virtual Think conference. Before COVID hit, Think had been one of IBM’s most interesting conferences. DancingDinosaur actually now prefers it as a virtual conference. IBM (or any other tech vendor) may never get me to a physical event again. Sure, the camaraderie can be nice, even beneficial, but is it really worth all the annoyances of flying, especially these days? You might think differently, but they can’t bribe me with enough frequent flyer miles.

Z, as presented at Think, is flying high. DancingDinosaur has long considered the Z an ideal cloud machine, even before IBM caught the hybrid cloud bug and saw it as a path to sustainable revenue growth. IBM reports the Z experiencing 3.5x growth in the cloud. IBM has even bolstered the Z, further enhancing its hybrid cloud capabilities through AI and more.

IBM’s road to Think started in 2020 with Red Hat OpenShift Container Platform for IBM Z and IBM LinuxONE. That brought together a comprehensive enterprise container and Kubernetes platform with the long-heralded IBM Z and LinuxONE, combining cloud-native application development and deployment with a highly secure, scalable, and resilient server infrastructure. The combination of Red Hat OpenShift and the Z enables Z and LinuxONE to fully participate in the hybrid multicloud world.

At Think IBM boasted of 350% growth of Z, 55% growth of Linux on Z, and 91% of Z customers characterizing it as mission critical. Compared to the Z of the past, today’s Z is container-capable and can handle a variety of modern languages, such as Python, and tools like Jenkins and Ansible, an automation platform. In fact, in a recent study, Forrester Research identified Red Hat Ansible Automation Platform as a leader because of the solution’s integration capabilities along with its model editing capabilities.

Today’s Z has made strides with data virtualization, which allows organizations to access data at the source. In the past, mainframe shops had to go through a cumbersome, slow, and costly ETL (Extract, Transform, Load) process: find, access, and load data onto the Z while performing any required transformation. Many shops still do that today, but it is slow and costly. Just shipping volumes of data across the network slows down whatever process you have and increases the cost.
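The contrast is easy to sketch in code. This toy example (pure illustration: sqlite3 merely stands in for a remote mainframe data source, and the table and figures are made up) shows the ETL pattern of copying all the data before querying versus pushing the query to the source:

```python
import sqlite3

# Stand-in for a remote data source (in reality, Db2 on Z, VSAM, etc.).
source = sqlite3.connect(":memory:")
source.execute("CREATE TABLE txns (id INTEGER, amount REAL)")
source.executemany("INSERT INTO txns VALUES (?, ?)",
                   [(1, 19.99), (2, 5.00), (3, 42.50)])

# ETL pattern: extract every row, ship it, load a local copy, then query.
rows = source.execute("SELECT id, amount FROM txns").fetchall()
local = sqlite3.connect(":memory:")
local.execute("CREATE TABLE txns (id INTEGER, amount REAL)")
local.executemany("INSERT INTO txns VALUES (?, ?)", rows)
etl_total = local.execute("SELECT SUM(amount) FROM txns").fetchone()[0]

# Virtualization pattern: push the query to the source; only the small
# answer crosses the network, not the whole table.
virt_total = source.execute("SELECT SUM(amount) FROM txns").fetchone()[0]

print(etl_total == virt_total)  # True: same answer, far less data moved
```

The payoff grows with data volume: the ETL path moves every row across the wire before the first query runs, while the virtualized path moves only the result.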

“We didn’t have to do much to add hybrid cloud; it’s just expanding the ecosystem,” said Nathan Dotson, Program Manager, IBM Z Application Platform. But AI is a different case when the Z or a hybrid cloud is involved.

With AI, you can train models anywhere and then deploy them to an environment that has transactional and data affinity, specifically transactional processing on the Z.

Or as IBM puts it: train anywhere, deploy on Z. To enable such a strategy, IBM has architected solutions to ensure model portability to Z without requiring additional development effort for deployment. The key is ONNX (Open Neural Network Exchange), a standard format for representing AI models that allows a developer or data scientist to build and train a model in the framework of choice without worrying about the downstream inference implications. To enable deployment of ONNX models, IBM provides an ONNX model compiler that is optimized for Z.

To summarize, the Z mission now is to enable organizations to easily deploy AI workloads on Z and LinuxONE in order to deliver faster business insights while driving more value to the business. Declares IBM: We are enhancing IBM Z as a world-class inference platform. We aim to help clients accelerate deployment of AI on Z by investing in seamless model portability, with integration of AI into Z workloads, and in operationalizing AI with industry leading solutions such as IBM Cloud Pak for Data for more flexibility and choice in hybrid cloud deployments. 

IBM clearly is not stopping with Think 2021. IBM and Anaconda, provider of the leading Python data science platform, are announcing the general availability of Anaconda for IBM Linux on Z and LinuxONE. This is just the latest step toward bringing popular data science frameworks and libraries to enterprise Z platforms with a consistent data science user experience across the hybrid cloud.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghostwriter. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com

Storage for Hybrid Clouds and More

May 3, 2021

It has been a while since DancingDinosaur covered storage. My petty complaint with IBM storage is this: how many ways can one iterate on the name Spectrum? I gave up counting below.

IBM is planning to launch Spectrum Fusion later in 2021. This solution will be designed to combine IBM’s general parallel file system technology and its data protection software to give businesses and their applications a simpler, less complex approach to accessing data within the data center, at the edge, and across hybrid cloud environments. Hybrid clouds, in case you forgot, are IBM’s hottest of hot buttons.

In the same announcement, it introduced updates to its Elastic Storage System (ESS) family, described as highly scalable and easily deployed. These include a revamped model ESS 5000, now reportedly delivering 10% greater storage capacity, and a new ESS 3200, with double the read performance of its predecessor.

But IBM is saving the best goodies for, you guessed it, hybrid clouds. As hybrid cloud adoption grows, so too does the need to manage the edge of the network. Like the digital universe, the edge continues to expand, creating ever more disassociated data sources and silos. 

According to a recent report from IDC, the number of new operational processes deployed on edge infrastructure will grow from less than 20% today to over 90% in 2024 as digital engineering accelerates IT/OT convergence. And by 2022, IDC estimates 80% of organizations that shift to a hybrid business will boost spend on AI-enabled and secure edge infrastructure to deliver business agility and insights in near real time.

“It’s clear that to build, deploy and manage applications requires advanced capabilities that enable rapid availability to data across the entire enterprise, from edge to cloud,” said Denis Kennelly, General Manager, IBM Storage Systems.

Guardant Health, a leading precision oncology company, uses its data and high-performance computing platform to turn massive amounts of genomic data into actionable insights for oncologists, researchers, and the biopharmaceutical industry, with unparalleled speed and throughput.

IBM continues to enhance Spectrum Fusion, now planned to come in the form of a container-native hyperconverged infrastructure (HCI) system. When released later in 2021, it will integrate compute, storage, and networking into a single solution and come equipped with Red Hat OpenShift to enable organizations to support environments for both virtual machines and containers and to provide software-defined storage for cloud, edge, and containerized data centers. In early 2022, IBM plans to release a software-defined-storage-only (SDS-only) version of Spectrum Fusion.

Through its integration of a fully containerized version of IBM’s general parallel file system and data protection software, Spectrum Fusion is intended to provide a streamlined way to discover data from across the enterprise. Here organizations need to manage only one copy of data.

It also will integrate with IBM Cloud Satellite to help businesses manage cloud services at the edge, in the data center, or in the cloud through a single pane of glass, and with Red Hat Advanced Cluster Management (ACM) for managing multiple Red Hat OpenShift clusters.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com

What’s Kyndryl?

April 26, 2021

Can you make a sensible acronym out of Kyndryl, IBM’s new name for its collected IT services business? Kyndryl combines selected pieces of IBM services and products. The IT industry loves acronyms, but IBM simply banged together seven letters in a seemingly meaningless way, with maybe a few hints of possible meaning. So if IBM, with its hundreds of thousands of clever people, couldn’t come up with a suitable acronym (unless maybe they think it already is an understandable one), I guess I’ll just have to do it myself. Here’s as far as I’ve gotten: Keep Your New Data Ready Young Laudable (KYNDRYL). OK, it’s not very good as a meaningful acronym. Please improve it any way you possibly can; I won’t be insulted.

KYNDRYL actually is IBM’s replacement for the lame initial effort it used in its fall announcement describing its latest restructuring. The creative name it used back in October 2020 was NewCo, and the company attached trillion-dollar aspirations to it then. It might still.

The new name, as CNN reports it: IBM says the “kyn” part of the name is derived from the word “kinship,” and “dryl” comes from tendril, which it said should bring “to mind new growth and the idea that … the business is always working toward advancing human progress.” OK, if they say so. But DancingDinosaur thinks that its acronym, Keep Your New Data Ready Young Laudable (KYNDRYL), at least might mean something that could make some sense to an IT buyer, although I would prefer, at the least, to drop the last two letters (YL). But IBM didn’t ask my advice.

And what’s so bad with letting IBM stay as part of whatever name it wanted? Admittedly, DancingDinosaur isn’t a naming or even a marketing guy, but IBM must be one of the most recognized and respected corporate brand names worldwide, right alongside Coca-Cola, GM, Apple, and a few others.

Here’s how Reuters explains it: International Business Machines (IBM) Corp said its managed infrastructure services business would now be called Kyndryl after a planned spinoff into a public company later this year. The plan to separate was announced in 2020, which DancingDinosaur reported at that time. It followed years of IBM trimming its legacy businesses as it increasingly focused on its cloud offerings to counter slowing software sales and seasonal demand for its mainframe servers–DancingDinosaur wouldn’t call it seasonal; sales increases followed each upgrade of the Z. BTW, those cloud offerings were specifically tagged for hybrid clouds, not that it matters. Kyndryl will be headquartered in New York City, IBM added.

So, CNN continued, the 90,000 employees affected by the change will no longer say they work for IBM but instead for Kyndryl. How many times a day will they have to explain how to pronounce and spell it? It certainly seems like another corporate name that will join the ranks of failed, or at least widely mocked, brands. Anyone remember New Coke? Bernd Schmitt, a professor of marketing at Columbia University and the faculty director of the Center on Global Brand Leadership, told CNN why he thinks it could work. “Over time,” he said, even a name that may seem strange to consumers can be accepted and embraced. He points out that Häagen-Dazs is a completely made-up name that means nothing. And Verizon (VZ), the name given the company formed by the 2000 merger of Bell Atlantic and GTE, is another made-up word that raised eyebrows at first, but it has become an established brand that few think twice about today.

As IT folks, DancingDinosaur readers (a name, BTW, which many laughed at initially) will likely be among the first to encounter and explain this name change. Will you include it in a proposal, will you buy from it, will you grant it credibility? If it delivers the same high-quality IT services you expect at a competitive price and in a timeframe that meets your needs, why not give it consideration? DancingDinosaur will be watching to see how other top-tier IT services providers respond. Plan to work with them too, to ensure you get the best IT services at the best price.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com

BMC Swings Wide and Deep on AI

April 20, 2021

Every software vendor is trying to embed artificial intelligence (AI) into its product line. BMC seems to be taking it wider and deeper than others. Its latest is BMC Helix, which targets AI for service desks and employees, while quickly adding IT Operations and DevOps. That leaves a lot to deliver.

To validate its claims, BMC turned to Forrester to document Helix’s value for service desks and employees, with further impact when deployed for IT Operations and DevOps. You can access the full study here for free. Expect to be asked for contact info.

For sure the initial summary results look compelling:

In the BMC-commissioned study, Forrester Consulting interviewed 11 companies implementing BMC Helix solutions for their service and operations management and analyzed the results. Covering a three-year period, the independent analysis reveals significant financial benefits and business value when using the full BMC Helix solution set. The initial findings:

  • 361% positive ROI when migrating to BMC Helix ITSM
  • 189% positive ROI with the adoption of the intelligent automation and orchestration capabilities of BMC Helix
  • $4.8M savings in improved end-user productivity
  • 33.5% of service desk tickets deflected and 25% fully automated

Forrester continues: With a seemingly endless range of possibilities, customers find that BMC Helix’s core value proposition is its ability to empower employee speed, efficiency, innovation, and excellence on the path to becoming what BMC refers to as the Autonomous Digital Enterprise.

A few more findings BMC didn’t want anybody to miss: 

  • 20% increase in agent productivity: Helix solutions boost agent productivity through cognitive automation capabilities, a streamlined user interface/user experience and single-pane-of-glass view with multi-cloud integrations, and analytics for lower mean-time-to-repair.
  • 50% of tickets resolved through self-service and automation: Half of tickets are resolved before reaching the service desk through self-service with single consolidated service catalogs, knowledge articles and chatbots, and orchestrated automation triggered from self-service and/or self-remediation, reducing the cost of service delivery.
  • 33% of IT Operations resources “shifted-left” to free up skilled IT resources: By deploying the BMC Helix suite of solutions, organizations gained cloud benefits such as simplified management and regular updates, while gaining capabilities to monitor, manage, streamline, and orchestrate the increasingly complex IT estate. The full solution set automates 12% of IT Operations’ workloads, empowering employees to focus on programs that drive innovation, capability enhancement, and automation.
  • 90% reduction in incidents caused by DevOps changes: Orchestration and third-party integrations help avoid wasted time for data entry across systems, streamline testing, and allow DevOps teams to release more quickly and frequently. 

So, where does this leave you? According to Forrester, you’re in a position to achieve a growth-oriented autonomous digital enterprise capable of achieving differentiation through agility, customer centricity, and actionable insights. To be successful, however, organizations must develop new operating models enabled by new technologies.

Are you and your organization ready to put in the time, effort, and rethinking of the business required to do this?

So where does that leave you? If you are careful, you will find useful, intelligent automation everywhere, which promises to lead to a transcendent customer experience, effective enterprise DevOps, and data-driven business outcomes. If you play it as described, and Forrester and BMC are right, that should lead to nirvana. But DancingDinosaur’s experience reporting on new technology suggests that it is always harder than it sounds. Let’s hope that it works for all of you as expected.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com

Update a Mainframe as a Modern Platform

April 13, 2021

When responding to those who say the mainframe is outdated, my knee-jerk response is to cite Linux on the mainframe. Sometimes I throw in the use of containers and even languages like Python or Node.js, but even these answers are starting to sound dated.

SiliconANGLE has a much better answer from analyst Jason Bloomberg in his piece, “How the mainframe became a surprising platform for innovation.” Bloomberg describes how even more recent advances from a new generation of ISVs are updating the mainframe in new ways and with new capabilities.

Since the BUNCH, a collection of mainframe rivals, fell apart in the 1970s, IBM has had to carry the burden of mainframe innovation pretty much on its own, with only occasional help from a few ISVs. The BUNCH, an acronym for the five surviving rivals of that time–Burroughs, UNIVAC, NCR, Control Data Corporation (CDC), and Honeywell–gradually gave up the chase. That a few have survived in some form to this day is itself a marvel. A decade earlier, RCA and General Electric gave it a run; along with the BUNCH they were referred to collectively as the Seven Dwarfs.

So how do Bloomberg and SiliconANGLE come to call the 50-plus-year-old mainframe a modern platform for innovation?

It actually starts with IBM and the cloud. To open the door to innovation, “Big Blue has been rationalizing and lowering mainframe pricing over the last few years, bringing the platform into cost alignment with cloud alternatives,” Bloomberg writes.

But even more was needed. IBM began rolling out Wazi Developer for Red Hat CodeReady Workspaces, a cloud-native development experience for its venerable z/OS mainframe operating system. Wazi integrates into any enterprise-wide standard DevOps pipeline, giving developers a consistent and familiar development experience for IBM z/OS. “Wazi provides COBOL developers with exactly the same experience as Java or Node.js developers,” said Rosalind Radcliffe, chief architect for DevOps for z Systems. 

Bloomberg at SiliconANGLE is digging deeper than just the familiar mainframe ISVs. He’s introducing new players with, surprise, new ideas.

Let’s not immediately reject familiar Z ISVs. BMC Software Inc., which recently acquired Compuware, another mainframe ISV, has bolstered the Z innovation story with DevOps-centricity across its customer base. “The mainframe is no longer behind the curtain. It has to be mainstream,” says April Hickel, vice president for Intelligent Z Optimization and Transformation at BMC. “The goal is to have the same DevOps pipeline for mainframe as the rest of development.”

Bloomberg is looking beyond familiar mainframe ISVs. For instance: GT Software addresses mainframe integration challenges by offering a secure abstraction layer between modern cloud applications and the mainframe. Furthermore, GT enables Z shops to generate APIs via a no-code, drag-and-drop interface to foster innovation. 

Another, Precisely Holdings, covered by DancingDinosaur last year, has a history of improving sorting algorithms on the mainframe. Most recently it entered discussions with IBM to identify where the bottlenecks in sorting occur and how hardware can be used to accelerate them.

Model9 Ltd, another startup, migrates data off the mainframe to either cloud or on-premises storage. The migrated data are in an encrypted, native binary format. Once at their destination, Model9 transforms the data as necessary for analytics or backup purposes.

This allows Model9 to replace tape or virtual tape for backup and restore, since it can return native binaries to the mainframe if necessary. However, Bloomberg notes, supporting various analytical use cases off the mainframe is Model9’s primary purpose.
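Because the data arrive in native mainframe binary, any off-platform analytics implies a decode step. A minimal sketch of the idea, using Python’s standard EBCDIC code page 037 codec (the record here is invented for illustration and is not Model9’s actual format):

```python
# Decode a hypothetical EBCDIC (code page 037) text field from a
# mainframe record so off-platform analytics tools can read it.
# Python ships the cp037 codec in its standard library.

record = bytes([0xC8, 0xC5, 0xD3, 0xD3, 0xD6])  # "HELLO" in EBCDIC cp037
decoded = record.decode("cp037")
print(decoded)  # HELLO
```

Real record layouts also carry packed-decimal and binary numeric fields, which need their own conversions; the point is simply that the transformation happens at the destination, not on the mainframe.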

Another newcomer, VirtualZ Computing, redirects requests for mainframe applications to a single instance of that application in order to reduce licensing costs. But lowering costs is only part of the VirtualZ value proposition. “VirtualZ leaves data where it is,” said Vince Re, co-founder and chief technology officer of VirtualZ Computing. “That frees a customer to virtualize the application, letting it run wherever makes the most sense.”

Will these revitalize the mainframe? It’s a start, and more will surely follow their success. The Z is here to stay.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com.

Celonis-IBM SI Deal

April 5, 2021

Is process mining new? Celonis, like every good marketer on the make, suggests it is. Its software identifies how work moves through an organization and suggests more efficient ways of getting the same work done, an approach known as process mining.


Process mining image credits: mbortolino / Getty Images

On April 1 Celonis continued, “Before you can improve a workflow, you have to understand how work advances through a business, which is more complex than one might imagine inside a large enterprise.” That’s where Celonis comes in, with software that identifies how work moves through an organization and suggests more efficient ways of getting the same work done, an approach it calls process mining.
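Conceptually, the core of process mining is simple: reconstruct each case’s path through the process from an event log, then count how often each path occurs. A minimal sketch of that idea (a toy illustration, not Celonis’ actual algorithm):

```python
from collections import Counter

# A toy event log: (case_id, activity) pairs, already in timestamp order.
event_log = [
    ("A1", "receive order"), ("A1", "check credit"), ("A1", "ship"),
    ("A2", "receive order"), ("A2", "ship"),
    ("A3", "receive order"), ("A3", "check credit"), ("A3", "ship"),
]

def mine_variants(log):
    """Group events by case, then count identical activity sequences."""
    traces = {}
    for case_id, activity in log:
        traces.setdefault(case_id, []).append(activity)
    return Counter(tuple(trace) for trace in traces.values())

variants = mine_variants(event_log)
# The most frequent variant is the process's "happy path"; rarer
# variants are deviations worth investigating.
print(variants.most_common(1))
```

Commercial tools like Celonis layer timing analysis, conformance checking, and improvement recommendations on top of this basic reconstruction.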

That day the company announced a significant partnership with IBM, by which IBM Global Services will train 10,000 consultants worldwide on Celonis. The deal gives Celonis, a company with around 1,200 employees, access to the massive selling and consulting power of IBM, while IBM gets a deep understanding of a piece of technology that is at the front end of what it describes as the workflow automation trend.

The chief revenue officer at Celonis explains that digitizing processes has been a trend for several years. It has sped up due to COVID, and it’s partly why the two companies have decided to work together. “Intelligent workflows, or more broadly spoken workflows built to help companies execute better, are at the heart of this partnership and it’s at the heart of this trend now in the market,” he says.

IBM’s view looks a little different. IBM now owns Red Hat, acquired for $34 billion. The two companies believe that by combining the Celonis technology, which is cloud based, with Red Hat, which can span the hybrid world of on premises and cloud, the two together can provide a much more powerful solution to follow work wherever it happens.

“I do think that moving the [Celonis] software into the Red Hat OpenShift environment is powerful because it does allow what’s already a very open solution to now operate across this hybrid cloud world, leveraging the power of OpenShift, which can straddle the worlds of mainframe, private cloud and public cloud,” writes Ron Miller, technology journalist at TechCrunch. “The data straddle those worlds and will continue to straddle those worlds,” adds Mark Foster, senior vice president at IBM Services.

Most importantly, it offers another way to leverage IBM’s stunning investment in Red Hat by creating another opportunity to use OpenShift, which is shaping up as the crown jewel of the Red Hat acquisition.

A lingering question arises: Why didn’t IBM, a multi-billion dollar company, just buy Celonis outright? It probably could have acquired it for what would amount, for IBM, to petty cash.

Or maybe Celonis was not willing to jump for what it considered small money. As Miguel Milano, Celonis’ chief revenue officer, reiterates, digitizing processes has been a trend for several years, one that has sped up due to COVID, and it’s partly why the two companies have decided to work together.


Anyway, the companies report they had already been working together for some time prior to this formal announcement, and this partnership is the culmination of that. As this firmer commitment to one another goes into effect, the two companies will be working more closely to train thousands of IBM consultants on the technology, while moving the Celonis solution into Red Hat OpenShift in the coming months.

It’s clearly a big deal with the feel of an acquisition, but Milano says that this is about executing his company’s strategy to work with more systems integrators (SIs), and while IBM is a significant partner, it’s not the only one. Oh yeah? With IBM Global Services set to train 10,000 consultants worldwide on Celonis, what SI is going to be bigger?

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com.

Qiskit Metal for Quantum Development

March 30, 2021

On February 4, 2021, IBM unveiled its Quantum Development Roadmap, which showcases the company’s integrated vision and timeline for full-stack quantum development, including hardware, software, and applications.

IBM’s quantum scaling roadmap, published September 15, 2020, calls for the 65-qubit IBM Quantum Hummingbird processor in 2020, the 127-qubit Eagle in 2021, and the 433-qubit Osprey in 2022.

Last September (2020), IBM shared its roadmap to scale quantum technology, with a clear vision for how to get to its declared inflection point of 1,000+ qubits by 2023 – and quantum systems powerful enough to explore solutions to challenges impossible on classical machines alone. The development roadmap gives millions of professional developers more reason and opportunity to explore quantum computing within their industries and areas of expertise – without the need to learn new tools or languages.

Earlier this month, IBM introduced Qiskit Metal, a quantum computing SDK that promises to be accessible to almost anyone (not me, I briefly tried). As IBM explains it: Qiskit Metal enables chip prototyping in a matter of minutes. 

Just start from a convenient Python Jupyter notebook and take advantage of its user-friendly graphical user interface (GUI). Choose from a library of predefined quantum components, such as transmon qubits and coplanar resonators, and customize their parameters in real time to fit your needs. Use the built-in algorithms to automatically connect components. You can even implement new experimental components using Python templates and examples. (As noted above, it’s not as easy as it implies.)

Metal, IBM continues, starts you off with modeling your intended quantum system. Metal helps by automating the quantum electrodynamics modeling of quantum devices to predict their performance and parameters, such as qubit frequencies, harmonics, couplings, and dissipation. Metal’s vision is to provide the abstraction layer needed to seamlessly interconnect with your favorite electromagnetic analysis tool (HFSS, Sonnet, CST, AWR, Comsol, …), dynamically rendering and co-simulating your design, at the whim of a click.
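Those predicted parameters come from standard circuit-quantum-electrodynamics approximations. As a rough illustration of the kind of quantity involved (a textbook formula, not Metal’s actual code), the transmon approximation relates a qubit’s 0→1 transition frequency to its Josephson energy EJ and charging energy EC:

```python
from math import sqrt

def transmon_f01(EJ_GHz: float, EC_GHz: float) -> float:
    """Approximate transmon 0->1 transition frequency in GHz, valid in
    the transmon regime EJ/EC >> 1: f01 ~ sqrt(8*EJ*EC) - EC."""
    return sqrt(8 * EJ_GHz * EC_GHz) - EC_GHz

# Typical design values: EJ ~ 15 GHz, EC ~ 0.3 GHz (hypothetical example)
f01 = transmon_f01(15.0, 0.3)
print(f01)  # 5.7 GHz, a typical transmon frequency
```

In this regime the anharmonicity is roughly −EC, which is why designers tune EJ and EC together rather than independently; tools like Metal automate extracting such parameters from the actual chip geometry rather than from idealized energies.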

Behind Qiskit Metal is IBM’s vision for quantum development. In short, designing quantum devices is the bedrock of the quantum ecosystem, but it is a difficult, multi-step process that connects traditionally disparate worlds. That’s where Metal comes in: it promises to automate and streamline this otherwise complex process. IBM envisions developing a community-driven universal platform capable of orchestrating quantum chip development from concept to fabrication in a simple and open framework.

Specifically, the company wants to accelerate innovation in quantum devices and lower its barriers. At a recent gathering, quantum physicist Zlatko Minev and other IBM Quantum team members introduced a suite of hardware design automation tools that can be used to devise and analyze superconducting devices, with the goal of integrating the best tools into a quantum hardware designer’s workflow. This is what was just introduced as Qiskit Metal, a tool for quantum hardware development.

IBM hopes the community will bridge the gap between pieces of superconducting metal on a quantum chip and the computational mathematics of Hamiltonians and Hilbert spaces, making the field available to anyone with a curious mind and a laptop. The goal ultimately is to make quantum device design a streamlined process that automates the laborious hardware tasks, as design automation already does for conventional electronic devices.

To achieve that, IBM designed the software with built-in best practices and cutting-edge quantum analysis techniques, all while seamlessly leveraging conventional tools and languages like Python. In short: the goal of Qiskit Metal is to allow for easy quantum hardware modeling, with fewer design-related errors and increased speed.

With luck it should get better. By 2023, IBM’s development roadmap fills in the gap to take developers from operating at the kernel level to working with application modules, laying the foundation for quantum model services and frictionless workflows. This includes opening up the variety of quantum circuits to include dynamic circuits, bringing real-time classical computing to quantum circuits to improve accuracy and potentially reduce the required resources.

Developers exploring quantum computing today will be able to do more, faster, as IBM implements technologies designed on OpenShift to work alongside quantum computers. As a result, more developers from different industries will have more reasons and opportunities to explore quantum computing within their workflows and without any need to learn new tools or languages, except maybe Qiskit Metal. IBM is counting on you to be more development capable than me.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com.
