4 Reasons Z Not Outdated

November 30, 2020

DancingDinosaur continues to get questions about the Z being outdated. That’s surprising because Linux was introduced on the Z over 20 years ago. It was a gutsy, leading-edge move then, and through subsequent Z assist processors the Z has continued to keep pace with the latest technology changes, from containers to Python.

So here are four reasons your Z should not become outdated (unless you allow it to).

  1. Linux on Z
  2. IBM LinuxONE
  3. Red Hat OpenShift on Z
  4. Support for the most popular languages and coding practices

Linux on Z. Some other IT analysts might think otherwise, but for DancingDinosaur, today’s modern Z started with IBM’s introduction of Linux on Z. Finally there was an open source system that I actually understood and could use. Of course it would take a few more years of tweaking by IBM and others before Linux would run seamlessly on the Z, but I was overjoyed anyway. It seemed at the time a great step toward making the Z more accessible to non-technical people, and possibly less expensive and more flexible, which would attract smaller companies. The Z needed to address more than just the Fortune 100.

IBM LinuxONE. A smaller, single-frame Z, again poised to attract companies beyond the Fortune 100. IBM also implied that it would be less expensive. DancingDinosaur wrote about this in Sept. 2018 here.

At that time IBM dubbed it the newest generation of the LinuxONE, the IBM LinuxONE Emperor II, built on the same technology as the IBM z14, which DancingDinosaur covered here. The key feature of the new LinuxONE Emperor II is IBM’s Secure Service Container, presented as an exclusive LinuxONE technology representing a significant leap forward in data privacy and security capabilities. With the z14 the key capability was pervasive encryption. This time the Emperor II promised very high levels of security and data privacy assurance while rapidly addressing unpredictable data and transaction growth. And as a LinuxONE it would be cheaper, right?

IBM still sees itself in a titanic struggle with Intel’s x86 platform. With the LinuxONE Emperor II IBM thought it had the chance to change some minds. That doesn’t mean, however, that the Emperor II is a Linux no-brainer, even for shops facing pressure around security compliance, never-fail mission-critical performance, high capacity, and high performance. Change is hard, and there remains a cultural mindset based on the lingering myth of the cheap PC of decades ago. IBM wasn’t likely to cut prices that low or offer deals companies couldn’t refuse. But the machine still has great security, capacity, and performance specs, and, as IBM promises, Linux is seamlessly built in this time. Even I might be able to get it up and running.

Red Hat OpenShift on Z. According to Kavita Sehgal, an IBM expert in designing and deploying hybrid cloud solutions on Z mainframes, OpenShift on Z gives developers agility on a platform that’s modern, scalable, automated, secure, reliable, and compliant with the standards that governments and regulated industries require. These differentiators are essential to any company that needs to run mission-critical workloads on the hybrid cloud and needs visibility into those workloads—whether on premises, on a private cloud, or on a public cloud (in effect, a hybrid cloud). OpenShift also facilitates the mixing of different development technologies.

Support for the most popular languages and coding practices. Through OpenShift on Z, together with the other Z assist processors, you can combine C++, Java, Python, Perl, and other common Linux languages, expedite the use of containers, and more.
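To make that concrete, here is a minimal sketch (DancingDinosaur’s own illustration, not IBM sample code): the very same Python script runs unchanged whether its container lands on an x86 node or on Linux on Z, where the architecture reports itself as s390x.

    # Minimal sketch: the same Python runs unchanged on x86_64 or Linux on Z.
    import platform

    arch = platform.machine()  # 's390x' on Linux on Z, 'x86_64' on Intel boxes
    print(f"Running on {arch} under {platform.system()}")

Package that script in a container image built for both architectures and OpenShift can schedule it on whichever node is available.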

So, there is enough flexibility above to ensure your Z won’t get outdated unless you want it to.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com.

The Pandemic and Z

November 19, 2020

DancingDinosaur really wants the pandemic to end. Being such a good Do-Bee, he often feels he is the only one still wearing a mask, using gloves, staying home, not physically shopping in stores, etc. It will be eight months in December and I go almost nowhere. My car gets about three weeks to the gallon. I start it up every few days just to see what’s camping out at the foot of the driveway.

Pandemic virus: courtesy Getty

Back in April, when the pandemic was still novel and state governors were just realizing that they would have to step up and manage the situation with a few regulations since the federal government apparently had punted on it, IEEE Spectrum came out with a glowing article on the Z headlined Mainframes Are Having a Moment. Months later, it is turning into much more than a moment.

This is not going to end in 2020, and probably not even in 2021, since millions of people will have to get vaccinated first (two doses each, spaced a couple of weeks apart) before it will be safe to act like we used to, if we can still remember what that was. So many delivery drivers have been dropping stuff at my door that I’m wondering if I should give them a holiday tip.

Back in April, Spectrum wrote that there’s a silver lining to the failings of state unemployment insurance systems caused by the COVID-19 crisis: people who can program mainframes are urgently needed. It seems the unemployment systems were choking as millions of people, week after week, month after month, filed claims and brought the systems to their knees. Many, sadly, still are.

The Open Mainframe Project put out a desperate call for COBOL programmers at the state level, and DancingDinosaur ran the announcement in mid-April. As the announcement said: more than 10 million people in the United States have filed for unemployment amid the COVID-19 global pandemic and the financial crisis that ensued. The ranks of the unemployed continued to grow in the months that followed; the growth has only recently started to moderate slightly.

Most colleges had dropped the mainframe ball years before in their mad chase of the sexy distributed systems their students wanted, cutting mainframe programming from the computer science curriculum to focus on more modern languages and technologies that appealed to students. Ironically, only then did faculty and staff start to report an uptick in interest in COBOL.

The increase actually began around the time pandemic-related layoffs inundated state unemployment agency computer systems, causing government officials to put out the urgent call for programmers who know COBOL to step in and help. DancingDinosaur learned COBOL decades earlier in college but never used it after he squeaked by with a B-. By the time I went to grad school for communications, I peeked into the computer science department and they weren’t even offering COBOL.

DancingDinosaur started this blog years ago after writing, at an editor’s request, a series of freelance articles declaring the mainframe dead. The editor would get a press release saying this or that company was eliminating its mainframe in favor of new distributed systems, meaning PCs.

Being a hungry freelancer on the make for another gig, I called back those same companies that months earlier had told me how much they would gain by getting rid of the mainframe. After an embarrassing silence punctuated with a lot of ums and uhs, they would sheepishly admit the mainframe was still there, followed by a slew of excuses for why it wasn’t working out as planned.

I contacted the editors who initially assigned me the mainframe-is-dead stories and told them what I had learned. My big hoped-for exposé never followed. They had lost interest in the topic. So I started DancingDinosaur, which never replaced my freelance writing income but is a lot more fun.

So how does one have fun with DancingDinosaur? By following any vaguely related topic that you want, like the pandemic.  

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com.

RPA Leads Revival of Z Terminal Emulation

November 13, 2020

When Rocket Software announced its acquisition of ConnectiQ and of WebConnect mainframe terminal emulation this past October, the obvious question was why. Mainframe terminal emulation isn’t exactly new. But digging more closely into the announcement, it became clear: this is about Robotic Process Automation (RPA).

As it turns out, RPA is just one of many forms of business process automation software. It is based on metaphorical software robots or AI digital workers. By 2022, 65% of organizations that have deployed robotic process automation will also introduce AI, including machine learning and natural language processing algorithms. So RPA encompasses more than terminal emulation for Z.

RPA, courtesy of Enterprisers Project

Gartner, in a recent report, actually takes it even further, calling it hyperautomation, which it describes as an effective combination of complementary sets of tools that integrate functional and process silos to automate and augment business processes.

As Rocket sees it, IBM Z customers have faced ongoing pressure to re-platform, even as enhancement and integration consistently prove to be the faster, less painful, and more cost-effective approach. With 30 years of experience optimizing, enhancing, integrating, and strengthening legacy platforms, Rocket is committed to delivering the broadest set of options for IBM Z and IBM i because no single approach fits every circumstance. Hence its latest Z terminal emulation acquisitions, which let customers delegate repetitive, lengthy tasks to efficient mainframe RPA.
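What does delegating a repetitive terminal task to a bot actually look like? Here is a minimal sketch using the open-source py3270 library (a Python wrapper around the x3270/s3270 emulator), not Rocket’s products; the host name, credentials, and screen positions are hypothetical:

    # Minimal RPA-style sketch: script a repetitive 3270 login with py3270.
    from py3270 import Emulator

    em = Emulator(visible=False)          # headless 3270 session
    em.connect('mainframe.example.com')   # hypothetical host
    em.fill_field(10, 44, 'IBMUSER', 7)   # type user ID at row 10, col 44
    em.fill_field(11, 44, 'secret', 6)    # type password at row 11, col 44
    em.send_enter()                       # submit the screen
    status = em.string_get(1, 2, 40)      # scrape a status line off the screen
    print(status)
    em.terminate()

A bot like this, run hundreds of times a day, is exactly the kind of repetitive, high-volume task RPA targets.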

“Automation will take on an even more critical role in a post-pandemic world as cost takeout and business resilience become chief destinations on the technology roadmap,” said Craig Le Clair and Leslie Joseph of Forrester Research in a report entitled Ten Golden Rules For RPA Success. “RPA will be the first stop along the path to intelligent automation.”  

Gartner further raises the ante, having already pegged hyperautomation as one of its top 10 strategic technology trends for 2020.

Rocket is also acquiring WebConnect, an enterprise-class terminal emulation solution that continues the company’s 15-year investment in the critical domain of host access for mainframe systems. The company’s ongoing innovation ensures that users can access their IBM Z, IBM i, and other VT-based systems in any way they require.

Process automation, robotic or otherwise, is attracting much attention as organizations start to figure out how they will function in the post-pandemic world that will eventually arrive in 2021 or 2022, we hope. Like Gartner, Forrester too has chimed in.

Forrester notes RPA and similar automation still present some pitfalls:

  • Scale remains its Achilles’ heel. More than half of all RPA programs worldwide employ fewer than 10 bots. Moreover, less than 19% of RPA installations are at an advanced stage of maturity. Fragmented automation initiatives, a patchwork of vendors, incomplete governance models, and attempts to automate overly complex tasks stall efforts. 
  • Enterprise programs lack the momentum needed to meet ROI targets. At least 25% of companies struggle to meet their ROI targets. To these firms, scale means finding and automating more tasks.
  • Finding enough tasks to automate is the biggest scale issue. A bot needs repetitive tasks that occur in high enough volume to justify the cost of building it, but candidate tasks often show too much variation, even when the outcome is the same. 

“At Rocket, we work with customers to identify the best path to achieve their concrete outcomes,” said Christopher Wey, President of the Rocket IBMi business unit. “We’re also excited to bring the transformative force of RPA with ConnectiQ to IBM Z customers.” 

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com.

Edge Computing Meets 5G and Hybrid Clouds

November 4, 2020

We’ve known for several years that IBM adores hybrid clouds. It was just about a month ago that IBM restructured almost its entire business in an effort to pursue what it perceives as a $1 trillion hybrid cloud opportunity. DancingDinosaur covered it here.

Courtesy: PCMag.com

The company didn’t stop there. Rather, it leveraged two key enabling technologies—edge computing and 5G high-bandwidth, low-latency, edge-based wireless networks—to enable any business to speed its digital journey. In fact, Gartner predicts that by 2025, 75% of enterprise-grade data will be created and processed by 5G devices at the edge, and every industry will be impacted by this shift.

With that in mind, IBM teamed up with AT&T to deliver global access to 5G devices. The shift is fueled by two key enabling technologies—edge computing and 5G high-bandwidth, low-latency wireless networks—which make it possible for businesses of all sorts to boost processing efficiency. Specifically, 75% of enterprise-grade data will be created and processed by devices at the edge, close to where it is created and used, notes Gartner.

Enterprises can capture the value of these technologies simply by bringing hybrid cloud computing to a low-latency edge environment, enabling them to more quickly and securely build new applications at the edge or even on premises. That’s the theory, at least.

That’s why IBM and AT&T announced their collaboration in the first place. AT&T brought the global network and IBM brought its open hybrid cloud platform built on Red Hat OpenShift. This should make it easier for enterprises to manage a heterogeneous hybrid cloud computing environment in a low-latency, private cellular network edge environment.

Doing the processing at the edge is simply faster and more efficient than shipping the data elsewhere. Red Hat OpenShift facilitates the consistency of running workloads on a range of edge devices across different environments.

For telcos, as IBM puts it, this open approach is critical. IBM has been working with companies all over the world to do just that. In fact, 83% of the world’s largest telcos are IBM clients—including Vodafone, Verizon, Bharti Airtel and others.

Making it easier for businesses to manage open hybrid cloud computing in a low-latency, private cellular network edge environment like AT&T’s will help businesses across a variety of industries quickly and securely build applications using regional or on-premises edge computing. Better still, because the platform is built on Red Hat OpenShift and IBM Cloud Satellite, clients get the flexibility to bring their applications to any environment where their data and processing may reside, while leveraging the efficiency of IBM’s open hybrid cloud-based edge processing.

As new hybrid cloud services emerge, enterprises can tap into the power of 5G for a wide range of uses, such as factory safety and efficiency, real-time health monitoring, or autonomous vehicle operation. And because they are being handled at the edge, these processes avoid the inefficiency of even the millisecond latency of sending workloads to a centralized cloud.

At the same time, companies need a secure environment in which to run and interconnect their critical workloads, from the tiniest wireless monitoring device on the network’s farthest edge to the central cloud—as well as all on- and off-premises points in between. 

Will this pay off? IBM reports it has been working with telcos on this for some time. Now all that’s needed are actual customers to put processing and applications on the edge to take advantage of it.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com.

BizTransformation and the CHRO

October 27, 2020

If you have to write a proposal to win approval for a new technology project in the next year or so, make sure to include business transformation as the primary objective. Don’t worry if you haven’t a clue what you will actually be transforming. In the last eight or nine months, since the pandemic first made itself known, every business has been forced to change in multiple ways, ways they never would have guessed. So relax, make business transformation your objective, and you’ll have no shortage of projects to fill it in with when somebody requests documentation.

Just think about it. How often have you had to adjust to deal with customers who are suddenly working from home, or staff who are now working from home while also helping their children with a Zoom substitute for attending school? Believe me, you will have no shortage of transformations that your business is unexpectedly facing.

Look at your supply chain; has that changed in any way? Or your customer service or sales processes: any transformation going on there? Or product development, design, and engineering? These should be able to keep your business transforming until the end of time, at which point the pandemic might just be winding down, regardless of what the President says.

The pandemic is paving the way for one change that human resources (HR) people have long been salivating over: elevating HR into the C-suite. DancingDinosaur has never been hired for a salaried job over his 30-year career as a technology writer, working instead as an independent contractor. To DancingDinosaur, HR was somebody who periodically bugged him to fill out a 1099 form, which he was always happy to do because he liked getting paid.

But the pandemic wreaked changes on the millions of salaried people who suddenly found themselves utterly unprepared to work from home for what has turned out to be months. These people often lacked even a suitable ergonomic desk chair, working instead at the kitchen table for extended periods.

Now the HR folks are not just blank fillers or form chasers. They have to counsel people and help them organize themselves to do serious work productively for an extended period of time, like maybe forever.

Suddenly, as “C-suite leaders look to rapidly transform to meet new customer needs and overhaul business models, they report inadequate skills among their biggest hurdles to progress,” writes Amy Wright, IBM Managing Partner for Talent and Transformation. The needs include both technical skills to work with technology and behavioral skills like agility and the ability to collaborate effectively. She continues: “At the same time, our consumer research shows there has been a permanent shift in the expectations employees have of their employers, including better support for their physical and emotional health or skills training.”

The result is a new C-suite title, the Chief Human Resources Officer (CHRO). HR proponents argue that this is the moment to evolve, shifting away from a process-oriented function to an agile consulting arm and, in doing so, to drive engagement and productivity, foster trust in uncertain times, cultivate resilient workforces, and add some strategic perspective.

Not everybody may be ready to elevate the HR director to CHRO, but the position got a big boost from an article in Forbes. Now, more than ever, organizations require HR to set the tone for good workplace culture, excellent employee experiences, high retention, succession planning, change management, and other strategic business objectives, not just to clean up awkward messes. Just don’t forget distributing 1099 forms to people like me.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.

IBM Accelerates Hybrid Cloud

October 15, 2020

IBM has been drooling over the idea of the hybrid cloud for several years at least. Hybrid clouds result when an organization operates more than one cloud, typically one or more on-premises clouds plus one or more clouds from third-party providers. IBM wants to provide its IBM Cloud to the customer as the on-premises cloud and provide a variety of specialized clouds to the same customers, effectively expanding its hybrid cloud revenues.

So last week (Oct. 8), IBM revealed complex plans to accelerate its hybrid cloud growth strategy to drive digital transformations for its clients. BTW, if you expect to sell complex technology and systems, you had better tie them directly to business transformation, which has emerged as the hot C-suite buzzword.

And, IBM continued, it will separate the Managed Infrastructure Services unit of its Global Technology Services division into a new public company, imaginatively called NewCo for now. This creates two companies, each with strategic focus and flexibility to drive client and shareholder value. It will be achieved as a tax-free spin-off to IBM shareholders, completed by the end of 2021.

Of course, others have similar ideas. Oracle offers its cloud infrastructure with a free promotion. Microsoft offers its Azure Cloud platform. Even HPE is there with its Pointnext cloud services. So you have choices.

Arvind Krishna, IBM’s Chief Executive Officer, adds: “IBM is laser-focused on the $1 trillion hybrid cloud opportunity.” Client buying needs, he continued, “for application and infrastructure services are diverging, while adoption of our hybrid cloud platform is accelerating.”

IBM will focus on its open hybrid cloud platform and AI capabilities. NewCo will have greater agility to design, run, and modernize infrastructure. Does it tempt you to jump in right now and buy some IBM shares (about $125 a share)?

The company is understandably enthusiastic. As two independent companies, IBM and NewCo might capitalize on their respective strengths. IBM will accelerate clients’ digital transformation journeys, and NewCo will accelerate clients’ infrastructure modernization efforts.

IBM will focus on its open hybrid cloud platform, which, it claims, represents a $1 trillion market opportunity. To build its hybrid cloud foundation, IBM acquired Red Hat for $34 billion to unlock the cross-platform value of the cloud. This platform also promises to facilitate the deployment of powerful AI capabilities to harness the power of data, application modernization services, and systems. The move takes IBM from a company with more than half of its revenues in services to one with a majority in high-value cloud software and solutions, with more than 50% of its portfolio in recurring revenues.

IBM’s open hybrid cloud platform architecture, based on Red Hat OpenShift, works with the entire range of clients’ existing IT infrastructures, regardless of vendor, driving up to 2.5 times more value for clients than a public cloud-only solution, it claims. 

This is a fresh repackaging of what IBM has been moving toward for some years, and at considerable expense. Would you as a customer buy it? DancingDinosaur isn’t being asked, but it would wait to see specific pricing, packaging, terms, and conditions. And absolutely shop around. You have choices.

After parsing much of the IBM boilerplate around this announcement, DancingDinosaur finds it somewhat disappointing that IBM said nothing about where the Z fits in. The Z has been the company’s only profitable product performer in recent quarters. It certainly isn’t going to be a services player.

DancingDinosaur has covered the Z under its various names for 20 years. Guess Z fans will just have to see where it ends up, which very well could be nowhere. Is it likely that IBM would abandon a profitable product line that attracts so much of the Fortune 100? Or will they dump it the way they dumped chip fabrication, by paying somebody to take it? Guess we’ll just have to wait and see. 

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.

BMC 15th Annual Mainframe Survey

October 9, 2020

This month BMC came out with the results of its 15th annual mainframe survey. Survey respondents laid out some ambitious goals, starting with plans to increase adoption of DevOps on the mainframe. Specifically, they want greater stability, security, and scalability.

Respondents also reported a desire to speed up AI adoption as they seek smarter operational data. Forty-six percent of respondents made data recovery a priority. Driving the increased interest was the desire to better predict data recovery times. Even more, 64%, wanted to reduce planned outages, effectively ensuring that high availability continues as a priority.

Similarly, respondents expressed increased interest in SIEM. Security-related concerns ranked significantly higher than in last year’s survey, as did vulnerability scanning. It was not too long ago that mainframe shops were quite complacent about mainframe security. No longer.

Overall the mainframe comes out very well, especially compared to years when respondents were reporting plans to deactivate the mainframe. For example, 90% of respondents reported positive sentiments toward the mainframe by management.

Similarly, 68% forecast MIPS growth of 6 percent. Among the largest mainframe shops, 67% report that more than half of their data resides on the mainframe.

Mainframe staffing has been an ongoing concern for years. Remember the experienced mainframe veterans hitting retirement who seemed impossible to replace? IBM and the Open Mainframe Project have been working on this and finally appear to be making headway.

To start, IBM has been expanding the capabilities of the mainframe itself. Over 20 years ago, IBM introduced Linux on the mainframe. That provided, at some level, an alternative to z/OS. It was clunky and inelegant at first, but over the last two decades it has been refined. Today there are powerful LinuxONE mainframes that can handle the largest transaction workloads. IBM has made it easier to use the Z as a Linux machine with the power of the z15. Throw in Java on the mainframe and you have a very flexible mainframe that doesn’t look and feel like a traditional mainframe.

More recently, the Open Mainframe Project introduced Zowe, a new open-source framework. Zowe brings together systems that were not designed to handle global networks of sensors and devices. Now, decades after IBM brought Linux to the mainframe, IBM, CA, and Rocket Software have introduced Zowe, a new open-source software framework that bridges the divide between modern challenges like IoT and cloud development and the mainframe.

The point of all this is to enable any developer to manage, control, script, and develop on the mainframe as he or she would on any cloud platform. Additionally, Zowe allows teams to use the same familiar, industry-standard, open-source tools they already know to access, call, and integrate mainframe resources and services, as in the sketch below. So you can stop pining for those retired mainframe veterans. They’re drinking Scotch on the beach.
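For instance, here is a minimal sketch (DancingDinosaur’s illustration, with a hypothetical host and credentials) of a developer listing a user’s batch jobs through the z/OSMF REST API, the kind of REST-based service Zowe builds on, using nothing more exotic than Python and the requests library:

    # Minimal sketch: list a user's jobs via the z/OSMF REST jobs API.
    import requests

    ZOSMF = "https://zosmf.example.com"              # hypothetical z/OSMF host
    session = requests.Session()
    session.auth = ("IBMUSER", "secret")             # hypothetical credentials
    session.headers["X-CSRF-ZOSMF-HEADER"] = ""      # CSRF header z/OSMF expects

    resp = session.get(f"{ZOSMF}/zosmf/restjobs/jobs",
                       params={"owner": "IBMUSER"})  # filter by job owner
    resp.raise_for_status()
    for job in resp.json():                          # one JSON object per job
        print(job["jobname"], job["jobid"], job["status"])

No 3270 screen, no ISPF panels; to an open source developer it looks like any other REST service.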

Ironically, the mainframe is probably older than the programmers Zowe will attract. Zowe opens new possibilities for next-generation applications from next-generation programmers and developers at mainframe shops desperately needing the new, mission-critical applications their customers are clamoring for. This should radically reduce the learning curve for the next generation while making experienced professionals more efficient. BTW, Zowe’s code is made available under the open-source Eclipse Public License 2.0.

Whether it is Zowe or just the opening up of the mainframe, things are changing in the right direction. The share of people with 1-10 years of experience on the mainframe has increased from 47% to 63%, while those with more than 20 years of mainframe experience represent 18%. Most encouraging is the growth of women, who now constitute 40% (up from 30% a year before), while men have declined from 70% to 60%.

When DancingDinosaur was young, with a full head of dark hair, he complained of the dearth of women at IT conferences. Now, as he approaches retirement, he hopes the young men appreciate it.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.

Quantum Computing Use Cases

October 2, 2020

Have you been dreaming of all the great things you would do if you just had ready access to a stable and sufficiently powerful quantum computer? Through the IBM Q Network you can access the quantum computers IBM makes available over the Internet. IBM laid out its roadmap for quantum computers just a couple of weeks ago, which DancingDinosaur covered here.

The company reports that substantive quantum work is being attempted on machines available through its Q Network. DancingDinosaur has looked at Qiskit, IBM’s Python-based quantum programming framework, and at the learning materials that accompany it, but even then, I haven’t experienced that Eureka moment–an idea that could only be effectively handled if only I had a sufficiently powerful quantum machine. My best programming ideas, I’m embarrassed to admit, can be handled perfectly well on an x86 box running Visual Basic. Sorry, but I’m just not yearning for the 1,000-plus qubit machine IBM is promising in 2023 or the million-plus qubit machine after that.
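For readers who want to hunt for their own Eureka moment, here is a minimal sketch using the 2020-era Qiskit API that entangles two qubits into a Bell state and samples the result on the local simulator; no Q Network account required:

    # Minimal Qiskit sketch: build and sample a two-qubit Bell state locally.
    from qiskit import QuantumCircuit, Aer, execute

    qc = QuantumCircuit(2, 2)
    qc.h(0)                     # put qubit 0 into equal superposition
    qc.cx(0, 1)                 # entangle qubit 1 with qubit 0
    qc.measure([0, 1], [0, 1])  # read both qubits into classical bits

    backend = Aer.get_backend("qasm_simulator")
    counts = execute(qc, backend, shots=1024).result().get_counts()
    print(counts)               # roughly half '00' and half '11'

Swap the local simulator for one of IBM’s cloud backends and the same circuit runs on real hardware.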

D-Wave Quantum machine

D-Wave Systems Inc., a Canadian quantum computing company, hired 451 Research to investigate enterprise attitudes and appetites regarding quantum computing. The survey found that quantum computing is emerging as a powerful tool for large-scale businesses, the majority of which generate over $1 billion in revenue.

Among the priorities the researchers found were increasing efficiency and productivity at an organizational level, boosting profitability, and solving large, complex business problems that may not be solvable with current methods, tools, and technology. And the researchers concluded, of course, that now is the time for executives to take quantum computing investment seriously because the competition is already exploring how to solve complex problems and gain the coveted first-to-market advantages.

If that sounds familiar, we have been hearing versions of it for decades. This is the classic way to drive decision makers to invest in the next greatest thing–the fear of being left behind. DancingDinosaur has been writing exactly those kinds of reports arriving at similar conclusions and driving similar results for years.

D-Wave’s Volkswagen quantum story begins with the company launching in Lisbon the world’s first pilot project for traffic optimization using a D-Wave quantum computer. For this purpose, the Volkswagen Group equipped buses in the city of Lisbon with a traffic management system developed in-house. The system uses a D-Wave quantum computer and calculates the fastest route for each of the nine participating buses individually and almost in real time. The result: passengers’ travel times are significantly reduced, even during peak traffic periods, and traffic flow is improved.
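To give a flavor of how such a problem reaches a quantum annealer (a toy sketch, emphatically not Volkswagen’s actual model), route assignment gets encoded as a binary optimization problem. Here, using D-Wave’s open-source dimod library, two buses each choose between routes A and B, and sharing a route is penalized as congestion:

    # Toy QUBO sketch with dimod: x=1 means a bus takes route B, x=0 route A.
    # Energy is lowest when the two buses take different routes.
    import dimod

    # Congestion cost 2*x0*x1 (both on B) + 2*(1-x0)*(1-x1) (both on A)
    # expands to linear terms -2, -2, a quadratic term +4, and offset 2.
    bqm = dimod.BinaryQuadraticModel(
        {"bus0": -2.0, "bus1": -2.0},    # linear biases
        {("bus0", "bus1"): 4.0},         # quadratic (interaction) bias
        2.0,                             # constant offset
        "BINARY",
    )

    best = dimod.ExactSolver().sample(bqm).first  # brute force here; a real
    print(best.sample, best.energy)               # run would use a D-Wave sampler

The lowest-energy answers split the buses across the two routes; scale the same idea to nine buses and a real street graph and you have the shape of the Lisbon pilot.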

Global industry leaders across fields – from transportation to pharmaceuticals to financial services – are now looking to quantum computing to rethink business solutions and maintain competitive advantage over their peers.

The survey found that while 39% of surveyed enterprises are already experimenting with quantum computing today, a staggering 81% have a use case in mind for the next three years. High on the agenda for critical business benefits via quantum are increased efficiency and improved profitability, followed closely by improved processes, productivity, revenue, and a faster time to market for new products. Increased efficiency? Hey, I could have said that about what I did with Visual Basic on x86.

Efficiency is particularly critical to business leaders because enterprises often suffer productivity losses when tackling complex problems. In fact, over a third of enterprises have abandoned complex problems in the last three years due to time constraints, complexity, or a lack of capacity. Yet 97% of enterprises rate solving complex problems as highly important or business-critical. Clearly, today’s computing technology is not adequately meeting large-scale businesses’ needs, and VB on x86 just can’t cut it.

Alan Radding, a veteran information technology analyst, writer, and ghost-writer, is DancingDinosaur. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/.

Z Open Terminal Emulation

September 25, 2020

You can spend a lot of time working with the Z and not find much new in terminal emulation. But there actually are a few new things, mainly because times change and people work differently, using different devices and doing new things. Sure, it all goes back to the mainframe, but it is a new world.

Terminal emulator screen

Rocket Software’s latest wrinkle in terminal emulation is BlueZone Web, which promises to simplify using the mainframe by enabling users to access host-based applications anywhere and on any type of device. It is part of a broader initiative Rocket calls Open AppDev for Z. From DancingDinosaur’s perspective, its strength lies in its compliance with Zowe, an open source development framework from the Open Mainframe Project. This makes IBM Z a valuable open platform for an enterprise DevOps infrastructure.

Zowe is the first open source framework for z/OS. It enables DevOps teams to securely manage, control, script, and develop on the mainframe like any other cloud platform. It was launched in a collaboration of initial contributors IBM, CA Technologies, and Rocket Software, and is supported by the Open Mainframe Project. The goal is to cultivate the next generation of mainframe developers, whether or not they have Z experience. Zowe promotes a faster team on-ramp to productivity, collaboration, knowledge sharing, and communication.

This is the critical thing about Zowe: you don’t need Z platform experience. Open source developers and programmers can use a wide range of popular open source tools, languages, and technologies–the tools they already know. Sure, it’d be nice to find an experienced z/OS developer, but that is increasingly unlikely, making Zowe a much better bet.

According to the Open Mainframe Project, IBM’s initial contribution to Zowe was an extensible z/OS framework that provides REST-based services and APIs that allow even inexperienced developers to rapidly use new technology, tools, languages, and modern workflows with z/OS.
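As a taste of what those REST-based services look like in practice, here is a minimal sketch (again with a hypothetical host, credentials, and dataset) that reads a member of a partitioned dataset over the z/OSMF files REST API, one of the interfaces Zowe layers on:

    # Minimal sketch: fetch a PDS member's contents via the z/OSMF files API.
    import requests

    ZOSMF = "https://zosmf.example.com"          # hypothetical z/OSMF host
    session = requests.Session()
    session.auth = ("IBMUSER", "secret")         # hypothetical credentials
    session.headers["X-CSRF-ZOSMF-HEADER"] = ""  # CSRF header z/OSMF expects

    # GET /zosmf/restfiles/ds/<dataset>(<member>) returns the member as text.
    resp = session.get(f"{ZOSMF}/zosmf/restfiles/ds/IBMUSER.JCL(HELLO)")
    resp.raise_for_status()
    print(resp.text)                             # the JCL, as plain text

To a developer this is just another HTTP GET; no green screen required.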

IBM continues to invest in the open source environment through Zowe and other open source initiatives. Zowe also has help from Rocket Software, which provides a web user interface, and CA, which handles the command line interface. You can find more about Zowe here.

IBM introduced Linux, a leading open source technology, to the Z over 20 years ago. Since then it has expanded the range of the Z through open-source tools that can be combined with products developed by different communities. That openness, however, can create unintentional regulatory and security risks. Rocket Open AppDev for Z helps mitigate those risks, offering a solution that provides developers with a package of the open tools and languages they want, along with the security, easy management, and support IBM Z customers require.

“We wanted to solve three common customer challenges that have prevented enterprises from leveraging the flexibility and agility of open software within their mainframe environment: user and system programmer experience, security, and version latency,” said Peter Fandel, Rocket’s Product Director of Open Software for Z. “With Rocket Open AppDev for Z, we believe we have provided an innovative, secure path forward for our customers,” he adds. “Businesses can now extend the mainframe’s capabilities through the adoption of open source software, making IBM Z a valuable platform for their DevOps infrastructure.”

But there is an even bigger question here, one Rocket turned to IDC to answer: whether businesses that run mission-critical workloads on IBM Z or IBM i should remain on these platforms and modernize them by leveraging the innovative tools that exist today, or replatform by moving to an alternative on-premises solution, typically x86, or to the cloud.

IDC surveyed more than 440 businesses that had either modernized IBM Z or IBM i or replatformed. The results: modernizers incurred lower costs for their initiatives than the replatformers; modernizers were more satisfied with the new capabilities of their modernized platform than replatformers; and modernizers achieved a new baseline in which they paid less for hardware, software, and staffing. There is much more of interest in this study, which DancingDinosaur will explore in the weeks and months ahead.

Alan Radding, a veteran information technology analyst, writer, and ghost-writer, is DancingDinosaur. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/.

IBM Roadmap for Quantum Computing

September 18, 2020

IBM brought quantum computing to the cloud a few years back by making its small-qubit machines available there, and now even larger ones. In mid-September, IBM released its quantum computing roadmap to take it from today’s devices to million-plus qubit machines. The first benchmark is a 1,000-plus qubit device, IBM Condor, targeted for the end of 2023. Its latest challenge: going beyond what’s possible on conventional computers by running revolutionary applications across industries.

control lots of qubits for long enough with few errors

The key to making quantum computers stable is keeping them cold. To that end IBM is developing a dilution refrigerator larger than any currently available commercially. Such a refrigerator puts IBM on a course toward a million-plus qubit processor.

The IBM Quantum team builds quantum processors that rely on the mathematics of elementary particles in order to expand its computational capabilities running quantum circuits. The biggest challenge facing IBM’s team today is figuring out how to control large systems of qubits for long enough, and with minimal errors, to run the complex quantum circuits required by future quantum applications.

IBM has been exploring superconducting qubits since the mid-2000s, increasing coherence times and decreasing errors to enable multi-qubit devices in the early 2010s. Continued refinements allowed it to put the first quantum computer in the cloud in 2016. 

Today, IBM maintains more than two dozen stable systems on the IBM Cloud for clients and the general public to experiment on, including the 5-qubit IBM Quantum Canary processor and the 27-qubit IBM Quantum Falcon processor, on which it recently ran a quantum circuit long enough to declare a Quantum Volume of 64. (Quantum Volume is an IBM-created metric; a value of 64 means the machine can reliably run “square” model circuits six qubits wide and six gate-layers deep, since 2^6 = 64.) This achievement also incorporated improvements to the compiler, refined calibration of the two-qubit gates, and upgrades to the noise handling and readout based on tweaks to the microwave pulses.

This month IBM quietly released its 65-qubit IBM Quantum Hummingbird processor to its Q Network members. This device features 8:1 readout multiplexing, meaning it combines readout signals from eight qubits into one, reducing the total amount of wiring and components required for readout and improving its ability to scale.

Next year, IBM intends to debut a 127-qubit IBM Quantum Eagle processor. Eagle features several upgrades in order to surpass the 100-qubit milestone: through-silicon vias, which allow electrical signals to pass through the substrate to enable smaller device sizes and a reduced signal path, and multi-level wiring to effectively fan out a large density of conventional control signals while protecting the qubits in a separate layer in order to maintain high coherence times. The qubit layout will allow IBM to implement the heavy-hexagonal error-correcting code that its team debuted last year, as it scales up the number of physical qubits and error-corrected logical qubits.

These design principles established for its smaller processors will set it on a course to release a 433-qubit IBM Quantum Osprey system in 2022. More efficient and denser controls and cryogenic infrastructure will ensure that scaling up the processors doesn’t sacrifice the performance of the individual qubits, introduce further sources of noise, or take too large a footprint.

In 2023, IBM intends to debut the 1,121-qubit Quantum Condor processor, incorporating the lessons learned from previous processors while continuing to lower the critical two-qubit error rates so that it can run longer quantum circuits. IBM presents Condor as the milestone that marks its ability to implement error correction and scale up devices, while being complex enough to solve problems more efficiently on a quantum computer than on the world’s best supercomputers: the quantum Holy Grail.

Alan Radding, a veteran information technology analyst, writer, and ghost-writer, is DancingDinosaur. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/.

