Big Z Market by 2026

January 10, 2022

Want to start the new year with some good news? A December report values the Global Mainframe Market at USD 2.28 billion in 2019 and expects it to reach USD 3.04 billion by 2026. That’s a CAGR of 4.3% over the forecast period, 2020–2026. OK, you’ll have to wait four years to tap the full $3 billion, but start preparing for it sooner. And this isn’t just IBM talking. BMC Software, GSF Software, and others participated in the study too.
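As a quick sanity check (my own arithmetic, not the report’s), the quoted growth rate can be reproduced from the two market values; treating 2019 as the base year and 2026 as the end point gives roughly the reported figure:

```python
# Sanity-checking the report's CAGR claim (my arithmetic, not theirs):
# growing USD 2.28B (2019) to USD 3.04B (2026) over 7 years.

def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate over the given number of years."""
    return (end_value / start_value) ** (1 / years) - 1

rate = cagr(2.28, 3.04, 2026 - 2019)
print(f"{rate:.1%}")  # about 4.2%, close to the reported 4.3%
```

The small gap between 4.2% and 4.3% likely comes down to how the report counts the forecast period; either way, the headline numbers hang together.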

An example from the report: the mainframe software market is expected to grow with rising demand for managing industrial activities, especially in the e-commerce and retail industries. Several companies operate in the market, providing software with excellent features. The market includes several players familiar to DancingDinosaur readers, including BMC and GSF as well as IBM.

The report was published by Credible Markets, a new-age (a description that immediately makes me suspicious) market research company that follows the pulse of global markets. Credible Markets positions itself as a quick-turnaround source for the market research needs of businesses. It reports collaborating with leading publishers of market intelligence, and the coverage of its reports spans all the key industry verticals and thousands of micro markets. It claims its massive repository allows clients to pick among recently published reports from a range of publishers that also provide extensive regional and country-wise analysis.

You also should already be familiar with the main Z players: IBM (United States), of course, but also Broadcom (United States), BMC Software, Inc. (United States), Rocket Software (United States), MacKinney Systems (United States), Software AG (Germany), UNICOM Systems (United States), GSF Software (United States), CSI International (United States), SDS (United States), as well as some less familiar players: Beta Systems Group (United States), Lookup Mainframe Software (United States), Levi, Ray & Shoup (United States), ASG Technologies Group, Inc. (United States), and Precisely (United States). All of these are included in the latest report.

What would DancingDinosaur want to see? Mainly, I’d want to see where new business opportunities might come from. First, look for organizations that are new to the mainframe space, a no-brainer for sure. Then, look for companies that are getting frustrated in the x86 world, maybe due to security issues or capacity constraints. 

Also look for x86-based organizations feeling hindered by flexibility constraints. In short, look for everything the mainframe does really well and find organizations that would be open to a new way of working.

In looking for these kinds of opportunities you soon will have to confront the mainframe pricing issue. It is quite possible to flip this to your favor. Start by showing how much the x86 world actually costs you in terms of labor, reduced performance, downtime, increased management costs, and missed opportunities due to persistent unreliability. If you gather your data accurately and honestly, you should be able to present a convincing case, although some of IBM’s recent supposed discounts either don’t save you much or aren’t easy to use.

But there are some Z strategies that make sense and save money. Start with IBM’s latest Z15. The z15 is designed to bring significant savings for enterprises still using earlier IBM Z technology. An IBM IT Economics model was used to examine z15 hardware and software upgrade costs; the model compared total operating costs for hardware and software over five years.

Finally, how you partition and organize your use of the Z15, in terms of the ways you organize, schedule, and distribute workloads, can often save you money in software costs, reduced power consumption, labor, and more. Anyway, I hope you have a great few years preparing for growth in your mainframe.

Alan Radding is DancingDinosaur, a veteran information technology analyst, writer, and ghostwriter. Follow DancingDinosaur on Twitter, @mainframeblog.

Fresh Options for a Mainframe

December 22, 2021

DancingDinosaur doesn’t like to encourage people to leave the mainframe except when they can supplement the organization’s capabilities through the use of the Cloud or other distributed platforms in conjunction with the mainframe. Some encourage abandoning the mainframe altogether but that poses significant risks and added costs. 

Planetmainframe.com: Hybrid world

Rather, I prefer to keep the proven benefits of the mainframe (high availability, high capacity, and rock-solid reliability) while supplementing it selectively with capabilities cherry-picked from the numerous options available through cloud and distributed platforms. This amounts to a hybrid approach that can preserve your investment in the proven mainframe systems and existing data your organization already depends upon while selectively supplementing them with carefully chosen cloud and distributed systems.

OK, maybe DancingDinosaur is being overly cautious, but I bet you and your management will be better off in the long run, and more comfortable, following such a hybrid approach. BTW, after decades of writing up mainframe-is-dead migration adventures, I can report that very few (maybe none, looking back over my notes) ended in unqualified success.

Sure, none ended in an unmitigated disaster that couldn’t be salvaged, but none finished without unexpected additional investment, far more time, plenty of compromises, and unnecessary aggravation. And a few decision-makers unexpectedly started calling headhunters.

David Linthicum, Chief Cloud Strategy Officer at Deloitte Consulting, takes a different approach in his piece on the hybrid-cloud mainframe. He writes: “A new approach is coming into vogue, one where you can partition, refactor, and then move mainframe apps. Parts of the application run on the mainframe, and parts run as cloud-native applications.”

Continuing, he explains: “Using this technique, you’re running a distributed application that is essentially a hybrid-cloud mainframe. On the cloud side you can leverage any modern development platforms, such as containers/Kubernetes, serverless, CI/CD tools, etc., or even use cloud-native databases as augmented storage or as a complete replacement.”

He adds: “This is becoming a popular technique. Mainframe applications, or any traditional platform for that matter, are broken apart, or partitioned. Then the pieces are moved to the cloud using one of the accepted approaches while the leftover pieces remain on the mainframe. Typically, these are the pieces that won’t or can’t move easily, such as applications written in older mainframe assembler, or built with other methods where the skills, languages, and tools are no longer available or even understood anymore.”

Linthicum then lays out the basic steps as he sees them:

*Lift-and-Shift

*Containerize/refactor

*Move and wrap

*Stay put   

*Partition, migrate, modernize 

Core to this effort is figuring out how to partition applications so they can be refactored for the cloud with minimum impact on the core value of the system. In short, you are selecting only the parts or components of mainframe-based applications that make sense to move while leaving the rest alone.

What Linthicum is suggesting amounts to a hybrid mainframe system in which some mainframe apps work and play well with modernized, cloud-native applications while you keep your business-critical mainframe apps running as always. Meanwhile, your data can move between select cloud and distributed applications and the mainframe itself.
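To make the partitioning idea concrete, here is a toy sketch of the kind of routing decision Linthicum describes. The component names are entirely hypothetical; the point is that each piece of a formerly monolithic application gets assigned to the platform where it belongs, and anything not yet assessed stays on the mainframe by default:

```python
# Toy illustration of the partitioning idea: a routing table records which
# pieces of a formerly monolithic mainframe application stay on Z and
# which now run as cloud-native services. All names are hypothetical.

PARTITION_MAP = {
    "core_ledger": "mainframe",       # assembler-era code: stays put
    "batch_settlement": "mainframe",  # tightly coupled to Z data: stays put
    "customer_portal": "cloud",       # refactored as containers/Kubernetes
    "notifications": "cloud",         # rewritten as serverless functions
}

def route(component: str) -> str:
    """Return the platform a component should execute on.

    Unknown components default conservatively to the mainframe until
    they have been assessed for migration.
    """
    return PARTITION_MAP.get(component, "mainframe")

print(route("customer_portal"))   # cloud
print(route("core_ledger"))       # mainframe
```

The conservative default mirrors Linthicum’s point: the leftover pieces, the ones that won’t or can’t move easily, simply remain where they are.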

Will everybody be happy? Probably not. But you will preserve what is best about your mainframe, avoiding the upheaval, disruption, risks, and high costs of losing it, while gaining the advantages of cloud-based and distributed systems that can work well in a hybrid mainframe.

Alan Radding is DancingDinosaur, a veteran information technology analyst, writer, and ghostwriter. Follow DancingDinosaur on Twitter, @mainframeblog.

Please note: DancingDinosaur is taking the rest of this year off. Have a Happy New Year and best wishes for 2022. Expect the next post to arrive the week of Jan 10, 2022.

Z and Cloud Modernization Center

December 16, 2021

On Dec. 8 IBM unveiled its Z and Cloud Modernization Center, which it describes as a digital front door to a vast array of tools, training, resources, and ecosystem partners to help IBM Z customers accelerate the modernization of their applications, data, and processes in an open hybrid cloud architecture.

Source: IBM

IBM has long been promoting hybrid cloud computing. To IBM, hybrid cloud computing enables a common platform across all of an organization’s cloud, on-premises, and edge environments: you skill once, build once, and manage from a single pane of glass. With its hybrid cloud approach, IBM claims, organizations can experience up to 2.5x more value than with a public cloud-only approach.

In the IBM Institute for Business Value study, “Application Modernization on the Mainframe,” 71 percent of executives say mainframe-based applications are central to their business strategy. Four out of five respondents say their organizations need to rapidly transform to keep up with competition, which includes modernizing mainframe-based apps and adopting a more open approach. 

The report confirmed that executives view modernizing mainframe-based applications—and connecting them with new applications through a hybrid cloud environment—as crucial to driving a holistic digital transformation strategy across the mainframe and cloud.  

Software AG, earlier this week, announced it was following IBM in its cloud modernization efforts. The aim of this collaboration is to take down barriers to mainframe modernization for clients by using APIs to connect mainframe applications to the cloud without altering any code. APIs provide a non-invasive approach to modernization by creating real-time interactions between applications distributed across on-premises and multi-cloud environments. 

The company’s API-enabling mainframe integration solutions, specifically webMethods and CONNX, are designed to give enterprises options to create reusable services from mainframe application code, screens, or data. The goal: delivering cutting-edge enterprise-wide agility and capability.
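To illustrate how such an API layer can be non-invasive, here is a minimal sketch (not Software AG’s actual implementation; the record layout is invented) in which the mainframe’s EBCDIC data format is left untouched and a record is simply translated into JSON at the integration boundary:

```python
# Minimal sketch of a "non-invasive" API layer: the mainframe code and
# data formats are untouched; an integration layer converts an EBCDIC
# record into JSON for cloud consumers. The field layout is hypothetical.
import json

def record_to_json(record: bytes) -> str:
    """Translate a fixed-width EBCDIC record into a JSON document."""
    text = record.decode("cp037")  # EBCDIC (code page 037) -> str
    return json.dumps({
        "account": text[0:8].strip(),
        "balance": text[8:18].strip(),
    })

# A sample record as it might sit in a mainframe dataset
sample = "AC100234   1250.75".encode("cp037")
print(record_to_json(sample))  # {"account": "AC100234", "balance": "1250.75"}
```

Nothing on the Z side changes; only the translation layer knows that cloud consumers want JSON, which is the core of the “without altering any code” pitch.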

According to a recent IBM Institute for Business Value survey, “The hybrid cloud platform advantage,” the value derived from a full hybrid, multi-cloud platform technology and operating model at scale is 2.5 times the value derived from a single-platform or single-cloud vendor approach. In addition, IBM notes, an IBM hybrid cloud transformation that integrates IBM Z can extend up to 5x the value of a public cloud-only approach.

A Hurwitz and Associates whitepaper sponsored by IBM, further confirms this additional value through business acceleration, developer productivity, infrastructure cost efficiency, regulatory compliance, security, and deployment flexibility.

Today, IBM continues, many IBM Z users, almost by definition, already are running on modern infrastructure. However, to truly leverage the benefits of hybrid cloud, organizations must continue to modernize their existing applications and data. Through the IBM Z and Cloud Modernization Center, these organizations can gain insights on maintaining their current IT while focusing on the design and execution of a strategy for their core applications and data running on IBM Z as they prepare for hybrid cloud.

“The world class reliability and security of IBM’s Z solutions have contributed significantly to the mainframe’s remarkable longevity and business value for enterprise customers,” said Charles King, PUND-IT in the IBM announcement. “However, another crucial feature—adaptability—is just as important. For well over two decades, IBM has ensured that Z mainframes addressed vital requirements in both existing and emerging applications and use cases, including Linux, open systems and hybrid cloud.”

With the new IBM Z and Cloud Modernization Center, IBM and its partners, such as Software AG and others, are providing companies with the tools, resources, and training they need to successfully modernize and transform mainframe-based applications, data, and processes in hybrid cloud environments, thereby maximizing their IBM Z investments.

Arno Theiss, General Manager of Mainframe Solutions at Software AG, added: “As digital transformation initiatives progress and mature, we believe that a hybrid infrastructure model is the only viable option. Companies need the flexibility of the cloud for some applications and platforms combined with the stability and ownership of an on-premises mainframe. In a truly connected enterprise, data should be able to flow in any direction between applications, regardless of where they reside.”

This is the only way to deliver connected services to customers and make business operations more efficient and fluid. Software AG will help IBM Z organizations build that hybrid environment using APIs around their mainframes so they can connect to the cloud in a non-invasive way and address the risk of disrupting their core applications.

Alan Radding is DancingDinosaur, a veteran information technology analyst, writer, and ghostwriter. Follow DancingDinosaur on Twitter, @mainframeblog.

DancingDinosaur is taking the rest of this year off. Have a Happy New Year and best wishes for 2022. Expect the next post to arrive the week of Jan 10, 2022

OSS Red Hat Boosts IBM

December 9, 2021

When IBM purchased Red Hat for $34 billion (announced in 2018), even DancingDinosaur, along with much of the rest of the industry, thought it was a little pricey. What did you think then? Here’s what I think now, with the clarity of the latest IBM financials: what do the pundits and writers like me know?

IBM reported lower-than-expected third-quarter revenue of $17.62 billion, again putting it on track for yet another lackluster year of sales, but at least the years-long quarterly losing streak has mercifully come to an end, for now. At the same time, Red Hat has emerged as the best thing IBM has done lately.

From an IBM Red Hat blog in July: considered by IBM as part of its cloud and cognitive software business unit, Red Hat generated 17% quarterly growth and 40% growth in OpenShift recurring revenue. Maybe IBM did overpay, but Red Hat appears to be the best thing IBM has going at the moment.

SOURCE: TM Forum Inform

Ironically, a report on IBM and OSS notes that a significant roadblock for IBM is not OSS vulnerabilities but today’s volatile labor market that has skilled workers thinking twice before jumping into a new role without significant increases in wages and benefits. Higher hiring and retention costs may preclude companies like IBM from being able to sufficiently staff departments that require the skilled specialists that are suddenly in higher demand.

IBM has been in the midst of a strategy shift focused on hybrid cloud and AI since Arvind Krishna was promoted to CEO last year. Red Hat, the software company it purchased for $34 billion in 2018, has been at the center of it all. With the company reporting recent earnings, the financial performance looked pretty bleak, but at least Red Hat continues to grow at a brisk pace.

The cloud and cognitive software business unit, as IBM describes it, which now includes Red Hat, was up 2.5% to $5.69 billion. At the earnings meeting with analysts after the report hit, CFO Jim Kavanaugh indicated that Red Hat grew 17% for the quarter. “Red Hat revenue growth was driven by double-digit growth in both infrastructure and application development and emerging technology. And it had more than 40% growth in OpenShift recurring revenue,” he noted, referring to Red Hat’s container orchestration platform.

This may have led Forrester Research, in part, to come out as bullish on OSS and Red Hat. In a commissioned study, Forrester reveals the details of its industry survey targeting decision-makers at North American enterprises and delivers key insights and analysis on how open source is being used within enterprises today, including:

  • More than 50% of Fortune 500 enterprises use OSS
  • OSS adoption is connected to positive business outcomes
  • Well-formed OSS adoption strategies are critical to success
  • Internal and external OSS expertise is important

Meanwhile, OSS for businesses continues its upward trajectory with impressive growth despite the overall struggles of IBM. And open source systems continue to be hot industry-wide, despite persistent reports of vulnerabilities in open source software.

Forrester remains bullish on OSS and Red Hat. The study noted just above was conducted on behalf of Perforce and surveyed decision-makers at North American enterprises.

DancingDinosaur’s goal, however, is not to show that OSS is better than proprietary software. Some OSS software can indeed be technically poor, just as some proprietary software can be. Many people, however, may fail to even consider OSS because it carries an unknown name or unfamiliar acronym. My only hope: OSS awareness continues to grow wherever appropriate.

DancingDinosaur’s advice: thoroughly check out all relevant software possibilities, proprietary or OSS, and select whatever appears to work best for you.

Alan Radding is DancingDinosaur, a veteran information technology analyst, writer, and ghostwriter. Follow DancingDinosaur on Twitter, @mainframeblog.

IBM Exceeds 100 Qubit Quantum

December 2, 2021

On Nov. 18, 2021, IBM Quantum unveiled Eagle, a 127-qubit quantum processor. Eagle is leading quantum computers into a new era — “we’ve launched a quantum processor that has pushed us beyond the 100-qubit barrier. We anticipate that, with Eagle, our users will be able to explore uncharted computational territory” — and marks a key milestone on the path toward practical quantum computation.

Source: IBM Research Blog, “IBM Quantum breaks the 100‑qubit processor barrier”

You can view Eagle as a step in a technological revolution in the history of computation. As quantum processors scale up, each additional qubit doubles the space complexity, the size of the computational state space available to algorithms. IBM hopes to see quantum computers bring real-world benefits across multiple fields as this increase in space complexity moves us into a realm beyond the abilities of classical computers.
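A back-of-the-envelope sketch (my own, using the standard state-vector picture) shows why each added qubit doubles the space complexity: an n-qubit state is described by 2^n complex amplitudes, so the memory a classical simulator needs doubles with every qubit:

```python
# Why adding one qubit doubles the state space: an n-qubit state vector
# holds 2**n complex amplitudes (16 bytes each as complex128).

def amplitudes(n_qubits: int) -> int:
    """Number of complex amplitudes describing an n-qubit state."""
    return 2 ** n_qubits

def sim_memory_bytes(n_qubits: int) -> int:
    """Memory a classical state-vector simulator needs, in bytes."""
    return amplitudes(n_qubits) * 16  # complex128 = 16 bytes

print(sim_memory_bytes(27) // 2**30, "GiB")  # 27 qubits: 2 GiB
print(sim_memory_bytes(28) // 2**30, "GiB")  # one more qubit: 4 GiB
# Eagle's 127 qubits would need roughly 2.7e39 bytes, far beyond any
# classical machine, which is the point of the milestone.
```

By this measure, a full state-vector simulation stops being feasible somewhere around 50 qubits, which is why a 127-qubit processor sits well into territory classical machines cannot brute-force.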

As this revolution plays out, the company continues, it hopes to keep sharing its best quantum hardware with the community early and often. This approach allows IBM and its users to work together to understand how best to explore and develop on these systems to achieve quantum advantage as soon as possible. IBM makes its quantum computers available through its Quantum Network.

Constructing a processor that breaks the hundred-qubit barrier wasn’t something IBM could do overnight. Scientists for decades have theorized that a computer based on the same mathematics followed by subatomic particles — quantum mechanics — could outperform classical computers at simulating nature. However, constructing one of these devices proved an enormous challenge. Qubits can de-cohere — or forget their quantum information — with even the slightest nudge (noise) from the outside world. 

Producing Eagle on its short timeline was possible in part thanks to IBM’s legacy of pioneering new science and investing in core hardware technology, including processes for reliable semiconductor manufacturing and packaging, bringing nascent products to market, and creating the refrigeration required to keep qubits at the low temperatures needed for stability.

Eagle’s qubit count feat represents an important milestone on the company’s Quantum Roadmap. Eagle demonstrates how its team is solving challenges across hardware and software to eventually realize a quantum computer capable of solving practical problems in fields from renewable energy to finance and more.

Quantum computation at enterprise scale

IBM Quantum’s Eagle processors contain nearly twice the qubits of the company’s 65-qubit Hummingbird processor — but building something bigger takes more work than just adding qubits. The team had to combine and improve upon techniques developed in previous generations of IBM Quantum processors in order to develop a processor architecture, including advanced 3D packaging techniques that they are confident can form the backbone of processors up to and including IBM’s planned 1,000+ qubit Condor processor.

Eagle is based on the company’s heavy-hexagonal qubit layout, debuted with its Falcon processor and now in its fourth iteration for IBM Quantum systems, in which qubits connect with either two or three neighbors as if sitting upon the edges and corners of tessellated hexagons. This particular connectivity decreases the potential for errors caused by interactions between neighboring qubits while providing significant boosts in yielding functional processors.

Eagle also incorporates readout multiplexing as featured in the company’s Hummingbird R2. Previous processors required a set of control and readout electronics for each qubit — this is manageable for only a few dozen qubits, but would be far too bulky for 100+, let alone 1,000+ qubit processors. Readout multiplexing allows IBM to drastically reduce the amount of electronics and wiring required inside of the dilution refrigerator.  Perhaps most importantly, IBM notes, Eagle incorporates past company expertise in classical processor fabrication to provide scalable access wiring to all qubits.

Quantum processors, IBM explains, require a tangle of wiring that must be routed outward to their edges. However, 3D integration allows the researchers to place particular microwave circuit components and wiring on multiple physical levels. While packaging qubits remains one of the largest challenges for future quantum computers, multi-level wiring and other components provide the techniques that make possible the path toward Condor, with minimal impact to individual qubits’ performance.

There’s work yet to be done. The scale of a quantum chip is just one of three metrics for measuring the performance of a quantum processor; IBM must also continue to push the quality and speed of its processors, benchmarked respectively by Quantum Volume and by CLOPS (Circuit Layer Operations Per Second), a metric correlated with how fast a quantum processor can execute circuits. The IBM quantum story continues with IBM Quantum System Two, to be continued here soon.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghostwriter. Follow DancingDinosaur on Twitter, @mainframeblog.

8 Steps to Bolster Your Supply Chain

November 12, 2021

By the time you read this it will be at least near the end of the work week. But supply chain issues apparently aren’t likely to go away anytime soon, even after the holiday season.

Writer Olivia Staub, writing for IBM on the widely reported current supply chain upheaval: “By now we’ve all heard the rumblings (and the roars!) about how myriad challenges — backlogged ports, factory shutdowns, and shipping delays — are going to impact holiday shopping this year (and probably beyond, I’d bet).”

IBM supply chain components

You’ve heard it. I’ve heard about it too. Still, Staub offers a timely holiday piece, 8 Steps to Bolster Your Supply Chain, with some of my comments added. The piece is now being widely replicated.

According to Business Insider, the cargo ships waiting off the coast of California are partly responsible for this record backlog.

She continues: “We don’t want to minimize the serious impact of these substantial roadblocks — especially as the retail industry enters the eight-week holiday shopping window that accounts for a big portion of many businesses’ annual sales — but we also want to acknowledge that there are steps you can take to work around similar issues anytime.”

So, here are her 8 steps to survive this holiday season (or any similar crunch time):

1. Be up-front with your customers

Chances are your customers have heard about many of these issues and are concerned they might not be able to buy the gifts and goods they want for the holidays. Address their concerns directly. For instance, encourage customers to shop early and to sign up to be notified when an out-of-stock item is back in stock.

2. Kick-off the holidays earlier than usual

This piece is probably already too late for that. But she notes: early holiday shopping is anticipated this year. Be ready by taking steps like adding extra staff (if you can find any to hire) to handle increased sales volume.

3. Rethink promotions

As the prices for goods — and for shipping and delivering goods — go up, there is less room for discounts. This year, consider promotions that encourage loyalty or offer bundles and low-cost freebies with purchase.

4. Prepare your organization to be ready for more in-person shoppers

Of course, staff shortages are being widely reported; some retailers are even sending recruitment emails to me, an old retired guy. But, Staub continues, well-designed visual merchandising and signage will entice shoppers to stop in. She adds: give shoppers more time during the busiest time of year, especially the procrastinators and last-minute shoppers, by extending your store hours.

5. Home in on your most popular items

Good point, if you can find and get them. With so much in flux, she notes, some companies are finding solace (and sales) in the classics. In addition to stocking up on your customers’ traditional favorites, consider offering less variety than you normally would.

6. Consider supplier alternatives 

If the clogged ports and shipping delays have you rethinking some of your vendors, and your company is in a position to get your products from other sources, consider finding local or US-based suppliers. In general it is always good to have alternative providers in your Rolodex; just don’t be surprised, if you have never contacted them before, when they don’t have time to return your call. But, she continues, this can reduce delivery time and allow you to market the products as “local.”

7. Find shipping alternatives

As national shipping carriers face staffing issues and major backlogs, look for local and regional alternatives. If you can, she says unrealistically, consider building up your own delivery services, even at a local level, to help relieve some of the shipping pressures. But it is probably not feasible to set up and manage your own shipping on short notice, even locally. Better to find someone locally who may have some excess capacity.

8. Embrace technology

DancingDinosaur always encourages embracing technology, but not at the last moment. Technology requires planning, preparation, and training. Holiday sales are projected to grow to 5.4 trillion dollars in 2022. That is serious money. If you can use any of Staub’s suggestions in any way to grab even a sliver of that, what do you have to lose? Go for it.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghostwriter. Follow DancingDinosaur on Twitter, @mainframeblog.

Nedbank Teams with BMC Compuware

November 5, 2021

Been awhile since DancingDinosaur dug into a mainframe customer case study, but here is one worth a look: Nedbank, one of the largest banks in South Africa, serving the commercial and retail banking sectors as well as the insurance market. The case study describes how the bank accelerated IBM Z innovation with BMC Compuware ISPW. The bank runs the z15 as its core banking platform. “While we did assess our options, the reality is that nothing processes as fast as the IBM Z platform,” notes Devi Moodley, Executive, Enterprise DevOps and Release Management.

Nedbank adopted Agile development methodologies. “We could see unicorns (high-valuation startups) and fintechs coming in which were offering fewer traditional solutions, far more agile than us, and closer to their clients’ needs,” said Moodley.

This meant a key part of the wider digital transformation journey required modernizing on the Z platform too, along with its distributed systems. Compuware, which had joined with BMC, set out to empower Nedbank’s next generation of developers to mainstream its Z platform. The goal: bring apps that run on the Z to market faster, with increased quality.

The bank also realized that its legacy Source Code Management (SCM) tool could not support its embrace of Agile development; it needed a new set of tooling. “The current review process took too long, and it couldn’t handle parallel development or manage contention. Branching, merging, and versioning also proved difficult, and errors often weren’t identified until the end of the day,” Moodley observed.

Nedbank identified two teams, Core Banking and Payments, as these had the largest number of IBM Z components. It then evaluated three potential solutions: BMC Compuware on its own, BMC Compuware with an analysis tool layered on top, and a BMC Compuware competitor.

The teams gathered quantitative and qualitative feedback. The solutions were compared through a cost lens along with other factors, such as ease of ownership, how many sets of tools would be needed, the size of the team needed to implement and maintain the solutions, and what kind of support would be needed from each supplier. 

The bank also identified BMC Compuware ISPW as the best solution for Continuous Integration and Continuous Delivery (CI/CD) on the Z platform. BMC Compuware ISPW ensures the Z code pipelines are secure, stable, and streamlined throughout the DevOps lifecycle. 

This gave Nedbank’s Z team the ability to better manage processes: it can see the status of all programs throughout the development lifecycle, perform concurrent development, confidently merge code updates, and readily integrate with modern DevOps toolchains using Jenkins plugins, REST APIs, and Command Line Interfaces (CLIs).
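To illustrate the kind of toolchain integration described above, here is a hypothetical sketch of a CI step assembling a promotion request for a mainframe code pipeline. The endpoint shape, parameters, and field names are invented for illustration; they are not the actual ISPW REST API, which you would look up in BMC’s documentation:

```python
# Hypothetical sketch of a CI step driving a mainframe code pipeline via
# REST. The URL shape and field names are invented; consult the actual
# ISPW/CES API documentation for the real interface.
import json

def build_promote_request(host: str, application: str,
                          assignment_id: str, level: str) -> dict:
    """Assemble the pieces of an HTTP request a CI job (for example a
    Jenkins stage) would send to promote an assignment to the next level."""
    return {
        "method": "POST",
        "url": f"https://{host}/api/v1/apps/{application}"
               f"/assignments/{assignment_id}/promote",
        "body": json.dumps({"level": level, "runtime": "zos"}),
    }

req = build_promote_request("ces.example.com", "PAYMENTS", "A000123", "QA")
print(req["url"])
```

The point is less the specific call than the shape of the workflow: once promotion is an HTTP request, any modern CI tool can drive the Z pipeline alongside distributed builds.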

“BMC Compuware’s DevOps engineers and technical specialists sat side-by-side with our teams for a week or more and were on hand to answer any questions,” said Moodley. “They helped with conducting practical tests and guiding specific story points through BMC Compuware ISPW.”

The bank was also challenged to prioritize what aspect of the solution should be adopted and when. Having strong senior executive application owners who fully supported the journey and adoption of DevOps made a huge difference. It was about much more than just adopting new tools; the ‘left shift’ needed to be embraced into the Nedbank culture.

While other platform teams adopted DevOps earlier, adoption on the IBM Z platform proved faster. The bank also benefitted in many ways, particularly through increased operational efficiency: it reduced the elapsed time of administrative processes by 95 percent, contributing to operational cost savings of more than ZAR 3 million (about $176,000) per annum. Sounds good to me.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghostwriter. Follow DancingDinosaur on Twitter, @mainframeblog.

IBM, Raytheon Collaborate

October 26, 2021

How far afield should IBM range in search of growth and revenue? Probably as far as it can to attract opportunities, advance its technological prowess, and grow the business. IBM hasn’t been a PC company for decades, and it hasn’t been a $100-billion-plus powerhouse for years, but with this latest announcement, who knows?

In a recent October announcement, IBM ranged into AI, cryptography, and quantum, areas the company has long been familiar with. But it is the partner that makes it interesting. IBM announced that Raytheon Technologies (NYSE: RTX) will jointly develop advanced artificial intelligence, cryptographic, and quantum solutions with it for the aerospace, defense, and intelligence industries, including the federal government, as part of a strategic collaboration agreement between the two companies.

IBM Quantum Computer

Artificial intelligence and quantum technologies, IBM explains, give aerospace and government customers the ability to design systems more quickly, better secure their communications networks, and improve decision-making processes. By combining IBM’s breakthrough commercial research with Raytheon Technologies’ own research, plus aerospace and defense expertise, the companies hope to be able to crack currently unsolvable challenges.

“The rapid advancement of quantum computing and its exponential capabilities has spawned one of the greatest technological races in recent history – one that demands unprecedented agility and speed,” said Dario Gil, IBM senior vice president and director of Research. “Our new collaboration with Raytheon Technologies will be a catalyst in advancing these state-of-the-art technologies – combining their expertise in aerospace, defense and intelligence with IBM’s next-generation technologies to make discovery faster, and the scope of that discovery larger than ever.”

Collaboration or not, IBM is moving ahead with quantum computing. Its development roadmap specifies plans to achieve frictionless quantum computing on a 1000+ qubit system by the end of 2023. The company’s guiding principle as it advances its quantum computing systems is how it can increase the amount of useful work that these systems can accomplish, which it refers to as quantum computing performance.

Quantum computing performance is determined by three key metrics, and you’re already familiar with two of them: the number of qubits and Quantum Volume. As quantum computing evolves, more importance is placed on the useful work that these systems can do in a reasonable amount of time, so IBM is introducing a third, likely time-related, metric to capture that.

Bringing quantum computers into organizations’ computing workflows to solve the world’s most pressing problems requires pushing forward on all three of these metrics all the time. “We expect that only superconducting qubits will be able to achieve performance gains across all three metrics in the near term,” according to IBM.
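For the second of those metrics, IBM defines Quantum Volume through a pass/fail test on random “square” circuits (width equal to depth): a machine claims QV = 2^n for the largest width n whose measured heavy-output probability clears the 2/3 threshold. A minimal sketch of that arithmetic (the full protocol also requires statistical confidence bounds, which this toy version omits):

```python
# Sketch of the Quantum Volume arithmetic -- not IBM's benchmark code.
# QV = 2**n for the largest width n where random width-n, depth-n
# circuits produce "heavy" outputs with probability above 2/3.

HEAVY_OUTPUT_THRESHOLD = 2.0 / 3.0

def passes_qv_test(heavy_output_probability: float) -> bool:
    """True if a circuit size clears the heavy-output threshold."""
    return heavy_output_probability > HEAVY_OUTPUT_THRESHOLD

def quantum_volume(results: dict[int, float]) -> int:
    """Map {circuit width: measured heavy-output probability} to a QV value."""
    passing = [n for n, p in results.items() if passes_qv_test(p)]
    return 2 ** max(passing) if passing else 1
```

The exponential form is why the headline numbers jump so quickly: each additional qubit of demonstrated circuit width doubles the reported Quantum Volume.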

In addition to artificial intelligence and quantum, the companies will jointly research and develop advanced cryptographic technologies that lie at the heart of some of the toughest problems faced by the aerospace industry and government agencies.

“Take something as fundamental as encrypted communications,” said Mark E. Russell, Raytheon Technologies chief technology officer. “As computing and quantum technologies advance, existing cybersecurity and cryptography methods are at risk of becoming vulnerable. IBM and Raytheon Technologies will now be able to collaboratively help customers maintain secure communications and defend their networks better than previously possible.”


Enterprise Server 3.0

October 8, 2021

Have you heard of Enterprise Server 3.0? It is being promoted by a small number of firms, but the one that comes most immediately to mind is Bloor Research.

Z15 beats Enterprise Server 3.0

Bloor defines Enterprise Server 3.0 as a server that can “add or hot swap system capacity without disruption… handle very high volume input and output (I/O) and emphasize throughput computing… replace dozens or even hundreds of smaller servers.”

In short, what Bloor is talking about are servers that can readily adapt to changes in system usage. He sees the future of business as mutable: businesses will be in a constant state of evolutionary change in response to rapidly evolving business environments.

So, what can Enterprise Server 3.0 do about it? Bloor suggests that a properly managed enterprise server should have capabilities in the areas of security, resilience, performance, and reliability that are hard to match with clusters of x86 servers running commodity operating systems. 

Moreover, he continues, there may well be no obvious and holistic business benefit from the migration. The bottom line is that a company has to stay in business while modernizing its systems and any modernization must have a clearly documented business case and properly managed risk. Sounds more like Business 101 to me, something we all took a decade or more ago in grad school.

Some companies take this into account from the start. IBM has provided some version of Enterprise Server 3.0 in the Z15, which offers a variety of pay-per-use options under Tailored Fit Pricing for IBM Z, now including a hardware consumption pricing solution.

Furthermore, the company also offers cross-stack pricing, storage capacity on demand, elastic capacity of IBM Power, and more. Combined, these consumption-based solutions offer some measure of control and flexibility to scale IT as needed while balancing expenses to achieve whatever value you feel you get from IBM Z.
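As a rough sketch of how such consumption-based pricing works, consider a committed baseline plus discounted overage. All rates and numbers below are invented for illustration; IBM’s actual Tailored Fit Pricing terms differ and are negotiated per contract.

```python
# Invented rates for illustration only -- not IBM's actual pricing terms.

def monthly_charge(msus_consumed: float,
                   baseline_msus: float,
                   base_rate: float,
                   overage_rate: float) -> float:
    """Flat charge for a committed MSU baseline, plus a (typically
    discounted) per-MSU rate on any consumption above that baseline."""
    base = baseline_msus * base_rate
    overage = max(0.0, msus_consumed - baseline_msus) * overage_rate
    return base + overage
```

The appeal for a mutable business is that the overage term scales with actual usage, so a spike in workload raises the bill only at the discounted marginal rate rather than forcing a permanent capacity purchase.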

Of course, you are not limited to IBM and Z only. Dell EMC PowerEdge servers powered by 3rd Generation AMD EPYC processors push workload boundaries with tailored IT and business solutions. They promise faster and more competitive outcomes, efficient data center manageability, and end-to-end infrastructure security.

If you are going to leave Z, your other choices include whatever x86 vendors offer, including Dell, Oracle RAC, and Microsoft. Are those going to be comparable to a Z? Not likely. But you probably could piece together something workable if you invest the time, expertise, testing, and patience. You might save some money but spend a fortune trying to reproduce the security, availability, capacity, and flexibility of any Z.

So here are Bloor’s options for modernizing existing business systems running on Enterprise Server 3.0 in more detail. He starts by examining the sources of assistance that are available; he doesn’t see much in the way of discrete technology platforms.

The Enterprise Server 3.0 platform needs extremely powerful database services, capable of processing huge volumes of data and servicing tens of thousands (even, in special cases, over a million) transactions per second. This is a huge subject, he notes, deserving a paper in its own right. No kidding.

Similarly, Bloor sees all the traditional technology platforms in a well-managed organization as an integrated whole, with currents of data flowing between the different areas of the technology ocean, each with different local characteristics. He is not just talking about the DB2 relational database but also the IMS hierarchical database, the Adabas inverted-list database and the Natural 4GL, the IDMS network database, and others. Then he notes: this is a huge subject deserving a paper in its own right. Amen.

IDC may say it best: Businesses that remain on what are sometimes referred to as legacy platforms and that take advantage of the plethora of hardware and software innovations that have been made available for those platforms have an overall better outcome, quantitatively and qualitatively, than those that move off them. Z enthusiasts couldn’t say it better.

5G Networks Are Arriving

October 5, 2021

On Sept. 23  IBM announced a strategic multi-year agreement with Telefónica to use IBM intelligent automation software and services to implement a product dubbed  UNICA Next, Telefónica’s first cloud-native, 5G core network platform. With UNICA Next, Telefónica aims to deliver the agility, reliability, and efficiency to continuously optimize its services, now and in the future. 

IBM Maximo works with 5G to improve operations

The industry has been drooling over the potential of 5G for some time. Now Telefónica and other Communications Service Providers (CSPs) around the world are preparing for the benefits 5G and the edge are expected to bring to core network functions. At some point soon, Telefónica hopes to have an open, secure, intelligent, and highly automated network that can power transformation for both consumer and enterprise customers across all industries.

To bolster its 5G effort, the company intends to modernize its 5G core network platform built with IBM Cloud Pak for Network Automation, Red Hat OpenShift, and Juniper networking. Telefónica expects its 5G to deliver low latency, high bandwidth, and advanced network slicing, effectively enabling it to assist with business transformation across industries.

The 5G networking market, however, is only just taking shape. An Amazon search suggests the early players offer a single plan that’s only available in parts of a handful of American cities.

One standard you want to keep in mind is ETSI. ETSI, an EU-based standardization group, focuses on a variety of areas including Content Delivery, Networks, Wireless Systems, Transportation, Connecting Things, Interoperability, Public Safety, Security, and, of course, 4G alongside upcoming 5G. 

With its UNICA Next offering, Telefónica seems to be counting on its partnership with IBM to acquire the agility, reliability, and efficiency to continuously optimize its services now and in the future. For Telefónica, as for other emerging CSPs, 5G promises to be the hot upcoming technology, bringing core network functions such as open, secure, intelligent, and highly automated networking that can facilitate transformation for consumer and enterprise customers across all industries.

To make all of this happen, Telefónica has engaged IBM Global Business Services, a leading systems integrator and the digital transformation services and consultancy arm of IBM, along with Red Hat and Juniper networking, to deploy a cloud-native platform. The offering is planned to feature a new open-standard, open-networking, technology-compliant platform deployed across multiple central, regional, and distributed data centers, each offering low latency and high bandwidth while delivering services in an agile manner. Telefónica expects the UNICA Next data centers to begin deployment in October 2021; that’s this month! The scalable architecture is designed to address ETSI and other relevant industry standards.

“Building out the UNICA Next platform with its next-generation network architecture shows how important it is to build the infrastructure now to support the deployment of 5G, which has the potential to support thousands of use cases and applications for consumers and enterprises in all industries. Our collaboration will not only help us harness the potential of 5G, but also prepare for the future through a hybrid-cloud led technology and business transformation. With IBM, Telefónica is combining the latency and bandwidth advancements of 5G with the customization and intelligence of the cloud: we anticipate the results will be transformative,” said Javier Gutierrez, director of strategy, network, and IT development for Telefónica.

“This implementation of Telefónica’s cloud-native, 5G core network platform reflects IBM’s significant investments in AI-powered automation software and the prime systems integration expertise required to deploy complete telecommunication networks – core, access, and edge,” adds Gutierrez.

IBM Global Telco Solutions Lab in Coppell, Texas, connected along with Telefónica’s Network Cloud Lab in Madrid, will help accelerate UNICA Next’s evolution by building new, fully integrated releases using CI/CD methodology for ongoing life-cycle upgrades to the existing UNICA Next platform. By working with IBM in this way, Telefónica will be able to increase agility and data security and continue to innovate and transform, drawing on IBM’s large network function ecosystem and Red Hat’s vast set of certified partners, and Juniper’s relationships with network function and hardware vendors.

5G may arrive as soon as this month. Are you ready?

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghostwriter. Follow DancingDinosaur on Twitter, @mainframeblog.

