Quantum Secured Cryptography

June 15, 2022

In 2017 IBM declared that its latest mainframe included constant encryption protection. Not sure if DancingDinosaur covered it 5 years ago. From the initial announcement it sounded pretty good. If I didn’t cover it then, let’s cover it now.

IBM still periodically promotes continuous or pervasive encryption with the Z, and there have been plenty of opportunities for mainframe shops to upgrade over the intervening years. Most recently, the z16 has gained considerable attention in that regard.

The z16 (courtesy of IBM)

Since then the Z mainframe technology has evolved dramatically by embracing Linux, open source, container-driven development, and new tools and technologies. Still, securing data has remained a constant challenge.

“The vast majority of stolen or leaked data today is in the open and easy to use (and steal) because encryption has been very difficult and expensive to do at scale,” said Ross Mauri, general manager for the IBM Z, adding “We created a data protection engine for the cloud era to have a significant and immediate impact on global data security.”

Data security remains a serious, ongoing challenge for virtually all enterprises, and the widespread adoption of cloud and mobile technologies has only added to the data security risks. IBM used this product release to underscore the “global epidemic” behind 9 billion data records lost or stolen since 2013.

The cure for this epidemic, IBM believes, is “pervasive encryption.” And yet Big Blue — and many others — acknowledge that encryption is often sparsely applied in corporate and cloud datacenters, because encryption products for x86 environments have tended to degrade performance. And their complexity makes them a pain to manage and expensive to implement.

IBM developed its new system over a three-year period with input from 150 customers, all with data breaches and encryption at the top of their lists of concerns. The resulting IBM Z pervasive encryption capability reflects a call to action on data protection as articulated by Chief Information Security Officers and data security experts worldwide, IBM added.

The pervasive encryption is built in but designed to extend beyond any new Z, which “really makes this the first system with an all-encompassing solution to the security threats and breaches we’ve been witnessing in the past 24 months,” said Peter Rutten, analyst at IDC’s Servers and Compute Platforms Group.

IBM Z is designed to encrypt data associated with an entire application, cloud service, or database in flight or at rest with one click. This kind of “bulk encryption” is made possible by a 7x increase in cryptographic performance over the previous generation z13, driven by a 4x increase in silicon dedicated to cryptographic algorithms, according to IBM.
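
To see why the hardware acceleration matters, consider what bulk encryption looks like in ordinary software. The sketch below, in Python with the open-source cryptography library, encrypts an entire file under a single AES-256-GCM data key; it is a rough conceptual analogue of encrypting a whole data set under one policy-managed key, not IBM’s implementation, and the file name is hypothetical.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# One data key protects the whole file, analogous to encrypting an
# entire data set or database under a single policy-managed key.
key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)

with open("customer_records.db", "rb") as f:   # hypothetical data set
    plaintext = f.read()

nonce = os.urandom(12)  # 96-bit nonce; never reuse with the same key
ciphertext = aesgcm.encrypt(nonce, plaintext, None)

with open("customer_records.db.enc", "wb") as f:
    f.write(nonce + ciphertext)
```

Doing this in software for every data set on a busy server is exactly the CPU tax that has made encryption “difficult and expensive to do at scale”; offloading it to dedicated cryptographic silicon is IBM’s pitch.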

The system also comes with tamper-responding encryption keys. A favorite target of hackers, encryption keys are routinely exposed in memory as they’re used. IBM Z’s key management system includes hardware that invalidates keys at any sign of intrusion and can then restore them safely.

Another capability included is encrypted APIs. IBM z/OS Connect technologies are designed to make it easy for cloud developers to discover and call any IBM Z application or data from a cloud service, or for IBM Z developers to call any cloud service, the company explained. IBM Z allows organizations to encrypt these too.
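
From the cloud side, calling such an API is ordinary REST over TLS. A minimal sketch, assuming Python’s requests library and a hypothetical z/OS Connect endpoint and credentials (the URL, path, and certificate file names below are illustrative, not from IBM’s documentation):

```python
import requests

# Hypothetical z/OS Connect REST endpoint fronting a mainframe asset.
URL = "https://zosconnect.example.com:9443/zosConnect/apis/accounts/balance"

resp = requests.get(
    URL,
    params={"accountId": "1234567890"},
    cert=("client.pem", "client.key"),  # mutual TLS: client cert and key
    verify="ca_bundle.pem",             # validate the server certificate
    timeout=5,
)
resp.raise_for_status()
print(resp.json())
```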

The IBM Z system can also give companies a means of complying with emerging standards, such as the EU’s General Data Protection Regulation (GDPR), which went into effect recently, the requirements of the Federal Financial Institutions Examination Council (FFIEC), Singapore’s and Hong Kong’s similar guidance, and the New York State Department of Financial Services’ newly published Cybersecurity Requirements for Financial Services Companies.

Finally, the company also announced that IBM Z will be providing an encryption engine for IBM cloud services and run IBM Blockchain services “to provide the highest commercially available levels of cryptographic hardware.” The company also announced new blockchain services in centers in Dallas, London, Frankfurt, Sao Paulo, Tokyo and Toronto.

Will all that make you sleep a bit better at night? It should.

Alan Radding is DancingDinosaur, a veteran information technology analyst, writer, and ghostwriter. Follow DancingDinosaur on Twitter.

Protect the Mainframe from Quantum Threats

May 13, 2022

For decades, most large-scale companies have used mainframes to host and run the software applications that make up their core systems. Often these mainframe computers and their applications are inherited from mergers and acquisitions, or from deferred IT investments.

Today, it is estimated that more than half of core business processes may still run on a mainframe system. But maintaining and relying on these now antiquated applications poses cost and, soon, quantum security risks.

IBM’s z16 with built-in inference processing

Organizations are torn between the need to manage costs and the desire to maximize the value of their mainframe. This leads them to ask, “If our system is not broken, why should we invest to fix it?” In DancingDinosaur’s decades of experience covering the mainframe, the mainframe is not actually broken. The real problem is a failure to modernize an increasingly problematic system, and that problem grows as quantum computing gains traction.

IBM, by the way, does not actively discourage such thinking. It is always eager to tout the latest mainframe with the latest bells and whistles. Today that is the z16, introduced in April and covered here as an inference processing workhorse. Now the trick is to determine when you actually need that kind of processing and whether it can take on the quantum threats you may face.

Quantum algorithms running on sufficiently powerful quantum computers have the potential to weaken or break the core cryptographic primitives that we currently use to secure systems and communications. The fact that these primitives can be broken puts the foundation of global digital security at risk, notes IBM. Temporary fixes like increasing RSA or ECC key sizes will only buy a little time: extra months, not extra years.
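
A rough complexity comparison shows why bigger keys are only a stopgap. These are the standard textbook estimates, my sketch rather than IBM’s figures: the best known classical factoring attack is sub-exponential in the key size, while Shor’s algorithm is polynomial, so growing the key slows a quantum attacker only modestly.

```latex
% Best known classical factoring attack (general number field sieve):
T_{\mathrm{GNFS}}(N) \approx
  \exp\!\left( \left(\tfrac{64}{9}\right)^{1/3} (\ln N)^{1/3} (\ln\ln N)^{2/3} \right)

% Shor's algorithm on a large-scale quantum computer, polynomial in \log N:
T_{\mathrm{Shor}}(N) = O\!\left( (\log N)^{2} (\log\log N) (\log\log\log N) \right)
```

Against a sub-exponential attacker, doubling the key size buys decades of safety margin; against a polynomial one, it buys very little, hence months, not years.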

Fortunately, IBM is extremely active in the latest security work, which addresses quantum computing. IBM refers to quantum as the next technology revolution. Whether a revolution or not, when a sufficiently powerful quantum computer is available, it invariably will give rise to new security challenges that bad guys can exploit. There are many exciting applications in industries including pharmaceuticals, finance, and manufacturing, but those industries also need to be thinking about quantum security.

Organizations and standards bodies already have started taking action to address the threat. The National Institute of Standards and Technology (NIST) initiated a process to solicit, evaluate and standardize new public-key cryptographic algorithms that can resist threats posed by both the classical computers we have today and quantum computers that will be available soon. 

NIST plans to select a small number of new quantum-safe algorithms this year and have new quantum-safe standards in place by 2024. IBM researchers have also been involved in the development of quantum-safe cryptographic algorithms based on lattice cryptography, which are in the final round of consideration.
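
For a feel of what quantum-safe cryptography looks like in practice, here is a minimal key-encapsulation sketch using the open-source liboqs-python bindings (the oqs package) and Kyber512, one of the lattice-based NIST finalists; the library choice is mine, not something IBM prescribes.

```python
import oqs

# Kyber512: a lattice-based key encapsulation mechanism (KEM) from the
# final round of NIST's post-quantum standardization process.
KEM_ALG = "Kyber512"

with oqs.KeyEncapsulation(KEM_ALG) as receiver:
    public_key = receiver.generate_keypair()

    # The sender encapsulates a shared secret under the public key...
    with oqs.KeyEncapsulation(KEM_ALG) as sender:
        ciphertext, secret_sender = sender.encap_secret(public_key)

    # ...and the receiver recovers the same secret from the ciphertext.
    secret_receiver = receiver.decap_secret(ciphertext)
    assert secret_sender == secret_receiver
```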

Unfortunately, we have only a little time to implement quantum-safe solutions before large-scale quantum computers capable of breaking today’s public-key cryptography arrive. That’s not much time. We don’t know when such a machine will be available, but experts predict that this could be possible before the end of the decade.

And sensitive data with a long lifespan is already vulnerable to harvest-now-decrypt-later attacks: hackers can capture encrypted data today and store it until they can decrypt it using a quantum computer.

This wouldn’t be DancingDinosaur if it didn’t note that IBM is already thinking about, planning for, and preparing new quantum-safe cryptographic technology. IBM boasts that the z16 is the industry’s first quantum-safe system, protected by quantum-safe technologies across multiple layers of firmware to shield business-critical infrastructure and data from quantum attacks. And it won’t be the last, for sure. In the meantime, stay tuned and keep your fingers crossed.

Alan Radding is DancingDinosaur, a veteran information technology analyst, writer, and ghostwriter. Follow DancingDinosaur on Twitter.

Cloud or Mainframe?

May 3, 2022

Surprise, surprise. IBM’s answer is both. 

To respond to the ongoing pressures of the global pandemic and more, IBM observed that businesses around the world have turbo-charged their digital transformations.  

At least some of those priorities reflect companies looking to take advantage of cloud computing. Strategically, they also have concerns about optionality and lock-in. These realities explain why so few clients have made a wholesale move to the cloud, and many may never.

The unique needs each company faces in its business transformation journey require a diverse mix of applications and environments, including traditional data centers, edge computing, and SaaS. That raises the question: what is the role of the mainframe in today’s IT infrastructure?

According to a recent IBM study*, the vast majority (a whopping 71%) of IT executives surveyed from major corporations say critical mainframe-based applications not only have a place in their IT platforms today but are central to their business strategy. And in three years, the percentage of organizations leveraging mainframe assets in a hybrid cloud environment is expected to more than double. 

Why? Four of five executives say their organizations need to rapidly transform to keep up with competition, which includes modernizing mainframe-based apps and adopting a more open approach to cloud migration.

A hybrid cloud approach that includes and integrates mainframe computing can drive up to five times the value of a public cloud platform alone. The main sources of that value fall into five categories: 1) increased business acceleration, 2) developer productivity, 3) infrastructure efficiency, 4) risk and compliance management, and 5) long-term flexibility. 

With the billions businesses have invested over the years in business-critical mainframe applications like financial management, customer data, and transaction processing, this strategy holds true both for mainframe customers and for those of IBM’s global consulting practice. Mainframe customers’ primary goal is to modernize their existing investments and minimize risk while delivering hybrid cloud innovation when they are ready to make that move.

IBM aims to guide its cloud migration clients on their application modernization journey with these recommendations:

Adopt an iterative approach. Many enterprises are experiencing firsthand the complexity of their IT environments. Adding vertical cloud silos undercuts flexibility by making processes related to development, operations, and security even more fragmented than before, in effect making it nearly impossible to achieve the standardization and scale that cloud promises to deliver.

Your plan to integrate new and existing environments must factor in your industry and workload attributes to co-create a business case and road map designed to meet your strategic goals. Adopt an incremental and adaptive approach to modernization rather than a big bang approach. Leverage techniques such as a coexistence architecture to gradually make the transition to an integrated hybrid architecture.

Then, assess your portfolio and build your roadmap. To understand your desired future state, assess your current state. Examine the capabilities that define the role of the mainframe in your enterprise today and how those capabilities tie into your hybrid cloud technology. BTW, the mainframe is an ideal partner for hosting clouds. Finally, take stock of your existing talent and resources and determine what changes are needed.

IBM, don’t be surprised, also suggests the new IBM z16 can perform many of the critical functions underpinning an open and secure hybrid cloud environment while closing some gaps. This includes accessing unstructured on-premises data across a hybrid cloud platform, scaling and automating data-driven insights with AI, and being sufficiently agile to process critical apps and data in real time, all while assessing security risks.

Storing data across multiple clouds and moving it between partners and third parties can leave companies more vulnerable to security issues such as data breaches. Just remember to assess infrastructure solutions that support the ability to protect data, especially when it leaves your platform.

Then leverage multiple modernization strategies and enable easy access to existing mainframe applications and data by using APIs. This means providing a common developer experience that integrates open-source tools and a streamlined process for agility, in addition to developing cloud-native applications on the mainframe and containerizing them.

IT executives expect significant usage increases in both mainframe (35%) and cloud-based applications (44%) over the next two years. So consider how you can extract more value from both your mainframe and cloud investments. Blending mainframe power into the cloud landscape helps achieve the enterprise-wide agility and capability required to keep pace with changing business needs.

Alan Radding is DancingDinosaur, a veteran information technology analyst, writer, and ghostwriter. Follow DancingDinosaur on Twitter, @mainframeblog.

Meet the Metaverse

April 22, 2022

Have you experienced the metaverse? It’s being hyped as yet another next big thing. Supposedly all the forward-looking companies are gravitating toward metaverses. Is yours? Should it?

IBM files a patent for metaverse

Along with the metaverse comes the Metaverse Continuum. According to those supposedly in the know, it describes a spectrum of digitally enhanced worlds and business models poised to revolutionize life and the enterprise in the next decade. I’ve been following most of this, but admittedly they lost me when they started referencing realities. Haven’t a clue what they might mean by realities.

Maybe this can clarify things: “It applies to all aspects of business, from consumer to worker and across the enterprise; from reality to virtual and back; from 2D to 3D; and from cloud and artificial intelligence to extended reality, blockchain, digital twins, edge technologies, and beyond.” Sort of includes everything but the kitchen sink. Wait, maybe that’s included too. What they really are saying is that the metaverse can be whatever marketers want to imagine it to be and can sell.

But one thing is clear simply from the hazy hype: A large company, probably a recognized industry leader, is going to offer you a compelling pitch for whatever it calls a metaverse and then ask you for a lot of money to participate. That should be your cue to steer them to the door.

More recently the Metaverse Continuum has appeared. According to its link, the metaverse is a spectrum of digitally enhanced worlds and business models. As they explain it, metaverses will revolutionize nearly all aspects of business in the next decade, allowing collaboration in virtual spaces, augmented physical places, and a blend of both. And it will enable new lines of business while transforming interactions between customers and companies. If you believe this, there is a famous statue of a lady standing in New York harbor that I would be happy to sell you for a fraction of what they will be asking.

In the technology industry change is a given. So in reviewing all the metaverse baloney out there I found only one statement that smacked of containing a kernel of truth: businesses are aiming toward a future different from the one they were designed to operate in. OK, look back on your career. How many times could you have said something similar? As noted earlier: tech change happens all the time.

You and your business will be facing changes different from those you were expecting to face 10 years ago, 5 years ago, 2 years ago, or maybe just 2 weeks ago. Change happens quickly and relentlessly. You have to decide which changes seem worth pursuing and which are worth skipping. My gut: most are worth skipping.

Soon, every company will find itself at the intersection of many new worlds as it has done many times before, from building new physical and virtual realities to buying into metaverse environments created by others. Of course you can always not opt for any of them. If you don’t hear a compelling case at an attractive price, skip it. With technology change there will always be the next thing coming along. 

To grow and thrive in this new and fast-changing world the strategies organizations develop now must have key capabilities at their core: ownership of data, broad inclusion, diversity, sustainability, security and personal safety.

Tomorrow we might see these initiatives grow into smart neighborhoods, cities, and countries, or not. Major companies will have their own internal metaverses to let employees work and interact from anywhere. Hmm, I thought they could do that now without metaverses.

We are only in the earliest days of the metaverse. Those who shy away from the uncertainties of the metaverses will soon be operating in worlds defined by others. Of course, you could cobble one together to your liking, on your own or with a couple of selected like-minded partners. Or just wait until better possibilities come along at better prices. Remember, technology change happens constantly.

So what are you going to do about metaverses? DancingDinosaur is instinctively conservative and skeptical at first. I would suggest you do nothing other than read whatever you can about metaverses that interests you. Watch how things develop, compare notes with companies you consider peers, and resist being rushed into anything. There will always be another opportunity.

Alan Radding is DancingDinosaur, a veteran information technology analyst, writer, and ghostwriter. Follow DancingDinosaur on Twitter, @mainframeblog.

z16 Inference Processing

April 7, 2022

IBM z16 

On April 5 IBM unveiled the z16, IBM’s next-generation system with an integrated on-chip AI accelerator delivering latency-optimized inferencing. This innovation is designed to enable clients to analyze real-time transactions at scale, for mission-critical workloads such as credit card processing, healthcare, and financial transactions. Building on IBM’s history of leading security, the z16 is also specifically designed to help protect against likely future threats that might be used to crack today’s and future encryption technologies.

IBM has been telegraphing the z16 for months, at least since the Telum chip was announced at HotChips toward the end of August. Telum is the central processor chip for the next-generation IBM Z, finally introduced this week as the z16, and for LinuxONE. With AI-based inference built in, organizations that want help preventing fraud in real time, among other use cases, will welcome the new z16, which promises to deliver in-transaction inference in real time and at scale.

The 7nm Telum microprocessor is ideal for organizations seeking AI-based insights from their data without compromising response time for high-volume transactional workloads. To help meet these needs, IBM Telum is designed with a new dedicated on-chip accelerator for AI inferencing, enabling real-time AI embedded directly into transactional workloads, alongside improvements in performance, security, and availability.

With the z16 innovations, “companies can increase decision velocity through inferencing right where their mission-critical data lives,” said Ric Lewis, SVP, IBM Systems. “This opens up opportunities to change the game in their respective industries so they will be positioned to deliver better customer experiences and business outcomes.”

Clearly the payoff from the z16 can come fast and be significant. Financial institutions worldwide repeatedly struggle with the impacts of fraudulent activities on their revenues and consumer interactions. According to a new study from IBM and Morning Consult, “2022 IBM Global Financial Fraud Impact Report,” credit card fraud is the most common type of fraud among consumers in the seven countries surveyed.

With the z16 bringing together AI inferencing via its built-in Telum processor and highly secured, reliable high-volume transaction processing, banks, for the first time, can analyze for fraud during the actual transactions on a massive scale, catching the bad guys before the transactions go through. The z16 can process 300 billion inference requests per day with just one millisecond of latency. For consumers, this could mean reducing the time and energy required to handle fraudulent transactions on their credit cards. For both merchants and card issuers, this could mean a reduction in revenue losses as consumers avoid the frustration associated with false declines.
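
The pattern being described is scoring inside the transaction path rather than after the fact. On the z16 that scoring runs on the Telum accelerator; as a generic, hedged sketch of the same pattern in plain Python, assuming onnxruntime and a hypothetical pre-trained fraud model exported to ONNX:

```python
import time
import numpy as np
import onnxruntime as ort

# Hypothetical fraud-scoring model with a single input and single output.
session = ort.InferenceSession("fraud_model.onnx")
input_name = session.get_inputs()[0].name

# One transaction = one feature vector (amount, merchant code, velocity, ...).
features = np.array([[129.95, 5812.0, 3.0, 0.42]], dtype=np.float32)

start = time.perf_counter()
(score,) = session.run(None, {input_name: features})  # score in-line
elapsed_ms = (time.perf_counter() - start) * 1000.0

print(f"fraud score: {score}  (scored in {elapsed_ms:.2f} ms)")
# Approve or decline before the transaction completes, not hours later.
```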

And it doesn’t stop there. Other threats, such as tax fraud and organized retail theft are emerging as challenges for both governments and businesses to control. Real-time payments and alternative payment methods like cryptocurrencies are pushing the limits of traditional fraud detection techniques. Applying the new capabilities of IBM z16 to these industries can help create an entirely new class of use cases, including:

  • Loan approval: to speed approval of business or consumer loans
  • Clearing and settlement: to determine which trades and/or transactions may have a high-risk exposure before settlement
  • Federated learning for retail: to better model risk against fraud and theft

Clearly the z16 brings a potent set of security capabilities and other goodies to the Z. Am eager to see how much and how quickly organizations can start cashing in.

Alan Radding is DancingDinosaur, a veteran information technology analyst, writer, and ghostwriter. Follow DancingDinosaur on Twitter, @mainframeblog. 

IBM zCX for Red Hat OpenShift

March 31, 2022

Modernize your IBM z/OS ecosystem with an agile and flexible deployment of Linux on Z applications in a self-contained Red Hat OpenShift cluster.

zCX cluster for OpenShift

Leveraging IBM® zCX Foundation for Red Hat® OpenShift® (zCX for OpenShift) allows for an agile and flexible deployment of Linux® on Z applications and software in a self-contained Red Hat OpenShift cluster on IBM z/OS®. That means you can co-locate applications and workloads while simultaneously taking advantage of the many z/OS Qualities of Service (QoS).

In short, any mainframe shop can expand its ecosystem with the addition of zCX Foundation for Red Hat OpenShift. The object: to integrate Linux on Z applications with z/OS to deliver enterprise-level container orchestration and management capabilities around containerized software, an increasingly popular approach.

Specifically, zCX for OpenShift lets existing or new z/OS applications use services that were previously unavailable. That means you can include z/OS in the design of new solutions that require a containerized Linux software deployment and orchestrate it with Red Hat OpenShift.

Best of all, no specialized z/OS skills or expertise is required. Just develop and deploy containerized software inside z/OS using standard Red Hat OpenShift interfaces, processes, and tooling.
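
Because zCX for OpenShift presents a standard Kubernetes/OpenShift API, the usual tooling applies unchanged. A minimal sketch using the standard Kubernetes Python client, assuming your kubeconfig already points at the zCX cluster; the image name is hypothetical, and images must be built for s390x, the Linux on Z architecture.

```python
from kubernetes import client, config

# Assumes kubeconfig already points at the zCX for OpenShift cluster.
config.load_kube_config()

# The image must be an s390x (Linux on Z) build to run in zCX.
container = client.V1Container(
    name="hello-zcx",
    image="registry.example.com/hello-zcx:s390x",  # hypothetical image
    ports=[client.V1ContainerPort(container_port=8080)],
)

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="hello-zcx"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "hello-zcx"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "hello-zcx"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment("default", deployment)
```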

Leverage the enterprise-ready Kubernetes container platform available in Red Hat OpenShift to manage a hybrid cloud strategy, a multicloud, or edge deployments. In that way you can benefit from z/OS Qualities of Service (QoS) and provide automatic, integrated restart capabilities in the event of site failures (using z/OS DR/GDPS), which amounts, basically, to inherent disaster recovery.

zCX integrates seamlessly into your current z/OS environment through operational management consistent with z/OS. Or, you can opt for co-location, which minimizes network latency by placing certain applications and workloads close together. Enabling applications to be as close as possible to the z/OS data they access has some clear performance advantages.

In addition, you could provide an integrated z/OS operational model with z/OS QoS via zCX for performance and security while still including co-located non-SQL databases, the latest microservices, and an analytics framework.

IBM is ready to deliver the full zCX package.

However, some of the zCX fine print: A good first assumption is that the new work running in the zCX environment will be 95% zIIP-eligible. However, your zCX environment may be more or less zIIP eligible depending on characteristics of the workload itself. Capacity planning should be based on the measured zIIP eligibility of your specific zCX applications. zCX can be deployed exclusively on standard processors if no zIIP processors are available.
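
The capacity planning arithmetic is simple enough to sketch. Here is a back-of-the-envelope example with illustrative numbers, not a measurement; one engine at 100% utilization supplies 3,600 CPU-seconds per hour, and you should substitute your own measured consumption and eligibility.

```python
# Back-of-the-envelope zIIP planning for a zCX workload (illustrative).
measured_cpu_seconds_per_hour = 3600.0  # total CPU used by zCX address spaces
ziip_eligibility = 0.95                 # IBM's first assumption; measure yours

ziip_seconds = measured_cpu_seconds_per_hour * ziip_eligibility
gp_seconds = measured_cpu_seconds_per_hour - ziip_seconds

# One engine at 100% utilization supplies 3,600 CPU-seconds per hour.
print(f"zIIP work: {ziip_seconds:.0f} CPU-s/hr = {ziip_seconds/3600:.2f} engines")
print(f"GP work:   {gp_seconds:.0f} CPU-s/hr = {gp_seconds/3600:.2f} engines")
```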

Results were extrapolated from internal IBM benchmarks performed in a controlled environment using a single z14 z/OS 2.4 LPAR with TCP/IP inbound workload queuing (IWQ) for inbound traffic and two zCX containers: one running Node.js and one running a MongoDB database. zIIP eligibility is based on the CPU consumption of the work running on the zCX address spaces and the associated work on the TCPIP and VTAM address spaces. Results may vary.

This product, IBM zCX Foundation for Red Hat OpenShift, is an IBM product. Therefore, primary service and support is offered through IBM. Purchase of zCX for OpenShift provides entitlement to Red Hat OpenShift. If you experience any issues with zCX for OpenShift, open a case here: http://www.ibm.com/mysupport 

DancingDinosaur initially was suspicious of the IBM acquisition of Red Hat. But in subsequent months IBM has demonstrated the ability to deliver enhanced value from Red Hat, particularly for the Z. zCX is just more value for the Z.

Alan Radding is DancingDinosaur, a veteran information technology analyst, writer, and ghostwriter. Follow DancingDinosaur on Twitter, @mainframeblog.

Can IBM Apps Address Climate Change

March 21, 2022

Everyone who managed to stay awake in business school learns in the first week that meeting customer expectations is essential for success. What they did not tell you in that first week was that it was probably the only important thing they would teach. It requires combining data from multiple sources to get a full picture of each individual customer and then acting on those real-time insights to deliver a truly connected experience, whatever that is. You can stop snoring now.

Recently, Adobe and IBM promised to make this even easier. Specifically, IBM announced the expansion of its partnership with Adobe around the use of artificial intelligence (AI)-powered weather data from The Weather Company, an IBM Business, on the Adobe Experience Platform. Customers can leverage Weather Channel data now, but the expanded solutions are not expected to be generally available until later this year.

No doubt IBM knows weather. What it left out was any mention of climate change and what to do about it. Not sure they have an AI app for climate change, especially when both the North and South Poles hit record high temps recently. Hmm, is there an IBM AI app for that? How should Z users advise their customers?

IBM has designed its new weather solution to give customers AI-driven insights on how weather could affect consumer purchasing habits across categories such as retail, healthcare, travel and hospitality, and consumer packaged goods. Consumers could shift their behavior in response to forecasted weather changes, whether snow days or heat waves or anything in between.

Unless you have been buried in a cave or otherwise cut off from contact with the outside world you have to already be aware that climate change is underway and likely to occur faster and bigger in the years ahead. That means sometimes more snow and sometimes less snow or snow mixed with rain, what the weather people have started to refer to as “wintry mix.” Maybe they think it sounds poetic but it just means wet, slushy snow.

And don’t forget summer when climate change will mean encountering more and more destructive heat waves, wildfires, excessive humidity, extended drought, and high winds. Will probably want another AI app just for that.

Climate change is going to keep you attuned to weather as never before. Weather data is a proven predictor of consumer behavior. From snow days to heat waves, weather data can drive buyers to redirect their spending. By then, you won’t need IBM’s weather platform. You’ll have all you need on the front page of the daily newspaper. Sure, you can use IBM’s partnership and its Experience Platform, arriving later this year, or you can use the evening TV weather report and apply your own judgment tomorrow and any night that follows.

Commenting on the potential value of bringing weather data into the customer experience, Chris Luna, Senior Global Media Manager at 3M, an existing IBM and Adobe customer, said, “It brings a powerful combination of AI-driven weather data and insights.”

As third-party identifiers and traditional forms of targeting are phased out via impending legislation and consumer privacy regulations, brand marketers will be forced to leverage alternative datasets and solutions to ensure their customers are receiving accurate, timely, rapidly updated, and web-based experiences.

Again, where is IBM’s AI climate change app? Could it be enough in the face of the inexorable onslaught of climate change? That’s purely a personal judgment call on your part. Some of this depends on whether, or how much, you believe in climate science and climate change to begin with.

DancingDinosaur feels climate science, unfortunately, is right on target. IBM may have the best climate AI, but until enough people take it seriously now and start changing their behavior it won’t matter.

Alan Radding is DancingDinosaur, a veteran information technology analyst, writer, and ghostwriter who believes in climate science. Follow DancingDinosaur on Twitter, @mainframeblog.

Newest Z Announced Again

March 17, 2022

IBM has confirmed that a new model of its Z series mainframes will arrive in late 2Q. DancingDinosaur covered this the first week of February when IBM initially leaked it. IBM hasn’t, however, added much in the way of new details, or even the name of the machine or a graphic; below is the z15.

But you already read most of it here from DancingDinosaur. The company emphasized the equipment’s debut as a source of improved revenue for the company’s infrastructure business. CFO James Kavanaugh put the release on the roadmap during Big Blue’s Q4 2021 earnings call. The CFO suggested the new release will make a positive impact on IBM’s revenue, which came in at $16.7 billion for the quarter and $57.35 billion for the year. The Q4 number was up 6.5 per cent on the year; the annual number was a $2.2 billion jump.

This is the lure IBM uses to pull new customers into its mainframe ecosystem. Big Blue also advances mainframes as excellent components for a hybrid cloud environment – the bait IBM’s latest CEO, Arvind Krishna, said is IBM’s new focus.

IBM is good at cranking out a new mainframe loaded with compelling capabilities every year or so. Now all IBM has to figure out is how to get those mainframe shops to expand their use of IBM systems beyond the Z. That has proven considerably harder.

A glimmer of hope for IBM comes from the hybrid cloud. On the earnings call, Krishna said IBM clients are “eager to leverage hybrid cloud and artificial intelligence to move their business forward.” The new mainframe promises to tick both of those boxes, maybe.

While IBM’s mainframe customers wait for the new machine’s business impact, Big Blue is expanding its focus to software. Q4 software revenue rose eight per cent, consulting revenue climbed 13 points, but infrastructure, read mainframes, stayed flat at best. And hybrid cloud infrastructure revenue actually fell 12 per cent. Guess there will be no cavalry riding to the rescue this quarter.

However, DancingDinosaur sees IBM’s most promising player as not the mainframe so much as LinuxONE, which offers a way to expand the mainframe, particularly if you understand it as a mainframe Linux server with the never-fail, high-capacity qualities of the mainframe.

But overall, hybrid cloud revenue jumped 16 per cent for the quarter and 20 per cent for the year, to top $20bn in sales.


Execs also suggested robust storage revenue means IBM’s infrastructure portfolio is in fine shape, while Red Hat’s 21 per cent revenue rise shows the company has in-demand offerings for buyers seeking IT automation, security, hybrid cloud, and cloud-native development tools.

Krishna suggested automation could be especially important to IBM, as he sees clients turn to it more often as the COVID-19 pandemic has seen many leave the IT workforce. He predicted the resulting skills shortages will persist throughout the 2020s.

Alan Radding is DancingDinosaur, a veteran information technology analyst, writer, and ghostwriter. Follow DancingDinosaur on Twitter, @mainframeblog.

IBM COBOL Mainframe

March 4, 2022

The Open Mainframe Project’s COBOL Working Group (CWG), launched in 2020, aims to promote and support the continued use of the COBOL language globally.

As the CWG itself puts it in its latest report: “We’ve been doing COBOL so consistently and for so long that everybody has forgotten it is still running the world’s economy.”

IBM COBOL Compiler Family

Forty+ years ago DancingDinosaur was required to take COBOL to fulfill a math/science requirement. Fortunately, it was offered as a pass/fail option and I passed and almost immediately forgot everything we covered in COBOL except to realize that the code actually made sense. Later when I really mastered BASIC it reminded me of COBOL–simple, straightforward, and seemingly as useful.

Fortunately, many people did not forget it. COBOL still runs many, if not most, of the critical core applications of many mainframe shops today. In fact, when some bright executive comes up with the idea to replace the mainframe with a modern and cheaper distributed system, COBOL, thankfully, presents an almost insurmountable roadblock.

They can’t confidently replace those proven and reliable COBOL apps. They aren’t pretty, but they work, don’t fail, and their people are productive using them. And even when those executives are willing to spend big money to rewrite them, success is far from assured.

So what’s a mainframe shop to do? Participate in the Open Mainframe Project. First, you’ll see why COBOL is a proven staple with hundreds of billions of lines of production code used across many industries. Second, you’ll understand why many respondents see the need to continue to provide appropriate staffing and training for COBOL, which remains critical to the language’s continued viability, according to the CWG.

TSRI promises to modify your working mainframe COBOL apps while shifting them to Object-Oriented languages with high accuracy and minimal business disruption. The resulting code is object-oriented, compiled, integrated, uniform, and native target language code. Maybe.

At this point, DancingDinosaur wants to add a note of caution. After writing numerous pieces on failed conversions of COBOL to something else in an effort to replace or modernize the mainframe, be cautious and suspicious. I’ve written this story numerous times before. It is rarely as seamless, effective, and inexpensive as the vendor promises. If you try it and have success, please let me know and I’ll be happy to promote you and the details of your triumph here.

TSRI reports completing more than 250 referenceable projects, including COBOL modernizations like the REMIS system for the U.S. Air Force, 5 million lines of COBOL modernized to C for Sprint Nextel, 12-plus million lines of COBOL (and other languages) for the U.S. Railroad Retirement Board, and others. DancingDinosaur has not yet verified this with any of these users.

TSRI really wants to move you off your mainframe systems by promising to reduce operational and maintenance expenses. In the process you might reduce hardware failure risk (with the mainframe?! C’mon, remember, it’s a mainframe, which almost never fails).

So what comes next? Migrate monolithic COBOL applications to service-oriented applications, according to TSRI. You might end up with a standard output: a multi-tier architecture with a DAO layer, a user-interface presentation layer, and an application logic layer. TSRI says it targets a variety of cloud architectures, including Amazon Web Services (AWS), Azure, private and government clouds, and others, and that with additional refactoring it can break monolithic COBOL systems into microservices based on system requirements.

Of course, with the Z you can already do this yourself. The Z runs Linux and Kubernetes, supports microservices, handles the most popular new languages, and supports a service-oriented architecture if you want one. It’s already there. You just have to turn it on and figure out what you want to do with it, which, by far, may be the hardest part. Do you really want to count on a third party to do that for you and do it right?

DancingDinosaur is often accused of being a mainframe bigot. IBM certainly has its drawbacks when it comes to the mainframe, mainly around pricing. But the risk of mainframe hardware failure certainly isn’t one of them.

Alan Radding is DancingDinosaur, a veteran information technology analyst, writer, and ghostwriter. Follow DancingDinosaur on Twitter, @mainframeblog.

IBM Cloud LinuxONE

March 1, 2022

As enterprises move to adopt a hybrid cloud environment, some are concerned about the ability to sync between private and public cloud deployments.

The fear comes down to various cloud systems not working with your on-premises systems. As IBM notes, the world is moving to hybrid cloud and multicloud deployments, with an estimated 95% of companies using a combination of private and public clouds, with various providers for each.

The IBM model comes with plenty of reasons to jump onto a cloud, from the cost-efficiency gained by cutting back on capital expenditures to the accelerated speed of development with the multitude of tools available in the public cloud. However, clouds also bring legitimate concerns about the future, namely about consistency, connectivity, and movement between clouds.

Similar issues have plagued IT every time technology changes. The key, IBM notes, is to insist on a combination of options to meet your particular needs, both on-premises and in various public clouds, letting you take advantage of whatever the various hybrid clouds offer.

Not surprisingly, IBM offers LinuxONE, an enterprise-grade mainframe Linux server that meets the needs of mission-critical workloads. Companies can choose LinuxONE for a variety of Linux-based workloads, such as database scalability or digital assets, and then decide which hybrid cloud option should host their preferred cloud.

To meet those needs today, IBM Cloud announced in August the general availability of IBM Cloud LinuxONE for VPC. This offering integrates the LinuxONE platform with the functionality of Virtual Servers for VPC.

VPC stands for Virtual Private Cloud, which means each organization builds and operates its cloud environment as its own private network. Another best-of-both-worlds option: the organization reaps the benefits of the public cloud, say, agility and flexibility, while maintaining the privacy of an on-premises environment with a logically isolated network.

LinuxONE for VPC instances are virtual servers based on the LinuxONE processor architecture, bringing the advantages of the platform together with the advantages of VPC. For the first time, LinuxONE is available as a choice in the public cloud. This grants additional options for companies and individuals looking to develop in the public cloud with the backing of an enterprise-grade mainframe Linux server.

For those who have or are interested in similar deployments on-premises, this option allows development and testing of the application or workload in the public cloud. Spin up a few instances to experience the LinuxONE platform for just a few hours or stand up a permanent public cloud development environment before porting workloads to production on-premises. Besides the Virtual Private Cloud, these virtual servers come with a host of benefits, such as the following:

  • Simplified logging and monitoring: Use a dashboard to view the health of virtual servers and their usage in a graphical interface.
  • Backup and restore: Use in combination with IBM Cloud Object Storage to ensure high availability.
  • File share: Share data volumes between instances for faster communication.
  • Snapshot: Take moment-in-time backups of instances and save in storage for quick recovery.
  • Start/Stop/Restart: Invoke a restart without submitting a ticket and waiting for help.
  • Terraform integration: Efficiently scale up cloud infrastructure with an industry-standard tool.
  • Network features: Enable customers to build a secure private network for protected communication between all of their environments.

Provisioning, deployment and management all occur through the standard IBM Cloud Virtual Servers for VPC catalog page. LinuxONE for VPC is available starting in the Tokyo Multi-Zone Region (MZR), with additional regions rolling out soon.
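
Beyond the catalog page, the same VPC APIs are scriptable. A minimal sketch, assuming the ibm-vpc Python SDK and the Tokyo regional endpoint; the API key placeholder is illustrative, and you should check the IBM Cloud docs for your region’s endpoint.

```python
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
from ibm_vpc import VpcV1

# Authenticate with an IBM Cloud API key (placeholder value).
authenticator = IAMAuthenticator("YOUR_IBM_CLOUD_API_KEY")
vpc = VpcV1(authenticator=authenticator)

# LinuxONE for VPC launched in the Tokyo Multi-Zone Region (jp-tok).
vpc.set_service_url("https://jp-tok.iaas.cloud.ibm.com/v1")

# List virtual server instances; LinuxONE instances use s390x-based profiles.
for inst in vpc.list_instances().get_result()["instances"]:
    print(inst["name"], inst["profile"]["name"], inst["status"])
```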

Alan Radding is DancingDinosaur, a veteran information technology analyst, writer, and ghostwriter. Follow DancingDinosaur on Twitter, @mainframeblog.

