IBM and Sumo Accelerate Hybrid Cloud

September 20, 2021

Sumo Logic, a player in continuous intelligence, and IBM announced the availability of Sumo Logic’s Continuous Intelligence Platform on Red Hat Marketplace, an open cloud marketplace for buying and deploying certified container-based software. The platform helps companies make data-driven decisions and reduces the time needed to investigate security and operational issues.

Its cloud-native SIEM, real-time security analytics, and compliance functions are designed to deliver cloud-native security and observability for companies running on the Red Hat OpenShift platform. The payback: they can deploy faster and gain insights into their cloud and hybrid infrastructures, applications, and services sooner.

Ramin Sayar, President and CEO at Sumo Logic, said in the announcement: “Companies want to streamline the procurement, deployment, and management of their container applications.” Sumo’s Continuous Intelligence Platform allows organizations to get all their data into one place for observability, security, and business intelligence and reduce administrative labor in the process.

Sayar continues: “IBM helps us to offer our platform with Red Hat OpenShift on Red Hat Marketplace. This makes it easier for organizations to take advantage of our platform as they modernize and/or migrate their applications across hybrid cloud environments.”

Sumo Logic achieved Red Hat OpenShift Operator Certification status based on the platform’s support for cloud-native and hybrid cloud environments and its compatibility with Red Hat OpenShift. As part of the certification, Sumo Logic will extend the company’s existing Kubernetes collection agents to Red Hat’s OpenShift Operator model, designed to make it easier to deploy and manage data from customers’ OpenShift Kubernetes clusters.
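
For the curious, the Operator model behind that certification follows a simple pattern: a controller continuously reconciles the actual state of the cluster toward the state the administrator declared. Here is a toy sketch, entirely our own illustration (real operators talk to the Kubernetes API; the names below are invented):

```python
# Toy illustration of the Kubernetes Operator pattern: compute the actions
# needed to drive actual state toward desired state. Real operators watch
# the Kubernetes API; here both states are plain dicts for clarity.

def reconcile(desired, actual):
    """Return the actions needed to make `actual` match `desired`."""
    actions = []
    for name, spec in desired.items():
        if actual.get(name) != spec:
            actions.append(("create_or_update", name, spec))
    for name in actual:
        if name not in desired:
            actions.append(("delete", name))
    return actions

desired = {"sumo-collector": {"replicas": 3}}
actual = {"sumo-collector": {"replicas": 1}, "stale-agent": {"replicas": 1}}
print(reconcile(desired, actual))
# → [('create_or_update', 'sumo-collector', {'replicas': 3}), ('delete', 'stale-agent')]
```

The operator runs this loop forever, which is what makes deploying and managing collection agents largely hands-off.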

Its Continuous Intelligence Platform, a cloud-native, real-time SIEM, enables companies to make data-driven decisions and reduces the time needed to investigate security and operational issues.

An IDC study sponsored by Red Hat noted that organizations can achieve an average of 49% higher revenue for software products that, like Sumo’s, have been certified by Red Hat. Such customers increasingly either require or prefer certified solutions.

For companies building cloud-native infrastructure and applications, Red Hat Marketplace is an essential destination for unlocking the value of cloud investments, designed to minimize the barriers facing global organizations as they accelerate innovation. 

Bob Lord, Senior Vice President, Worldwide Ecosystems at IBM adds: “Red Hat Marketplace is a one stop shop for enterprises to seamlessly deploy and manage software across hybrid cloud environments, spanning multiple clouds and on-premises. Sumo Logic’s certification for Red Hat OpenShift, along with its availability on Red Hat Marketplace, can help deliver innovation and value for clients via containerized workloads and the simplified deployment and management of data.”

“We believe Red Hat Marketplace is an essential destination to unlock the value of cloud investments,” said Lars Herrmann, vice president, Partner Ecosystems, Product & Technologies, Red Hat. “Our goal with the marketplace is to make it faster and easier for companies to implement the tools and technologies that can help them succeed in this hybrid multicloud world.” Red Hat has been instrumental in recent quarters to IBM’s success in bolstering its revenues with hybrid cloud customer projects.

Now for the standard corporate boilerplate: Any information regarding offerings, updates, functionality, or other modifications, including release dates, is subject to change without notice. The development, release, and timing of any offering, update, functionality, or modification described herein remains at the sole discretion of Sumo Logic, and should not be relied upon in making a purchase decision, nor as a representation, warranty, or commitment to deliver specific offerings, updates, functionalities, or modifications in the future.

Similarly, Red Hat and OpenShift are trademarks or registered trademarks of Red Hat, Inc. or its subsidiaries in the U.S. and other countries. Red Hat itself is owned by IBM.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghostwriter. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at http://www.technologywriter.com

Next Generation of IBM Power Servers

September 10, 2021

IBM is not relenting on its push into hybrid clouds. Its success to date has been encouraging, but now it is going further. The company is rolling out a new line of servers built on the Power10 processor, which it says is designed specifically for hybrid cloud environments.

IBM E1080, Power10 processor

The new IBM Power E1080 server, the first in a new family of servers based on the IBM Power10 processor, was designed specifically for hybrid cloud environments. The company adds that the Power10-equipped E1080 is engineered to be one of the most secure server platforms and is designed to help companies operate a secure, frictionless hybrid cloud experience.

It apparently was intended to offer a platform to meet what IBM perceived as the unique needs of enterprise hybrid cloud computing. Specifically, the POWER10 processor focuses on energy efficiency and performance in a 7nm form factor, with an expected improvement of up to 3x in processor energy efficiency, workload capacity, and container density over the previous IBM POWER9 processor.

To IBM, the POWER10 processor is an important evolution in IBM’s roadmap for POWER. It is expected to be available in the second half of 2021, which is not very far away.

Some of the new processor innovations highlighted back in August included the company’s first commercial 7nm processor, expected to deliver up to a 3x improvement in capacity and processor energy efficiency within the same power envelope as the POWER9, allowing for greater performance.

It supports multi-petabyte memory clusters with a breakthrough new technology IBM calls Memory Inception. This is intended to improve cloud capacity and economics for memory-intensive workloads from ISVs like SAP and the SAS Institute, as well as for large-model AI inference.

And there is more: new hardware-enabled security capabilities include transparent memory encryption, which is designed to support end-to-end security. Also, the new processor core architecture in the IBM POWER10 processor, with an embedded Matrix Math Accelerator, is extrapolated to provide 10x, 15x, and 20x faster per-socket AI inference for FP32, BFloat16, and INT8 calculations, respectively.
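
Why do lower-precision formats like INT8 run so much faster? A toy sketch (our own illustration, not IBM code; the function names are invented) shows the basic trick: quantize FP32 values to 8-bit integers, accumulate with cheap integer multiplies, then rescale once at the end.

```python
# Illustrative sketch of INT8 quantized inference arithmetic: map FP32
# values to INT8 with a shared scale, do the dot product in integer
# math, and dequantize the accumulated result at the end.

def quantize(vec, scale):
    """Map FP32 values to INT8 with a shared scale factor (symmetric)."""
    return [max(-128, min(127, round(v / scale))) for v in vec]

def int8_dot(a, b, scale_a, scale_b):
    """Dot product computed in INT8, dequantized back to FP32 at the end."""
    qa, qb = quantize(a, scale_a), quantize(b, scale_b)
    acc = sum(x * y for x, y in zip(qa, qb))   # cheap integer MACs
    return acc * scale_a * scale_b             # one FP multiply to rescale

a = [0.5, -1.25, 2.0, 0.75]
b = [1.0, 0.25, -0.5, 2.0]
exact = sum(x * y for x, y in zip(a, b))       # FP32 reference
approx = int8_dot(a, b, scale_a=2.0 / 127, scale_b=2.0 / 127)
print(exact, approx)  # the INT8 result lands within ~2% of the FP32 answer
```

Integer multiply-accumulates are far cheaper in silicon than FP32 ones, which is why per-socket throughput climbs as precision drops.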

There is far more packed into the Power10 processor, more than I can squeeze into one piece. But IBM points out another consideration worth keeping in mind: not all hybrid cloud models are designed equally. You might say DancingDinosaur taps the hybrid cloud because he draws on multiple cloud providers, but that is not anywhere near what the E1080 is intended to address.

Reports Dylan Boday, IBM VP of Product Management for AI and Hybrid Cloud: “When we were designing the E1080, we had to be cognizant of how the pandemic was changing not only consumer behavior, but also our customer’s behavior and needs from their IT infrastructure. The resulting E1080 is IBM’s first system designed from the silicon up for hybrid cloud environments, a system tailor-built to serve as the foundation for our vision of a dynamic and secure, frictionless hybrid cloud experience.”

By any measure, the E1080 is not a trivial machine. It offers 240 cores, 64 TB of memory, 1.6 TB/sec of memory bandwidth, and 576 GB/sec of I/O bandwidth. Need more? The E1080’s hybrid cloud capabilities include a planned industry first in the tight metering of Red Hat software, including Red Hat OpenShift and Red Hat Enterprise Linux. It also brings 4.1x greater OpenShift containerized throughput per core vs. x86. C’mon IBM, with a chip like the 1080 do you still need CPU comparisons to x86?

Klaus Fehiker, a long-time Power user from Finanz Informatik, is ready to board the E1080 train. “The new server addresses our demands to continue delivering our services at scale with high resiliency requirements, including new levels of security and improved energy-efficiency. We are also keen to see how the new features can accelerate our journey to cloud and the infusion of AI into our business applications.”

The IBM Power E1080 server helps deliver on the customer demand for a frictionless hybrid cloud experience, with architectural consistency across the entire hybrid cloud estate to simplify management. At the same time, it can seamlessly scale applications to meet the dynamic needs of today’s world.

There is no doubt that the E1080 is an incredible achievement in almost any dimension you want to look at. After spending my entire career writing about amazing technology, I find IBM’s E1080 as impressive as any machine I have covered.

After spending over two hours painstakingly picking my way through IBM’s spec sheet for it, please don’t ask me how much it might cost you. It’s apparent IBM is putting a lot of effort into making the E1080 flexible, and there is a dazzling array of configuration options and promises of flexible pricing. You’ll just have to figure it out on your own or hope an IBM rep can.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghostwriter. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at http://www.technologywriter.com

Hi Telum

September 3, 2021

Last week IBM unveiled details of its upcoming IBM Telum processor, the next-generation microprocessor for IBM Z and IBM LinuxONE.

The IBM Telum processor was designed to bring deep learning inference to enterprise workloads. Its mission: address fraud in real time. To do that, Telum is IBM’s first processor that contains on-chip acceleration for AI inferencing while a transaction is taking place. In short, you could catch a bad guy doing a bad thing while it is taking place. Of course, you would need to put trained people and processes in place to respond effectively, but that is probably an entirely different conversation.

The breakthrough of this new on-chip hardware acceleration is intended to help customers achieve business insights at scale across banking, finance, trading, insurance applications and customer interactions. Don’t expect a Telum-based system tomorrow. Rather, IBM suggests it is planned for the first half of 2022.

IBM is touting the Telum processor as the next-gen microprocessor for IBM Z and IBM LinuxONE, both Z-based systems. As IBM puts it: Telum is the company’s first processor that contains on-chip acceleration for AI inferencing while a transaction is taking place. Three years in development, the breakthrough of this new on-chip hardware acceleration is intended to help customers achieve business insights at scale across key industries and do it in what amounts to near real time.

Somehow, DancingDinosaur is not convinced most businesses are prepared to go that far that fast. You can imagine any number of complications arising starting with litigation.

Telum is the first IBM chip with a processor and technology created by the IBM Research AI Hardware Center. In addition, Samsung is IBM’s technology development partner for the Telum processor, developed in a 7nm EUV technology node.

A little more from IBM on Telum specifics: The microprocessor contains 8 processor cores, clocked at over 5GHz, with each core supported by a redesigned 32MB private level-2 cache. The level-2 caches interact to form a 256MB virtual Level-3 and 2GB Level-4 cache.

Along with improvements to the processor core itself, the 1.5x growth of cache per core over the z15 generation is designed to enable a significant increase in both per-thread performance and total capacity IBM can deliver in the next generation IBM Z system. Telum’s performance improvements are vital for rapid response times in complex transaction systems, especially when augmented with real time AI inference.
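
Those cache numbers fit together neatly. A bit of back-of-the-envelope arithmetic (our own, not IBM's spec sheet) shows how the per-core L2 caches pool into the virtual higher-level caches:

```python
# Back-of-the-envelope sketch (our own arithmetic): Telum carries no
# dedicated L3/L4 SRAM. The eight 32MB per-core L2 caches pool into a
# "virtual" L3, and the virtual L3s of multiple chips pool into a
# virtual L4.

CORES_PER_CHIP = 8
L2_PER_CORE_MB = 32

virtual_l3_mb = CORES_PER_CHIP * L2_PER_CORE_MB
print(virtual_l3_mb)        # 256 MB, matching IBM's stated virtual L3

# A 2GB virtual L4 then implies the virtual L3s of 8 chips are pooled:
chips_pooled = (2 * 1024) // virtual_l3_mb
print(chips_pooled)         # 8 chips share the virtual L4
```

The point of the design: cache capacity follows the cores instead of sitting in a separate shared array, which is where the per-core growth over z15 comes from.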

Telum also features significant innovation in security, with transparent encryption of main memory. Telum’s Secure Execution improvements are designed to provide increased performance and usability for Hyper Protected Virtual Servers and trusted execution environments, making Telum an optimal choice for processing sensitive data in Hybrid Cloud architectures, a big IBM marketing target.

IBM’s standard boilerplate: Statements regarding IBM’s future direction and intent are subject to change or withdrawal without notice and represent goals and objectives only.


DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghostwriter. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at http://www.technologywriter.com

IBM Delivers 2Q21 Win

August 26, 2021

IBM’s latest quarterly earnings (2Q21) are finally getting interesting now that it has put years of quarterly losses into the past, at least for now. And finally the company has started gaining traction with hybrid clouds. Total cloud revenue in the quarter was $7.0 billion.

IBM and Red Hat announce hybrid cloud software

In fact, the company credits cloud computing for its highest quarterly sales growth in over two years. It almost makes you forget the recent years of incessant quarterly losses. CFO James Kavanaugh prefers to cheer on the strong spending by clients in retail, manufacturing, and travel in the US. And who can blame him?

He continues: Sales from its cloud computing services jumped 21 percent while the company experienced a sales decline in global technology services. So much for the excitement around IBM’s Kyndryl services restructuring, although it may be too soon to expect to see much from that.

Here’s how Reuters explained the Kyndryl thing: The plan to separate was announced in 2020, which DancingDinosaur reported at that time. It followed years of IBM trimming its legacy businesses as it increasingly focused on its cloud offerings to counter slowing software sales and seasonal demand for its mainframe servers. (DancingDinosaur wouldn’t exactly call it seasonal; sales increases invariably followed each upgrade of the Z.) BTW, those cloud offerings were specifically tagged for hybrid clouds.

So, how are you going to treat Kyndryl? Will you include it in a proposal? Will you buy from it, grant it credibility? If it delivers the same high-quality IT services you expect at a competitive price and in a timeframe that meets your needs, why not give it consideration?

DancingDinosaur will be watching to see how other top-tier IT services providers respond. In the meantime, Kyndryl came out respectably in IBM’s 2Q statement: The impact of the Kyndryl separation costs for second-quarter 2021 was ($0.15) per share.

Sales from IBM’s cloud computing services jumped 21 percent to $6.5 billion in the quarter. The 109-year-old firm is preparing to split itself into two public companies, with the namesake firm narrowing its focus on the so-called hybrid cloud, where it sees a $1 trillion market opportunity. The company did record a sales decline in global technology services, but added it was largely offset by a rise in revenue in the remaining three units, including surprise growth in the business that hosts mainframe computers.

Mainframes saw strong traction from the financial services industry, where banking clients shopped for more capacity as trading volumes soared during the retail trading frenzy, CFO Kavanaugh added. The pandemic wasn’t bad for every business.

“I am glad to see that strategic projects, which are IBM’s bread and butter, are coming back,” added Patrick Moorhead, analyst at Moor Insights & Strategy. He noted that growth in systems and in global business services came as a surprise.

IBM revenue rose nearly 1 percent to $17.73 billion in the quarter, beating analysts’ average estimate of $17.35 billion, according to IBES data from Refinitiv.

Net income fell to $955 million in the quarter, from $1.18 billion a year earlier. Overall, the company earned $1.77 per share, beating the market expectation of $1.63.

Is that good enough? DancingDinosaur isn’t an investor so it won’t offer an opinion. However, it is nice not to be writing about incessant losses quarter after quarter.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghostwriter. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com.

LinuxONE III Express

August 19, 2021

One of the best decisions IBM made was to bring Linux, a popular, highly capable, open source product, to the Z. That was 20 years ago. OK, it was a little rough to start, at least for a novice Linux wannabe like me back then. The latest version, released in May 2021, LinuxONE III Express, promises to be much more, offering hybrid cloud, containers, Red Hat OpenShift, plus all the Z’s classic strengths: scalability, reliability, performance, availability, and security.

z15 T02 / LinuxONE III LT2

Let’s start with off-the-shelf innovative packages like LinuxONE. Here IBM starts with an open source system built most recently on the z15, though it could also run on the z14 or z13. DancingDinosaur recommends opting for the z15 or higher-level machines when they become available at some point.

As IBM puts it: the LinuxONE is an enterprise-grade platform with mainframe-class features built for hybrid cloud environments and modern businesses. According to the company, LinuxONE solutions are optimized for open-source architectures, especially Red Hat OpenShift, Kubernetes, and IBM Cloud Pak.

Let’s look at the LinuxONE III Express. According to Ross Mauri, GM of IBM’s Z and LinuxONE organization, the new solution is “designed to be an off-the-shelf option for startups, business partners, and ISVs, and it’s built to get clients up and running quickly.” It can do that because it uses tested Z components, from the processor to the Linux software to the frame itself.

When Mauri says “off-the-shelf,” he means it. The LinuxONE III Express is a pre-configured system based on the LinuxONE Model LT2 that IBM introduced a year ago. The new solution, designated III, comes with a set number of cores (13), memory (1280GB), networking cards and other features, with price and configurations adjusted according to the number of cores and memory activated.

For example, the base price of $135,000, as quoted by IBM, delivers a solution with four cores and 384GB of memory activated. However, “activated” doesn’t mean fixed forever, Mauri continued. Like other scalable IBM solutions, additional cores and memory can be activated with a simple microcode upgrade. This enables the system’s capabilities to keep pace with any organization’s evolving compute requirements.
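
To make the activation idea concrete, here is a hypothetical sketch (our own arithmetic and invented function names; IBM publishes no such formula) of how the quoted entry configuration relates to the installed capacity:

```python
# Hypothetical sketch of capacity activation on the LinuxONE III Express
# (our own illustration, not IBM's pricing model). The frame ships with
# 13 cores and 1280GB installed; the quoted $135,000 base activates
# 4 cores and 384GB, and a microcode upgrade can activate more later.

INSTALLED_CORES, INSTALLED_MEM_GB = 13, 1280

def activate(cores, mem_gb):
    """Validate a requested activation against installed capacity."""
    if cores > INSTALLED_CORES or mem_gb > INSTALLED_MEM_GB:
        raise ValueError("requested more than the frame has installed")
    return {"cores": cores, "mem_gb": mem_gb}

base = activate(4, 384)                      # the quoted entry configuration
headroom_cores = INSTALLED_CORES - base["cores"]
print(headroom_cores)   # 9 cores still dark, awaiting a microcode upgrade
```

That headroom is the whole sales pitch: growth requires a firmware-level activation, not a forklift upgrade.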

IBM also has developed packages for specific scenarios and applications. These range from an implementation package of training and support for customers new to LinuxONE to packages designed for advanced virtualization workloads.

After seemingly endless strings of losing quarters, IBM finally appears to be generating positive quarterly revenue growth starting with, of all things, the Z and hybrid cloud. OK, it’s not every quarter, but some positive quarters beat strings of incessant losses.

So, is Z on a roll? Over the last year, IBM reports increased growth around the platform, driven by new interest in its confidential computing capabilities and new workloads such as digital asset custody. It also reports that 85 customers are actively engaged in planning or have a POC in progress with Red Hat OpenShift on IBM Z and LinuxONE.

In fact, according to a recent report conducted by Forrester Consulting on behalf of Deloitte, 74% of respondents believe the mainframe has long-term viability as a strategic platform for organizations.

IBM even sees benefits in the pandemic. COVID-19, it reports, has been a driver of accelerated workload growth on its platforms. Companies are facing new business challenges associated with changing employee and consumer behaviors, everything from work-at-home to increased volatility in financial trading and increased online retail transactions.

As a result, they are turning to Z to help manage that increased spike in business in a predictable, resilient and secured way. In fact, IBM reports that Z customers temporarily activated a total of over 4x more additional general purpose processor capacity via On/Off Capacity on Demand in 1Q 2021 compared to 1Q 2019 – enough additional temporary capacity to process up to 9.9 trillion more transactions.

The Z family is no longer for big enterprises only. Startups too, like KodyPay and Ilara Health, and business partners like Cognition Foundry are turning to LinuxONE for its security and as a competitive differentiator for hybrid cloud.

That’s why the IBM LinuxONE III Express comes with a new pricing model for Z hardware and Tailored Fit Pricing. Also announced was co-location of IBM DS8910F storage in the z15 Model T02 frame. Time to look at Z again.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghostwriter. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com.

Welcome to z/OS 2.5

August 12, 2021

As IBM notes in its introduction to z/OS 2.5, “Adaptive business and operating models, driven by accelerated disruptions, are shaping the future of enterprises today. Enterprises are embracing the next normal with an accelerated and strategic focus on application modernization, cloud-native processes, and artificial intelligence (AI).” And the users, finally, don’t have to be diehard Z fanatics.

z/OS 2.5 simplifies mainframe computing

If IBM had introduced z/OS 2.5 decades ago, when DancingDinosaur was starting out as a self-taught game developer, his life might have been different. But IBM didn’t introduce it until last month, a time when DancingDinosaur is contemplating retirement.

Instead, IBM is using z/OS 2.5 to introduce its hybrid cloud approach as the solution for what it hopes will be a swift and massive transformation. Furthermore, the company promises to deliver a consistent, standards-based approach to development, security, and operations via its hybrid cloud architecture.

z/OS V2.5 was designed to support hybrid cloud and AI business applications. This is accomplished by giving next-generation system operators and developers easy access and a simplified experience with IBM z/OS. Darn, I arrived just a couple of decades too early.

This was not, however, designed for veteran z/OS users. They’ve made it clear they are retiring soon if they haven’t already. This is for people who have limited or no experience with the Z, but need to make it work.

As IBM says: z/OS V2.5 brings value whether this new generation of users are running Linux applications on z/OS or extending existing COBOL applications with Java programs. Any application development team can leverage z/OS V2.5 to achieve rapid application deployment and provisioning to their hybrid cloud.

In addition, z/OS V2.5, IBM continues, supports the scale and deployment of agile business use cases for hybrid cloud and AI capabilities, two critical areas of interest to IBM.

z/OS V2.5 also supports IBM’s container push by enabling OCI container runtime and Kubernetes container orchestration for IBM z/OS applications and workloads, which enables the adoption of a container-based, cloud-native strategy for mission-critical z/OS applications.

z/OS system programmers aren’t being left out. Even early-tenure system programmers can independently and confidently deploy, maintain, and manage z/OS software functions using guided and customized instructions and workflows. Simplified functions such as z/OS Management Facility (z/OSMF) provide an intuitive user interface as well as automated instructions. This simplified and modern experience is designed to enable easier installation, management, and use of z/OS by programmers and administrators at all levels, with no special skills required for increased agility. Why couldn’t DancingDinosaur have such capabilities decades ago?

And the goodies don’t stop there. z/OS V2.5 delivers more ease-of-use capabilities, including continued reduction of the requirement for assembler skills by extending the IBM Job Entry Subsystem 2 (JES2) policy-based customization facility that was introduced in z/OS V2.4. Where was that when I needed it?

Enhanced Tailored Fit Pricing (TFP) for IBM Z eases usage. A new system parameter automatically reports and applies the TFP solution to the system. This improvement is designed to be an easier and less error-prone alternative to defining TFP solutions with Sub-Capacity Reporting Tool (SCRT) control statements. This should save you some real time as well as money.

Enterprise modernization with more seamless COBOL-Java interoperability. This gives application developers full transparency by extending application programming models.

And don’t forget storage. Capabilities have been added to IBM z/OS cloud storage through DFSMS transparent cloud tiering (TCT) and Object Access Method (OAM) cloud tier support. TCT, and separately OAM, enable z/OS to utilize hybrid cloud as an additional storage tier for structured and unstructured data. z/OS use of cloud storage is designed to reduce capital and operating expenses with data transfer to hybrid cloud storage environments for simplified data archiving and data protection on IBM Z. The list of enhancements and ease-of-use improvements seems endless.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghostwriter. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com

Leveraging Red Hat for Z

August 5, 2021

IBM keeps trying to figure out how best to leverage its acquisition of Red Hat. When it closed the $34 billion acquisition in 2019 it declared in part:

  • Red Hat’s unwavering commitment to open source would remain unchanged
  • Together, the two will deliver a next-generation hybrid multicloud platform

Cloud native development on Z with Wazi and Red Hat

To that end, IBM Wazi Developer for Red Hat CodeReady Workspaces promises to help enterprises develop, innovate, and transform as they look to modernize applications and adopt a cloud model to drive agility and innovation. This requires that they quickly evolve their processes and tooling to address changing customer needs.

Development environments that include IBM Z as part of their hybrid solution face a challenge with cross-platform consistency in tooling for their application programmers and developers. But that’s apparently being addressed.

IBM Wazi Developer for Red Hat CodeReady Workspaces provides a cloud-native development experience for z/OS developers needing to build hybrid applications that span IBM Z and multicloud platforms. Integration with open, standard DevOps toolsets, like Python, MongoDB, Kubernetes, and more, will enable organizations to move closer to a unified development experience across the enterprise.

A hybrid cloud strategy that includes z/OS applications brings up some unique factors that need to be addressed. In customer studies, 66% of respondents are invested in hybrid cloud as the critical enabler for all other initiatives, 54% are integrating data on the mainframe with new cloud-native services as part of application modernization, 79% cited acquiring the right resources and skills to get work done as their top mainframe-related challenge, and 78% want to be able to update mainframe applications more frequently.

Development skills also present a major concern due to legacy mainframe developers retiring, something DancingDinosaur has frequently noted before.  Specifically, legacy mainframe skills require time to master, especially when new developers are used to a modern, graphical point-and-click, drag-and-drop development experience. 

The solution, if you needed to ask, is IBM Wazi Developer, which enables developers to analyze, develop, and test IBM z/OS application components in a containerized, virtual IBM Z environment on the Red Hat OpenShift Container Platform. And of course a Z shop wants to build modern hybrid cloud applications with a cloud-native development experience for z/OS.

Furthermore, you want to reduce the need for specialized skills and improve productivity with a modern, efficient development experience, and increase speed and agility with a self-service, containerized sandbox environment. To rebalance entitlement based on business needs, Wazi enables this kind of containerized analysis through a graphical, web-based user interface for discovering and analyzing relationships between z/OS application components; the Wazi Sandbox, a containerized, personal sandbox environment running on Red Hat OpenShift on x86 hardware; and Wazi Code.

IBM also recommends starting with the OpenShift-native IDE or an Eclipse-based IDE that allows developers to use tools they already are familiar with. They can also access z/OS resources via user-friendly Zowe and the Remote System Explorer (RSE) API. Don’t forget, however, support for COBOL, Assembler, REXX, and PL/I.

Other desired features include code completion, real-time syntax checking, and copybook navigation to improve code quality. Use a containerized, personal z/OS sandbox running on Red Hat OpenShift on x86 to develop and test z/OS application components without requiring IBM Z hardware, saving money, and then deploy the code either to the sandbox testing stage or to IBM Z for production. Your choice.


DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghostwriter. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com.

Welcome Z Innovations

August 2, 2021

If the mainframe can support hundreds or even thousands of online terminals, why do managers worry it can’t handle the cloud? Just a couple of weeks ago, DancingDinosaur looked at the mainframe on the edge here and, yes, it works.

Look at Linux and Python and a constant stream of mainframe technology updates. Linux has been running on the Z for over 20 years, maybe not always smoothly, but certainly smoothly enough over the last ten years. Mainframes continue to evolve to the point where you don’t even need a mainframe present to actually try one.

Today the mainframe handles hybrid clouds, containers, and Kubernetes as well as continued development of new, even more advanced tools. Have you tried the Z Development and Test Environment (ZD&T), which tests mainframe apps on x86 servers?

ZD&T is pretty nifty. It runs a z/OS distribution on an x86 or Linux workstation, effectively creating an environment for mainframe application demonstration, development, testing, and employee education without Z mainframe hardware present and its associated costs. In effect, it enables z/OS, middleware, and other z/OS software to run on Intel and other compatible computers, and emulates Z with virtual I/O and devices. 

IBM Z Systems continue to steadily evolve. Around mid-2013 IBM acquired SoftLayer Technologies (now IBM Cloud), considered the world’s largest privately held cloud computing infrastructure provider at that time. (The hyperscalers are bigger today.) The goal was to strengthen IBM’s leadership position in cloud computing. And it is paying off based on IBM’s recent financials. 

Z Trial provides another new twist. It offers 22 mainframe-oriented trials you can use for three days. Even IBM Tech Sales reportedly use the Z Trial infrastructure to deliver demos and workshops in which each participant has their own mainframe. In a given month, 500 Z Trial instances are created and used, according to IBM. Most actually are composed of a Windows client and a Linux server.

As IBM explains, a Windows client hosts the demo for many Z software products. On the Linux side, all necessary servers and ZD&T are installed. Both client and server are connected to a VLAN to keep communication secure and contained. To avoid other access issues, IBM Z Trial exposes the Windows virtual machine, eliminating any need to open special ports.

However, the “available anywhere in two hours” mainframe doesn’t happen without automatic provisioning. Due to the way the IBM external website works and interacts with the IBM Cloud infrastructure, the company reportedly decided to build a custom Python-based provisioning toolset. By using IBM Cloud APIs, Z Trial is capable of creating and attaching block storage, and deploying instances on selected data centers spread all over the world. 
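IBM has not published that toolset, so the sketch below is purely illustrative: the operation names and payload fields are assumptions, not the real IBM Cloud API surface. It captures the basic shape the post describes, though: plan the block-storage and instance steps for a chosen data center, which a real version would then issue as authenticated calls against the IBM Cloud APIs.

```python
# Illustrative provisioning planner. The "op" names and payload fields are
# hypothetical; a real toolset would execute each step via the IBM Cloud APIs.
def provision_plan(datacenter: str, storage_gb: int) -> list[dict]:
    """Return the ordered steps to stand up one Z Trial instance:
    create block storage, attach it, then deploy the instance."""
    return [
        {"op": "create_volume",
         "body": {"datacenter": datacenter, "capacity_gb": storage_gb}},
        {"op": "attach_volume",
         "body": {"volume": "volume-from-step-1"}},
        {"op": "deploy_instance",
         "body": {"datacenter": datacenter}},
    ]

# Plan an instance in a sample data center.
for step in provision_plan("dal10", 100):
    print(step["op"])
```

Keeping the plan separate from the API calls is what makes a toolset like this easy to point at "selected data centers spread all over the world": only the `datacenter` value changes.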

Finally, there’s monitoring. With hundreds of Z Trial instances running, you need to know if everything is working as expected and how many resources are being used. IBM Cloud offers a mobile app that enables a quick look at how many servers are running, the bandwidth being used, any infrastructure events, and support tickets. Each category described can be drilled down, allowing for individual instances to be managed.

IBM Z Trial appears to be a fully working cloud-based implementation of a mainframe environment. It easily could be used to support typical mainframe development activities, from traditional maintenance to DevOps. You could even use Jenkins to provision mainframe instances that build and test code. So, if you have ever wondered how a mainframe might work with your datacenter, here is an easy way to find out.
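Driving that from Jenkins is mostly a matter of hitting the Jenkins remote-access API, which triggers a job with a POST to `/job/<name>/build`. A minimal helper, where the job name `provision-zdt` is a made-up example:

```python
# Build the URL that triggers a Jenkins job via its remote-access API.
# The job name below is hypothetical; a real call would POST this URL
# with credentials, e.g. requests.post(url, auth=(user, api_token)).
def jenkins_build_url(base: str, job: str) -> str:
    """Jenkins triggers a build on POST to <base>/job/<job>/build."""
    return f"{base.rstrip('/')}/job/{job}/build"

url = jenkins_build_url("https://ci.example.com/", "provision-zdt")
print(url)  # https://ci.example.com/job/provision-zdt/build
```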


How Big is the Mainframe Market

July 21, 2021

How big is the mainframe market? If you said one (IBM) it would be sad but understandable. DancingDinosaur is somewhat familiar with 15-20 vendors; those with PR people who ping me often enough that I don't forget them completely. A Google search brings up a total of 286, described as active mainframe vendors ranked by the number of mainframe software products they currently offer. This number itself raises a few questions, starting with: huh, who are all these organizations?

A report titled Global Mainframe Market Research, published by Data Bridge Market Research, sums up the mainframe market as follows: the mainframe market is expected to witness growth at a rate of 3.75% over the forecast period of 2021 to 2028. The Data Bridge Market Research report provides analysis and insights into the various factors expected to be prevalent throughout the forecast period and their impacts on the market's growth. The rise in large data volumes is escalating the growth of the mainframe market.
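As a quick sanity check on what that rate actually implies, compounding 3.75% annually over the seven-year 2021-2028 window works out to roughly 29% cumulative market growth:

```python
# Compound the reported 3.75% CAGR over the 2021-2028 forecast window.
rate = 0.0375
years = 2028 - 2021          # 7 compounding periods
multiplier = (1 + rate) ** years
print(f"Cumulative growth over the period: {(multiplier - 1) * 100:.1f}%")
# Cumulative growth over the period: 29.4%
```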

The researchers, apparently, are not chasing down Google's 286 players one by one either, which shouldn't be surprising. A few of the major competitors currently working in the global mainframe market include BMC Software Inc., Dell, FUJITSU, Red Hat, Inc., IBM, ATOS SE, and CA Technologies. Red Hat, of course, has been purchased by IBM and is starting to generate mainframe impact through its OpenShift mainframe middleware. See some reports here.

Included in its latest report, however, are: Cognizant, DXC Technology Company, HCL Technologies Limited, Infosys Limited, LzLabs GmbH, Redcentric plc, Unisys, ViON, Wipro Limited, Compuware Corporation, Hewlett Packard Enterprise Development LP, NEC Corporation, Hitachi, Ltd., among other domestic and global players. Not exactly mainframe household names but close enough.

So while we are mulling the various analysts following the IBM market, let's take a look at another favorite DancingDinosaur analyst, Timothy Prickett Morgan, and a recent piece of his on cloud computing, a topic that has been covered here numerous times. He has an interesting view focused on IBMi, formerly the AS/400.

Writes Morgan: “We hear a lot of talk about cloud these days. The coronavirus pandemic made a lot of companies take a hard look at how they were doing business, and hardware wasn’t excluded from this conversation. Businesses are now saying, with so many cloud options available, where and how can they start getting their feet wet?

“We certainly all know the IBMi community,” he continues, “is dramatically different from most other platforms in the IT space, and has been since Day One. For household names such as Amazon Web Services, Microsoft Azure, and Google Cloud, third-party applications make up the majority of back office software hosted on cloud infrastructure. In our community (the mainframe world), however, businesses rely on highly customized apps that have flourished on IBMi and its predecessors and that have provided competitive advantage over the years, and continue to do so. Companies seem hesitant to move to the public cloud: security, compliance, ownership, and cost are some of the top areas of concern and big unknowns.”

Morgan concludes by asking: what is it that people get wrong when they think about IBMi and the public cloud? His answer: “Everybody wants a simple answer to a complicated question. In its most simple sense, cloud can mean that someone is managing your hardware and operating system for you in a data center. But cloud isn’t a distinct thing that forces people into a specific route. It’s a broad continuum that has options for most of today’s IBMi clients. That’s the bit that often seems scary to many: a client might look at one option that doesn’t suit its needs, but it’s just one of many approaches it can take. Figuring out which i option is best for you is between you and IBM.”


Find Your Place on the Edge

July 13, 2021

What do you think of edge computing? A hot trend? Just more of the same? Maybe something interesting, possibly even useful? Or maybe just another of the endless trends that pop up, create a little buzz for a while, and are soon forgotten, replaced by the next big thing.

About 80 percent of all data will be processed near its source, primarily at the network edge, by 2025. Enriched computer functionality at the edge, combined with ever-advancing artificial intelligence, is already growing at a brisk clip, according to HPE.com. The edge, more than the pandemic, is changing almost everything about how people live and work, from advances in vehicles and retail stores to edge-powered manufacturing plants and sports stadiums.

The edge is where most of us will live and work, and where data resides, is generated, or should be processed for the best speed, efficiency, and least latency. IBM already has staked out a spot there. Transactions increasingly occur at the edge, customer experiences happen there, businesses conveniently stash goods there, and data often is required there for fast decision making, where and when people generate the most transactions.

That makes understanding and analyzing all this data key to differentiating your business and satisfying your customers. Those who can act on their data wherever it can be delivered and used the fastest will win.

The next wave of business transformation, according to some observers, demands an edge-centric approach. Such an edge-centric world requires a different technology approach, one that is more diverse and dispersed, and populated with more and a greater variety of devices and capabilities than ever before. The very definitions of on-prem and off-prem will need to change. IT operations work best in controlled, standardized, recognizable environments. Edge use cases, notes HPE, are anything but. In short, the edge will be populated by a wide assortment of constantly changing devices, new and old, with different capabilities. The Z, for sure, is an old-timer on the edge. Remember 3270 terminals and their many variations? These were dumb terminals, or green screens, as some referred to them. Your organization might have had hundreds, maybe even thousands. It might still have some.

IBM, however, apparently has selected 5G as its primary channel to the edge. Security, in fact, may turn into its biggest strength. “We can apply pervasive encryption just about everywhere,” noted Tarun Chopra, VP, IBM Z Hybrid Cloud. 

Chopra rattles off a list of IBM security options, starting with its Data Privacy Passports and embedded Z security, which it can send out anywhere desired on the edge. The Z15, he adds, should be particularly good at this. More can still be done to bolster the security strategy. “It’s not perfect yet, but more is in the works,” he adds.

Much of that work needs to be done by IT and the business itself. For instance, the organization needs to identify the edge devices it most wants as well as how it wants to communicate with them. How much communication does it anticipate, and for what purposes? How fast does it need to be? Will 4G do, or do you need 5G?

IBM ran through this exercise for itself in an effort to automate and accelerate quality-inspection processes. To begin, the team decided to use IBM’s own Industry 4.0 solutions to handle the quality-inspection processes. The Systems Manufacturing team then developed an AI-powered, automated inspection solution that runs at the edge for maximum efficiency. The solution combines two of IBM’s leading AI and edge solutions, IBM Maximo® Visual Inspection and IBM Edge Application Manager software.

Of course you would probably have to purchase those capabilities from somebody or build them yourself. DancingDinosaur never encourages a reader to build it themselves, as tempting as it might seem, if quality capabilities can be readily purchased. Save yourself the headaches and just buy it.

In terms of inspection efficiency, the IBM team reported a 20% reduction in false-positive defect detections. In terms of cost, the team reported a 20% savings on software maintenance. Those sound like acceptable results to DancingDinosaur. Would they work for your organization?


