Posts Tagged ‘mainframe’

IBM Brings Red Hat Ansible to Z

March 23, 2020

From the day IBM announced its $34 billion acquisition of Red Hat in October 2018, DancingDinosaur had two questions: 1) how the company could recoup its investment in the open source software company, and 2) what the deal implied for the future of the z.


With about a billion dollars in open source revenue, Red Hat was the leading open source software player, but getting from a billion dollars to $34 billion is a big leap. In February IBM announced that Red Hat’s OpenShift middleware would work with the z and LinuxONE. OpenShift is a DevOps play for hybrid cloud environments, a big interest of IBM.

Along with the availability of OpenShift for z, IBM also announced that Cloud Pak for Applications is available for the z and LinuxONE. In effect, this supports the modernization of existing apps and the building of new cloud-native apps. It will be further enhanced by the delivery of new Cloud Paks for the z and LinuxONE, announced by IBM last summer. Clearly the z is not being abandoned anytime soon.

Last week, IBM announced the availability of Red Hat Ansible Certified Content for IBM Z, enabling Ansible users to automate IBM Z applications and IT infrastructure. This means that no matter what mix of infrastructure or clients you are working with, IBM is bringing automation to the z, helping you manage and integrate it across the hybrid environment through a single control panel.

Ansible functionality for z/OS, according to IBM, will empower z clients to simplify configuration, access resources, leverage existing automation, and streamline automation of operations using the same technology stack they can use across their entire enterprise. Delivered as a fully supported, enterprise-grade solution via Content Collections, Red Hat Ansible Certified Content for Z provides easy automation building blocks to accelerate the automation of z/OS and z/OS-based software. The initial core collections include connection plugins, action plugin modules, and a sample playbook to automate z/OS tasks such as creating data sets, retrieving job output, and submitting jobs.
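For readers who want a feel for what that looks like in practice, here is a minimal, hypothetical sketch in Python of kicking off such a playbook from an Ansible control node. The playbook and inventory file names are placeholders, and it assumes Ansible plus IBM's z/OS collection (ibm.ibm_zos_core) and a z/OS host inventory are already set up.

```python
# Hypothetical driver script, not IBM sample code: it simply shells out to the
# standard ansible-playbook CLI from a control node. zos_jobs.yml and
# zos_inventory.yml are placeholder names for your own files.
import subprocess
import sys

def run_playbook(playbook: str, inventory: str) -> int:
    """Run ansible-playbook against the given inventory and return its exit code."""
    result = subprocess.run(["ansible-playbook", "-i", inventory, playbook], check=False)
    return result.returncode

if __name__ == "__main__":
    # e.g., a playbook that creates a data set, submits a job, and retrieves its output
    sys.exit(run_playbook("zos_jobs.yml", "zos_inventory.yml"))
```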

For those not familiar with Ansible, as Wikipedia explains, Ansible is an open source software provisioning, configuration management, and application deployment tool. For more on Ansible, see https://en.wikipedia.org/wiki/Ansible_(software).

IBM needed to modify Ansible to work with the z and hybrid clouds. The Certified Content will be available in Automation Hub, with an upstream open source version offered on Ansible Galaxy.

Over the last several months, IBM improved the z developer experience by bringing DevOps and industry-standard tools like Git and Jenkins to the z. For instance, it announced IBM Z Open Editor, IBM Developer for z/OS V14.2.1, and Zowe, an open source framework for z/OS, which DancingDinosaur covered in August 2018. In February IBM announced the availability of Red Hat OpenShift on IBM Z, which enables developers to build, run, manage, and modernize cloud-native workloads on their choice of architecture.

Now, Ansible allows developers and operations to break down traditional technology silos and centralize automation while leveraging the performance, scale, control, and security of the z.

What other goodies for the z will IBM pull from its Red Hat acquisition? Stockholders should hope they add up to at least $34 billion.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/

Power9 Summit Fights COVID-19

March 16, 2020

IBM has unleashed its currently top-rated supercomputer, Summit, to simulate 8,000 chemical compounds in a matter of days in a hunt for something that will impact the COVID-19 infection process by binding to the virus’s spike, a key early step in coming up with an effective vaccine or cure. In the first few days Summit already identified 77 small-molecule compounds, such as medications and natural compounds, that have shown the potential to impair COVID-19’s ability to dock with and infect host cells.

POWER9 Summit Supercomputer battles COVID-19

 

The US Department of Energy turned to the IBM Summit supercomputer to help in the fight against COVID-19, which appears almost unstoppable, having swept through 84 countries on every continent except Antarctica, according to IBM. The hope is that by quickly culling the most likely initial chemical candidates, lab researchers can get an early jump on the search for an effective cure.

As IBM explains it, viruses infect cells by binding to them and using a ‘spike’ to inject their genetic material into the host cell. When trying to understand new biological compounds like viruses, researchers in wet labs grow the micro-organism and see how it reacts in real life to the introduction of new compounds, but this can be a slow process without computers that can perform fast digital simulations to narrow down the range of potential variables. And even then there are challenges.

Computer simulations can examine how different variables react with different viruses, but when each of those variables can comprise millions or even billions of unique pieces of data, compounded by the need to run multiple simulations, the job isn’t trivial. Very quickly this becomes a time-intensive process, especially if you are using commodity hardware.
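The scale problem is easier to see with a toy example. Screening is embarrassingly parallel: each candidate compound can be scored independently, so throughput grows with however much hardware you can throw at it. Here is a minimal Python sketch of that pattern, with a placeholder scoring stub standing in for the GPU-accelerated molecular dynamics codes actually run on Summit.

```python
# Toy illustration only; the real screening on Summit used molecular dynamics
# packages, not this placeholder scoring stub.
from concurrent.futures import ProcessPoolExecutor
import random

def dock_score(compound_id):
    """Stand-in for an expensive docking simulation that estimates how strongly
    a compound binds to the viral spike protein (lower score = better)."""
    rng = random.Random(compound_id)           # deterministic fake "simulation"
    return compound_id, rng.uniform(-12.0, 0.0)

if __name__ == "__main__":
    compounds = [f"compound-{i:04d}" for i in range(8000)]   # 8,000 candidates
    # Each candidate is scored independently, so the screen parallelizes almost
    # perfectly: more nodes and GPUs mean proportionally less wall-clock time.
    with ProcessPoolExecutor() as pool:
        scores = list(pool.map(dock_score, compounds))
    top_hits = sorted(scores, key=lambda s: s[1])[:77]       # keep the best 77
    print(top_hits[:5])
```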

But, IBM continued, by using Summit, researchers were able to simulate 8,000 compounds in a matter of days to model which ones might impact the infection process by binding to the virus’s spike. As of last week, they had identified dozens of small-molecule compounds, such as medications and natural compounds, that have shown the potential to impair COVID-19’s ability to dock with and infect host cells.

“Summit was needed to rapidly get the simulation results we needed. It took us a day or two whereas it would have taken months on a normal computer,” said Jeremy Smith, Governor’s Chair at the University of Tennessee, director of the UT/ORNL Center for Molecular Biophysics, and principal researcher in the study. “Our results don’t mean that we have found a cure or treatment for COVID-19. But we are very hopeful  that our computational findings will both inform future studies and provide a framework that the subsequent researchers can use to further investigate these compounds. Only then will we know whether any of them exhibit the characteristics needed to mitigate this virus.” 

Even after the researchers turn over the most likely possibilities to the medical scientists, they are still a long way from a cure. The medical folks will take the candidates into the physical wet lab and do whatever they do to determine whether a compound might work.

Eventually, if they are lucky, they will end up with something promising, which then has to be tested against the coronavirus and COVID-19. Published expert estimates suggest this can take a year or two, or more.

Summit gave the researchers a jump start with its massive data processing capability, enabled by 4,608 IBM Power Systems AC922 server nodes, each equipped with two IBM POWER9 CPUs and six NVIDIA Tensor Core V100 GPUs, giving it a peak performance of 200 petaflops, in effect more powerful than one million high-end laptops.
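A quick back-of-the-envelope check of that laptop comparison, using only the peak numbers quoted above, looks like this.

```python
# Back-of-the-envelope check of the comparison above, using peak figures only
# (it ignores memory, interconnect, and everything else that matters in practice).
peak_flops = 200e15      # Summit peak: 200 petaflops
nodes = 4608             # IBM Power Systems AC922 server nodes
laptops = 1_000_000

print(f"per node:   {peak_flops / nodes / 1e12:.1f} teraflops")     # ~43.4
print(f"per laptop: {peak_flops / laptops / 1e9:.0f} gigaflops")    # ~200, a plausible laptop peak
```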

Might quantum computing have sped up the process even more? IBM didn’t report throwing one of its quantum machines at the problem, relying instead on Summit, which has already been acclaimed as the world’s fastest supercomputer.

Nothing stays the same in the high performance computing world. HEXUS reports that when time is of the essence and lives are at stake, the value of supercomputers is highly evident. Now a new machine, touted as the world’s first 2 exaflops+ supercomputer, is set to begin operations in 2023. This AMD-powered giant, HEXUS notes, is claimed to be about 10x faster than Summit. That’s good to know, but let’s hope the medical researchers have already beaten the coronavirus and COVID-19 by then.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/

Do Your ISVs Run Current Z Systems?

March 9, 2020

How many of your mainframe ISVs have the latest z15? Here is my mea culpa: DancingDinosaur has spoken with many mainframe ISVs repeatedly over the years–usually their PR people or SMEs anyway–and never specifically asked if they were running the latest z at the time. I’m embarrassed; it will now be the first question I ask when they come looking for publicity.

IBM z15

DancingDinosaur recently spoke with Matt Deres, the CIO of Rocket Software, a leading mainframe ISV, specifically because the company had just upgraded from a z14 to the latest z15. For the record, Rocket Software is a privately held ISV founded in 1990. Rocket develops products in such fields as analytics, networks, data, storage, and enterprise software. The firm’s products are designed to run on mainframes, Linux/Unix/Windows, IBM i, cloud, and hybrid/virtualized systems.

“The main reason we got it was to make sure we’re using the same platform our customers are using and buying,” explained Deres. In addition, they wanted to take advantage of the new features and functionality.  Their system is a z15 Model 717, running 9 IFLs.

The company is committed to keeping current with the technology. That means moving to a z16 in 24-30 months. “We usually lag a few months behind,” he admits. Even with that small lag, Rocket ensures its software remains state-of-the-art.

Currently the company’s 900 people are using the z15. These include developers, sales engineers, and consultants. And the transition from the z14 to the z15 proved seamless. People kept working without interruption.

Granted, small ISVs may have difficulty securing a fully configured z, and not every ISV can be Rocket Software when it comes to staying current. There are, however, other ways to gain access to z functionality, even if they are not the same as having an actual z, even an older, modestly configured one.

For example, IBM Workload Simulator for z/OS and OS/390 can simulate a network of terminals and their associated messages. This solution for stress, performance, regression, function, and capacity planning testing eliminates the need for large amounts of terminal hardware and operator time, and provides analysis through log list, log compare, and response time utilities.

Maybe the most popular is the IBM z Personal Development Tool (zPDT), which lets developers emulate z/OS on their x86 desktops. z/OS on x86 may be one of the easiest ways to grow a mainframe ISV business, especially if budget is tight.

Finally, there is Hercules, an emulator. Hercules, as described by Wikipedia, allows software written for IBM mainframes (System/370, System/390, and zSeries/System z) and plug-compatible mainframes (such as Amdahl) to run on other types of computer hardware, notably low-cost personal computers. Development was started in 1999 by Roger Bowler, a mainframe systems programmer.

Hercules runs under multiple parent operating systems, including GNU/Linux, Microsoft Windows, FreeBSD, Solaris, and Mac OS X, and is released under the open source QPL license. A vendor (or distributor) must still provide an operating system, and the user must install it. Hercules reportedly was the first mainframe emulator to incorporate 64-bit z/Architecture support.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/

Red Hat OpenShift Container Platform on z

February 20, 2020

IBM is finally starting to capitalize on last year’s $34 billion acquisition of Red Hat for z shops. If you had a new z and it ran Linux, you would have no problem running Red Hat products, or so the company line went. Well, in mid-February IBM announced that Red Hat’s OpenShift Container Platform is now available on the z and LinuxONE, a z with built-in Linux optimized for the underlying hardware.

OpenShift comes to z and LinuxONE

As the company puts it: the availability of OpenShift for z and LinuxONE is a major milestone for both hybrid multicloud and enterprise computing. OpenShift, a form of middleware for use with DevOps, supports cloud-native applications being built once and deployed anywhere, including to on-premises enterprise servers, especially the z and LinuxONE. This new release results from collaboration between the IBM and Red Hat development teams and discussions with early adopter clients.

Working from its hybrid cloud strategy, the company has created a roadmap for bringing the ecosystem of enterprise software to the OpenShift platform. IBM Cloud Paks containerize key IBM and open source software components to help enable faster enterprise application development and delivery. In addition to the availability of OpenShift for z, IBM also announced that IBM Cloud Pak for Applications is available for the z and LinuxONE. In effect, it supports the modernization of existing apps and the building of new cloud-native apps. In addition, as announced last August, it is the company’s intention to deliver additional Cloud Paks for the z and LinuxONE.

Red Hat is a leader in hybrid cloud and enterprise Kubernetes, with more than 1,000 customers already using Red Hat OpenShift Container Platform. With the availability of OpenShift for the z and LinuxONE, the agile cloud-native world of containers and Kubernetes, which has become the de facto open standard for containers and orchestration, is now reinforced by the security features, scalability, and reliability of IBM’s enterprise servers.

“Containers are the next generation of software-defined compute that enterprises will leverage to accelerate their digital transformation initiatives,” says Gary Chen, Research Director at IDC, in a published report.  “IDC estimates that 71% of organizations are in the process of implementing containers and orchestration or are already using them regularly. IDC forecasts that the worldwide container infrastructure software opportunity is growing at a 63.9 % 5-year CAGR and is predicted to reach over $1.5B by 2022.”

By combining the agility and portability of Red Hat OpenShift and IBM Cloud Paks with the security features, scalability, and reliability of z and LinuxONE, enterprises will have the tools to build new cloud-native applications while also modernizing existing ones. Deploying Red Hat OpenShift and IBM Cloud Paks on z and LinuxONE reinforces key strengths and offers additional benefits (a minimal deployment sketch follows the list):

  • Vertical scalability enables existing large monolithic applications to be containerized, and horizontal scalability enables support for large numbers of containers in a single z or LinuxONE enterprise server
  • Protection of data from external attacks and insider threats, with pervasive encryption and tamper-responsive protection of encryption keys
  • Availability of 99.999%  to meet service levels and customer expectations
  • Integration and co-location of cloud-native applications on the same system as the data, ensuring the fastest response times
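To give a feel for what deployment looks like, here is a minimal, hypothetical sketch using the standard Kubernetes Python client against the API that OpenShift exposes. The image, namespace, and app name are placeholders; the only Z-specific piece is pinning the pods to s390x worker nodes.

```python
# Hypothetical illustration: deploy a container to s390x (IBM Z / LinuxONE)
# worker nodes through the standard Kubernetes API exposed by OpenShift.
from kubernetes import client, config

config.load_kube_config()          # reuses your current oc/kubectl login context
apps = client.AppsV1Api()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="hello-z"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "hello-z"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "hello-z"}),
            spec=client.V1PodSpec(
                # schedule only onto IBM Z (s390x) worker nodes
                node_selector={"kubernetes.io/arch": "s390x"},
                containers=[client.V1Container(
                    name="hello-z",
                    image="registry.example.com/hello-z:latest",  # placeholder image
                )],
            ),
        ),
    ),
)

apps.create_namespaced_deployment(namespace="demo", body=deployment)
```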

IBM z/OS Cloud Broker helps enable OpenShift applications to interact with data and applications on IBM Z. It is the first software product to provide access to z/OS services to the broader development community.

To more easily manage the resulting infrastructure, organizations can license the IBM Cloud Infrastructure Center, an Infrastructure-as-a-Service offering that provides simplified infrastructure management in support of z/VM-based Linux virtual machines on the z and LinuxONE.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/

IBM Introduces New Flash Storage Family

February 14, 2020

IBM describes this mainly as a simplification move. The company is eliminating two current storage lines, Storwize and FlashSystem A9000, and replacing them with a series of flash storage systems that scale from entry level to enterprise.

Well, uh, not quite enterprise as DancingDinosaur readers might think of it. No changes are planned for the DS8000 storage systems, which are focused on the mainframe market. “All our existing product lines, not including our mainframe storage, will be replaced by the new FlashSystem family,” said Eric Herzog, IBM’s chief marketing officer and vice president of worldwide storage channel, in a published report earlier this week.

The move removes two incompatible storage lines from the IBM product lineup and replaces them with a line that provides compatible storage software and services from entry level to the high end of the enterprise, mainframe excluded, Herzog explained. The new FlashSystem family promises more functions, more features, and lower prices, he continued.

Central to the new FlashSystem family is NVMe, which comes in multiple flavors. NVM Express (NVMe), or Non-Volatile Memory Host Controller Interface Specification (NVMHCIS), is an open logical device interface specification for accessing non-volatile storage media attached via a PCI Express (PCIe) bus.

At the top of the new family is the NVMe and multicloud ultra-high-throughput storage system, a validated system implemented by IBM. IBM promises unmatched NVMe performance, storage class memory (SCM), and IBM FlashCore technology, plus the features of IBM Spectrum Virtualize, to support the most demanding workloads.

IBM multi-cloud flash storage family system

Next up are the IBM FlashSystem 9200 and IBM FlashSystem 9200R, IBM-tested and validated rack solutions designed for the most demanding environments. They combine the extreme performance of end-to-end NVMe, IBM FlashCore technology, and the ultra-low latency of storage class memory (SCM) with IBM Spectrum Virtualize and AI predictive storage management and proactive support from Storage Insights. The FlashSystem 9200R is delivered assembled, with installation and configuration completed by IBM to ensure a working multicloud solution.

In the middle of the family are the IBM FlashSystem 7200 and FlashSystem 7200H. As IBM puts it, these offer end-to-end NVMe, the innovation of IBM FlashCore technology, the ultra-low latency of storage class memory (SCM), the flexibility of IBM Spectrum Virtualize, and the AI predictive storage management and proactive support of Storage Insights, all in a powerful 2U all-flash or hybrid flash array. The FlashSystem 7200 delivers mid-range storage while allowing the organization to add the multicloud technology that best supports the business.

At the bottom of the line is the entry enterprise all-flash storage solution, which brings end-to-end NVMe capabilities and flash performance to the affordable FlashSystem 5100. As IBM describes them, the FlashSystem 5010 and FlashSystem 5030 (formerly the IBM Storwize V5010E and Storwize V5030E, still there, just renamed) are all-flash or hybrid flash solutions intended to provide enterprise-grade functionality without compromising affordability or performance. Built with the flexibility of IBM Spectrum Virtualize, AI-powered predictive storage management, and proactive support from Storage Insights, the FlashSystem 5000 family helps make modern technologies such as artificial intelligence accessible to enterprises of all sizes.

IBM likes the words affordable and affordability in discussing this new storage family. But, as is typical with IBM, nowhere will you see a price or a reference to cost/TB or cost/IOPS or the cost of anything, although these are crucial metrics for evaluating any flash storage system. DancingDinosaur expects this after 20 years of writing about the z. Also, as I wrote at the outset, the z is not even included in this new flash storage family, so we don’t even have to chuckle if they describe z storage as affordable.
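For readers who do get a quote, the math the post is asking for is simple. The prices and capacities below are made-up placeholders, not IBM figures.

```python
# Hypothetical numbers only; IBM publishes no list prices here. This is simply
# how one would compare arrays on the metrics the post mentions: cost/TB and cost/IOPS.
configs = {
    "entry array (hypothetical)":     {"price": 75_000,  "usable_tb": 92,  "iops": 400_000},
    "mid-range array (hypothetical)": {"price": 250_000, "usable_tb": 460, "iops": 1_200_000},
}

for name, c in configs.items():
    cost_per_tb = c["price"] / c["usable_tb"]
    cost_per_k_iops = c["price"] / c["iops"] * 1000
    print(f"{name}: ${cost_per_tb:,.0f}/TB, ${cost_per_k_iops:,.2f} per 1,000 IOPS")
```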

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/

Meet IBM’s New CEO

February 6, 2020

Have to admire Ginni Rometty. She survived 19 consecutive losing quarters (one quarter shy of 5 years), which DancingDinosaur and the rest of the world covered with monotonous regularity, and she was not bounced out until this January. Memo to readers: keep that in mind if you start feeling performance heat from top management. Can’t imagine another company that would tolerate it, but what do I know.

Arvind Krishna becomes the Chief Executive Officer and a member of the IBM Board of Directors effective April 6, 2020. Krishna is currently IBM Senior Vice President for Cloud and Cognitive Software and was a principal architect of the company’s acquisition of Red Hat. The cloud/Red Hat strategy has only just started to show signs of payback.

As IBM writes: under Rometty’s leadership, IBM acquired 65 companies; built out key capabilities in hybrid cloud, security, industry, data, and AI, both organically and inorganically; and successfully completed one of the largest technology acquisitions in history (Red Hat). She reinvented more than 50% of IBM’s portfolio, built a $21 billion hybrid cloud business, and established IBM’s leadership in AI, quantum computing, and blockchain, while divesting nearly $9 billion in annual revenue to focus the portfolio on IBM’s high value, integrated offerings. Part of that was the approximately $34 billion Red Hat acquisition, IBM’s, and possibly the IT industry’s, biggest to date. Rometty isn’t going away all that soon; she continues as executive chairman of the Board.

It is way too early to get IBM’s 1Q2020 results, which will cover the last quarter of Rometty’s reign. The fourth quarter of 2019, at least, was positive, especially after all those quarters of revenue loss. The company reported $21.8 billion in revenue, up 0.1 percent. Red Hat revenue was up 24 percent. Cloud and cognitive software was up 9 percent, while systems, which includes the z, was up 16 percent.

Total cloud revenue, the new CEO Arvind Krishna’s baby, was up 21 percent. Even with z revenue up more than cloud and cognitive software, it is probably unlikely IBM will easily find a buyer for the z anytime soon. If IBM dumps it, it will probably have to pay somebody to take it, despite the z’s faithful, profitable blue chip customer base.

Although the losing streak has come to an end, Krishna still faces some serious challenges. For example, although DancingDinosaur has been enthusiastically cheerleading quantum computing as the future, there is no proven business model there. Except for limited adoption by a few early adopters, there is no widespread groundswell of demand for quantum computing, and the technology has not yet proven itself useful. Nor is there a ready pool of skilled quantum talent. If you wanted to try quantum computing, would you even know what to try or where to find skilled people?

Even in the area of cloud computing, where IBM finally is starting to show some progress, the company has yet to penetrate the top tier of players. Those players, Amazon, Google, and Microsoft Azure, are not likely to concede market share.

So here is DancingDinosaur’s advice to Krishna: be prepared to scrap for every point of cloud share and be prepared to spin a compelling case around quantum computing. Finally, don’t give up the z until the accountants and lawyers force you, which they will undoubtedly try to do. To the contrary, slash z prices and make it an irresistible bargain.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/ 

Montana Sidelines the Mainframe

January 21, 2020

Over the past 20+ years DancingDinosaur has written this story numerous times. It never ends exactly the way they think it will. Here is the one I encountered this past week.

IBM z15

But that doesn’t stop the PR writers from finding a cute way to write the story. This time the writers turned to references to the moon landings and trilby hats (huh?). Looks like a fedora to me, but what do I know; I only wear baseball hats. But they always have to come up with something that makes the mainframe sound completely outdated. In this case they wrote: “Mainframe computers, a technology that harkens back to an era of moon landings and men in trilby hats, are still widely used throughout government, but not in Montana for much longer.”

At least they didn’t write that the mainframe was dead and gone or forgotten. Usually, I follow up on stories like this months later and call whichever IT person is still there. I congratulate him or her and ask how it went. That’s when I usually start hearing ums and uhs. It turns out the mainframe is still there, handling those last few jobs they just can’t replace yet.

Depending on how playful I’m feeling that day, I ask him or her what happened to the justification presented at the start of the project. Or I might ask what happened to the previous IT person. 

Sometimes, I might even refer them to a recent DancingDinosaur piece that explains Linux on the mainframe or Java, or describes mainframes running the latest Docker container technology or microservices. I’m not doing this for spite; I’m just trying to build up my readership. DancingDinosaur hates losing any reader, even late in their game, so I always follow up with a link to DancingDinosaur.

In an interview published by StateScoop, Chief Information Officer Tim Bottenfield described how, over the last several years, the last remaining agencies using the state’s mainframe have migrated their data away from it and are now developing modern applications that can be moved to the state’s private, highly virtualized cloud environment. By spring 2021, Montana expects to be mainframe-free. I will make a note to call Bottenfield in spring 2021 and see how they are doing. Does anyone want to bet whether the mainframe actually is completely out of service and gone by then?

As you all know, mainframes can be expensive to maintain, particularly if it’s just to keep a handful of applications running, and those usually turn out to be mission-critical applications. Of the three major applications Montana still runs on its mainframe, two are used by the Montana Department of Public Health and Human Services, which is in the process of recoding those programs to work on modern platforms, as if the z15 isn’t modern.

They haven’t told us whether these applications handle payments or deliver critical services to citizens. Either way it will not be pretty if such applications go down. The third is the state’s vehicle titling and registration system, which is being rebuilt to run out of the state’s data center. Again, we don’t know much about the criticality of these systems. But think how you might feel if you can’t get accurate or timely information from one of these systems. I can bet you wouldn’t be a happy camper; neither would I.

Systems like these are difficult to get right the first time, if at all. This is especially true if you will be using the latest hybrid cloud and services technologies. Yes, skilled mainframe people are hard to find and retain, but so are technically skilled and experienced people of any kind. If I were a decade younger, I could be attracted to the wide open spaces of Montana as a relief from the congestion of Boston. But I’m not the kind of hire Montana needs or wants. Stay tuned for when I check back in spring 2021.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/ 

IBM Cloud Pak–Back to the Future

December 19, 2019

It had seemed that IBM was in a big rush to get everybody to cloud and hybrid cloud. But then in a recent message, it turned out maybe not such a rush. 

What that means is the company believes coexistence will involve new and existing applications working together for some time to come. Starting at any point, new features may be added to existing applications. Eventually a microservices architecture should be exposed to new and existing applications. Whew, this is not something you should feel compelled to do today or next quarter or in five years, maybe not even in ten years.


Here is more from the company earlier this month. Introducing its latest Cloud Paks as enterprise-ready cloud software, the company presents them as containerized software packaged with open source components, pre-integrated with a secure-by-design container platform and common operational services consisting of logging, monitoring, security, and identity access management. DancingDinosaur tried to keep up for a couple of decades but in recent years has given up. Thankfully, no one is counting on me to deliver the latest code fast.

IBM has been promoting packaged software  and hardware for as long as this reporter has been following the company, which was when my adult married daughters were infants. (I could speed them off to sleep by reading them the latest IBM white paper I had just written for IBM or other tech giants. Don’t know if they retained or even appreciated any of that early client/server stuff but they did fall asleep, which was my intent.)

Essentially IBM is offering enterprise-ready Cloud Paks, already packaged and integrated with hardware and software, ready to deploy. It worked back then, and I suspect it will work now with the latest containerized systems, because systems are more complex than ever before, not less by a long shot. Unless you have continuously retained and retrained your best people while continually refreshing your toolset, you’ll find it hard to keep up. You will need pre-integrated, packaged, containerized cloud offerings that work right out of the box.

This is more than just selling you a pre-integrated bundle. This is back to the future; I mean way back. Called Cloud Pak for Data System, it is what IBM describes as a fusion of hardware and software. The company chooses the right storage and hardware, all purpose-built by IBM in one system. That amounts to a convergence of storage, network, software, and data in a single system, all taken care of by IBM and deployed as containers and microservices. As I noted above, a deep trip back to the future.

IBM has dubbed it cloud-in-a-box. In short, this is an appliance. You can start very small, paying for what you use now; if later you want more, just expand it then. I am sure your IBM sales rep will be more than happy to provide the details. It appears from the briefing that there is an actual base configuration consisting of 2 enclosures with 32 or 128 TB. The company promises to install it and get you up and running in 4 hours, leaving only the final provisioning for you.

This works for existing mainframe shops too, at least those running Linux on the mainframe; LinuxONE shops are probably ideal. It appears all that z shops will need is DB2 and maybe Netezza. Much of the work will be done off the mainframe, so at least you should save some MIPS.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/ 

This is the last appearance of DancingDinosaur this year. It will reappear in the week of Jan. 6, 2020. Best wishes for the holidays.

IBM Suggests Astounding Productivity with Cloud Pak for Automation

November 25, 2019

DancingDinosaur thought IBM would not introduce another Cloud Pak until after the holidays, but I was wrong. Last week IBM launched Cloud Pak for Security. According to IBM, it helps an organization uncover threats, make more informed risk-based decisions, and prioritize its security team’s time.

More specifically, it connects the organization’s existing data sources to generate deeper insights. In the process you can access IBM and third-party tools to search for threats across any cloud or on-premises location, and quickly orchestrate actions and responses to those threats while leaving your data where it is.

DancingDinosaur’s only disappointment in IBM’s new security Cloud Pak, as with the other IBM Cloud Paks, is that it runs only on Linux. That means it doesn’t run RACF, the legendary IBM access control tool for z/OS. IBM’s Cloud Paks reportedly run on Z systems, but only those running Linux. Not sure how IBM can finesse this particular issue.

Of the five original IBM Cloud Paks (application, data, integration, multicloud management, and automation), only one offers the kind of payback that will wow top C-level execs: automation. Find Cloud Pak for Automation here.

To date, IBM reports, over 5,000 customers have used IBM Digital Business Automation to run their digital business. At the same time, IBM claims successful digitization has increased organizational scale and fueled the growth of knowledge work.

McKinsey & Company notes that such workers spend up to 28 hours each week on low value work. IBM’s goal with digital business automation is to bring digital scale to knowledge work and free these workers to work on high value tasks.

Such tasks include collaborating and using creativity to come up with new ideas, meeting and building relationships with clients, and resolving issues and exceptions. The payoff from applying intelligent automation to the low-value work, says IBM, can be staggering.

“We can reclaim 120 billion hours a year spent by knowledge workers on low value work by using intelligent automation,” declares IBM. So what value could you reclaim over the course of a year for your operation with, say, 100 knowledge workers earning $22 per hour, or maybe 1,000 workers earning $35 per hour? You can do the math.
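Here is that math, using the 28 low-value hours per week quoted from McKinsey above and the post’s own hypothetical headcounts and wages. It optimistically assumes every low-value hour could actually be reclaimed.

```python
# Rough math from the figures in the post: 28 low-value hours per week (McKinsey)
# and the post's own hypothetical headcounts and wages.
def reclaimable_value(workers, hourly_wage, low_value_hours_per_week=28, weeks=52):
    return workers * hourly_wage * low_value_hours_per_week * weeks

print(f"${reclaimable_value(100, 22):,.0f} per year")     # 100 workers at $22/hr   -> $3,203,200
print(f"${reclaimable_value(1000, 35):,.0f} per year")    # 1,000 workers at $35/hr -> $50,960,000
```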

As you would expect, automation is the critical component of this particular Cloud Pak. The main targets for enhancement or assistance among the rather broad category of knowledge workers are administrative/departmental work and expert work, which includes cross-enterprise work. IBM offers vendor management as one example.

The goal is to digitize core services by automating at scale and building low-code/no-code apps for your knowledge workers. The company wants to free what it calls digital workers, who are key to this plan, for higher-value work. IBM’s example of such an expert worker is a loan officer.

Central to IBM’s Cloud Pak for Automation is what IBM calls its Intelligent Automation Platform. Some of this is here now, according to the company, with more coming in the future. Here now is the ability to create apps using low code tooling, reuse assets from business automation workflow, and create new UI assets.

Coming up in some unspecified timeframe is the ability to enable digital workers to automate job roles, define and create content services to enable intelligent capture and extraction, and finally to envision and create decision services to offload and automate routine decisions.

Are your current and would-be knowledge workers ready to contribute or participate in this scheme? Maybe for some; it depends for others. To capture those billions of hours of increased productivity, however, they will have to step up to it. But you can be pretty sure IBM will do it for you if you ask.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/ 

 

z15 LinuxONE III for Hybrid Cloud

October 8, 2019

It didn’t take long following the introduction of the z15 for a LinuxONE to arrive. Meet the LinuxONE III, a z15 machine with dedicated built-in Linux. And it comes with the primary goodies that the z15 offers: automatic pervasive encryption of everything, along with a closely related privacy capability, Data Privacy Passports.

3-frame LinuxONE III

Z-quality security, privacy, and availability, it turns out, have become central to the mission of the LinuxONE III. The reason is simple: cloud. According to IBM, only 20% of workloads have been moved to the cloud. Why? Companies need assurance that their data privacy and security will not be breached. To many IT pros and business executives, the cloud remains the wild, wild west, where bad guys roam looking to steal whatever they can.

IBM is touting the LinuxONE III, which is built on its newly introduced z15, for hybrid clouds. The company has been preaching the gospel of clouds and, particularly, hybrid clouds for several years, which was its primary reason for acquiring Red Hat. Red Hat Linux is built into the LinuxONE III, probably its first formal appearance since IBM closed its acquisition of Red Hat this spring. 

With Red Hat and the z15, IBM is aiming to cash in on what it sees as a big opportunity in hybrid clouds. While the cloud brings the promise of flexibility, agility, and openness, the security concerns noted above have held back the other 80% of workloads. The LinuxONE III also promises cloud-native development.

Integrating the new IBM LinuxONE III as a key element of an organization’s hybrid cloud strategy adds another level of security, stability, and availability to its cloud infrastructure. It gives the organization both agile deployment and unbeatable levels of uptime, reliability, and security. While the cloud already offers appealing flexibility and costs, those last three capabilities, uptime, reliability, and security, are not usually associated with cloud computing. By security, IBM means 100% data encryption, automatically, from the moment the data arrives or is created. And it remains encrypted for the rest of its life, at rest or in transit.
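For contrast, here is a minimal sketch of the application-level encryption plumbing that developers otherwise have to write themselves, using the open source cryptography package; IBM’s pitch is that pervasive encryption on the z15 and LinuxONE III makes this kind of per-application effort unnecessary.

```python
# Conceptual contrast only: on the z15/LinuxONE III, pervasive encryption is done
# transparently by the platform. This sketch shows the per-application plumbing
# that approach is meant to eliminate (uses the open source 'cryptography' package).
from cryptography.fernet import Fernet

key = Fernet.generate_key()       # in practice the key would live in an HSM or key manager
cipher = Fernet(key)

record = b"customer-id=4711; balance=1532.07"
stored = cipher.encrypt(record)   # encrypted at rest...
sent = stored                     # ...and it stays encrypted in transit

assert cipher.decrypt(sent) == record   # only a key holder can read it back
print(stored[:24], b"...")
```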

Are those capabilities important? You bet. A Harris study commissioned by IBM found that 64 percent of all consumers have opted not to work with a business out of concerns over whether that business could keep their data secure. However, that same study found 76 percent of respondents would be more willing to share personal information if there was a way to fully take back and retrieve that data at any time. Thus the importance of the z15’s pervasive encryption and the new data passports.

IBM has previously brought out versions of its latest z running dedicated Linux. Initially it was a way to expand the z market through a reduced-cost z. DancingDinosaur doesn’t know the cost of the LinuxONE III. In the past these machines have been discounted, but given the $34 billion IBM spent to acquire Red Hat, the new machines might not be such a bargain this time.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/ 

