Posts Tagged ‘LinuxONE’

IBM Brings Red Hat Ansible to Z

March 23, 2020

From the day IBM announced its $34 billion acquisition of Red Hat last October, DancingDinosaur had two questions: 1) how could the company recoup its investment in the open source software company, and 2) what did it imply for the future of the z?


With about a billion dollars in open source revenue, Red Hat was the leading open source software player, but to get from a billion dollars to $34 billion is a big leap. In February, IBM announced Red Hat's OpenShift middleware would work with the z and LinuxONE. OpenShift is a DevOps play for hybrid cloud environments, a major focus for IBM.

Along with the availability of OpenShift for z, IBM also announced that Cloud Pak for Applications is available for the z and LinuxONE. In effect, this supports the modernization of existing apps and the building of new cloud-native apps. It will be further enhanced by the delivery of new Cloud Paks for the z and LinuxONE, announced by IBM last summer. Clearly the z is not being abandoned now.

Last week, IBM announced the availability of Red Hat Ansible Certified Content for IBM Z, enabling Ansible users to automate IBM Z applications and IT infrastructure. This means that no matter what mix of infrastructure or clients you are working with, IBM is bringing automation to the z, helping you manage and integrate it across the hybrid environment through a single control panel.

Ansible functionality for z/OS, according to IBM, will empower z clients to simplify configuration, access resources, leverage existing automation, and streamline automation of operations using the same technology stack they can use across their entire enterprise. Delivered as a fully supported enterprise-grade solution via Content Collections, Red Hat Ansible Certified Content for z provides easy automation building blocks to accelerate the automation of z/OS and z/OS-based software. These initial core collections include connection plugins, action plugin modules, and a sample playbook to automate tasks for z/OS such as creating data sets, retrieving job output, and submitting jobs.
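The sample playbook IBM mentions gives a feel for what this looks like in practice. Below is a minimal sketch of such a playbook; the module names (zos_data_set, zos_job_submit, zos_job_output) follow IBM's ibm_zos_core collection, but the parameters and data set names here are illustrative assumptions, not something tested against a real z/OS system.

```yaml
# Illustrative sketch only: module names follow the ibm_zos_core
# collection; parameters and data set names are assumptions.
- name: Automate basic z/OS tasks
  hosts: zos_host
  collections:
    - ibm.ibm_zos_core
  tasks:
    - name: Create a partitioned data set
      zos_data_set:
        name: USER.TEST.DATA
        type: pds

    - name: Submit a batch job from a data set
      zos_job_submit:
        src: USER.TEST.JCL(HELLO)
        location: DATA_SET

    - name: Retrieve the job's output
      zos_job_output:
        job_name: HELLO*
```

You would run this with ansible-playbook against an inventory entry pointing at the z/OS host; consult IBM's collection documentation for the exact parameters your release supports.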

For those not familiar with Ansible: as Wikipedia explains, Ansible is an open-source software provisioning, configuration management, and application-deployment tool. For more on Ansible, see https://en.wikipedia.org/wiki/Ansible_(software).

IBM needed to modify Ansible to work with the z and hybrid clouds. The Certified Content will be available in Automation Hub, with an upstream open source version offered on Ansible Galaxy.

Over the last several months, IBM improved the z developer experience by bringing DevOps and industry-standard tools like Git and Jenkins to the z. For instance, it announced IBM Z Open Editor; IBM Developer for z/OS V14.2.1; and Zowe, an open source framework for z/OS, which DancingDinosaur covered in August 2018. In February IBM announced the availability of Red Hat OpenShift on IBM Z, which enables developers to build, run, manage, and modernize cloud-native workloads on their choice of architecture.

Now, Ansible allows developers and operations to break down traditional internal and historical technology silos to centralize automation, while leveraging the performance, scale, control, and security provided by the z.

What more goodies for z will IBM pull from its Red Hat acquisition? Stockholders should hope they add up to at least $34 billion.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/

Power9 Summit Fights COVID-19

March 16, 2020

IBM has unleashed its currently top-rated supercomputer, Summit, to simulate 8,000 chemical compounds in a matter of days, hunting for candidates that will impact the COVID-19 infection process by binding to the virus's spike, a key early step in coming up with an effective vaccine or cure. In the first few days, Summit identified 77 small-molecule compounds, such as medications and natural compounds, that have shown the potential to impair COVID-19's ability to dock with and infect host cells.

POWER9 Summit Supercomputer battles COVID-19

 

The US Department of Energy turned to the IBM Summit supercomputer to help in the fight against COVID-19, which appears almost unstoppable, having swept through 84 countries on every continent except Antarctica, according to IBM. The hope is that by quickly culling the most likely initial chemical candidates, lab researchers could get an early jump on the search for an effective cure.

As IBM explains it, viruses infect cells by binding to them and using a 'spike' to inject their genetic material into the host cell. When trying to understand new biological compounds like viruses, researchers in wet labs grow the micro-organism and see how it reacts in real life to the introduction of new compounds. This can be a slow process without computers that can perform fast digital simulations to narrow down the range of potential variables. And even then there are challenges.

Computer simulations can examine how different variables react with different viruses, but when each of those variables can comprise millions or even billions of unique pieces of data, compounded by the need to run multiple simulations, this isn't trivial. Very quickly it can become a very time-intensive process, especially if you are using commodity hardware.

But, IBM continued, by using Summit, researchers were able to simulate 8,000 compounds in a matter of days to model which ones might impact the infection process by binding to the virus's spike. As of last week, they had identified dozens of small-molecule compounds, such as medications and natural compounds, that have shown the potential to impair COVID-19's ability to dock with and infect host cells.
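The pattern IBM describes (score a large library of compounds in silico, then pass only the top scorers to the wet lab) can be sketched in a few lines of Python. The random scoring function below is a stand-in for the real molecular-docking simulation, which is where Summit's horsepower actually goes.

```python
import random

random.seed(42)

# Stand-in for Summit's docking simulations: in reality each score
# comes from modeling how a compound binds to the viral spike protein.
def binding_score(compound):
    return random.random()

compounds = [f"compound-{i}" for i in range(8000)]

# Rank the library; here a lower score stands in for tighter binding.
scored = sorted(compounds, key=binding_score)

# Cull from 8,000 candidates down to the 77 strongest binders,
# mirroring the numbers in IBM's account.
candidates = scored[:77]
print(len(candidates))  # 77 compounds move on to wet-lab testing
```

The point of the exercise is the funnel shape: the expensive simulation runs once per compound, and only a tiny fraction of the library ever reaches the far more expensive wet lab.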

“Summit was needed to rapidly get the simulation results we needed. It took us a day or two whereas it would have taken months on a normal computer,” said Jeremy Smith, Governor’s Chair at the University of Tennessee, director of the UT/ORNL Center for Molecular Biophysics, and principal researcher in the study. “Our results don’t mean that we have found a cure or treatment for COVID-19. But we are very hopeful that our computational findings will both inform future studies and provide a framework that subsequent researchers can use to further investigate these compounds. Only then will we know whether any of them exhibit the characteristics needed to mitigate this virus.”

After the researchers turn over the most likely possibilities to the medical scientists, they are still a long way from finding a cure. The medical folks will take the candidates into the physical wet lab and run whatever tests determine whether a compound might work or not.

Eventually, if they are lucky, they will end up with something promising, which then has to be tested against the coronavirus and COVID-19. Published experts suggest this can take a year or two or more.

Summit gave the researchers a jump start with its massive data processing capability, enabled through its 4,608 IBM Power Systems AC922 server nodes, each equipped with two IBM POWER9 CPUs and six NVIDIA Tensor Core V100 GPUs, giving it a peak performance of 200 petaflops, in effect more powerful than one million high-end laptops.
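Those headline numbers multiply out as follows; note that the laptop comparison implies roughly 200 gigaflops per laptop, a figure we derive here rather than one IBM states.

```python
nodes = 4608          # IBM Power Systems AC922 server nodes
cpus_per_node = 2     # IBM POWER9 CPUs per node
gpus_per_node = 6     # NVIDIA Tensor Core V100 GPUs per node
peak_petaflops = 200  # Summit's stated peak performance

total_cpus = nodes * cpus_per_node
total_gpus = nodes * gpus_per_node
print(total_cpus, total_gpus)  # 9216 27648

# "More powerful than one million high-end laptops" implies about
# 200 gigaflops per laptop (1 petaflop = 1e6 gigaflops).
laptop_gigaflops = peak_petaflops * 1e6 / 1_000_000
print(laptop_gigaflops)  # 200.0
```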

Might quantum computing have sped up the process even more? IBM didn’t report throwing one of its quantum machines at the problem, relying instead on Summit, which has already been acclaimed as the world’s fastest supercomputer.

Nothing stays the same in the high performance computing world. HEXUS reports that when time is of the essence and lives are at stake, the value of supercomputers is highly evident. Now a new machine, touted as the world's first 2-exaflops-plus supercomputer, is set to begin operations in 2023. This AMD-powered giant, HEXUS notes, is claimed to be about 10x faster than Summit. That's good to know, but let's hope the medical researchers have already beaten the coronavirus and COVID-19 by then.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/

Do Your ISVs Run Current Z Systems

March 9, 2020

How many of your mainframe ISVs have the latest z15? Here is my mea culpa: DancingDinosaur has spoken with many mainframe ISVs repeatedly over the years (usually their PR people or SMEs anyway) and never specifically asked if they were running the latest z at the time. I'm embarrassed; it will now be the first question I ask when they come looking for publicity.

IBM Launches Their New Mainframe Called the IBM z15

IBM z15

DancingDinosaur recently spoke with Matt Deres, the CIO of Rocket Software, a leading mainframe ISV, specifically because they had recently upgraded from a z14 to the latest z15. For the record, Rocket Software is a privately held ISV founded in 1990. Rocket develops products in such fields as analytics, networks, data, storage, and enterprise software. The firm’s products are designed to run on mainframes, Linux/Unix/Windows, IBM i, cloud, and hybrid/virtualized systems.

“The main reason we got it was to make sure we’re using the same platform our customers are using and buying,” explained Deres. In addition, they wanted to take advantage of the new features and functionality.  Their system is a z15 Model 717, running 9 IFLs.

The company is committed to keeping current with the technology. That means moving to a z16 in 24-30 months. “We usually lag a few months behind,” he admits. Even with that small lag, Rocket ensures its software remains state-of-the-art.

Currently the company’s 900 people are using the z15. These include developers, sales engineers, and consultants. And the transition from the z14 to the z15 proved seamless. People kept working without interruption.

Granted, small ISVs may have difficulty securing a fully configured z, and not every ISV can be Rocket Software when it comes to staying current. There are, however, other ways to gain access to z functionality, even if they are not the same as having an actual z, even an older, modestly configured one.

For example, IBM Workload Simulator for z/OS and OS/390 can simulate a network of terminals and its associated messages. This solution for stress, performance, regression, function, and capacity planning testing eliminates the need for large amounts of terminal hardware and operator time by providing a powerful analysis with log list, log compare, and response time utilities.

Maybe the most popular is the IBM z Personal Development Tool (zPDT), which allows developers to emulate z/OS on their x86 desktops. z/OS on x86 may be one of the easiest ways to grow a mainframe ISV business, especially if budget is tight.

Finally, there is Hercules, an emulator. Hercules, as described by Wikipedia, allows software written for IBM mainframes (System/370, System/390, and zSeries/System z) and plug-compatible mainframes (such as Amdahl) to run on other types of computer hardware, notably on low-cost personal computers. Development was started in 1999 by Roger Bowler, a mainframe systems programmer.

Hercules runs under multiple parent operating systems, including GNU/Linux, Microsoft Windows, FreeBSD, Solaris, and Mac OS X, and is released under the open source QPL license. A vendor (or distributor) must still provide an operating system, and the user must install it. Hercules reportedly was the first mainframe emulator to incorporate 64-bit z/Architecture support.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/

Red Hat OpenShift Container Platform on z

February 20, 2020

IBM is finally starting to capitalize on last year's $34 billion acquisition of Red Hat for z shops. If you had a new z running Linux, you would have no problem running Red Hat products, or so the company line went. Well, in mid-February IBM announced Red Hat's OpenShift Container Platform is now available on the z and LinuxONE, a z with built-in Linux optimized for the underlying hardware.

OpenShift comes to z and LinuxONE

As the company puts it: the availability of OpenShift for z and LinuxONE is a major milestone for both hybrid multicloud and enterprise computing. OpenShift, a form of middleware for use with DevOps, supports cloud-native applications being built once and deployed anywhere, including to on-premises enterprise servers, especially the z and LinuxONE. This new release results from collaboration between IBM and Red Hat development teams and discussions with early adopter clients.

Working from its hybrid cloud strategy, the company has created a roadmap for bringing the ecosystem of enterprise software to the OpenShift platform. IBM Cloud Paks containerize key IBM and open source software components to help enable faster enterprise application development and delivery. In addition to the availability of OpenShift for z, IBM announced that IBM Cloud Pak for Applications is available for the z and LinuxONE. In effect, it supports the modernization of existing apps and the building of new cloud-native apps. In addition, as announced last August, the company intends to deliver additional Cloud Paks for the z and LinuxONE.

Red Hat is a leader in hybrid cloud and enterprise Kubernetes, with more than 1,000 customers already using Red Hat OpenShift Container Platform. With the availability of OpenShift for the z and LinuxONE, the agile cloud-native world of containers and Kubernetes, which has become the de facto open global standard for containers and orchestration, is now reinforced by the security features, scalability, and reliability of IBM's enterprise servers.

“Containers are the next generation of software-defined compute that enterprises will leverage to accelerate their digital transformation initiatives,” says Gary Chen, Research Director at IDC, in a published report.  “IDC estimates that 71% of organizations are in the process of implementing containers and orchestration or are already using them regularly. IDC forecasts that the worldwide container infrastructure software opportunity is growing at a 63.9 % 5-year CAGR and is predicted to reach over $1.5B by 2022.”
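IDC's forecast numbers are easy to sanity-check by reversing the compound-growth formula. Treating 2022 as the endpoint of five compounding periods is our assumption for illustration, not IDC's stated methodology.

```python
cagr = 0.639    # 63.9% five-year CAGR from the IDC forecast
target = 1.5e9  # "over $1.5B by 2022"
years = 5       # assumed five compounding periods ending in 2022

# Reverse the compound-growth formula: base = target / (1 + cagr) ** years
base = target / (1 + cagr) ** years
print(round(base / 1e6, 1))  # 126.8 -> roughly a $127M market in the base year
```

In other words, a 63.9% CAGR implies the container infrastructure software market was only in the low hundreds of millions of dollars when the forecast period began, which is what makes the growth rate so striking.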

By combining the agility and portability of Red Hat OpenShift and IBM Cloud Paks with the security features, scalability, and reliability of z and LinuxONE, enterprises will have the tools to build new cloud-native applications while also modernizing existing applications. Deploying Red Hat OpenShift and IBM Cloud Paks on z and LinuxONE reinforces key strengths and offers additional benefits:

  • Vertical scalability enables existing large monolithic applications to be containerized, and horizontal scalability enables support for large numbers of containers in a single z or LinuxONE enterprise server
  • Protection of data from external attacks and insider threats, with pervasive encryption and tamper-responsive protection of encryption keys
  • Availability of 99.999% to meet service levels and customer expectations
  • Integration and co-location of cloud-native applications on the same system as the data, ensuring the fastest response times

IBM z/OS Cloud Broker helps enable OpenShift applications to interact with data and applications on IBM Z. It is the first software product to provide access to z/OS services to the broader development community.

To more easily manage the resulting infrastructure, organizations can license the IBM Cloud Infrastructure Center, an Infrastructure-as-a-Service offering that provides simplified infrastructure management in support of z/VM-based Linux virtual machines on the z and LinuxONE.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/

Montana Sidelines the Mainframe

January 21, 2020

Over the past 20+ years DancingDinosaur has written this story numerous times. It never ends exactly the way they think it will. Here is the one I encountered this past week.

IBM z15

But that doesn't stop the PR writers from finding a cute way to write the story. This time the writers turned to references to the moon landings and trilby hats (huh?). Looks like a fedora to me, but what do I know; I only wear baseball hats. But they always have to come up with something that makes the mainframe sound completely outdated. In this case they wrote: mainframe computers, a technology that harkens back to an era of moon landings and men in trilby hats, are still widely used throughout government, but not in Montana for much longer.

At least they didn’t write that the mainframe was dead and gone or forgotten. Usually, I follow up on stories like this months later and call whichever IT person is still there. I congratulate him or her and ask how it went. That’s when I usually start hearing ums and uhs. It turns out the mainframe is still there, handling those last few jobs they just can’t replace yet.

Depending on how playful I’m feeling that day, I ask him or her what happened to the justification presented at the start of the project. Or I might ask what happened to the previous IT person. 

Sometimes, I might even refer them to a recent DancingDinosaur piece that explains Linux on the mainframe or Java, or describes mainframes running the latest Docker container technology or microservices. I'm not doing this for spite; I'm just trying to build up my readership. DancingDinosaur hates losing any reader, even if it's late in their game. So I always follow up with a link to DancingDinosaur.

In an interview published by StateScoop, Chief Information Officer Tim Bottenfield described how, over the last several years, the last remaining agencies using the state's mainframe have migrated their data away from it and are now developing modern applications that can be moved to the state's private and highly virtualized cloud environment. By spring 2021, Montana expects to be mainframe-free. I will make a note to call Bottenfield in spring 2021 and see how they are doing. Does anyone want to bet whether the mainframe actually is completely out of service and gone by then?

As you all know, mainframes can be expensive to maintain, particularly if it's just to keep a handful of applications running, which usually turn out to be mission-critical applications. Of the three major applications Montana still runs on its mainframe, two are used by the Montana Department of Public Health and Human Services, which is in the process of recoding those programs to work on modern platforms, as if the z15 isn't modern.

They haven’t told us whether these applications handle payments or deliver critical services to citizens. Either way it will not be pretty if such applications go down. The third is the state’s vehicle titling and registration system, which is being rebuilt to run out of the state’s data center. Again, we don’t know much about the criticality of these systems. But think how you might feel if you can’t get accurate or timely information from one of these systems. I can bet you wouldn’t be a happy camper; neither would I.

Systems like these are difficult to get right the first time, if at all. This is especially true if you will be using the latest hybrid cloud and services technologies. Yes, skilled mainframe people are hard to find and retain but so are any technically skilled and experienced people. If I were a decade younger, I could be attracted to the wide open spaces of Montana as a relief from the congestion of Boston. But I’m not the kind of hire Montana needs or wants. Stay tuned for when I check back in Spring 2021.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/ 

IBM Cloud Pak–Back to the Future

December 19, 2019

It had seemed that IBM was in a big rush to get everybody to cloud and hybrid cloud. But a recent message suggests maybe it's not in such a rush after all.

What that means is the company believes coexistence will involve new and existing applications working together for some time to come. Starting at any point, new features may be added to existing applications. Eventually a microservices architecture should be exposed to new and existing applications. Whew; this is not something you should feel compelled to do today or next quarter or in five years, maybe not even in 10 years.


Here is more from the company earlier this month. Introducing its latest Cloud Paks as enterprise-ready cloud software, the company presents them as containerized software packaged with open source components, pre-integrated with common operational services, on a secure-by-design container platform with operational services consisting of logging, monitoring, security, and identity access management. DancingDinosaur tried to keep up for a couple of decades but in recent years has given up. Thankfully, no one is counting on me to deliver the latest code fast.

IBM has been promoting packaged software and hardware for as long as this reporter has been following the company, which was when my adult married daughters were infants. (I could speed them off to sleep by reading them the latest IBM white paper I had just written for IBM or other tech giants. Don't know if they retained or even appreciated any of that early client/server stuff but they did fall asleep, which was my intent.)

Essentially IBM is offering Cloud Paks as enterprise-ready packages, already integrated with hardware and software and ready to deploy. It worked back then, as it will now, I suspect, with the latest containerized systems, because systems are more complex than ever before, not less by a long shot. Unless you have continuously retained and retrained your best people while continually refreshing your toolset, you'll find it hard to keep up. You will need pre-integrated and packaged containerized cloud packages that work right out of the box.

This is more than just selling you a pre-integrated bundle. This is back to the future; I mean way back. Called Cloud Pak for Data System, IBM is offering what it describes as a fusion of hardware and software. The company chooses the right storage and hardware, all purpose-built by IBM in one system. That amounts to convergence of storage, network, software, and data in a single system, all taken care of by IBM and deployed as containers and microservices. As I noted above, a deep trip back to the future.

IBM has dubbed it cloud-in-a-box. In short, this is an appliance. You can start very small, paying for what you use now. If later you want more, just expand it then. I am sure your IBM sales rep will be more than happy to provide you with the details. It appears from the briefing that there is an actual base configuration consisting of 2 enclosures with 32 or 128 TB. The company promises to install this and get you up and running in 4 hours, leaving only the final provisioning to you.

This works for existing mainframe shops too, at least those running Linux on the mainframe. LinuxONE shops are probably ideal. It appears all z shops will need is DB2 and maybe Netezza. Much of the work will be done off the mainframe so at least you should save some MIPS.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/ 

This is the last appearance of DancingDinosaur this year. It will reappear in the week of Jan. 6, 2020. Best wishes for the holidays.

z15 LinuxONE III for Hybrid Cloud

October 8, 2019

 

It didn't take long following the introduction of the z15 for a LinuxONE to arrive. Meet the LinuxONE III, a z15 machine with dedicated built-in Linux. And it comes with the primary goodies the z15 offers: automatic pervasive encryption of everything, along with a closely related privacy capability, Data Passport.

3-frame LinuxONE III

Z-quality security, privacy, and availability, it turns out, have become central to the mission of the LinuxONE III. The reason is simple: cloud. According to IBM, only 20% of workloads have been moved to the cloud. Why? Companies need assurance that their data privacy and security will not be breached. To many IT pros and business executives, the cloud remains the wild, wild west where bad guys roam looking to steal whatever they can.

IBM is touting the LinuxONE III, which is built on its newly introduced z15, for hybrid clouds. The company has been preaching the gospel of clouds and, particularly, hybrid clouds for several years, which was its primary reason for acquiring Red Hat. Red Hat Linux is built into the LinuxONE III, probably its first formal appearance since IBM closed its acquisition of Red Hat this spring. 

With Red Hat and the z15, IBM is aiming to cash in on what it sees as a big opportunity in hybrid clouds, where the promise of flexibility, agility, and openness has so far left most workloads untouched. The LinuxONE III also promises cloud-native development.

By integrating the new IBM LinuxONE III as a key element of a hybrid cloud strategy, an organization adds another level of security, stability, and availability to its cloud infrastructure. It gets both agile deployment and unbeatable levels of uptime, reliability, and security. While the cloud already offers appealing flexibility and costs, those last three capabilities (uptime, reliability, security) are not usually associated with cloud computing. By security, IBM means 100% data encryption automatically, from the moment the data arrives or is created. And it remains encrypted for the rest of its life, at rest or in transit.

Are those capabilities important? You bet. A Harris study commissioned by IBM found that 64 percent of all consumers have opted not to work with a business out of concerns over whether that business could keep their data secure. However, that same study found 76 percent of respondents would be more willing to share personal information if there was a way to fully take back and retrieve that data at any time. Thus the importance of the z15’s pervasive encryption and the new data passports.

IBM has previously brought out its latest z running dedicated Linux. Initially it was a way to expand the z market through a reduced-cost z. DancingDinosaur doesn't know the cost of the LinuxONE III. In the past these machines have been discounted, but given the $34 billion IBM spent to acquire Red Hat, the new machines might not be such a bargain this time.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/ 

Meet SUSE Enterprise Linux Server 12

February 25, 2019

A surprising amount of competition has emerged lately for Linux on the mainframe, but SUSE continues to sit near the top of the heap. Its newest release, SUSE Linux Enterprise 12, shipped last fall and should secure its position for some time to come.

SUSE touts SLE 12 as the latest version of its reliable, scalable and secure platform for efficiently deploying and managing highly available enterprise-class IT services in physical, virtual, or cloud environments. New products based on SLE 12 feature enhancements should allow for better system uptime, improved operational efficiency, and accelerated innovation. As the foundation for all SUSE data center operating systems and extensions, according to the company, SUSE Linux Enterprise meets the performance requirements of data centers with mixed IT environments while reducing the risk of technological obsolescence and vendor lock-in.

With SLE 12 the company also introduces an updated customer portal, SUSE Customer Center, to make it easier for customers to manage their subscriptions, access patches and updates, and communicate with SUSE customer support. It promises a new way to manage a SUSE account and subscriptions via one interface, anytime, anywhere.

Al Gillen, program vice president for servers and system software at IDC, said, “The industry is seeing growing movement of mission-critical workloads to Linux, with that trend expected to continue well into the future.” For Gillen, the modular design of SLE 12, as well as other mission-critical features like full system rollback and live kernel patching, helps address some of the key reservations customers express, and should help accelerate the adoption of Linux on z.

It's about time. Linux has been available on the z for 20 years, but only with the introduction of IBM LinuxONE a couple of years ago did IBM get serious about Linux on z. Around that time IBM also ported the Go programming language to LinuxONE. Go was developed by Google and is designed for building simple, reliable, and efficient software, making it easier for developers to combine the software tools they know with the speed, security, and scale offered by LinuxONE.

As expected, IBM has contributed code to the Go community.

IBM then brought Apple's Swift programming language to the party, first via the IBM Watson iOS SDK, which gives developers a Swift API to simplify integration with many of the Watson Developer Cloud services, all of which are available today and can be integrated with just a few lines of code. Soon after Apple introduced Swift as the new language for OS X and iOS application development, IBM began partnering with Apple to bring the power of Swift open source programming to the z. This was closely tied to Canonical's Ubuntu port to the z, which has already been released.

With SUSE Linux Enterprise Server 12 for x86_64, IBM Power Systems, and IBM z Systems, SUSE has boosted the platform's versatility, making it able to deliver business-critical IT services in a variety of physical, virtual, and cloud environments. New features like full system rollback, live kernel patching, and software modules increase data center uptime, improve operational efficiency, and accelerate the adoption of open source innovation. SLE 12 further builds on SUSE's leadership with Linux Containers technology and adds the Docker framework, which is now included as an integral part of the operating system.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at technologywriter.com.

IBM Enhances Storage for 2019

February 14, 2019

It has been a while since DancingDinosaur last looked closely at IBM’s storage efforts. The latest 4Q18 storage briefing actually was held on Feb. 5, 2019, followed by more storage announcements on 2/11 and 2/12. For your sake, this blog will not delve into each of these many announcements; you can, however, find them at the previous link.

Sacramento-San Joaquin River Delta–IBM RESEARCH

As IBM likes to say whenever it is trying to convey the value of data: “data is more valuable than oil.” Maybe it is time to update that: data is more valuable than fresh, clean water, which is quickly becoming the most precious commodity on earth.

IBM CEO Ginni Rometty says it yet another way: “80% of the world’s data, whether it’s decades of underwriting, pricing, customer experience, risk in loans… That is all with our clients. You don’t want to share it. That is gold,” maybe more valuable even than fresh water. Whatever metaphor you choose (gold, clean water, oil, or something else you perceive as priceless), this represents to IBM the value of data. To preserve that value, the data must be economically stored, protected, made accessible, analyzed, and selectively shared. That’s where IBM storage comes in.

And IBM storage has been on a modest multi-year growth trend. Since 2016, IBM reports shipping 700 new NVMe systems, 850 VersaStack systems, 3,000 DS8880 systems, and 5,500 PB of capacity; attracting 6,800 new IBM Spectrum (virtualized) storage customers; and selling 3,000 Storwize all-flash systems, along with shipping 12,000 all-flash arrays.

The bulk of the 2/5 storage announcements fell into 4 areas:

  1. IBM storage for containers and cloud
  2. AI storage
  3. Modern data protection
  4. Cyber resiliency

Except for modern data protection, much of this may be new to Z and Power data centers. However, some of the new announcements will interest Z shops. In particular, 219-135, a statement of direction: IBM intends to deliver Managed-from-Z, a new feature of IBM Cloud Private for Linux on IBM Z. This will enable organizations to run and manage IBM Cloud Private applications from IBM Linux on Z or LinuxONE platforms. The new capability furthers IBM’s commitment to deliver multi-cloud and multi-architecture cloud-native technologies on the platform of the customer’s choice. Watson, too, will now be available on more platforms through the newly announced Watson Anywhere, a version of IBM’s cognitive platform that can run on-premises, in IBM’s cloud, or in any other cloud, private or public.

Another interesting addition to the IBM storage line is the FlashSystem 9100. As IBM explains it, the FlashSystem 9100 combines the performance of flash and end-to-end Non-Volatile Memory Express (NVMe) with the reliability and innovation of IBM FlashCore technology and the rich features of IBM Spectrum Virtualize, all packed into a 2U enterprise-class storage system. Providing data-intensive multi-cloud storage capacity, FlashSystem 9100 is deeply integrated with the software-defined (virtualized) capabilities of IBM Spectrum Storage, allowing organizations to easily add the multi-cloud solutions that best support their business.

Finally, 219-029: IBM Spectrum Protect V8.1.7 and IBM Spectrum Protect Plus V10.1.3 deliver new application support and optimization for long-term data retention. Think of it this way: as the value of data increases, you will want to retain and protect more data in more ways for longer and longer. For this you will want the kind of flexible and cost-efficient storage available through Spectrum Protect.



Meet IBM Q System One

February 1, 2019

A couple of weeks ago, IBM slipped in a new quantum machine at CES. The new machine, dubbed IBM Q System One, is designed for both scientific and commercial computing. IBM described it as the first integrated universal approximate quantum computing system.

Courtesy of IBM

Approximate refers to the short coherence time of the qubits, explains Michael Houston, manager, Analyst Relations. Or, to put it another way: how long the qubits remain stable enough to run reliable and repeatable calculations. IBM Q systems report an industry-best average of 100 microseconds. That’s not enough time for a round of golf, but probably long enough to start running some serious quantum analytics.
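To put that 100-microsecond figure in perspective, here is a rough back-of-the-envelope sketch in Python. The coherence time is the number cited above; the roughly 100-nanosecond gate duration is an assumed, typical value for superconducting qubits, not an IBM-published figure.

```python
# Rough estimate: how many sequential gate operations fit inside a
# qubit's coherence window before decoherence dominates?

COHERENCE_TIME_US = 100.0   # average coherence time cited for IBM Q systems
GATE_TIME_US = 0.1          # assumed ~100 ns per gate (typical, not IBM-specific)

max_gates = int(COHERENCE_TIME_US / GATE_TIME_US)
print(f"Roughly {max_gates} sequential gates per coherence window")
```

In other words, even an industry-best coherence time buys only on the order of a thousand operations, which is why so much of the engineering described below goes into keeping the qubits stable.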

As described by IBM, the new machine family, the Q systems, is designed to one day tackle problems currently seen as too complex or too exponential in scale for classical (conventional) systems to handle. Such Q systems may use quantum computing to find new ways to model financial data, isolate key global risk factors to make better investments, find the optimal path across global systems for ultra-efficient logistics, or optimize fleet operations for improved deliveries.

The design of IBM Q System One includes a 9x9x9-foot cube case constructed of half-inch-thick borosilicate glass, forming a sealed, airtight enclosure that opens effortlessly using roto-translation, a motor-driven rotation around two displaced axes engineered to simplify the system’s maintenance and upgrade process while minimizing downtime. Overall, the entire system was intended to enable the most stable qubits possible, which allows the machine to deliver reliable commercial use.

A series of independent aluminum and steel frames not only unify, but also decouple the system’s cryostat, control electronics, and exterior casing, helping to avoid potential vibration interference that leads to phase jitter and qubit decoherence.

The object of all of this, Houston explains, is to deliver a sophisticated, modular, and compact design optimized for stability, reliability, and continuous commercial use. For the first time ever, IBM Q System One enables universal approximate superconducting quantum computers to operate beyond the confines of the research lab.

In effect, think of the Q System One as bringing the quantum machine to the data center. The design squeezes all the quantum computing electronics, controllers, and other components into a 9x9x9-foot cube of half-inch-thick glass, creating a sealed, airtight enclosure that allows the system to cool the qubits to low-Kelvin temperatures and keep them cold enough, and undisturbed enough, to perform meaningful work. All of Q System One’s components and control mechanisms are intended to keep the qubits at 10 mK (about -459.6F) to operate.
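For reference, a quick sanity check of that operating temperature using the standard Kelvin-to-Fahrenheit conversion; 10 millikelvin sits a tiny fraction of a degree above absolute zero (-459.67F):

```python
# Convert the qubits' 10 millikelvin operating temperature to Fahrenheit.

def kelvin_to_fahrenheit(kelvin: float) -> float:
    """Standard conversion: F = K * 9/5 - 459.67."""
    return kelvin * 9.0 / 5.0 - 459.67

temp_f = kelvin_to_fahrenheit(0.010)  # 10 mK
print(f"10 mK = {temp_f:.2f} F")      # about -459.65 F
```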

This machine, notes IBM, should look familiar to conventional computer data center managers. Maybe, if you think a 9x9x9-foot cube of half-inch-thick borosilicate glass is a regular feature of the data centers you have worked in.

In effect, IBM is applying the same approach to quantum computing that it has followed for decades with its conventional computers–providing everything you need to get it operating in your data center. Just plan to bring in some trained quantum technicians, specialists, and, don’t forget, a handful of people who can program such a machine.

Other than that, the IBM Q System One consists of a number of custom components that work together (remember, they said integrated). Specifically, the new machine includes:

  • Quantum hardware designed to be stable and auto-calibrated to give repeatable and predictable high-quality qubits;
  • Cryogenic engineering that delivers a continuous cold and isolated quantum environment;
  • High precision electronics in compact form factors to tightly control large numbers of qubits;
  • Quantum firmware to manage system health and enable system upgrades without downtime for users.

Are you up for it? Maybe you’d prefer to try before you buy. The IBM Q Quantum Computation Center, opening later this year in Poughkeepsie, extends the IBM Q Network to commercial quantum computing programs.
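To get a feel for what such programs look like, here is a minimal sketch: a pure-Python, standard-library state-vector simulation of a two-qubit Bell-state circuit, the usual “hello world” of quantum computing. This illustrates the kind of small circuit one might eventually submit through the IBM Q Network; it is not IBM’s own tooling.

```python
# State vector over the basis |00>, |01>, |10>, |11>; start in |00>.
state = [1.0, 0.0, 0.0, 0.0]

def apply_hadamard_q0(s):
    """Hadamard gate on qubit 0: puts the first qubit into superposition."""
    r = 2 ** -0.5
    return [
        r * (s[0] + s[2]),  # new |00> amplitude
        r * (s[1] + s[3]),  # new |01> amplitude
        r * (s[0] - s[2]),  # new |10> amplitude
        r * (s[1] - s[3]),  # new |11> amplitude
    ]

def apply_cnot(s):
    """CNOT with qubit 0 as control: flips qubit 1 when qubit 0 is 1."""
    return [s[0], s[1], s[3], s[2]]

state = apply_cnot(apply_hadamard_q0(state))

# Measurement probabilities: only |00> and |11> survive -- entanglement.
probs = {label: round(abs(a) ** 2, 3)
         for label, a in zip(["00", "01", "10", "11"], state)}
print(probs)  # {'00': 0.5, '01': 0.0, '10': 0.0, '11': 0.5}
```

The 50/50 split between |00> and |11>, with |01> and |10> never appearing, is the signature of entanglement that real hardware, with its finite coherence times, can only approximate.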


