BMC Finalizes Compuware Acquisition 

June 4, 2020

On June 1 BMC completed its acquisition of Compuware. Both were leading mainframe independent software vendors (ISVs) and leading providers of mainframe application development, delivery, and performance solutions. The mainframe ISV space has seen plenty of action recently. Just a week ago DancingDinosaur was writing about Syncsort renaming itself Precisely after completing its acquisition of the software and data business of Pitney Bowes, a company best known for its postage metering.


Given IBM’s lackluster performance as a mainframe software application vendor, albeit one somewhat constrained by legalities, a healthy mainframe ISV market is good for everyone who wants to thrive in the mainframe space. And there are others DancingDinosaur hasn’t covered recently, such as DataKinetics, a mainframe performance and optimization provider, and Software Diversified Services (SDS), which specializes in mainframe security.

In some ways DancingDinosaur is saddened that the number of independent mainframe ISVs has dropped by one, but is hopeful that those that remain will be stronger, more innovative, and better for the mainframe space overall. As BMC says in its announcement, customers will benefit from an integrated DevOps toolchain combining mainframe operations management with agile application development and delivery. Everybody with a stake in the mainframe space should wish them success.

As BMC puts it: the strategic combination of the two companies builds on the success of BMC’s Automated Mainframe Intelligence (AMI) and Compuware’s Topaz suite, ISPW technology, and classic product portfolios to modernize mainframe environments. BMC with Compuware now enables automation and intelligent operations with agile development and delivery – empowering the next generation of mainframe developers and operations teams to excel when working with mainframe programming languages, applications, data, infrastructure, and security.

Industry analysts quoted in the announcement concur: “Adding Compuware’s Topaz software development environment to the BMC portfolio is another step in the direction of targeting the enterprise developer. With Topaz, developers take a modern approach to building, testing, and deploying mainframe applications. This move should allow BMC to spread the word that modern tools matter for the mainframe engineer,” wrote Christopher Condo, Chris Gardner, and Diego Lo Giudice at Forrester Research.

In addition, 50 percent of respondents in a 2019 Forrester study reported that they plan to grow their use of the mainframe over the next two years, and 93 percent of respondents in the 2019 BMC Mainframe Survey believe in the long-term and new-workload strength of the platform.

For the mainframe shop, the newly unified portfolio will enable enterprises to:

  • Leverage the processing power, stability, security, and agile scalability of the mainframe
  • Scale Agile and DevOps methods with a fully integrated DevOps toolchain, allowing mainframe applications to get to market more quickly and efficiently without compromising quality
  • Combine the self-analyzing, self-healing, and self-optimizing power of the BMC AMI suite of products, which increases mainframe availability, efficiency, and security while mitigating risk, with the Compuware Topaz suite, which empowers the next generation of developers to build, analyze, test, deploy, and manage mainframe applications
  • Create a customer experience that meets the business demands of the digital age, jumpstarting their Autonomous Digital Enterprise journey

BMC’s AMI brings an interesting twist. Specifically, it aims to leverage AI, machine learning, and predictive analytics to achieve a self-managing mainframe. On the security side, key elements of such a self-managing mainframe include advanced network and system security and improved adherence to PCI DSS, HIPAA, SOX, FISMA, GDPR, ISO 27001, IRS Pub. 1075, NERC, and other industry standards for protecting data. Most helpful should be BMC AMI for Security, which executes out-of-the-box scorecards for frequently audited areas.

Similarly, AMI can address areas like capacity management, optimizing mainframe capacity by addressing bottlenecks before they occur, boosting staff productivity, and delivering a right-sized, cost-optimized mainframe environment. AMI also brings DevOps to the mainframe through application orchestration tools that automatically capture database changes and communicate them to the database administrator (DBA) while enforcing DevOps best practices.

ISVs also can ignite a spark under IBM, especially now that it has Red Hat, as in the case of IBM enabling Wazi, a cloud-native DevOps tool for the z. That’s why we want a strong ISV community.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/

Syncsort Now Precisely After Pitney Bowes Acquisition

May 29, 2020

After announcing its acquisition of Pitney Bowes’ software and data business last August and completing the deal in December, Syncsort earlier this month rebranded itself as Precisely. The company, a long-established mainframe ISV, is positioning Precisely as a major player among enterprises seeking to manage large volumes of data in various ways.

Precisely has combined and updated the Syncsort and Pitney Bowes product lines to span what the rebranded operation now describes as “the breadth of the data integrity spectrum” by offering data integration, data quality, and location intelligence tools.

The rebranded company’s solution portfolio spans five areas based on the use case. 

  • Integrate, its data integration line, features Precisely Connect, Ironstream, Assure, and Syncsort.
  • Verify, its data quality unit, includes Precisely Spectrum Quality, Spectrum Context, and Trillium.
  • Locate, its location intelligence unit, touts Precisely Spectrum Spatial, Spectrum Geocoding, MapInfo, and Confirm.
  • Enrich features Precisely Streets, Boundaries, Points of Interest, Addresses, and Demographics.
  • Engage aims to create seamless, personalized, omnichannel communications on any medium, anytime.

Adds Josh Rogers, CEO of Precisely, formerly Syncsort: “With the combination of Syncsort and Pitney Bowes software and data, we are creating in Precisely a new company that is focused on helping enterprises advance their use of data through expertise across data domains, disciplines and platforms.”

Rogers continued: “Advancements in storage, compute, analytics, and machine learning have opened up a world of possibilities for enhanced decision-making, but inaccuracies and inconsistencies in data have held back innovation and stifled value creation. Achieving data integrity is the next business imperative. Put simply, better data means better decisions, and Precisely offers the industry’s most complete portfolio of data integrity products, providing the link between data sources and analytics that helps companies realize the value of their data and investments.”

Precisely may again be onto something by emphasizing the quality of data for decision making, which is just an amplification of the old GIGO (garbage in, garbage out) principle, especially now as the volume, variety, and availability of data skyrocket. When edge devices begin generating new and different data, it will further compound these challenges. Making data-driven decisions has already become increasingly complex for even the largest enterprises.

Despite the proliferation of cloud-based analytics tools, studies published in Forbes, Harvard Business Review, and elsewhere found that 84 percent of CEOs do not trust the data they are basing decisions on, and with good reason: another study found that almost half of newly created data records have at least one critical error. Meanwhile, the cost of noncompliance with new government regulations, including GDPR and CCPA, has created an even greater urgency for trusted data.

Out of the gate, Precisely has more than 2,000 employees and 12,000 customers in more than 100 countries, with 90 of those part of the Fortune 100. The company boasts annual revenue of over $600 million.

Prior to the acquisition, Pitney Bowes delivered solutions, analytics, and APIs in the areas of ecommerce fulfillment, shipping and returns; cross-border ecommerce; office mailing and shipping; presort services; and financing.

Syncsort provides data integration and optimization software alongside location intelligence, data enrichment, customer information management, and engagement solutions. Together, the two companies serve more than 11,000 enterprises and hundreds of channel partners worldwide.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/

Supercomputers Battle COVID-19

May 22, 2020

When the world finally defeats the coronavirus and COVID-19, a small part of the victory will go to massive computer power. As Dario Gil, Director of IBM Research, noted, 16 systems with more than 400 petaflops, 775,000 CPU cores, and 34,000 GPUs, and counting, are among the firepower.

Back in March DancingDinosaur reported here that IBM’s Summit, which the company was boasting as the world’s most powerful supercomputer, was able to simulate 8,000 chemical compounds in a matter of days in a hunt for something that could impact the COVID-19 infection.

Writing this today, late in May, we already know that teams of medical researchers, scientists, technology experts, and a vast array of other talents are working almost non-stop to find, develop, test, and mass produce a cure, with luck in the form of a vaccine. We should also note the countless nurses, doctors, aides, assistants, and hospital, food, and logistics staff of all types, along with those in outside support roles, who are keeping things working, feeding staff, wheeling patients around, and otherwise helping to save lives.

As Gil explains: high-performance computing systems allow researchers to run very large numbers of calculations in epidemiology, bioinformatics, and molecular modeling–all the required science disciplines that need to be involved in whatever success is ultimately achieved. You can probably throw in chemistry and a few other areas of electronics and engineering as well. Without massive computer horsepower these experiments would take years to complete if worked by hand, or months if handled on slower, traditional computing platforms.

These machines—more than 25 U.S.-based supercomputers with more than 400 petaflops of computing power—are now available for free to those working toward a vaccine or treatment against the virus, through the COVID-19 High Performance Computing Consortium.

It was created by government, academia, and industry—including competitors—working side by side. IBM is co-leading the effort with the U.S. Department of Energy, which operates the National Laboratories of the United States. Google, Microsoft, Amazon, and Hewlett Packard Enterprise have joined, as have NASA, the National Science Foundation, the Pittsburgh Supercomputing Center, and six National Labs—Lawrence Livermore, Lawrence Berkeley, Argonne, Los Alamos, Oak Ridge, and Sandia—among others. Then there are academic institutions, including MIT, Rensselaer Polytechnic Institute, the University of Texas at Austin, and the University of California, San Diego.

The White House has been getting deservedly bashed for its slowness, incompetence, and narrow-minded bungling. However, Gil reports that the White House’s Office of Science and Technology Policy has taken up an effort that can make a real difference. He adds, “I want to offer this promise: IBM will continue to explore everything in our power to use our technology and expertise to drive meaningful progress in this global fight.”

The first thing Gil’s team did was to spread the word to people who might be working on this on any or all fronts—from drug discovery and development with AI-led simulations to genomics, epidemiology and health systems.

He goes on: We need to understand the whole life cycle of this virus, all the gearboxes that drive it—how it encounters and infects the host cell and replicates inside it, so we can prevent it from producing viral particles. We need to know the molecular components, the proteins involved in the virus’ biochemistry, and then use computational modeling to see how we can interrupt the cycle. That’s the standard scientific methodology of drug discovery, but we want to amplify it and speed it up.

The virus has been exploding in humans for months, providing an abundance of samples for computer modeling and analysis, Gil continued. Scientists already are depositing samples into public data sources such as GenBank and Protein Data Bank. There are many unknowns and assumptions but a lot of proposals involve using the available protein structures to come up with potential molecular compounds that could lead to a therapeutic treatment or a vaccine. Let’s hope they have great success, the sooner the better.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/

Apps and Ecosystem Critical for 5G Edge Success

May 18, 2020

According to the gospel of IBM, edge computing with 5G creates opportunities in every industry. It brings computation and data storage closer to where data is generated, enabling better data control, reduced costs, faster insights and actions, and continuous operations.

Edge computing IBM Cloud Architecture

By 2025, IBM expects 75% of enterprise data to be processed on devices at the edge, compared to only 10% today. That will eliminate the need to relay data acquired, and often used for decision making, in the field back to a data center for processing and storage.

In short, the combination of 5G and smart devices on the edge aids this growing flow of data and processing through the proliferation of a variety of clouds: private, public, multi, and hybrid. But more is needed.

To get things rolling, IBM announced a handful of applications and tools and an edge ecosystem. As IBM notes: organizations across industries can now fully realize the benefits of edge computing, including running AI and analytics at the edge to achieve insights closer to where the work is done and the results applied. These new solutions include:

  • IBM Edge Application Manager – an autonomous management tool that enables AI, analytics, and IoT enterprise workloads to be deployed and remotely managed, delivering real-time analysis and insight at scale. It aims to let a single administrator manage up to 10,000 edge nodes simultaneously. It is the first product powered by Open Horizon, which has been contributed to the Linux Foundation.
  • IBM Telco Network Cloud Manager – runs on Red Hat OpenShift and Red Hat OpenStack, a cloud computing platform that virtualizes resources from industry-standard hardware, organizes them into clouds, and manages them to provide new services now and as 5G adoption expands.
  • A portfolio of edge-enabled applications and services, including IBM Visual Insights, IBM Production Optimization, IBM Connected Manufacturing, IBM Asset Optimization, IBM Maximo Worker Insights, and IBM Visual Inspector. All aim to deliver the flexibility to deploy AI and cognitive applications and services at the edge and at scale.
  • Red Hat OpenShift, which manages containers with automated installation, upgrades, and lifecycle management throughout the container stack—the operating system, Kubernetes cluster services, and applications—on any cloud.
  • Dedicated IBM Services teams for edge computing and telco network clouds that draw on IBM’s expertise to deliver 5G and edge-enabled capabilities across all industries.

In addition, IBM is announcing the IBM Edge Ecosystem, through which an increasingly broad set of ISVs, GSIs and more will be helping enterprises capture the opportunities of edge computing with a variety of solutions built upon IBM’s technology. IBM is also creating the IBM Telco Network Cloud Ecosystem, bringing together a set of partners across the telecommunications industry that offer a breadth of network functionality that helps providers deploy their network cloud platforms. 

These open ecosystems of equipment manufacturers, networking and IT providers, and software providers include Cisco, Dell Technologies, Juniper Networks, Intel, NVIDIA, Samsung, Packet (an Equinix company), Hazelcast, Sysdig, Turbonomic, Portworx, Humio, Indra Minsait, Eurotech, Arrow Electronics, ADLINK, Acromove, Geniatech, SmartCone, CloudHedge, Altiostar, Metaswitch, F5 Networks, and ADVA as members.

Making the promise of edge computing a reality requires an open ecosystem with diverse participants. It also requires open standards-based, cloud native solutions that can be deployed and autonomously managed at massive scale throughout the edge and can move data and applications seamlessly between private data centers, hybrid multiclouds, and the edge. IBM has already enlisted dozens of organizations in what it describes as its open edge ecosystem.  You can try to join the IBM ecosystem or start organizing your own.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/

5G Joins Edge Technology and Hybrid Multicloud

May 11, 2020

At IBM’s virtual Think Conference the first week in May the company made a big play for edge computing and 5G together. 

From connected vehicles to intelligent manufacturing equipment, the amount of data from devices has resulted in unprecedented volumes of data at the edge. IBM is convinced the data volumes will compound as 5G networks increase the number of connected mobile devices.

z15 T02 and the LinuxONE III LT2

Edge computing  and 5G networks promise to reduce latency while improving speed, reliability, and processing. This will deliver faster and more comprehensive data analysis, deeper insights, faster response times, and improved experiences for employees, customers, and their customers.

First gaining prominence with the Internet of Things (IoT) a few years back, edge computing is defined by IBM as a distributed computing framework that brings enterprise applications closer to where data is created and often remains, and where it can be processed. This is where decisions are made and actions taken.

5G stands for the Fifth Generation of cellular wireless technology. Beyond higher speed and reduced latency, 5G standards will have a much higher connection density, allowing networks to handle greater numbers of connected devices combined with network slicing to isolate and protect designated applications.

Today, 10% of data is processed at the edge, an amount IBM expects to grow to 75% by 2025. Specifically, edge computing enables:

  • Better data control and lower costs by minimizing data transport to central hubs and reducing vulnerabilities and costs
  • Faster insights and actions by tapping into more sources of data and processing that data there, at the edge
  • Continuous operations by enabling systems that run autonomously, reduce disruption, and lower costs because data can be processed by the devices themselves, on the spot, where decisions can be made

In short, the growing number of increasingly capable devices and faster 5G processing are driving the edge computing market beyond what the initial IoT proponents, who didn’t have 5G yet, envisioned. They also weren’t in a position to imagine the growth in the processing capabilities of edge devices in just the past year or two.

But that is starting to happen now, according to IDC: By 2023, half of the newly deployed on-premises infrastructure will be in critical edge locations rather than corporate datacenters, up from less than 10% today.

Also unimagined was the emergence of the hybrid multicloud, which IBM has only recently started to tout. The convergence of 5G, edge computing, and hybrid multicloud, according to the company, is redefining how businesses operate. As more enterprises embrace 5G and edge, modernizing networks to take advantage of the edge opportunity finally becomes feasible.

And all of this could play very well with the new z machines, the z15 T02 and LinuxONE III LT2. These appear sufficiently capable to handle the scale of business edge strategies and hybrid cloud requirements for now. Or there is the enterprise-class z15 if you need more horsepower.

By moving to a hybrid multicloud model, telcos can process data at both the core and network edge across multiple clouds, perform cognitive operations and make it easier to introduce and manage differentiated digital services. As 5G matures it will become the network technology that underpins the delivery of these services. 

Enterprises adopting a hybrid multicloud model that extends from corporate data centers (or public and private clouds) to the edge is critical to unlock new connected experiences. By extending cloud computing to the edge, enterprises can perform AI/analytics faster, run enterprise apps to reduce impacts from intermittent connectivity, and minimize data transport to central hubs for cost efficiency. 

It’s time to start thinking about making edge part of your computing strategy.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/ 

IBM Power9 Certified for SAP HANA Enterprise Cloud

April 28, 2020

SAP HANA has again this year been designated a top performer in the cloud-native, multi-tenant business intelligence segment by Gartner. Driving its popularity is the broad interest in SAP’s wide base of enterprise applications and the SAP Analytics Cloud, a cloud-native, multi-tenant platform with a broad set of analytic capabilities.

Behind the SAP Cloud, increasingly, are IBM’s POWER9 servers. Specifically, the SAP-managed, private cloud environment runs on IBM POWER9 systems, notably the E980, which brings the industry’s largest virtualized server scalability at 24 TB, more than enough for even the largest SAP HANA database applications to run in memory, where they perform best. In truth, most HANA users don’t require 24 TB, but it is there if they need it.

IBM Power E980

IBM Power Systems has been certified for the SAP HANA Enterprise Cloud as a critical infrastructure platform provider for large in-memory usage. The goal is to simplify the IT infrastructure for the managed, private cloud environment. The service will run on IBM POWER9-based Power Systems E980 servers, which offer the industry’s largest virtualized server scalability for the HANA database. The E980 server lineup starts as small as 2 sockets and runs up to 16 sockets. 

The IBM POWER9, notes IBM, more than provides the IT infrastructure for this mission-critical managed environment. It is a scalable and secure service designed to accelerate a user’s evolution on the path to cloud readiness, explains Vicente Moranta, Vice President, Offering Management for IBM’s Enterprise Linux on Power Systems. It provides capabilities that span the software and hardware stack through a comprehensive menu of functional and technical services, with the level of control in the SAP cloud that clients expect on premises, all in one privately SAP-managed environment.

SAP HANA Enterprise Cloud users can take advantage of the firmware-based virtualization built into the IBM POWER platform as PowerVM, a virtualization engine implemented at the firmware level. PowerVM delivers strong virtualization capabilities while avoiding the noisy-neighbor problem, in which multiple clients on the same box interfere with one another, through micro-partitions and other advanced features. As a result, it delivers the largest SAP HANA scalability in a scale-up system.

This combination is the result of a three-year collaboration between IBM Power Systems and SAP to provide virtualization on demand via hypervisor-defined features. These features give an SAP HANA LPAR the ability to match what a client wants, effectively avoiding long acquisition cycles and wasteful over-provisioning. Specifically, it provides what amounts to on-demand, accurately configured virtual systems with great granularity. It avoids the need for SAP users to revert to bare-metal servers due to virtualization issues. SAP manages this work itself through POWER9 to achieve optimum performance.

The 2019 Information Technology Intelligence Consulting (ITIC) Reliability Update polled over 800 corporations from July through early September 2019. The study compared the reliability and availability of over a dozen of the most widely deployed mainstream server platforms. Among them, IBM’s Power Systems, led by the POWER9, topped the field, registering a record low of 1.75 minutes of downtime per server. Each of the mainstream servers studied delivered a solid five nines (99.999%) of inherent hardware reliability.

Not surprisingly, one server beat them all: the IBM Z mainframe delivered what ITIC called true fault tolerance, providing six nines (99.9999%) of uptime to 89% of enterprise users. That translates into just 0.74 seconds of downtime per server due to any inherent flaws in the server hardware. Just imagine how much you could accomplish in that 0.74 seconds.
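
For readers who like to see the arithmetic, here is a quick back-of-the-envelope Python calculation of what five and six nines of availability imply in theoretical downtime per year. Note that ITIC’s figures above are measured survey results, not numbers derived from this formula, so they won’t match exactly.

```python
# Back-of-the-envelope: theoretical downtime per year implied by "nines" of availability.
# ITIC's numbers above are measured in the field, not computed this way.
MINUTES_PER_YEAR = 365.25 * 24 * 60

def downtime_minutes_per_year(availability_pct: float) -> float:
    """Minutes of downtime per year implied by an availability percentage."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for label, pct in [("five nines", 99.999), ("six nines", 99.9999)]:
    minutes = downtime_minutes_per_year(pct)
    print(f"{label} ({pct}%): {minutes:.2f} minutes/year (~{minutes * 60:.0f} seconds)")
# five nines -> about 5.26 minutes per year
# six nines  -> about 0.53 minutes per year (roughly 32 seconds)
```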

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/ 

Meet the 2 Newest Mainframes

April 17, 2020

The z15 T02 is the new rev of the previously released z15. The LinuxONE III LT2 is built on the new z15 and runs a Red Hat distribution of Linux.

z15 T02 and LinuxONE III LT2

The initial z15 was introduced last fall not as the biggest z in terms of capacity, speed, and performance but as what IBM called at the time a new enterprise platform delivering the ability to manage the privacy of customer data across hybrid multicloud environments. With the z15, clients can manage who gets access to data via policy-based controls, with an industry-first capability to revoke access to data even across the hybrid cloud. All that and more is included in the new version.

The new z15 T02 is still built on the same chips but is a bit different. For starters, it is an air-cooled, single-frame machine. But IBM brought over many of the capabilities built into the original z15, including cloud-native development and deployment, encryption everywhere to protect eligible data, and resiliency and availability, while packing them into a simplified package. Some of these capabilities, like pervasive encryption, go back to the z14, but it is nice to see them in the slimmed-down 19” frame.

Somehow it should cost less but it might not. The new z15 T02 also qualifies for what IBM refers to as a new Technology Transition Offering (TTO) for the z15 Model T02 called Technology Update Pricing (TU8). Technology Update Pricing for the z15 Model T02 uses the reporting mechanisms and existing MSU per hour tiers of the Advanced Entry Workload License Charges (AEWLC) pricing metric while extending the software price-performance that is provided by AEWLC.

Since the z15 Model T02 is a z/OS machine it also participates in IBM’s various pricing schemes which the company cannily crafts to discount some aspects of the machine in select circumstances.  

The LinuxONE III LT2, with luck, is priced to compete with x86-based Linux boxes. It supports from 4 to 65 IFLs. In terms of memory, it ranges from 2 TB to 16 TB. Try getting that in a cheap single-chassis x86 Linux box. It also comes with IBM Secure Execution for Linux, which provides scalable isolation for individual workloads to help protect against insider attacks. IBM Secure Execution can help protect and isolate workloads on premises for IBM LinuxONE and IBM Z hybrid cloud environments.

The new LinuxONE model also is part of the recent Confidential Computing movement in the industry, which uses technology to protect data in use. IBM’s Secure Execution for Linux furthers the Confidential Computing agenda through the implementation of a new hardware-based Trusted Execution Environment (TEE) on the IBM Z and LinuxONE platforms. Hardware-enabled protections such as Secure Execution can move companies closer to realizing a Zero Trust environment through isolation and access control over their data assets.

The new machines also are designed not only for hybrid clouds but for cloud-native development. For that you can choose from most of the latest tools: Elastic, Splunk, OpenAPI, Zowe, Git, Jenkins, Docker, Kubernetes, Open Service Broker, PostgreSQL, MongoDB, and more.

In terms of cloud native development, IBM promises:

  • Unparalleled trust and security for mission-critical workloads and data.
  • Integrated IBM z/OS data and apps with cloud-native apps.
  • New containerized services with built-in, cutting-edge security and resiliency to foster new business models.
  • Integration with a cloud orchestration layer and Kubernetes via Red Hat OpenShift and IBM Cloud Pak on IBM Z.
  • Managed cost growth with IBM Z Tailored Fit Pricing.
  • A 52% cut in time to market on a single system.

IBM has been offering what it calls pervasive encryption since the z14. With the new machines that has morphed into what IBM refers to as Encryption Everywhere, which entails:

  • Hardware accelerated encryption
  • File or data set encryption
  • Database encryption
  • Protected key for high speed encryption
  • Data-centric security
  • Controlled access to diagnostic data shared with partners and ecosystems

Finally, among the z goodies IBM has packed into the new machines is IBM Z Instant Recovery, which uses technologies available exclusively on the IBM z15 to minimize the duration and impact of downtime and accelerate the recovery of mission-critical applications, with no increase in IBM software licensing costs or MSU consumption. Among its benefits: a return to your pre-shutdown SLAs in up to 50 percent less time.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/ 

URGENT NEED Open Mainframe Project COBOL

April 10, 2020

An announcement from the Open Mainframe Project noted that this week, New Jersey Governor Phil Murphy put out a call for volunteers who have COBOL skills because – like most states – New Jersey depends on mainframes to control and manage data. Other states are following with the same call for COBOL programmers. Surprised?

DancingDinosaur has repeated it so often that it amounts to a mantra: the mainframe is not dead. And despite all the excitement generated by containers and new programming languages for a new generation of cloud-based applications, mainframes running proven, long-tested COBOL applications continue to do the bulk of the heavy lifting. That becomes indisputably obvious when millions of people are laid off in a single week and try to file unemployment claims. Collectively, they have brought the systems and processes to their knees.

As the announcement observed, more than 10 million people in the United States have filed for unemployment amid the COVID-19 global pandemic and the resulting financial crisis. As these numbers continue to grow, a big technology skills gap is starting to emerge as well.

Then it adds: Mainframes are seen as antiquated by today’s standards, but in reality they are the driving force behind modernization, including being part of the modern hybrid cloud model. Notice that the governors are not calling for Node.js, Python, Ruby, or a slew of other new programming tools. They are asking for COBOL.

The Open Mainframe Project is an open source initiative that enables collaboration across the mainframe community to develop shared tool sets and resources. The group mobilized its membership, including Broadcom, IBM, Phoenix Software, Rocket Software, SUSE, Vicom Infinity, and the Zoss Team, to help respond to this urgent need from public sector officials. Specifically, the group announced three initiatives:

  1. Calling all COBOL Programmers Forum – an Open Mainframe Project forum where developers and programmers who would like to volunteer or are available for hire can post their profiles.  Whether they are actively looking for employment, retired skilled veterans, students who have successfully completed COBOL courses, or professionals wanting to volunteer, they can specify their level of expertise and availability to assist.  Employers can then connect with these resources as needed. The forum can be found here: https://community.openmainframeproject.org/c/calling-all-cobol-programmers/15
  2. COBOL Technical Forum – a new forum specific to COBOL technical questions which will be monitored by experienced COBOL programmers. This will allow all levels of programmers to quickly learn new techniques and draw from a broad range of experience and expertise to address common questions and challenges arising during this unprecedented time. The technical questions can be asked in this forum:  https://community.openmainframeproject.org/c/cobol-technical-questions/16
  3. Open Source COBOL Training – the Open Mainframe Project Technical Advisory Council has approved hosting a new open source project that will lead collaboration on training materials for COBOL. The courseware was contributed by IBM based on its work with clients and institutes of higher education. These materials will be provided under an open source license and available in the coming days in the Open Mainframe Project GitHub organization.

Notice in the first point above the request is for both hires and volunteers. Maybe out of this terrible pandemic the mainframe will at least get a little respect.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/ 

Mayflower Autonomous Ship

March 31, 2020

Growing up in Massachusetts, DancingDinosaur was steadily inundated with historical milestones: the Boston Massacre, the Pilgrims landing at Plymouth Rock; we even had a special school holiday, Evacuation Day, celebrated only in Boston to commemorate the day the colonials forced the British out of the city. This year marks the 400th anniversary of the Mayflower’s arrival, the event that evolved into Thanksgiving and subsequently turned into a great day for high school football games.

IBM took the occasion as an opportunity to build a completely autonomous ship and sail it, unmanned, from England to Massachusetts to mark that anniversary in September 2020. The project, dubbed the Mayflower Autonomous Ship (MAS), also gave IBM an occasion to introduce IBM Edge Computing in a dramatic way.

IBM defines edge computing as decentralized data and application processing across hundreds to millions of endpoints residing outside of a traditional datacenter or public cloud. MAS relies on IBM’s edge computing.

“You take the human factor out of ships and it allows you to completely reimagine the design. You can focus purely on the mechanics and function of the ship,” writes Brett Phaneuf, Managing Director of MAS. The idea was to create an autonomous and crewless vessel that would cross the Atlantic, tracing the route of the original 1620 Mayflower and performing vital research along the way.

For MAS to survive the voyage, he continued, the team opted for a trimaran design, which is both hydro- and aerodynamic. Using aluminum and composite materials, MAS will be lightweight, about 5 tons and 15 meters in length. That’s half the length and less than 3 percent of the weight of the original Mayflower, which took almost two months for a voyage the MAS team plans to complete in less than two weeks.

For power, MAS will use solar panels to charge on-board batteries, which will power MAS’s motor – even at night. A single wingsail will allow MAS to harness wind power as well as make it more visible to other ships. MAS will be able to clock speeds of around 20 knots, compared to the original Mayflower’s 2.5 knots.
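
The rough arithmetic behind those speeds is easy to sketch in a few lines of Python. The roughly 3,000 nautical mile route length used below is an assumption for illustration, not a figure from IBM or the MAS team, and the original Mayflower took longer than a constant-speed estimate suggests because it could not hold a steady 2.5 knots.

```python
# Rough crossing-time comparison. The ~3,000 nautical mile route length is an
# assumed figure for illustration, not one published by IBM or the MAS team.
ROUTE_NM = 3000                                    # assumed route length in nautical miles
SPEEDS_KT = {"MAS": 20, "Mayflower (1620)": 2.5}   # knots = nautical miles per hour

for ship, knots in SPEEDS_KT.items():
    days = ROUTE_NM / knots / 24
    print(f"{ship}: about {days:.1f} days at {knots} knots")
# MAS              -> roughly 6 days (consistent with "less than two weeks")
# Mayflower (1620) -> roughly 50 days at a constant speed; the real voyage took longer
```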

When it comes to modern technologies, the original Mayflower used a ship’s compass for navigation. To measure speed, it towed a ‘log-line’ – a wooden board attached to a hemp line with knots tied in it at uniform intervals (hence the term ‘knots’ still used to measure a ship’s speed today).

MAS, however, will have a state-of-the-art inertial navigation and precision GNSS positioning system. It will have a full suite of the latest oceanographic and meteorological instruments, a satellite communications system, and 2D LIDAR and RADAR sensors.

With no crew on board, MAS needs to make its own decisions at sea. MAS’ mission control system will be built on IBM Power Systems servers. MAS is currently using real data from Plymouth Sound to train IBM PowerAI Vision technology to recognize ships, debris, whales and other hazards which come into view on MAS’s on-board video cameras.

When a hazard is detected, MAS will use IBM Operational Decision Manager software to decide what to do. It may change course or, in case of emergency, speed out of the way by drawing additional power from its on-board back-up generator. Connectivity in the middle of the Atlantic is patchy, so MAS will use edge devices on board to store and process data locally when need be. Every time it gets a connection, the ship will connect to the IBM Cloud and put its systems back into sync.
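
That store-locally, sync-when-connected approach is a classic edge pattern. Here is a minimal Python sketch of the idea, purely illustrative rather than anything from the actual MAS mission-control software; is_connected() and upload_to_cloud() are hypothetical stand-ins for a real link check and a real cloud API.

```python
# Minimal sketch of the store-and-forward edge pattern: process data locally,
# buffer the results, and sync to the cloud only when a connection exists.
import json
import random
import time
from collections import deque

pending = deque()  # locally buffered results awaiting upload

def process_locally(reading: dict) -> dict:
    """Edge-side processing: act on the data immediately, keep only a summary."""
    return {"ts": reading["ts"], "hazard": reading["distance_m"] < 50}

def is_connected() -> bool:
    """Hypothetical connectivity check; here the satellite link is up 20% of the time."""
    return random.random() < 0.2

def upload_to_cloud(record: dict) -> None:
    """Hypothetical cloud upload; in practice this would call a cloud SDK or REST API."""
    print("synced:", json.dumps(record))

def handle(reading: dict) -> None:
    pending.append(process_locally(reading))  # decide and buffer on the spot
    while is_connected() and pending:         # drain the queue whenever a link exists
        upload_to_cloud(pending.popleft())

for distance in (120, 30, 75):                # simulated sensor readings
    handle({"ts": time.time(), "distance_m": distance})
```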

MAS will carry three research pods that carry scientific instrumentation to ensure scientists can gather the data they need to understand and protect the ocean, especially in the face of threats from pollution and global warming.

By leveraging AI, machine learning, and other new technologies, IBM hopes MAS will start a new era of marine exploration. Through the University of Birmingham’s Human Interface Technologies Team, MAS plans to open the experience of the mission to millions of ‘virtual pilgrims’ around the world via a mixed reality experience that uses the latest virtual and augmented reality technologies. Bon voyage!

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/ 

IBM Brings Red Hat Ansible to Z

March 23, 2020

From the day IBM announced its $34 billion acquisition of Red Hat in October 2018, DancingDinosaur had two questions: 1) how could the company recoup its investment in the open source software company, and 2) what did it imply for the future of the z.


With about a billion dollars in open source revenue, Red Hat was the leading open source software player, but getting from a billion dollars to $34 billion is a big leap. In February IBM announced Red Hat’s OpenShift middleware would work with the z and LinuxONE. OpenShift is a DevOps play for hybrid cloud environments, a big interest of IBM.

Along with the availability of OpenShift for the z, IBM also announced that Cloud Pak for Applications is available for the z and LinuxONE. In effect, this supports the modernization of existing apps and the building of new cloud-native apps. It will be further enhanced by the delivery of the new Cloud Paks for the z and LinuxONE that IBM announced last summer. Clearly the z is not being abandoned now.

Last week, IBM announced the availability of Red Hat Ansible Certified Content for IBM Z, enabling Ansible users to automate IBM Z applications and IT infrastructure. This means that no matter what mix of infrastructure or clients you are working with, IBM is bringing automation to the z, helping you manage and integrate it across the hybrid environment through a single control panel.

Ansible functionality for z/OS, according to IBM, will empower z clients to simplify the configuration and access of resources, leverage existing automation, and streamline the automation of operations using the same technology stack they use across their entire enterprise. Delivered as a fully supported enterprise-grade solution via Content Collections, Red Hat Ansible Certified Content for Z provides easy automation building blocks to accelerate the automation of z/OS and z/OS-based software. These initial core collections include connection plugins, action plugin modules, and a sample playbook to automate tasks for z/OS such as creating data sets, retrieving job output, and submitting jobs.

For those not familiar with Ansible, as Wikipedia explains, Ansible is an open-source software provisioning, configuration management, and application-deployment tool. For more on Ansible, see https://en.wikipedia.org/wiki/Ansible_(software).

IBM needed to modify Ansible to work with the z and hybrid clouds. The Certified Content will be available in Automation Hub, with an upstream open source version offered on Ansible Galaxy.
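
To give a flavor of what that looks like in practice, here is a minimal, illustrative sketch in Python that installs the collection from Ansible Galaxy and runs a one-task playbook to create a data set, one of the tasks mentioned above, assuming Ansible is installed on the control node. The module and option names reflect DancingDinosaur’s reading of the ibm.ibm_zos_core collection and should be checked against IBM’s documentation; the inventory host and data set name are made up, and a real setup also needs the z/OS-side prerequisites (SSH access, Python, and ZOAU) configured in the inventory.

```python
# Illustrative sketch only: a tiny Python driver that installs the certified
# collection from Ansible Galaxy and runs a one-task playbook against z/OS.
# The module name (ibm.ibm_zos_core.zos_data_set) and its options should be
# verified against IBM's documentation; the host and data set name are made up.
import pathlib
import subprocess

PLAYBOOK = """\
- hosts: zos_host              # made-up inventory host pointing at a z/OS system
  collections:
    - ibm.ibm_zos_core
  tasks:
    - name: Create a partitioned data set   # one of the tasks the blog mentions
      zos_data_set:
        name: HLQ.DEMO.PDS     # hypothetical data set name
        type: pds
        state: present
"""

pathlib.Path("site.yml").write_text(PLAYBOOK)

# Pull the collection (also distributed via Red Hat Automation Hub).
subprocess.run(["ansible-galaxy", "collection", "install", "ibm.ibm_zos_core"], check=True)

# Run the playbook against an inventory file that defines zos_host.
subprocess.run(["ansible-playbook", "-i", "inventory", "site.yml"], check=True)
```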

Over the last several months, IBM has improved the z developer experience by bringing DevOps and industry-standard tools like Git and Jenkins to the z. For instance, it announced IBM Z Open Editor, IBM Developer for z/OS V14.2.1, and Zowe, an open source framework for z/OS, which DancingDinosaur covered in August 2018. In February IBM announced the availability of Red Hat OpenShift on IBM Z, which enables developers to run, build, manage, and modernize cloud-native workloads on their choice of architecture.

Now, Ansible allows developers and operations to break down traditional internal and historical technology silos to centralize automation — while leveraging the performance, scale, control and security provided by the z. 

What more goodies for z will IBM pull from its Red Hat acquisition? Stockholders should hope they add up to at least $34 billion, or more.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/

