IBM Refreshes its Storage for Multi-Cloud

October 26, 2018

IBM has refreshed almost its entire storage portfolio virtually end to end, from storage services to infrastructure and cloud, to storage hardware, especially flash, to management. The Oct. 23 announcement covers a wide array of storage products.

IBM Spectrum Discover

Among the most interesting of the announcements was IBM Spectrum Discover. The product automatically enhances and then leverages metadata to augment discovery capabilities. It pulls insight from unstructured data to improve and accelerate large-scale analytics, strengthen data governance, and enhance storage economics. At a time when data is growing at 30 percent per year, finding the right data fast for analytics and AI can be slow and tedious. IBM Spectrum Discover rapidly ingests, consolidates, and indexes metadata for billions of files and objects, enabling you to more easily gain insights from massive amounts of unstructured data.

As important as Spectrum Discover is, NVMe may attract more attention, in large part due to the proliferation of flash storage and the insatiable demand for ever faster performance. NVMe (non-volatile memory express) is the latest host controller interface and storage protocol, created to accelerate the transfer of data between enterprise and client systems and solid-state drives (SSDs) over a computer’s high-speed Peripheral Component Interconnect Express (PCIe) bus.

According to IBM, NVMe addresses one of the hottest segments of the storage market, driven by new solutions that, as IBM puts it, span the lifecycle of data from creation to archive.

Specifically, NVMe is fueling a major expansion of lower latency and higher throughput through NVMe fabric support across IBM’s storage portfolio. The company’s primary NVMe introductions include:

  • New NVMe-based Storwize V7000 Gen3
  • NVMe over Fibre Channel across the flash portfolio
  • NVMe over Ethernet across the flash portfolio in 2019
  • IBM Cloud Object Storage support for NVMe in 2019

The last two are an IBM statement of direction, which is IBM’s way of saying it may or may not happen when or as expected.

Ironically, the economics of flash have dramatically reversed. Flash storage now reduces cost as well as boosts performance. Until fairly recently, flash was considered too costly for routine storage needs, something to be used selectively, only where the increased performance or efficiency justified the cost. Thank Moore’s Law and the economics of mass scale.

Maybe of greater interest to DancingDinosaur readers managing mainframe data centers are the improvements to the DS8000 storage lineup. The IBM DS8880F is designed to deliver extreme performance, uncompromised availability, and deep integration with IBM Z. It remains the primary storage system supporting mainframe-based IT infrastructure. Furthermore, the new custom flash provides up to double the maximum flash capacity in the same footprint. An update to the zHyperLink solution also speeds application performance by significantly reducing both write and read latency.

Designed to provide top performance for mission-critical applications, the DS8880F is based on the same fundamental system architecture as IBM Watson and, IBM explains, forms a three-tiered architecture that balances system resources for optimal throughput.

In addition, the DS8880F offers:

  • Up to 2x maximum flash capacity
  • New 15.36TB custom flash
  • Up to 8 PB of physical capacity in the same physical space
  • Improved performance for zHyperLink connectivity
  • 2X lower write latency than High Performance FICON
  • 10X lower read latency

Also included is the next generation of High-Performance Flash Enclosures (HPFE Gen2). The DS8880F family delivers extremely low application response times, which can accelerate core transaction processes while expanding business operations into next-generation applications that use AI to extract value from data (see Spectrum Discover, above).

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at technologywriter.com.

GAO Blames Z for Government Inefficiency

October 19, 2018

Check out the GAO report from May 2016 here. The Feds spent more than 75 percent of the total amount budgeted for information technology (IT) for fiscal year 2015 on operations and maintenance (O&M). In a related report, the IRS reported it used assembly language code and COBOL, both developed in the 1950s, for IMF (the Individual Master File) and IDRS (the Integrated Data Retrieval System). Unfortunately, the GAO conflates outdated UNISYS mainframes with the modern, supported, and actively developed IBM Z mainframes under the single word “mainframe,” notes Ross Mauri, IBM general manager, Z systems.

Mainframes-mobile in the cloud courtesy of Compuware

The GAO repeatedly used “mainframe” to refer to outdated UNISYS mainframes alongside the latest advanced IBM Z mainframes. COBOL, too, maintains active skills and training programs at many institutions and receives investment across many industries. In addition to COBOL, the IBM z14 also runs Java, Swift, Go, Python, and other open languages that enable modern application enhancement and development. Does the GAO know that?

In a recent report, the GAO recommends moving to supported modern hardware. IBM agrees. The Z, however, does not expose mainframe investments to rising procurement and operating costs, nor to skilled-staff issues, Mauri continued.

Three investments the GAO reviewed in operations and maintenance clearly appear to be legacy investments facing significant risks due to their reliance on obsolete programming languages, outdated hardware, and a shortage of staff with critical skills. For example, the IRS reported that it used assembly language code and COBOL (both developed in the 1950s) for IMF and IDRS. What are these bureaucrats smoking?

The GAO also seems confused about the Z and the cloud. IBM Cloud Private is designed to run on Linux-based Z systems to take full advantage of the cloud through open containers while retaining the inherent benefits of Z hardware: security, availability, scalability, reliability; all the ities enterprises have long relied on the Z for. The GAO seems unaware that the Z’s pervasive encryption automatically encrypts everything at rest or in transit. Furthermore, the GAO routinely treats COBOL as a deficiency while ISVs and the other signatories of the Open Letter consider it a modern, optimized, and actively supported programming language.

The GAO apparently isn’t even aware of IBM Cloud Private, which is compatible with leading IT systems manufacturers and has been optimized for IBM Z. All you need to get started with the cloud is the starter kit available for IBM OpenPOWER LC (Linux) servers, enterprise Power Systems, and Hyperconverged Systems powered by Nutanix. You don’t even need a Z; just buy a low-cost OpenPOWER LC (Linux) server online and configure it as desired.

Here is part of the letter that Compuware sent to the GAO, Federal CIOs, and members of Congress. It’s endorsed by several dozen members of the IT industry. The full letter is here:

In light of a June 2018 GAO report to the Internal Revenue Service suggesting the agency’s mainframe- and COBOL-based systems present significant risks to tax processing, we the mainframe IT community—developers, scholars, influencers and inventors—urge the IRS and other federal agencies to:

  • Reinvest in and modernize the mainframe platform and the mission-critical applications which many have long relied upon.
  • Prudently consider the financial risks and opportunity costs associated with rewriting and replacing proven, highly dependable mainframe applications, for which no “off-the-shelf” replacement exists.
  • Understand the security and performance requirements of these mainframe applications and data and the risk of migrating to platforms that were never designed to meet such requirements.

The Compuware letter goes on to state: In 2018, the mainframe is still the world’s most reliable, performant and securable platform, providing the lowest cost high-transaction system of record. Regarding COBOL it notes that since 2017 IBM z14 supports COBOL V6.2, which is optimized bi-monthly.

Finally, about attracting new COBOL workers: COBOL is as easy to work with as any other language. In fact, open source Zowe has demonstrated appeal to young techies, providing solutions for development and operations teams to securely manage, control, script, and develop on the mainframe like any other cloud platform. What don’t they get?

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at technologywriter.com.

Secure Containers for the Z

October 11, 2018

What’s all this talk about secure containers? Mainframe data center managers have long used secure containers, only they call them logical partitions (LPARs). Secure service containers must be some x86 thing.

Courtesy: Mainframe Watch Belgium

Writing the first week in Oct., Ross Mauri, General Manager IBM Z, observes: Today’s executives in a digitally empowered world want IT to innovate and deliver outstanding user experiences. But, as you know, this same landscape increases exposure and scrutiny around the protection of valuable and sensitive data. IBM’s answer: new capabilities for the IBM z14 and LinuxONE platforms that handle digital transformation while responding to immediate market needs and delivering effective solutions.

Secure Service Containers host container-based applications for hybrid and private cloud workloads on IBM LinuxONE and Z servers as part of an IBM Cloud Private software solution. This secure computing environment for microservices-based applications can be deployed without code changes to exploit the platform’s inherent security capabilities. In the process, it provides:

  • Tamper protection during installation time
  • Restricted administrator access to help prevent the misuse of privileged user credentials
  • Automatic encryption of data both in flight and at rest

This differs from an LPAR. According to IBM, LPARs (logical partitions) are, in practice, equivalent to separate mainframes. This is not trivial power. Each LPAR runs its own operating system, which can be any mainframe operating system; there is no need to run z/OS, for example, in each LPAR. Installation planners also may elect to share I/O devices across several LPARs, but this is a local decision.

The system administrator can assign one or more system processors for the exclusive use of an LPAR. Alternatively, the administrator can allow all processors to be used on some or all LPARs. Here, the system control functions (often known as microcode or firmware) provide a dispatcher to share the processors among the selected LPARs. The administrator can specify a maximum number of concurrent processors executing in each LPAR and can also assign weightings to different LPARs, specifying, for example, that LPAR1 should receive twice as much processor time as LPAR2. If the code in one LPAR crashes, it has no effect on the other LPARs. It is not clear the same holds for the new microservices containers.
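
The weighting scheme described above is essentially proportional-share scheduling. A minimal sketch of the idea in Python, with made-up LPAR names and weights (actual PR/SM firmware dispatching is far more involved):

```python
def share_processor_time(weights, total_slices):
    """Divide a pool of dispatch time slices among LPARs in proportion
    to the weights the administrator assigns."""
    total_weight = sum(weights.values())
    grants = {name: (w * total_slices) // total_weight
              for name, w in weights.items()}
    # Hand out any remainder slices to the highest-weighted LPARs first.
    leftover = total_slices - sum(grants.values())
    for name in sorted(weights, key=weights.get, reverse=True):
        if leftover == 0:
            break
        grants[name] += 1
        leftover -= 1
    return grants

# LPAR1 weighted to receive twice as much processor time as LPAR2.
print(share_processor_time({"LPAR1": 200, "LPAR2": 100}, 300))
# → {'LPAR1': 200, 'LPAR2': 100}
```

The weights are relative, not absolute: doubling every weight changes nothing, which is exactly how LPAR weightings behave.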

Mauri tries to make the case for the new containers: they allow applications and data to inherit a layer of security from Secure Service Containers, which in turn inherit the capabilities embedded at the core of IBM Z and LinuxONE to help hyper protect your data, guard against internal and external threats, and simplify your data compliance initiatives. DancingDinosaur does not know what “hyper protect” means in this context. Sounds like marketing-speak.

Also Mauri explains that IBM Secure Service Containers help protect the privacy of sensitive company data and customer data from administrators with elevated credentials. At the same time they allow development teams to use cutting-edge container-based technologies to deploy new or existing containerized applications.

In fact, IBM continues the explanation by saying it selected this unique and class-leading data privacy assurance technology to allow applications and data to inherit yet another layer of security through Secure Service Containers. “We’ve embedded capabilities at the core of IBM Z and LinuxONE that help hyper protect your data, guard against internal and external threats, and simplify your data compliance initiatives.” IBM does like the hyper protect phrase; wish DancingDinosaur knew what it meant. A Google search turns up Hyper Protect Crypto Services, which IBM concedes is still in an experimental phase, so, in fact, it doesn’t mean anything yet. Maybe in the future.

IBM Secure Service Containers help protect the privacy of sensitive company and customer data from administrators with elevated credentials—a serious risk—while, at the same time, allowing development teams to use cutting-edge container-based technologies to deploy new or existing containerized applications. OK, DancingDinosaur can accept this but it seems only marginally different from what you can do with good ole LPARs. Maybe the difference only becomes apparent when you attempt to build the latest generation microservices-based apps.

If your choice comes down to secure service containers or LPARs, guess you need to look at what kind of apps you want to deploy. All DancingDinosaur can add is LPARs are powerful, known, and proven technology.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at technologywriter.com.

IBM AI Toolset Focuses on 9 Industries

October 4, 2018

Recently, IBM introduced new AI solutions and services pre-trained for nine industries and professions: agriculture, customer service, human resources, supply chain, manufacturing, building management, automotive, marketing, and advertising. In each area, the volume, velocity, and complexity of the data make it increasingly difficult for managers to keep up. The solutions generally utilize IBM’s Watson Data Platform.

For example, supply chain companies should now incorporate weather data, traffic reports, and even regulatory reports to provide a fuller picture of global supply issues. Similarly, industrial organizations are seeking to reduce product inspection resource requirements significantly through the use of visual and acoustic inspection capabilities, notes IBM.

Recent IBM research from its Institute for Business Value revealed that 82% of businesses are now considering AI deployments. Why? David Kenny, Senior Vice President, IBM Cognitive Solutions, explains: “As data flows continue to increase, people are overwhelmed by the amount of information [forcing them] to act on it every day, but luckily the information explosion coincides with another key technological advance: artificial intelligence (AI).” In the nine industries targeted by IBM, the company provides the industry-specific algorithms and system training required to make AI effective in each segment.

Let’s look at a selection of these industry segments, starting with customer service, where 77% of top-performing organizations see customer satisfaction as a key value driver for AI, giving customer service agents an increased ability to respond quickly to questions and complex inquiries. The solution was first piloted at Deluxe Corporation, which saw improved response times and increased client satisfaction.

Human resources also could benefit from a ready-made AI solution. The average hiring manager flips through hundreds of applications daily, notes IBM, spending approximately 6 seconds on each resume; not nearly enough time to make well-considered decisions. The new AI tool for HR analyzes the background of current top-performing employees from diverse backgrounds and uses that data to help flag promising applicants.

In the area of industrial equipment, AI can be used to reduce product inspection resource requirements significantly through AI-driven visual and acoustic inspection capabilities. At a time of intense global competition, manufacturers face a variety of issues that impact productivity, including workforce attrition, skills gaps, and rising raw material costs, all exacerbated by downstream defects and equipment downtime. By combining the Internet of Things (IoT) and AI, IBM contends, manufacturers can stabilize production costs by pinpointing and predicting areas of loss, such as energy waste, equipment failures, and product quality issues.
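
IBM doesn’t describe the underlying algorithms, but “pinpointing and predicting areas of loss” from IoT sensor streams maps onto basic anomaly detection. A toy sketch, assuming a simple deviation-from-trailing-window rule:

```python
from statistics import mean, stdev

def flag_anomalies(readings, window=5, threshold=3.0):
    """Return indices of readings more than `threshold` standard
    deviations away from the mean of the preceding `window` readings."""
    flagged = []
    for i in range(window, len(readings)):
        history = readings[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(readings[i] - mu) > threshold * sigma:
            flagged.append(i)
    return flagged

# A steady vibration signal with one spike, e.g. a failing bearing.
signal = [10.0, 10.1, 9.9, 10.0, 10.2, 10.1, 18.0, 10.0]
print(flag_anomalies(signal))
# → [6]
```

Production systems would add forecasting, multiple correlated sensors, and trained models, but the flag-what-deviates principle is the same.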

In agriculture, farmers can use AI to gather data from multiple sources (weather, IoT-enabled tractors and irrigators, satellite imagery, and more) and see a single, overarching, predictive view of the data as it relates to a farm. For the individual grower, IBM notes, this means support for making more informed decisions that help improve yield. Water is an increasingly scarce resource in large swaths of the world, including parts of the U.S. that have experienced persistent droughts; just remember the recent wildfires.

Subway hopes AI can increase restaurant visits by leveraging the connection between weather and quick service restaurant (QSR) foot traffic to drive awareness of its $4.99 Footlong promotion via The Weather Channel mobile app. In building awareness to drive in-store visits, Subway reported a 31% lift in store traffic and a 53% reduction in campaign waste due to AI.

DancingDinosaur had no opportunity to verify any of the results reported above, so always be skeptical of such results until they have been verified for you.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at technologywriter.com.

Z Acceptance Grows in BMC 2018 Survey

September 27, 2018

Did Zowe, introduced publicly just a few weeks ago, arrive in the nick of time, like the cavalry rescuing the mainframe from an aging workforce? In the latest BMC annual mainframe survey, released in mid-September, 95% of millennials are positive about the mainframe’s long-term prospects for supporting new and legacy applications, and 63% of respondents were under the age of 50, up ten points from the previous year.

The mainframe veterans, those with 30 or even 40 years of experience, are finally moving out. DancingDinosaur itself has been writing about the mainframe for about 35 years. With two recently married daughters even a hint of a grandchild on the way will be the signal for me to stop. In the meantime, read on.

Quite interesting in the BMC survey were the very high marks executives gave the long-term viability of the mainframe. More interesting to DancingDinosaur, however, was the interest in and willingness to use newer mainframe technology like Linux and Java, which are not exactly recent arrivals to the mainframe world; as we know, change takes time.

For example, 28% of respondents cited as a strength the availability of new technology on the mainframe and their high level of confidence in that new technology. And this was before word about Zowe, and what it could do to expand mainframe development, got out. A little over a quarter of the respondents also cited using legacy apps to create new apps. Organizations are finally waking up to leveraging mainframe assets.

Also interesting was that both executives and technical staff cite application modernization among the top priorities. No complaints there. Similarly, BMC notes executive perception of the mainframe as a long-term solution is the highest in three years, a six point increase over 2016! While cost still remains a concern, BMC continues, the relative merits of the Z outweigh the costs and this perception continues to shift positively year after year.

The mainframe has regularly been slammed over the years as too costly. Yet IBM has steadily lowered the cost of the mainframe in terms of price performance. Now IBM is talking about applying AI to boost the efficiency, management, and operation of the mainframe data center.

This past May, Gartner published a report confirming the value gains of the latest z14 and LinuxONE machines: the z14 ZR1 delivers approximately 13% more total capacity than the z13’s maximum for traditional z/OS environments, due to an estimated 10% boost in processor performance plus system design enhancements that improve the multiprocessor ratio. In the same report, Gartner recommends including IBM’s LinuxONE Rockhopper II in RFPs for highly scalable, highly secure, Linux-based server solutions.
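
As a quick sanity check, the two effects Gartner cites do roughly compound to the headline number; treating the multiprocessor-ratio gain as the residual:

```python
# Gartner: ~13% total capacity gain, ~10% of it from raw processor speed.
total_gain = 1.13
processor_gain = 1.10

# The residual attributable to multiprocessor-ratio/system-design improvements.
mp_ratio_gain = total_gain / processor_gain
print(f"Implied multiprocessor-ratio improvement: {mp_ratio_gain - 1:.1%}")
# prints "Implied multiprocessor-ratio improvement: 2.7%"
```

In other words, a roughly 3% system-design improvement on top of the 10% faster processors accounts for the 13% figure.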

Several broad trends are coming together to feed the growing positive feelings the mainframe has experienced in recent years as revealed in the latest survey responses. “Absolute security and 24×7 availability have never been more important than now,” observes BMC’s John McKenny, VP of Strategy for ZSolutions Optimization. Here the Z itself plays a big part with pervasive encryption and secure containers.

Other trends, particularly digitization and mobility are “placing incredible pressure on both IT and mainframes to manage a greater volume, variety, and velocity of transactions and data, with workloads becoming more volatile and unpredictable,” said Bill Miller, president of ZSolutions at BMC. The latest BMC mainframe survey confirms executive and IT concerns in that area and the mainframe as an increasingly preferred response.

Bottom line: expect the mainframe to hang around for another decade or two at least. Long before then, DancingDinosaur will be a dithering grandfather playing with grandchildren and unable to get myself off the floor.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com.

LinuxONE is a Bargain

September 21, 2018

LinuxONE may be the best bargain you’ll ever find this season, and you don’t have to wait until Santa brings it down your chimney. Think instead about transformation and digital disruption.  Do you want to be in business in 3 years? That is the basic question that faces every organization that exists today, writes Kat Lind, Chief Systems Engineer, Solitaire Interglobal Ltd, author of the white paper Scaling the Digital Mountain.

Then there is the Robert Frances Group’s Top 10 Reasons to Choose LinuxONE. DancingDinosaur won’t rehash all ten. Instead, let’s selectively pick a few, starting with the first one, Least Risk Solution, which pretty much encapsulates the LinuxONE story. It reduces business, compliance, financial, operations, and project risks. Its availability, disaster recovery, scalability, and security features minimize business and financial exposure. In addition to pervasive encryption, it offers a range of security capabilities often overlooked or downplayed, including logical partition (LPAR) isolation and secure containers.

LinuxONE is a z dedicated to Linux, unlike the z13 or z14 z/OS machines, which also run Linux but not as easily or efficiently. As the Robert Frances Group noted, it also handles Java, Python, and other languages, along with tools like Hadoop, Docker and other containers, Chef, Puppet, KVM, multiple Linux distributions, open source, and more. It can be used in a traditional legacy environment or as the platform of choice for cloud hosting. LinuxONE supports tools that enable DevOps similar to those on x86 servers.

And LinuxONE delivers world class performance. As the Robert Frances Group puts it: LinuxONE is capable of driving processor utilization to virtually 100% without a latency impact, performance instabilities, or performance penalties. In addition, LinuxONE uses the fastest commercially available processors, running at 5.2GHz, offloads I/O to separate processors enabling the main processors to concentrate on application workloads, and enables much more data in memory, up to 32TB.

In addition, you can run thousands of virtual machine instances on a single LinuxONE server. The cost benefit of this is astounding compared to managing the equivalent number of x86 servers. The added labor cost alone would break your budget.

In terms of security, LinuxONE is a no-brainer. Adds Lind from Solitaire: failure in this area erodes an organization’s reputation faster than any other factor. The impact of breaches on customer confidence and follow-on sales has been tracked, and an analysis of that data shows that after a significant incursion, the average customer fall-off exceeds 41%, accompanied by a long-running drop in revenues. Recovery involves a significant outlay of service, equipment, and personnel expenses to reestablish a trusted position, as much as 18.6x what it cost to acquire the customer initially. And Lind doesn’t even mention the impact when the compliance regulators and lawyers start piling on. Anything but the most minor security breach will put you out of business faster than the three years Lind asked about at the top of this piece.
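
Solitaire’s numbers compound quickly. A quick illustration using Lind’s 41% fall-off and 18.6x recovery multiplier; the customer count and acquisition cost here are hypothetical:

```python
def breach_impact(customers, acquisition_cost,
                  fall_off=0.41, recovery_multiplier=18.6):
    """Estimate customers lost after a breach and the outlay to win back trust."""
    lost = customers * fall_off
    recovery_cost_per_customer = acquisition_cost * recovery_multiplier
    return lost, lost * recovery_cost_per_customer

# Hypothetical: 100,000 customers acquired at $50 each.
lost, cost = breach_impact(customers=100_000, acquisition_cost=50.0)
print(f"Customers lost: {lost:,.0f}; recovery outlay: ${cost:,.0f}")
# prints "Customers lost: 41,000; recovery outlay: $38,130,000"
```

Even with modest per-customer acquisition costs, the 18.6x multiplier turns a breach into a tens-of-millions recovery bill.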

But all the above is just conventional data center thinking. DancingDinosaur has put his children through college doing TCO studies around these issues. Lind now turns to something mainframe data centers are just beginning to think about: digital disruption. The strategy and challenges of successfully navigating the chaos of cyberspace translate into a need for information on both business and security and how they interact.

Digital business and security go hand in hand, so any analysis has to include extensive correlation between the two. Using data from volumes of customer experience responses, IT operational details, business performance, and security, Solitaire examined the positioning of IBM LinuxONE in the digital business market. The results boil down to three areas: security, agility, and cost; the primary objectives that organizations operating in cyberspace today regard as most relevant. And guess who wins any comparative platform analysis, Lind concludes: LinuxONE.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com.

Attract Young Techies to the Z

September 14, 2018

A decade ago DancingDinosaur was at a major IBM mainframe event, looked around at the analysts milling about, and noticed all the gray hair and balding heads, very few women, and, worse, few who appeared to be under 40; not exactly a crowd that would excite young computer geeks. At the IBM introduction of the Z it had become even worse: more gray or balding heads, mine included, and none of the few Z professional female analysts I knew under 40 were there at all.

millions of young eager to join the workforce (Image by © Reuters/CORBIS)

An IBM analyst relations person agreed, noting that she was under pressure from IBM to get some young techies at Z events.  Sounded like Mission Impossible to me. But my thinking has changed in the last couple of weeks. A couple of discussions with 20-something techies suggested that Zowe has the potential to be a game changer as far as young techies are concerned.

DancingDinosaur covered Zowe two weeks ago here. It represents the first open source framework for z/OS. As such it provides solutions for development and operations teams to securely manage, control, script, and develop on the mainframe like any other cloud platform.

Or, to put it another way, with Zowe IBM and partners CA Technologies and Rocket Software are enabling users to access z/OS through a new open-source framework. Zowe, more than anything before, brings together generations of systems that were not designed to handle global networks of sensors and devices. Now, decades after IBM brought Linux to the mainframe, IBM, CA, and Rocket Software are introducing Zowe, a new open-source software framework that bridges the divide between modern challenges like IoT and the mainframe.

Says Sean Grady, a young (under 30) software engineer at Rocket Software: “Zowe to me is really cool, the first time I could have a sustained mainframe conversation with my peers.” Their first reactions were really cynical, he recalls. Zowe changed that. “My peers know Linux tools really well,” he notes.

The mainframe is perceived as a separate thing, something my peers couldn’t touch, he added. But Linux is something his peers know really well, so through Zowe the mainframe gains tools they know and like. Suddenly, the mainframe is no longer a separate, alien world but a familiar place. They can do the kind of work they like to do, in a way they like to do it, using familiar tools.

And they are well paid, much better than they can get coding here-and-gone mobile apps for some startup. Grady reports his starting offers ran up to $85k, not bad for a guy just out of college. And with a few years of experience now you can bet he’s doing a lot better than that.

The point of Zowe is to enable any developer, but especially new developers who don’t know or care about the mainframe, to manage, control, script, and develop on the mainframe like any other cloud platform. Additionally, Zowe allows teams to use the same familiar, industry-standard, open-source tools they already know to access mainframe resources and services.

The mainframe is older than many of the programmers IBM hopes Zowe will attract. But Zowe opens new possibilities for next-generation applications at mainframe shops desperately needing the new mission-critical applications customers are clamoring for. Already it appears ready to radically reduce the learning curve for the next generation.

Initial open-source Zowe modules will include an extensible z/OS framework that provides new APIs and z/OS REST services so enterprise tools and DevOps processes can incorporate new technology, languages, and workflows. It also will include a unifying workspace: a browser-based desktop app container that can host both traditional and modern user experiences and is extensible via the latest web toolkits. The framework will also incorporate an interactive and scriptable command-line interface that enables new ways to integrate z/OS into cloud and distributed environments.
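To give a feel for the kind of scripting those z/OS REST services enable, here is a minimal sketch of a request to the z/OSMF jobs endpoint, the sort of plumbing Zowe builds on. The host name and credentials are placeholders, and the sketch only assembles the request rather than sending it to a live system:

```python
import base64
import urllib.request

# Placeholders -- substitute your own z/OSMF endpoint and credentials.
ZOSMF_HOST = "zosmf.example.com"
USER, PASSWORD = "ibmuser", "secret"

def list_jobs_request(owner: str = "*", prefix: str = "*") -> urllib.request.Request:
    """Build (but do not send) a z/OSMF REST request listing jobs on the JES queue."""
    url = (f"https://{ZOSMF_HOST}/zosmf/restjobs/jobs"
           f"?owner={owner}&prefix={prefix}")
    creds = base64.b64encode(f"{USER}:{PASSWORD}".encode()).decode()
    return urllib.request.Request(url, headers={
        "Authorization": f"Basic {creds}",
        "X-CSRF-ZOSMF-HEADER": "",  # z/OSMF requires this header as a CSRF guard
    })

req = list_jobs_request(owner="IBMUSER")
print(req.get_method(), req.full_url)
```

The point is that this is ordinary HTTP plus JSON, which is exactly why tools young developers already know can now drive z/OS.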

These modules represent just the start. More will be developed over time, enabling development teams to manage and develop on the mainframe like any other cloud platform. Additionally, the modules reduce risk and cost by allowing teams to use familiar, industry-standard, open source tools that can accelerate mainframe integration into their enterprise DevOps initiatives. Just use Zowe to entice new mainframe talent.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com.

Can IBM find a place for Watson?

September 7, 2018

After beating two human Jeopardy champions three times in a row in 2011, IBM’s Watson has been hard pressed to come up with a comparable winning streak. Initially IBM appeared to expect its largest customers to buy richly configured Power servers to run Watson on prem. When it didn’t get enough takers, the company moved Watson to the cloud, where companies could lease it for major knowledge-driven projects. When that didn’t catch on, IBM started to lease Watson’s capabilities by the drink, promising to solve problems in onesies and twosies.

Jeopardy champs lose to Watson

Today Watson is promising to speed AI success through IBM’s Watson Knowledge Catalog. As IBM puts it: IBM Watson Knowledge Catalog powers intelligent, self-service discovery of data, models, and more, activating them for artificial intelligence, machine learning, and deep learning. Access, curate, categorize, and share data, knowledge assets, and their relationships, wherever they reside.

DancingDinosaur has no doubt that Watson is stunning technology and has been rooting for its success since that first Jeopardy round seven years ago. Over that time, Watson and IBM have become a case study in how not to price, package, and market powerful yet expensive technology. The Watson Knowledge Catalog is yet another pricing and packaging experiment.

Based on the latest information online, Watson Knowledge Catalog is priced according to the number of provisioned catalogs and discovery connections. There are two plans: Lite and Professional. The Lite plan allows one catalog and five free discovery connections, while the Professional plan provides unlimited numbers of both. Huh? This statement begs for clarification, and there probably is a lot of fine print required to answer the questions it raises, but life is too short for DancingDinosaur to rummage around the Watson Knowledge Catalog site looking for answers. Doesn’t this seem like something Watson itself should be able to clarify with a single click?

But no, that is too easy. Instead IBM takes the high road, which DancingDinosaur calls the education track. Notes Jay Limburn, Senior Technical Staff Member and IBM Offering Manager: there are two main challenges that might impede you from realizing the true value of your data and slow your journey to adopting artificial intelligence (AI). They are 1) inefficient data management and 2) finding the right tools for all data users.

Actually, the issues start even earlier. In attempting AI most established organizations start at a disadvantage, notes IBM. For example:

  • Most enterprises do not know what and where their data is
  • Data science and compliance teams are handicapped by the lack of data accessibility
  • Enterprises with legacy data are even more handicapped than digitally savvy startups
  • AI projects will expose problems with limited data and poor data quality; many will fail for that reason alone
  • The need to differentiate through monetization increases in importance with AI

These issues are not new. People have been whining about them since the most rudimentary data mining attempts decades ago. If there is a surprise, it is that they have not been resolved by now.

Or maybe they finally have been, with the IBM Watson Knowledge Catalog. As IBM puts it, the company will deliver what promises to be the ultimate data catalog, one that actually illuminates data:

  • Knows what data your enterprise has
  • Knows where that data resides
  • Knows where it came from
  • Knows what it means
  • Provides quick access to it
  • Ensures protection of use
  • Exploits machine learning for intelligence and automation
  • Enables data scientists, data engineers, stewards, and business analysts
  • Is embeddable everywhere for free, with premium features available in paid editions

OK, after seven years Watson may be poised to deliver, and it has little to do with Jeopardy and much to do with a rapidly growing data catalog market. According to a Research and Markets report, the data catalog market is expected to grow from $210 million in 2017 to $620 million by 2022. How many sales of the Professional version would it take to give IBM a leading share?
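For scale, that forecast implies roughly a 24 percent compound annual growth rate. The back-of-the-envelope arithmetic:

```python
# Research and Markets forecast: $210M in 2017 growing to $620M by 2022 (5 years).
start, end, years = 210.0, 620.0, 5

# Compound annual growth rate: (end / start) ** (1 / years) - 1
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 24%
```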

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.

Can Zowe Bring Young Developers to the Z?

August 31, 2018

Are you ever frustrated by the Z? As powerful as they are, mainframes remain a difficult nut to crack, particularly for newcomers who have grown up with easier technologies. Even Linux on Z is not as simple or straightforward as it is on other platforms. This poses a problem for Z-based shops scrambling to replace retiring mainframers.

Shopping via smartphone (IBM – Jon Simon/Feature Photo Service)

Certainly other organizations, mainly mainframe ISVs like Compuware and Syncsort, have succeeded in extending the GUI deeper into the Z, but that alone is not enough. It remains too difficult for newcomers to take their newly acquired computer talents and readily apply them to the mainframe. Maybe Zowe can change this.

And here’s how: recent surveys show that flexibility, agility, and speed are key. Single platforms are out; multi-platforms and multi-clouds are in. IBM’s reply: let’s bring things together, with the announcement of Zowe, pronounced like Joey but starting with a z. Zowe represents the first open-source framework for z/OS. As such it provides solutions for development and operations teams to securely manage, control, script, and develop on the mainframe like any other cloud platform. Launched with partners CA Technologies and Rocket Software, and with the support of the Open Mainframe Project, the goal is to drive innovation for the community of next-generation mainframe developers and to enable interoperability and scalability between products. Zowe promotes a faster team on-ramp to mainframe productivity, collaboration, knowledge sharing, and communication.

In short, IBM and partners are enabling users to access z/OS using a new open-source framework. Zowe, more than anything before, brings together generations of systems that were not designed to handle global networks of sensors and devices. Now, decades after IBM brought Linux to the mainframe, IBM, CA, and Rocket Software are introducing Zowe, a new open-source software framework that bridges the divide between modern challenges like IoT and the mainframe.

Zowe has four components:

  1. Zowe APIs: z/OS has a set of Representational State Transfer (REST) operating system APIs. These are made available by the z/OS Management Facility (z/OSMF). Zowe uses these REST APIs to submit jobs, work with the Job Entry Subsystem (JES) queue, and manipulate data sets. Zowe Explorers are visual representations of these APIs that are wrapped in the Zowe web UI application. Zowe Explorers create an extensible z/OS framework that provides new z/OS REST services to enterprise tools and DevOps processes.
  2. Zowe API Mediation Layer: This layer has several key components, including an API Gateway, built with Netflix Zuul and Spring Boot technology, that forwards API requests to the appropriate service through the micro-service endpoint UI, and a REST API Catalog that publishes APIs and their associated documentation in a service catalog. There is also a Discovery Service, built on Eureka and Spring Boot technology, acting as the central point of the API Mediation Layer; it accepts announcements of REST services and provides a repository of active services.
  3. Zowe Web UI: Named zLUX, the web UI modernizes and simplifies working on the mainframe and allows the user to create modern applications. This is what will enable non-mainframers to work productively on the mainframe. The UI works with the underlying REST APIs for data, jobs, and subsystems, and presents the information in a full-screen mode compared to the command-line interface.
  4. Zowe Command Line Interface (CLI): Allows users to interact with z/OS from a variety of other platforms, such as cloud or distributed systems, submit jobs, issue Time Sharing Option (TSO) and z/OS console commands, integrate z/OS actions into scripts, and produce responses as JSON documents. With this extensible and scriptable interface, you can tie in mainframes to the latest distributed DevOps pipelines and build in automation.
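To make component 1 concrete, here is a minimal sketch of the kind of z/OSMF REST call Zowe wraps: submitting JCL from a data set to the JES queue. The host, credentials, and data set name are placeholders, and the sketch only assembles the request rather than sending it:

```python
import base64
import json
import urllib.request

# Placeholders -- substitute a real z/OSMF host and credentials.
ZOSMF_HOST = "zosmf.example.com"
USER, PASSWORD = "ibmuser", "secret"

def submit_job_request(dataset: str) -> urllib.request.Request:
    """Build (but do not send) a z/OSMF REST request that submits the JCL
    stored in the named data set to the JES queue."""
    body = json.dumps({"file": f"//'{dataset}'"}).encode()
    creds = base64.b64encode(f"{USER}:{PASSWORD}".encode()).decode()
    return urllib.request.Request(
        f"https://{ZOSMF_HOST}/zosmf/restjobs/jobs",
        data=body,
        method="PUT",
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Basic {creds}",
            "X-CSRF-ZOSMF-HEADER": "",  # z/OSMF rejects requests lacking this header
        },
    )

req = submit_job_request("IBMUSER.JCL(IEFBR14)")
print(req.get_method(), req.full_url)
```

The Zowe CLI and Explorers layer friendlier commands and UIs over exactly this kind of plain-HTTP plumbing, which is what makes the mainframe scriptable from any platform.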

The point of all this is to enable any developer to manage, control, script, and develop on the mainframe like any other cloud platform. Additionally, Zowe allows teams to use the same familiar, industry-standard, open-source tools they already know to access mainframe resources and services.

The mainframe may be older than many of the programmers IBM hopes Zowe will attract. But Zowe opens new possibilities for next-generation applications and for mainframe shops desperately needing the new mission-critical applications customers are clamoring for. This should radically reduce the learning curve for the next generation while making experienced professionals more efficient. Start your free Zowe trial here. BTW, Zowe’s code will be made available under the open-source Eclipse Public License 2.0.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.
