Posts Tagged ‘System z’

IBM Introduces New Flash Storage Family

February 14, 2020

IBM describes this mainly as a simplification move. The company is eliminating two current storage lines, Storwize and FlashSystem A9000, and replacing them with a series of flash storage systems that will scale from entry level to enterprise.

Well, uh, not quite enterprise as DancingDinosaur readers might think of it. No changes are planned for the DS8000 storage systems, which are focused on the mainframe market. “All our existing product lines, not including our mainframe storage, will be replaced by the new FlashSystem family,” said Eric Herzog, IBM’s chief marketing officer and vice president of worldwide storage channel, in a published report earlier this week.

The move will retire two incompatible storage lines from the IBM product lineup and replace them with a single line that provides compatible storage software and services from entry level to the highest enterprise tier, mainframe excluded, Herzog explained. The new FlashSystem family promises more functions, more features, and lower prices, he continued.

Central to the new Flash Storage Family is NVMe, which comes in multiple flavors. NVM Express (NVMe), or Non-Volatile Memory Host Controller Interface Specification (NVMHCIS), is an open logical device interface specification for accessing non-volatile storage media attached via a PCI Express (PCIe) bus.
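For readers who want to see what that host-side interface looks like in practice, here is a minimal sketch, assuming an ordinary Linux server with NVMe drives attached (nothing IBM-specific): the Linux kernel exposes each NVMe controller’s model, serial number, and firmware revision as plain-text attributes under /sys/class/nvme.

```python
# A minimal sketch, assuming a Linux host with NVMe drives attached.
# Each controller under /sys/class/nvme exposes model, serial, and
# firmware revision as plain-text sysfs attribute files.
from pathlib import Path

def list_nvme_devices(sysfs_root: str = "/sys/class/nvme") -> list:
    devices = []
    root = Path(sysfs_root)
    if not root.exists():
        return devices  # no NVMe controllers visible on this host
    for ctrl in sorted(root.iterdir()):
        info = {"name": ctrl.name}
        for attr in ("model", "serial", "firmware_rev"):
            attr_path = ctrl / attr
            if attr_path.exists():
                info[attr] = attr_path.read_text().strip()
        devices.append(info)
    return devices

if __name__ == "__main__":
    for dev in list_nvme_devices():
        print(dev)
```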

At the top of the new family line is the NVMe and multicloud ultra-high throughput storage system, a validated system implemented by IBM. IBM promises unmatched NVMe performance, Storage Class Memory (SCM), and IBM FlashCore technology. In addition, it brings the features of IBM Spectrum Virtualize to support the most demanding workloads.


IBM multi-cloud flash storage family system



Next up are the IBM FlashSystem 9200 and IBM FlashSystem 9200R, IBM-tested and validated rack solutions designed for the most demanding environments. They combine the extreme performance of end-to-end NVMe, IBM FlashCore technology, and the ultra-low latency of Storage Class Memory (SCM). They also bring IBM Spectrum Virtualize and AI-driven predictive storage management with proactive support from Storage Insights. FlashSystem 9200R is delivered assembled, with installation and configuration completed by IBM to ensure a working multicloud solution.



In the middle of the family are the IBM FlashSystem 7200 and FlashSystem 7200H. As IBM puts it, these offer end-to-end NVMe, the innovation of IBM FlashCore technology, the ultra-low latency of Storage Class Memory (SCM), the flexibility of IBM Spectrum Virtualize, and the AI-driven predictive storage management and proactive support of Storage Insights, all in a powerful 2U all-flash or hybrid flash array. The FlashSystem 7200 delivers mid-range storage while allowing the organization to add whatever multicloud technology best supports the business.

At the bottom of the line is the NVMe entry enterprise all-flash storage solution, which brings end-to-end NVMe capabilities and flash performance to the affordable FlashSystem 5100. As IBM describes it, the FlashSystem 5010 and IBM FlashSystem 5030 (formerly the IBM Storwize V5010E and Storwize V5030E; they are still there, just renamed) are all-flash or hybrid flash solutions intended to provide enterprise-grade functionality without compromising affordability or performance. Built with the flexibility of IBM Spectrum Virtualize and the AI-driven predictive storage management and proactive support of Storage Insights, the FlashSystem 5000 helps make modern technologies such as artificial intelligence accessible to enterprises of all sizes. In short, these promise entry-level flash storage designed to provide enterprise-grade functionality without compromising affordability or performance.

IBM likes the words affordable and affordability in discussing this new storage family. But, as is typical with IBM, nowhere will you see a price or a reference to cost/TB or cost/IOPS or cost of anything, although these are crucial metrics for evaluating any flash storage system. DancingDinosaur expects this after 20 years of writing about the z. Also, as noted at the outset, the z is not even included in this new flash storage family, so we don’t even have to chuckle if they describe z storage as affordable.
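Since IBM won’t do the math for you, here is the arithmetic, with purely hypothetical numbers for illustration; plug in whatever quote your IBM rep finally coughs up.

```python
# Hypothetical figures for illustration only; IBM publishes no pricing here.
def storage_cost_metrics(price_usd: float, capacity_tb: float, iops: float) -> dict:
    """Compute the two metrics that matter most when comparing flash arrays."""
    return {
        "cost_per_tb": round(price_usd / capacity_tb, 2),
        "cost_per_iops": round(price_usd / iops, 4),
    }

# A hypothetical $250,000 array with 300 TB effective capacity and 1.2M IOPS:
print(storage_cost_metrics(250_000, 300, 1_200_000))
# {'cost_per_tb': 833.33, 'cost_per_iops': 0.2083}
```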

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/

Meet IBM’s New CEO

February 6, 2020

Have to admire Ginni Rometty. She survived 19 consecutive losing quarters (one quarter shy of five years), which DancingDinosaur and the rest of the world covered with monotonous regularity, and she was not bounced out until this January. Memo to readers: keep that in mind if you start feeling performance heat from top management. Can’t imagine another company that would tolerate it, but what do I know.

Arvind Krishna becomes Chief Executive Officer and a member of the IBM Board of Directors effective April 6, 2020. Krishna is currently IBM Senior Vice President for Cloud and Cognitive Software and was a principal architect of the company’s acquisition of Red Hat. The cloud/Red Hat strategy has only just started to show signs of payback.

As IBM writes: Under Rometty’s leadership, IBM acquired 65 companies, built out key capabilities in hybrid cloud, security, industry and data, and AI, both organically and inorganically, and successfully completed one of the largest technology acquisitions in history (Red Hat). She reinvented more than 50% of IBM’s portfolio, built a $21 billion hybrid cloud business, and established IBM’s leadership in AI, quantum computing, and blockchain, while divesting nearly $9 billion in annual revenue to focus the portfolio on IBM’s high-value, integrated offerings. Part of that was the approximately $34 billion Red Hat acquisition, IBM’s, and possibly the IT industry’s, biggest to date. Rometty isn’t going away all that soon; she stays on as executive chairman of the Board.

It is way too early for IBM’s 1Q2020 results, which will cover the last quarter of Rometty’s reign. The fourth quarter of 2019, at least, was positive, especially after all those quarters of revenue loss. The company reported $21.8 billion in revenue, up 0.1 percent. Red Hat revenue was up 24 percent. Cloud and cognitive software was up 9 percent, while systems, which includes the z, was up 16 percent.

Total cloud revenue, the new CEO Arvind Krishna’s baby, was up 21 percent. Even with z revenue up more than cloud and cognitive software, it is unlikely IBM will easily find a buyer for the z anytime soon. If IBM dumps it, the company will probably have to pay somebody to take it, despite the z’s faithful, profitable blue chip customer base.

Although the losing streak has come to an end, Krishna still faces some serious challenges. For example, although DancingDinosaur has been enthusiastically cheerleading quantum computing as the future, there is no proven business model there. Except for limited adoption by a few early adopters, there is no widespread groundswell of demand for quantum computing, and the technology has not yet proven itself useful. Nor is there a ready pool of skilled quantum talent. If you wanted to try quantum computing, would you even know what to try or where to find skilled people?

Even in cloud computing, where IBM finally is starting to show some progress, the company has yet to penetrate the top tier of players. Those players, Amazon, Google, and Microsoft/Azure, are not likely to concede market share.

So here is DancingDinosaur’s advice to Krishna: be prepared to scrap for every point of cloud share and be prepared to spin a compelling case around quantum computing. Finally, don’t give up the z until the accountants and lawyers force you, which they will undoubtedly try. To the contrary, slash z prices and make the machine an irresistible bargain.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/ 

Montana Sidelines the Mainframe

January 21, 2020

Over the past 20+ years DancingDinosaur has written this story numerous times. It never ends exactly the way they think it will. Here is the one I encountered this past week.

IBM z15

But that doesn’t stop the PR writers from finding a cute way to write the story. This time the writers turned to references to the moon landings and trilby hats (huh?). Looks like a fedora to me, but what do I know; I only wear baseball caps. But they always have to come up with something that makes the mainframe sound completely outdated. In this case they wrote: “Mainframe computers, a technology that harkens back to an era of moon landings and men in trilby hats, are still widely used throughout government, but not in Montana for much longer.”

At least they didn’t write that the mainframe was dead and gone or forgotten. Usually, I follow up on stories like this months later and call whichever IT person is still there. I congratulate him or her and ask how it went. That’s when I usually start hearing ums and uhs. It turns out the mainframe is still there, handling those last few jobs they just can’t replace yet.

Depending on how playful I’m feeling that day, I ask him or her what happened to the justification presented at the start of the project. Or I might ask what happened to the previous IT person. 

Sometimes, I might even refer them to a recent DancingDinosaur piece that explains about Linux on the mainframe or Java, or describes mainframes running the latest Docker container technology or microservices. I’m not doing this for spite; I’m just trying to build up my readership. DancingDinosaur hates losing any reader, even if it’s late in their game. So I always follow up with a link to DancingDinosaur.

In an interview published by StateScoop, Chief Information Officer Tim Bottenfield described how, over the last several years, the last remaining agencies using the state’s mainframe have migrated their data away from it and are now developing modern applications that can be moved to the state’s private, highly virtualized cloud environment. By spring 2021, Montana expects to be mainframe-free. Will make a note to call Bottenfield in spring 2021 and see how they are doing. Does anyone want to bet whether the mainframe actually is completely out of service and gone by then?

As you all know, mainframes can be expensive to maintain, particularly if it’s just to keep a handful of applications running, and those usually turn out to be mission-critical applications. Of the three major applications Montana still runs on its mainframe, two are used by the Montana Department of Public Health and Human Services, which is in the process of recoding those programs to work on modern platforms, as if the z15 isn’t modern.

They haven’t told us whether these applications handle payments or deliver critical services to citizens. Either way, it will not be pretty if such applications go down. The third is the state’s vehicle titling and registration system, which is being rebuilt to run out of the state’s data center. Again, we don’t know much about the criticality of these systems. But think how you might feel if you can’t get accurate or timely information from one of them. I can bet you wouldn’t be a happy camper; neither would I.

Systems like these are difficult to get right the first time, if at all. This is especially true if you will be using the latest hybrid cloud and services technologies. Yes, skilled mainframe people are hard to find and retain, but so are any technically skilled and experienced people. If I were a decade younger, I could be attracted to the wide open spaces of Montana as a relief from the congestion of Boston. But I’m not the kind of hire Montana needs or wants. Stay tuned for when I check back in spring 2021.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/ 

BMC’s AMI Brings Machine Learning to Z

November 9, 2018

On Oct. 18, BMC announced AMI, an Automated Mainframe Intelligence capability that promises higher-performing, self-managing mainframe environments to meet the growing demands created by digital business growth, and to do it through the use of AI-like capabilities.

AMI delivers a self-managing mainframe

BMC’s AMI solutions combine built-in domain expertise, machine learning, intelligent automation, and predictive analytics to help enterprises automatically manage, diagnose, heal, secure, and optimize mainframe processes. BMC doesn’t actually call it AI, but it attributes all the AI buzzwords to it.

BMC cited Gartner: by 2020, 30 percent of data centers that fail to apply artificial intelligence and machine learning effectively in support of enterprise business will cease to be operationally and economically viable. BMC is tapping machine learning, in conjunction with its analysis of dozens of KPIs and millions of metrics a day, to proactively identify, predict, and fix problems before they become an issue. In the process, BMC intends to relieve the burden on enterprise teams and free up IT staff to work on high-value initiatives by removing manual processes through intelligent automation. Ultimately, the company hopes to keep its customers, as Gartner put it, operationally and economically viable.
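BMC hasn’t published its algorithms, but the general technique is straightforward. Here is a minimal sketch, not BMC’s actual code: flag a KPI sample as anomalous when it strays more than three standard deviations from the rolling mean of recent samples.

```python
# A minimal sketch of rolling-window anomaly detection on a KPI stream.
# This illustrates the general technique only, not BMC's implementation.
from collections import deque
from statistics import mean, stdev

class KpiAnomalyDetector:
    def __init__(self, window: int = 100, threshold: float = 3.0):
        self.samples = deque(maxlen=window)  # recent history only
        self.threshold = threshold           # in standard deviations

    def observe(self, value: float) -> bool:
        """Record a sample; return True if it looks anomalous."""
        anomalous = False
        if len(self.samples) >= 30:  # need enough history to be meaningful
            mu, sigma = mean(self.samples), stdev(self.samples)
            if sigma > 0 and abs(value - mu) > self.threshold * sigma:
                anomalous = True
        self.samples.append(value)
        return anomalous

detector = KpiAnomalyDetector()
steady = [42 + (i % 5) for i in range(60)]  # steady CPU utilization samples
for v in steady + [95]:                     # then a sudden spike
    if detector.observe(v):
        print(f"anomaly detected: {v}")
```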

In effect, mainframe-based organizations can benefit from BMC’s deep and broad z/OS operational metrics collected from a variety of industry data sources, its built-in world-class domain expertise, and multivariate analysis.

A lot of this already is available in the Z itself through a variety of tools, particularly zAware, described by IBM as a firmware feature consisting of an integrated set of analytic applications that monitor software running on z/OS and model normal system behavior. Its pattern recognition techniques identify unexpected messages, providing rapid diagnosis of problems caused by system changes.

But BMC is adding two new ingredients that should take this further: Autonomous Solutions and Enterprise Connectors.

Autonomous Solutions promise to enable IT operations that automatically anticipate and repair performance degradations and disruptive outages before they occur, without manual intervention. This set of intelligent, integrated solutions encompasses BMC AMI for Security Management, BMC AMI for DevOps, BMC AMI for Performance and Availability Management, and BMC AMI Cost and Capacity Management.

Enterprise Connectors move business-critical data from the mainframe to the entire enterprise and simplify the enterprise-wide management of business applications. The connectors promise a complete view of enterprise data by streaming mainframe metrics and related information in real time to a variety of data receivers, including leading Security Information and Event Management (SIEM) solutions such as Splunk, IBM QRadar, ArcSight, LogRhythm, McAfee Enterprise Security Manager, and others. Note: BMC’s AMI Data Extractor for IMS is available now; additional extractors will be available early in 2019.
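Splunk’s side of that handoff is a documented public interface, the HTTP Event Collector (HEC), which accepts JSON events over HTTPS. Here is a hedged sketch of pushing one mainframe metric into it; the host, token, and sourcetype below are placeholders, and this is in no way BMC’s connector code.

```python
# A sketch of sending one event to Splunk's HTTP Event Collector (HEC).
# Host, port, token, and sourcetype are placeholders, not BMC's code.
import json
import urllib.request

SPLUNK_HEC_URL = "https://splunk.example.com:8088/services/collector/event"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"  # placeholder token

def send_mainframe_metric(metric: dict) -> None:
    payload = json.dumps({"sourcetype": "zos:metric", "event": metric}).encode()
    req = urllib.request.Request(
        SPLUNK_HEC_URL,
        data=payload,  # POST because a request body is supplied
        headers={
            "Authorization": f"Splunk {HEC_TOKEN}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        resp.read()  # on success HEC replies {"text": "Success", "code": 0}

send_mainframe_metric({"system": "PROD1", "lpar": "LPAR2", "cpu_pct": 87.5})
```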

To bolster its mainframe business further, BMC in early October announced the acquisition of the assets of CorreLog, Inc., which provides real-time security management to mainframe customers. Combined with BMC’s offerings in systems, data, and cost management, the merged operation enables end-to-end solutions that ensure the availability, performance, and security of mission-critical applications and data residing on today’s modern mainframe. CorreLog brings capabilities for security and compliance auditing professionals who need more advanced network and system security and improved adherence to key industry standards for protecting data.

The combination of CorreLog’s security offerings with BMC’s mainframe capabilities provides organizations with enhanced security capabilities, including:

  • Real-time visibility into security events from mainframe environments, delivered directly into SIEM/SOC systems.
  • A wide variety of security alerts, including IBM IMS and Db2 event log correlation, which provides up-to-the-second security notifications for faster remediation in the event of a breach.
  • A 360-degree view of mainframe threat activity.

The CorreLog deal is expected to close later this quarter.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at technologywriter.com.

IBM Takes Red Hat for $34 Billion

November 2, 2018

“The acquisition of Red Hat is a game-changer. It changes everything about the cloud market,” declared Ginni Rometty, IBM Chairman. At a cost of $34 billion, 10x Red Hat’s gross revenue, it had better be a game changer. See IBM’s announcement earlier this week here.

IBM Multicloud Manager Dashboard

IBM has been hot on the tail of the top three cloud hyperscalers—AWS, Google, and Microsoft/Azure. Will this change the game? Your guess is as good as anyone’s.

The hybrid cloud market appears to be IBM’s primary target. As the company put it: “IBM will become the world’s #1 hybrid cloud provider, offering companies the only open cloud solution that will unlock the full value of the cloud for their businesses.” IBM projects the value of the hybrid cloud market at $1 trillion within a few years!

Most companies today are only 20 percent along their cloud journey, renting compute power to cut costs. The next chapter of the cloud, noted Rometty, requires shifting business applications to hybrid cloud, extracting more data, and optimizing every part of the business.

Nobody has a lock on this market yet. Not IBM, not Red Hat, not VMware, but one thing seems clear: whoever wins will involve open source. Red Hat, with $3 billion in open source revenue, has proven that open source can pay. The only question is how quickly it can pay back IBM’s $34 billion bet.

What’s needed is something that promotes data portability and applications across multiple clouds, data security in a multi-cloud environment, and consistent cloud management. This is the Red Hat and IBM party line.  Both believe they will be well positioned to address these issues to accelerate hybrid multi-cloud adoption. To succeed at this, the new entity will have to tap their leadership in Linux, containers, Kubernetes, multi-cloud management, and automation.

IBM first brought Linux to the Z 20 years ago, making IBM an early advocate of open source, collaborating with Red Hat to help grow enterprise-class Linux.  More recently the two companies worked to bring enterprise Kubernetes and hybrid cloud solutions to the enterprise. These innovations have become core technologies within IBM’s $19 billion hybrid cloud business.

The initial announcement made the point that Red Hat will join IBM’s Hybrid Cloud team as a distinct unit, preserving, as IBM described it, the independence and neutrality of Red Hat’s open source development heritage and commitment, current product portfolio, go-to-market strategy, and unique development culture. Also, Red Hat will continue to be led by Jim Whitehurst and Red Hat’s current management team.

That camaraderie lasted until the Q&A following the announcement, when a couple of disagreements arose over different answers on relatively trivial points. Are you surprised? Let’s be clear: nobody spends $34 billion on a $3 billion asset and gives it a completely free hand. You can bet IBM will be calling the shots on everything it feels is important. Would you do less?

Dharmesh Thakker, a contributor to Forbes, focused more on Red Hat’s OpenShift family of development software. These tools make software developers more productive and are helping transform how software is created and implemented across most enterprises today. So “OpenShift is likely the focus of IBM’s interest in Red Hat” he observes.

A few years ago, he continued, the pendulum seemed to shift from companies deploying more traditional, on-premises datacenter infrastructure to using public cloud vendors, mostly Amazon. In the last few years, though, most mission-critical apps inside companies have continued to run on a private cloud, modernized by agile tools and microservices to speed innovation. Private cloud represents 15-20% of datacenter spend, Thakker reports, but the combination of private plus one or more public clouds, that is, hybrid cloud, is here to stay, especially for enterprises. Red Hat’s OpenShift technology enables on-premises, private cloud deployments, giving IBM the ability to play in the hybrid cloud.

IBM isn’t closing this deal until well into 2019; expect to hear more about this in the coming months.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at technologywriter.com.


GAO Blames Z for Government Inefficiency

October 19, 2018

Check out the GAO report from May 2016 here. The Feds spent more than 75 percent of the total amount budgeted for information technology (IT) for fiscal year 2015 on operations and maintenance (O&M). In a related report, the IRS reported it used assembly language code and COBOL, both developed in the 1950s, for IMF and IDRS. Unfortunately, the GAO uses the single word “mainframe” to cover both outdated UNISYS mainframes and the modern, supported, and actively developed IBM Z mainframes, notes Ross Mauri, IBM general manager, Z systems.

Mainframes-mobile in the cloud courtesy of Compuware

The GAO repeatedly used “mainframe” to lump outdated UNISYS mainframes together with the latest advanced IBM Z mainframes. COBOL, too, maintains active skills and training programs at many institutions and receives investment across many industries. In addition to COBOL, the IBM z14 also runs Java, Swift, Go, Python, and other open languages to enable modern application enhancement and development. Does the GAO know that?

In a recent report, the GAO recommends moving to supported, modern hardware. IBM agrees. The Z, however, does not expose mainframe investments to a rise in procurement and operating costs, nor to skilled staff issues, Mauri continued.

Three investments the GAO reviewed in operations and maintenance clearly appear to be legacy investments facing significant risks due to their reliance on obsolete programming languages, outdated hardware, and a shortage of staff with critical skills. For example, the IRS reported that it used assembly language code and COBOL (both developed in the 1950s) for IMF and IDRS. What are these bureaucrats smoking?

The GAO also seems confused about the Z and the cloud. IBM Cloud Private is designed to run on Linux-based Z systems to take full advantage of the cloud through open containers while retaining the inherent benefits of Z hardware: security, availability, scalability, reliability; all the -ities enterprises have long relied on the z for. The GAO seems unaware that the Z’s automatic pervasive encryption immediately encrypts everything at rest or in transit. Furthermore, the GAO routinely treats COBOL as a deficiency, while ISVs and the other signatories of the Open Letter consider it a modern, optimized, and actively supported programming language.

The GAO apparently isn’t even aware of IBM Cloud Private, which is compatible with leading IT systems manufacturers’ hardware and has been optimized for IBM Z. All you need to get started with the cloud is the starter kit available for IBM OpenPOWER LC (Linux) servers, enterprise Power Systems, and Hyperconverged Systems powered by Nutanix. You don’t even need a Z; just buy a low-cost OpenPOWER LC (Linux) server online and configure it as desired.

Here is part of the letter that Compuware sent to the GAO, Federal CIOs, and members of Congress. It’s endorsed by several dozen members of the IT industry. The full letter is here:

In light of a June 2018 GAO report to the Internal Revenue Service suggesting the agency’s mainframe- and COBOL-based systems present significant risks to tax processing, we the mainframe IT community—developers, scholars, influencers and inventors—urge the IRS and other federal agencies to:

  • Reinvest in and modernize the mainframe platform and the mission-critical applications which many have long relied upon.
  • Prudently consider the financial risks and opportunity costs associated with rewriting and replacing proven, highly dependable mainframe applications, for which no “off-the-shelf” replacement exists.
  • Understand the security and performance requirements of these mainframe applications and data and the risk of migrating to platforms that were never designed to meet such requirements.

The Compuware letter goes on to state: In 2018, the mainframe is still the world’s most reliable, performant, and securable platform, providing the lowest-cost, high-transaction system of record. Regarding COBOL, it notes that the IBM z14 has supported COBOL V6.2 since 2017, and the language is optimized bi-monthly.

Finally, about attracting new COBOL workers: COBOL is as easy to work with as any other language. In fact, open source Zowe has demonstrated appeal to young techies, providing solutions for development and operations teams to securely manage, control, script, and develop on the mainframe like any other cloud platform. What don’t they get?

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at technologywriter.com.

Z Acceptance Grows in BMC 2018 Survey

September 27, 2018

Did Zowe, introduced publicly just a few weeks ago, arrive in the nick of time, like the cavalry rescuing the mainframe from an aging workforce? In the latest BMC annual mainframe survey, released in mid-September, 95% of millennial respondents are positive about the mainframe’s long-term prospects for supporting new and legacy applications. And 63% of respondents were under the age of 50, up ten points from the previous year.

The mainframe veterans, those with 30 or even 40 years of experience, are finally moving out. DancingDinosaur itself has been writing about the mainframe for about 35 years. With two recently married daughters, even a hint of a grandchild on the way will be the signal for me to stop. In the meantime, read on.

Quite interesting from the BMC survey was the very high confidence among executives in the long-term viability of the mainframe. More interesting to DancingDinosaur, however, was the interest in and willingness to use newer mainframe technology like Linux and Java, which are not exactly new arrivals to the mainframe world; as we know, change takes time.

For example, 28% of respondents cited as a strength the availability of new technology on the mainframe and their high level of confidence in that new technology. And this was before word got out about Zowe and what it could do to expand mainframe development. A little over a quarter of the respondents also cited using legacy apps to create new apps. Organizations are finally waking up to leveraging mainframe assets.

Also interesting was that both executives and technical staff cite application modernization among the top priorities. No complaints there. Similarly, BMC notes executive perception of the mainframe as a long-term solution is the highest in three years, a six-point increase over 2016. While cost remains a concern, BMC continues, the relative merits of the Z outweigh the costs, and this perception continues to shift positively year after year.

The mainframe regularly has been slammed over the years as too costly. Yet IBM has steadily lowered the cost of the mainframe in terms of price/performance. Now IBM is talking about applying AI to boost the efficiency, management, and operation of the mainframe data center.

This past May, Gartner published a report confirming the value gains of the latest z14 and LinuxONE machines: the z14 ZR1 delivers an approximately 13% total capacity improvement over the z13’s maximum capacity for traditional z/OS environments. This is due to an estimated 10% boost in processor performance, as well as system design enhancements that improve the multiprocessor ratio (roughly 1.10 x 1.03, which compounds to about 1.13). In the same report, Gartner recommends including IBM’s LinuxONE Rockhopper II in RFPs for highly scalable, highly secure, Linux-based server solutions.

Several broad trends are coming together to feed the growing positive feelings the mainframe has experienced in recent years as revealed in the latest survey responses. “Absolute security and 24×7 availability have never been more important than now,” observes BMC’s John McKenny, VP of Strategy for ZSolutions Optimization. Here the Z itself plays a big part with pervasive encryption and secure containers.

Other trends, particularly digitization and mobility are “placing incredible pressure on both IT and mainframes to manage a greater volume, variety, and velocity of transactions and data, with workloads becoming more volatile and unpredictable,” said Bill Miller, president of ZSolutions at BMC. The latest BMC mainframe survey confirms executive and IT concerns in that area and the mainframe as an increasingly preferred response.

Bottom line: expect the mainframe to hang around for another decade or two at least. Long before then, DancingDinosaur will be a dithering grandfather playing with grandchildren and unable to get myself off the floor.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com.

Attract Young Techies to the Z

September 14, 2018

A decade ago DancingDinosaur was at a major IBM mainframe event, looked around at the analysts milling about, and noticed all the gray hair and balding heads, very few women, and, worse, few attendees who appeared to be under 40; not exactly a crowd that would excite young male computer geeks. At the IBM introduction of the Z it was even worse: more gray or balding heads, mine included, and none of the few female Z analysts under 40 that I knew were there at all.

millions of young eager to join the workforce (Image by © Reuters/CORBIS)

An IBM analyst relations person agreed, noting that she was under pressure from IBM to get some young techies at Z events.  Sounded like Mission Impossible to me. But my thinking has changed in the last couple of weeks. A couple of discussions with 20-something techies suggested that Zowe has the potential to be a game changer as far as young techies are concerned.

DancingDinosaur covered Zowe two weeks ago here. It represents the first open source framework for z/OS. As such it provides solutions for development and operations teams to securely manage, control, script, and develop on the mainframe like any other cloud platform.

Or, to put it another way, with Zowe, IBM and partners CA Technologies and Rocket Software are enabling users to access z/OS using a new open-source framework. Zowe, more than anything before it, brings together generations of systems that were not designed to handle global networks of sensors and devices. Now, two decades after IBM brought Linux to the mainframe, IBM, CA, and Rocket Software are introducing Zowe, an open-source software framework that bridges the divide between modern challenges like IoT and the mainframe.

Says Sean Grady, a young (under 30) software engineer at Rocket Software: Zowe to me is really cool, the first time I could have a sustained mainframe conversation with my peers. Their first reactions were really cynical, he recalls. Zowe changed that. “My peers know Linux tools really well,” he notes.

The mainframe is perceived as a separate thing, something his peers couldn’t touch, he added. But Linux is something they know really well, so through Zowe the mainframe gains tools they know and like. Suddenly, the mainframe is no longer a separate, alien world but a familiar place. They can do the kind of work they like to do, in the way they like to do it, using familiar tools.

And they are well paid, much better than they can get coding here-and-gone mobile apps for some startup. Grady reports his starting offers ran up to $85k, not bad for a guy just out of college. And with a few years of experience now you can bet he’s doing a lot better than that.

The point of Zowe is to enable any developer, but especially new developers who don’t know or care about the mainframe, to manage, control, script, and develop on the mainframe like any other cloud platform. Additionally, Zowe allows teams to use the same familiar, industry-standard, open-source tools they already know to access mainframe resources and services.

The mainframe is older than many of the programmers IBM hopes Zowe will attract. But it opens new possibilities for next generation applications for mainframe shops desperately needing new mission-critical applications for which customers are clamoring. Already it appears ready to radically reduce the learning curve for the next generation.

Initial open source Zowe modules will include an extensible z/OS framework that provides new APIs and z/OS REST services to transform enterprise tools and DevOps processes that can incorporate new technology, languages, and workflows. It also will include a unifying workspace providing a browser-based desktop app container that can host both traditional and modern user experiences and is extensible via the latest web toolkits. The framework will also incorporate an interactive and scriptable command-line interface that enables new ways to integrate z/OS in cloud and distributed environments.
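To make “z/OS REST services” concrete: z/OSMF already ships a REST API that Zowe’s CLI and desktop build on. The sketch below lists data sets for a high-level qualifier through that API; the host and credentials are placeholders, and a real deployment would use proper certificate handling and token-based authentication rather than basic auth.

```python
# A minimal sketch of calling the z/OSMF REST data set API that Zowe
# builds on. Host and credentials are placeholders for illustration.
import base64
import json
import urllib.request

HOST = "https://zosmf.example.com"    # placeholder z/OSMF endpoint
USER, PASSWORD = "ibmuser", "secret"  # placeholder credentials

def list_datasets(hlq: str) -> list:
    """Return data set names matching the given high-level qualifier."""
    url = f"{HOST}/zosmf/restfiles/ds?dslevel={hlq}"
    auth = base64.b64encode(f"{USER}:{PASSWORD}".encode()).decode()
    req = urllib.request.Request(url, headers={
        "Authorization": f"Basic {auth}",
        "X-CSRF-ZOSMF-HEADER": "true",  # z/OSMF requires this CSRF header
    })
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return [item["dsname"] for item in body.get("items", [])]

print(list_datasets("IBMUSER"))
```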

These modules represent just the start. More will be developed over time, enabling development teams to manage and develop on the mainframe like any other cloud platform. Additionally, the modules reduce risk and cost by allowing teams to use familiar, industry-standard, open source tools that can accelerate mainframe integration into their enterprise DevOps initiatives. Just use Zowe to entice new mainframe talent.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com.

Compuware Expedites DevOps on Z

July 13, 2018

Compuware continues its quarterly introduction of new capabilities for the mainframe, a process that has been going on for several years now. The latest advance, Topaz for Enterprise Data, promises to expedite the way DevOps teams access the data they need while reducing the complexity, labor, and risk of extraction, masking, and visualization of mainframe data. The result: the ability to leverage all available data sources to deliver high-value apps and analytics fast.

Topaz for Enterprise Data expedites data access for DevOps

The days when mainframe shops could take a methodical and deliberate approach—painstakingly slow—to accessing enterprise data have long passed. Your DevOps teams need to dig the value out of that data and put it into the hands of managers and LOB teams fast, in hours, maybe just minutes so they can jump on even the most fleeting opportunities.

Fast, streamlined access to high-value data has become an urgent concern as businesses seek competitive advantages in a digital economy while fulfilling increasingly stringent compliance requirements. Topaz for Enterprise Data enables developers, QA staff, operations teams, and data scientists at all skill and experience levels to ensure they have immediate, secure access to the data they need, when they need it, in any format required.

It starts with data masking, which in just the last few months has become a critical concern with the rollout of GDPR across the EU. GDPR grants considerable protections and options to the people whose data your systems collect. Now you need to protect personally identifiable information (PII) and comply with regulatory mandates like GDPR and whatever similar regs come here.

Regs like these don’t apply just to your primary transaction data. You need data masking with all your data, especially when large, diverse datasets of high business value residing on the mainframe contain sensitive business or personal information.

This isn’t going to go away anytime soon, so large enterprises must start transferring responsibility for the stewardship of this data to the next generation of DevOps folks, who will be stuck with it. You can bet somebody will surely step forward and say, “you have to change every instance of my data that contains this or that.” Even the most expensive lawyers will not be able to blunt such requests. Better to have the tools in place to respond quickly and easily.

The newest tool, according to Compuware, is Topaz for Enterprise Data. It will enable even a mainframe-inexperienced DevOps team to do the following (a masking sketch follows the list):

  • Readily understand relationships between data even when they lack direct familiarity with specific data types or applications, to ensure data integrity and resulting code quality.
  • Quickly generate data for testing, training, or business analytics purposes that properly and accurately represents actual production data.
  • Ensure that any sensitive business or personal data extracted from production is properly masked for privacy and compliance purposes, while preserving essential data relationships and characteristics.
  • Convert file types as required.
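Compuware hasn’t published how Topaz masks data, but a common approach is deterministic masking: the same input always yields the same masked output, so relationships across files and test runs survive while the real values disappear. A minimal sketch, keyed by HMAC and using a placeholder key:

```python
# A minimal sketch of deterministic masking (not Topaz's implementation):
# each digit of a sensitive field is replaced pseudo-randomly but
# repeatably, keyed by HMAC, so masked values stay consistent across
# extracts while preserving the field's length and format.
import hashlib
import hmac

MASK_KEY = b"replace-with-a-real-secret"  # placeholder key

def mask_digits(value: str, key: bytes = MASK_KEY) -> str:
    digest = hmac.new(key, value.encode(), hashlib.sha256).digest()
    out, i = [], 0
    for ch in value:
        if ch.isdigit():
            out.append(str(digest[i % len(digest)] % 10))
            i += 1
        else:
            out.append(ch)  # keep separators so the format survives
    return "".join(out)

# The same input always maps to the same masked output:
print(mask_digits("123-45-6789"))
print(mask_digits("123-45-6789"))  # identical result
```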

Topaz users can access all these capabilities from within Topaz’s familiar Eclipse development environment, eliminating the need to learn yet another new and complicated tool.

Those who experience it apparently like what they find. Noted Lynn Farley, Manager of Data Management at TCF Bank: “Testing with production-like obfuscated data helps us develop and deliver better quality applications, as well as remain compliant with data privacy requirements, and Topaz provides our developers with a way to implement data privacy rules to mask multiple data types across platforms and with consistent results.”

Rich Ptak, principal of IT analyst firm Ptak Associates similarly observed: “Leveraging a modern interface for fast, simple access to data for testing and other purposes is critical to digital agility,” adding it “resolves the long-standing challenge of rapidly getting value from the reams of data in disparate sources and formats that are critical to DevOps and continuous improvement.”

“The wealth of data that should give large enterprises a major competitive advantage in the digital economy often instead becomes a hindrance due to the complexity of sourcing across platforms, databases, and formats,” said Chris O’Malley, CEO of Compuware. As DancingDinosaur sees it, by removing such obstacles Compuware reduces the friction between enterprise data and business advantage.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.

IBM Preps Z World for GDPR

June 1, 2018

Remember Y2K? That was when calendars rolled over from 1999 to 2000. It was hyped as an event that would screw up computers worldwide. Sorry, planes did not fall out of the sky overnight (or at all), elevators didn’t plummet to the basement, and hospitals and banks did not cease functioning. DancingDinosaur did OK writing white papers on preparing for Y2K. Maybe nothing bad happened because companies read papers like those and worked on changing their date fields.

Starting May 25, 2018, GDPR became the new Y2K. GDPR, the EC’s (or EU’s) General Data Protection Regulation, an overhaul of existing EC data protection rules, promises to strengthen and unify those laws for EC citizens and for organizations anywhere collecting and exchanging data involving those citizens. That is probably most of the readers of DancingDinosaur. GDPR went into effect at the end of May and generated a firestorm of trade and business press coverage, but nothing near what Y2K did. The primary GDPR objectives are to give citizens control over their personal data and simplify the regulatory environment for international business.

According to Bob Yelland, author of How it Works: GDPR, a Little Bee Book, 50% of global companies say they will struggle to meet the rules set out by Europe unless they make significant changes to how they operate, and this may lead many companies to appoint a Data Protection Officer, which the rules recommend. Doesn’t it feel a little like Y2K again?

The Economist in April wrote: “After years of deliberation on how best to protect personal data, the EC is imposing a set of tough rules. These are designed to improve how data are stored and used by giving more control to individuals over their information and by obliging companies to handle what data they have more carefully.”

As you would expect, IBM created a GDPR framework with five phases to help organizations achieve readiness: Assess, Design, Transform, Operate, and Conform. The goal of the framework is to help organizations manage security and privacy effectively in order to reduce risks and therefore avoid incidents.

DancingDinosaur is not an expert on GDPR in any sense, but from reading GDPR documents, the Z with its pervasive encryption and automated secure key management should eliminate many concerns. The rest probably can be handled by following good Z data center policy and practices.

There is only one area of GDPR, however, that may be foreign to North American organizations—the parts about respecting and protecting the private data of individuals.

As The Economist wrote: GDPR obliges organizations to create an inventory of the personal data they hold. With digital storage becoming ever cheaper, companies often keep hundreds of databases, many of which are long forgotten. To comply with the new regulation, firms have to think harder about data hygiene. This is something North American companies probably have not thought enough about.

IBM recommends you start by assessing your current data privacy situation under all of the GDPR provisions. In particular, discover where protected information is located in your enterprise. Under GDPR, individuals have rights over their personal data: to consent to its use, and to access, correct, delete, and transfer it. This will be new to most North American data centers, even the best managed Z data centers.
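The discovery step lends itself to simple tooling. Here is a hedged sketch that scans a directory of text files for patterns that look like personal data; the patterns and the directory name are illustrative only, and real GDPR discovery tools go far deeper (databases, VSAM files, logs).

```python
# A hedged sketch of the discovery step: scan text files for patterns
# that resemble personal data (email addresses, US-style SSNs). The
# patterns and the ./data directory are illustrative placeholders.
import re
from pathlib import Path

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_for_pii(root: str) -> dict:
    """Map each file under root to the PII pattern labels found in it."""
    findings: dict = {}
    for path in Path(root).rglob("*.txt"):
        text = path.read_text(errors="ignore")
        for label, pattern in PII_PATTERNS.items():
            if pattern.search(text):
                findings.setdefault(str(path), []).append(label)
    return findings

for file, labels in scan_for_pii("./data").items():
    print(file, "->", labels)
```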

Then, IBM advises, assess the current state of your security practices, identify gaps, and design security controls to plug those gaps. In the process find and prioritize security vulnerabilities, as well as any personal data assets and affected systems. Again, you will want to design appropriate controls. If this starts sounding a little too complicated just turn it over to IBM or any of the handful of other vendors who are racing GDPR readiness services into the market. IBM offers Data Privacy Consulting Services along with a GDPR readiness assessment.

Of course, you can just outsource the whole effort to IBM or others. This is where IBM’s five-phase framework comes in, with the goal of helping organizations subject to GDPR manage security and privacy so as to reduce risks and avoid problems.

GDPR is not going to be fun, especially the obligation to comply with each individual’s rights regarding their data. DancingDinosaur suspects it could even get downright ugly.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

