IBM Pushes Hybrid Cloud

December 14, 2018

Between quantum computing, blockchain, and hybrid cloud, IBM is pursuing a pretty ambitious agenda. Of the three, hybrid cloud promises the most immediate payback. Cloud computing is poised to become a “turbocharged engine powering digital transformation around the world,” states a new Forrester report, Predictions 2019: Cloud Computing.

Of course, IBM didn’t wait until 2019. It agreed to acquire Red Hat at the end of October 2018. DancingDinosaur covered it here a few days later. At that time IBM Chairman Ginni Rometty called the acquisition of Red Hat a game-changer. “It changes everything about the cloud market,” she noted. At a cost of $34 billion, 10x Red Hat’s gross revenue, it had better be a game changer.

Forrester continues, predicting that in 2019 the cloud will reach its more interesting young adult years, bringing innovative development services to enterprise apps rather than just serving up cheaper, temporary servers and storage, which is how it has primarily grown over the past decade. Who hasn’t turned to one or another cloud provider to augment its IT resources as needed, whether for backup, server capacity, or networking?

As Forrester puts it: The six largest hyperscale cloud leaders — Alibaba, Amazon Web Services [AWS], Google, IBM, Microsoft Azure, and Oracle — will all grow larger in 2019, as service catalogs and global regions expand. Meanwhile, the global cloud computing market, including cloud platforms, business services, and SaaS, will exceed $200 billion in 2019, expanding at more than 20%, the research firm predicts.

Hybrid clouds, which combine two or more cloud providers or platforms, are emerging as the preferred way for enterprises to go. Notes IBM: The digital economy is forcing organizations into a multi-cloud environment. Three of every four enterprises have already implemented more than one cloud. The growth of cloud portfolios in enterprises demands an agnostic cloud management platform — one that not only provides automation, provisioning and orchestration, but that also monitors trends and usage to prevent outages.

Of course, IBM also offers a solution for this: the company’s Multicloud Manager runs on its IBM Cloud Private platform, which is based on Kubernetes container orchestration technology, described as an open-source approach for ‘wrapping’ apps in containers and thereby making them easier and cheaper to manage across different cloud environments – from on-premises systems to the public cloud.
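To make the container idea a little more concrete, here is a minimal sketch of the mechanism a multicloud console builds on: the same Kubernetes API works against clusters running on-premises or in any public cloud, so one script (or dashboard) can query them all. This is not IBM Multicloud Manager code, just the standard open-source Kubernetes Python client; the context names are hypothetical placeholders for entries in your own kubeconfig.

```python
# A minimal sketch: list containerized workloads across several clusters
# from one place, using the official Kubernetes Python client
# (pip install kubernetes). The context names are hypothetical; substitute
# entries from your own kubeconfig (kubectl config get-contexts).
from kubernetes import client, config

CLUSTER_CONTEXTS = ["onprem-icp", "ibmcloud-prod", "aws-dev"]  # hypothetical

def list_workloads(context_name: str) -> None:
    """Print the deployments in one cluster, identified by kubeconfig context."""
    api_client = config.new_client_from_config(context=context_name)
    apps = client.AppsV1Api(api_client)
    for dep in apps.list_deployment_for_all_namespaces().items:
        ready = dep.status.ready_replicas or 0
        print(f"{context_name:15} {dep.metadata.namespace}/{dep.metadata.name} "
              f"ready={ready}/{dep.spec.replicas}")

if __name__ == "__main__":
    for ctx in CLUSTER_CONTEXTS:
        list_workloads(ctx)
```

The point of the sketch is simply that once apps are ‘wrapped’ in containers, the management surface looks the same everywhere, which is what makes a single pane of glass across clouds plausible at all.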

Along with hybrid clouds, containers are huge in Forrester’s view. Powered by cloud-native open source components and tools, companies will start rolling out their own digital application platforms that will span clouds, include serverless and event-driven services, and form the foundation for modernizing core business apps for the next decade, the researchers observed. Next year’s hottest trend, according to Forrester, will be making containers easier to deploy, secure, monitor, scale, and upgrade. “Enterprise-ready container platforms from Docker, IBM, Mesosphere, Pivotal, Rancher, Red Hat, VMware, and others are poised to grow rapidly,” the researchers noted.

This may not be as straightforward as the researchers imply. Each organization must select for itself which private cloud strategy is most appropriate, they note, and they anticipate greater private cloud structure emerging in 2019. Forrester sees organizations facing three basic private cloud paths: building internally using vSphere sprinkled with developer-focused tools and software-defined infrastructure; having their cloud environment custom-built with converged or hyperconverged software stacks to minimize the tech burden; or building their cloud infrastructure internally with OpenStack, relying on the hard work of their own tech-savvy team. Am sure there are any number of consultants, contractors, and vendors eager to step in and do this for you.

If you aren’t sure, IBM is offering a number of free trials that you can play with.

As Forrester puts it: Buckle up; for 2019 expect the cloud ride to accelerate.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at technologywriter.com.

Syncsort Expands Ironstream with EView

December 10, 2018

While IBM is focused on battling the hyperscalers for cloud dominance and trying to overcome the laws of physics with quantum computing, a second tier of mainframe ISVs is trying to advance mainframe data center performance. Syncsort, for instance: late in November, Syncsort acquired EView Technology of Raleigh, NC, to integrate mainframe and IBM i data into its enterprise IT management platform, Ironstream.

How EView works with the mainframe

EView would seem a predictable choice for a Syncsort strategic acquisition. It also can be seen as yet another sign that value today lies in efficient data integration and analysis. In this case, Syncsort bolstered its capability to harvest log data originating on IBM i and mainframes through the acquisition of EView Technology, which builds big iron connectors for mainstream systems management tools.

Meanwhile, through multiple acquisitions, Syncsort’s Ironstream has emerged as a leading option for forwarding critical security and operational machine data from mainframes and IBM i servers for deeper analysis. This, in turn, enables the data to be streamed and correlated with data from the rest of the enterprise within Splunk and other Security Information and Event Management (SIEM) and IT Operations Analytics (ITOA) products.

For Syncsort, EView was a typical acquisition target. It served mainframe and IBM i customers, and EView would expand Ironstream functionality. Not surprisingly, each company’s products are architected differently. EView sends its data through a lightweight agent as an intermediary and makes active use of ServiceNow, a ready‑built foundation that transforms how a business operates, while Ironstream takes a more direct approach by sending data directly to Splunk.
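For readers who haven’t worked with Splunk’s ingestion side, here is a minimal sketch of the general pattern such forwarders follow when sending data directly to Splunk: POST each machine-data record to Splunk’s HTTP Event Collector (HEC) as a JSON event. This is not Ironstream’s or EView’s implementation; the URL, token, host name, and sample message below are placeholders.

```python
# A minimal sketch of forwarding one machine-data record to Splunk via the
# HTTP Event Collector (HEC). Not Ironstream/EView code, just the pattern.
# SPLUNK_HEC_URL, SPLUNK_HEC_TOKEN, and the host/message are placeholders.
import json
import time
import requests

SPLUNK_HEC_URL = "https://splunk.example.com:8088/services/collector/event"
SPLUNK_HEC_TOKEN = "00000000-0000-0000-0000-000000000000"  # placeholder token

def forward_event(message: str, source: str = "mainframe-syslog") -> None:
    """Send one log record to Splunk HEC as a JSON event."""
    payload = {
        "time": time.time(),
        "host": "PRODPLEX1",          # hypothetical system name
        "source": source,
        "sourcetype": "syslog",
        "event": {"message": message},
    }
    resp = requests.post(
        SPLUNK_HEC_URL,
        headers={"Authorization": f"Splunk {SPLUNK_HEC_TOKEN}"},
        data=json.dumps(payload),
        timeout=10,
        verify=True,  # keep TLS verification on in production
    )
    resp.raise_for_status()

forward_event("IEF403I MYJOB - STARTED - TIME=10.15.32")
```

The value a product like Ironstream adds on top of this simple pattern is collecting the data on the mainframe side efficiently and enriching it so the SIEM can correlate it with everything else.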

Each approach has its strengths, says David Hodgson, Syncsort’s Chief Product Officer. One possibility: Syncsort could augment the EView agent with Ironstream while giving customers a choice. Those decisions will be taken up in earnest in January.

Furthermore, in addition to Splunk and the Elastic Stack, Ironstream will now be able to integrate this data with ServiceNow Discovery, Microsoft System Center, and Micro Focus Operations Manager. With the EView acquisition, Syncsort simply expands its footprint in mainframe data analytics. “ServiceNow in particular is attracting excitement,” said Hodgson. In addition, customers can augment their EView agent with Ironstream, effectively giving customers a new choice.

Adds Josh Rogers, CEO, Syncsort: “The acquisition of EView strengthens and extends the reach of our Ironstream family of products, making data from traditional systems readily available to more of the key management platforms our customers depend on for those insights.”

In addition, EView’s enterprise-proven Intelligent Agent Technology will bolster Syncsort’s ability to offer organizations more options in integrating different data sources with advanced management platforms for a more comprehensive view.

Syncsort’s Ironstream is now part of the growing Syncsort Integrate family of products. It has emerged as an industry leading solution for forwarding critical security and operational machine data from mainframes and IBM i servers for analytic purposes. This enables the data to be streamed and correlated with data from the rest of the enterprise within Splunk and other SIEM and ITOA solutions.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at technologywriter.com.

Are Quantum Computers Even Feasible?

November 29, 2018

IBM has toned down its enthusiasm for quantum computing. Even last spring it already was backing off a bit at Think 2018. Now the company believes that quantum computing will augment classical computing to potentially open doors that it once thought would remain locked indefinitely.

First IBM Q computation center

With its Bristlecone announcement, Google trumped IBM with 72 qubits. Debating a few dozen qubits more or less may prove irrelevant. A number of quantum physics researchers have recently been publishing papers that suggest useful quantum computing may be decades away.

Mikhail Dyakonov makes that case in his piece, The Case Against Quantum Computing, which appeared last month at IEEE Spectrum (spectrum.ieee.org). Dyakonov does research in theoretical physics at the Charles Coulomb Laboratory at the University of Montpellier in France.

As Dyakonov explains: In quantum computing, the classical two-state circuit element (the transistor) is replaced by a quantum element called a quantum bit, or qubit. Like the conventional bit, it also has two basic states. But you already know this because DancingDinosaur covered it here and several times since.

But this is what you might not know: With the quantum bit, those two states aren’t the only ones possible. That’s because the spin state of an electron is described as a quantum-mechanical wave function. And that function involves two complex numbers, α and β (called quantum amplitudes), which, being complex numbers, have real parts and imaginary parts. Those complex numbers, α and β, each have a certain magnitude, and, according to the rules of quantum mechanics, their squared magnitudes must add up to 1.

Dyakonov continues: In contrast to a classical bit a qubit can be in any of a continuum of possible states, as defined by the values of the quantum amplitudes α and β. This property is often described by the statement that a qubit can exist simultaneously in both of its ↑ and ↓ states. Yes, quantum mechanics often defies intuition.

So while IBM, Google, and other classical computer providers quibble about 50 qubits or 72 or even 500 qubits, to Dyakonov this is ridiculous. The numbers that matter are astronomical, as he explains: Experts estimate that the number of qubits needed for a useful quantum computer, one that could compete with your laptop in solving certain kinds of interesting problems, is between 1,000 and 100,000. So the number of continuous parameters describing the state of such a useful quantum computer at any given moment must be at least 2^1,000, which is to say about 10^300. That’s a very big number indeed; much greater than the number of subatomic particles in the observable universe.

Just in case you missed the math, he repeats: A useful quantum computer [will] need to process a set of continuous parameters that is larger than the number of subatomic particles in the observable universe.
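If you want to check that arithmetic yourself, the sketch below does it in a few lines of Python. The amplitude values are arbitrary examples, and the ~10^80 figure for subatomic particles in the observable universe is a commonly cited rough estimate, not a number taken from Dyakonov’s article.

```python
# Back-of-the-envelope check of Dyakonov's numbers. The amplitudes are
# arbitrary; ~10^80 particles in the observable universe is a commonly
# cited rough estimate, not a figure from his article.

# A single qubit is described by two complex amplitudes whose squared
# magnitudes must sum to 1.
alpha = complex(0.6, 0.0)
beta = complex(0.0, 0.8)
assert abs(abs(alpha) ** 2 + abs(beta) ** 2 - 1.0) < 1e-12

# A system of n qubits is described by 2**n complex amplitudes.
n_qubits = 1_000                  # low end of the "useful" range Dyakonov cites
n_parameters = 2 ** n_qubits      # exact, thanks to Python's big integers
print(len(str(n_parameters)))     # 302 digits, in line with the article's ~10^300
print(n_parameters > 10 ** 80)    # True: dwarfs the particle-count estimate
```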

Before you run out to invest in a quantum computer with the most qubits you can buy, you would be better served joining IBM’s Q Experience and experimenting with it on IBM’s nickel. Let them wrestle with the issues Dyakonov brings up.

Then, Dyakonov concludes: I believe that such experimental research is beneficial and may lead to a better understanding of complicated quantum systems. But I’m skeptical that these efforts will ever result in a practical quantum computer. Such a computer would have to be able to manipulate—on a microscopic level and with enormous precision—a physical system characterized by an unimaginably huge set of parameters, each of which can take on a continuous range of values. Could we ever learn to control the more than 10^300 continuously variable parameters defining the quantum state of such a system? My answer is simple. No, never.

I hope my high school science teacher who enthusiastically introduced me to quantum physics has long since retired or, more likely, passed on. Meanwhile, DancingDinosaur expects to revisit quantum regularly in the coming months or even years.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at technologywriter.com.

IBM’s Multicloud Manager for 2nd Gen Hybrid Clouds

November 15, 2018

A sign that IBM is serious about hybrid cloud is its mid-October announcement of its new Multicloud Manager, which promises an operations console for companies as they increasingly incorporate public and private cloud capabilities with existing on-premises business systems. Meanwhile, research from Ovum suggests that 80 percent of mission-critical workloads and sensitive data are still running on business systems located on-premises.

$1 Trillion or more hybrid cloud market by 2020

Still, the potential of the hybrid cloud market is huge, $1 trillion or more within just a few years IBM projects. If IBM found itself crowded out by the big hyperscalers—AWS, Google, Microsoft—in the initial rush to the cloud, it is hoping to leapfrog into the top ranks with the next generation of cloud, hybrid clouds.

And this is exactly what Red Hat and IBM hope to gain together. Both believe they will be well positioned to accelerate hybrid multi-cloud adoption by tapping each company’s leadership in Linux, containers, Kubernetes, multi-cloud management, and automation, as well as by leveraging IBM’s core of large enterprise customers and bringing them into the hybrid cloud.

The result should be a mixture of on-premises, off-premises, and hybrid clouds. It also promises to be based on open standards, flexible modern security, and solid hybrid management across anything.

The company’s new Multicloud Manager runs on its IBM Cloud Private platform, which is based on Kubernetes container orchestration technology, described as an open-source approach for ‘wrapping’ apps in containers, and thereby making them easier and cheaper to manage across different cloud environments – from on-premises systems to the public cloud. With Multicloud Manager, IBM is extending those capabilities to interconnect various clouds, even from different providers, creating unified systems designed for increased consistency, automation, and predictability. At the heart of the new solution is a first-of-a-kind dashboard interface for effectively managing thousands of Kubernetes applications and spanning huge volumes of data regardless of where in the organization they are located.

Adds Arvind Krishna, Senior Vice President, IBM Hybrid Cloud: “With its open source approach to managing data and apps across multiple clouds” an enterprise can move beyond the productivity economics of renting computing power to fully leveraging the cloud to invent new business processes and enter new markets.

This new solution should become a driver for modernizing businesses. As IBM explains: if a car rental company uses one cloud for its AI services, another for its bookings system, and continues to run its financial processes using on-premises computers at offices around the world, IBM Multicloud Manager can span the company’s multiple computing infrastructures enabling customers to book a car more easily and faster by using the company’s mobile app.

Notes IDC’s Stephen Elliot, Program Vice President:  “The old idea that everything would move to the public cloud never happened.” Instead, you need multicloud capabilities that reduce the risks and deliver more automation throughout these cloud journeys.

Just last month IBM announced a number of companies are starting down the hybrid cloud path by adopting IBM Cloud Private. These include:

New Zealand Police (NZP) is exploring how IBM Cloud Private and Kubernetes containers can help to modernize its existing systems as well as quickly launch new services.

Aflac Insurance is adopting IBM Cloud Private to enhance the efficiency of its operations and speed up the development of new products and services.

Kredi Kayıt Bürosu (KKB) provides the national cloud infrastructure for Turkey’s finance industry. Using IBM Cloud Private KKB expects to drive innovation across its financial services ecosystem.

Operating in a multi-cloud environment is becoming the new reality for most organizations while vendors rush to sell multi-cloud tools: not just IBM’s Multicloud Manager but HPE OneSphere, the RightScale Multi-Cloud Platform, Datadog Cloud Monitoring, Ormuco Stack, and more.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at technologywriter.com.

BMC’s AMI Brings Machine Learning to Z

November 9, 2018

On Oct. 18, BMC announced AMI, an automated mainframe intelligence capability that promises higher performing, self-managing mainframe environments to meet the growing demands created by digital business growth, and to do it through the use of AI-like capabilities.

AMI delivers a self-managing mainframe

BMC’s AMI solutions combine built-in domain expertise, machine learning, intelligent automation, and predictive analytics to help enterprises automatically manage, diagnose, heal, secure, and optimize mainframe processes. BMC doesn’t actually call it AI, but it attributes all the AI buzzwords to it.

BMC cited Gartner saying that by 2020, thirty percent of data centers that fail to apply artificial intelligence and machine learning effectively in support of enterprise business will cease to be operationally and economically viable. BMC is tapping machine learning in conjunction with its analysis of dozens of KPIs and millions of metrics a day to proactively identify, predict, and fix problems before they become an issue. In the process, BMC intends to relieve the burden on enterprise teams and free up IT staff to work on high-value initiatives by removing manual processes through intelligent automation. Ultimately, the company hopes to keep its customers, as Gartner put it, operationally and economically viable.

In effect, mainframe-based organizations can benefit from BMC’s expertise in collecting deep and broad z/OS operational metrics from a variety of industry data sources, built-in world-class domain expertise, and multivariate analysis.

A lot of this already is available in the Z itself through a variety of tools, particularly zAware, described by IBM as a firmware feature consisting of an integrated set of analytic applications that monitor software running on z/OS and model normal system behavior. Its pattern recognition techniques identify unexpected messages, providing rapid diagnosis of problems caused by system changes.
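To give a feel for what “identify unexpected messages” means in practice, here is a toy sketch of the underlying idea: learn which console messages are normal for a system, then flag anything outside that baseline. It is emphatically not zAware’s (or BMC’s) actual algorithm, which models far more than message IDs, and the message IDs and log lines are hypothetical sample data.

```python
# Toy illustration of baseline-and-deviation message analysis. Real tools
# like zAware model much richer behavior; the sample log lines are made up.
from collections import Counter

baseline_log = [
    "IEF403I JOB1 STARTED", "IEF404I JOB1 ENDED",
    "IEF403I JOB2 STARTED", "IEF404I JOB2 ENDED",
] * 50  # pretend this is days of normal operation

todays_log = [
    "IEF403I JOB1 STARTED",
    "IEC331I 050-4 SERVICE ERROR",   # hypothetical rare message
    "IEF404I JOB1 ENDED",
]

def message_id(line: str) -> str:
    return line.split()[0]

normal_ids = Counter(message_id(line) for line in baseline_log)

for line in todays_log:
    if normal_ids[message_id(line)] == 0:
        print("UNEXPECTED:", line)   # candidate for rapid diagnosis
```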

But BMC is adding two new ingredients that should take this further, Autonomous Solutions and Enterprise Connectors.

Autonomous Solutions promise to enable IT operations that automatically anticipate and repair performance degradations and disruptive outages before they occur, without manual intervention. This set of intelligent, integrated solutions encompasses BMC AMI for Security Management, BMC AMI for DevOps, BMC AMI for Performance and Availability Management, and BMC AMI Cost and Capacity Management.

Enterprise Connectors move business-critical data from the mainframe to the entire enterprise and simplify the enterprise-wide management of business applications. The connectors promise a complete view of enterprise data by streaming mainframe metrics and related information in real-time to a variety of data receivers, including leading Security Information and Event Management (SIEM) solutions such as Splunk, IBM QRadar, ArcSight, LogRhythm, McAfee Enterprise Security Manager, and others. Note: BMC’s AMI Data Extractor for IMS solution is available now; additional extractors will be available early in 2019.

To bolster its mainframe business further, BMC in early October announced the acquisition of the assets of CorreLog, Inc., which provides real-time security management to mainframe customers. When combined with BMC’s offerings in systems, data, and cost management, says the merged operation, it enables end-to-end solutions to ensure the availability, performance, and security of mission-critical applications and data residing on today’s modern mainframe. CorreLog brings capabilities for security and compliance auditing professionals who need more advanced network and system security and improved adherence to key industry standards for protecting data.

The combination of CorreLog’s security offerings with BMC’s mainframe capabilities provides organizations with enhanced security capabilities including:

Real-time visibility into security events from mainframe environments, delivered directly into SIEM/SOC systems. It also brings a wide variety of security alerts, including for IBM IMS and Db2; event log correlation, which provides up-to-the-second security notifications for faster remediation in the event of a breach; and a 360-degree view of mainframe threat activity. The CorreLog deal is expected to close later this quarter.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at technologywriter.com.

IBM Takes Red Hat for $34 Billion

November 2, 2018

“The acquisition of Red Hat is a game-changer. It changes everything about the cloud market,” declared Ginni Rometty, IBM Chairman. At a cost of $34 billion, 10x Red Hat’s gross revenue, it had better be a game changer. See IBM’s announcement earlier this week here.

IBM Multicloud Manager Dashboard

IBM has been hot on the tail of the top three cloud hyperscalers—AWS, Google, and Microsoft/Azure. Will this change the game? Your guess is as good as anyone’s.

The hybrid cloud market appears to be IBM’s primary target. As the company put it: “IBM will become the world’s #1 hybrid cloud provider, offering companies the only open cloud solution that will unlock the full value of the cloud for their businesses.” IBM projects the value of the hybrid cloud market at $1 trillion within a few years!

Most companies today are only 20 percent along their cloud journey, renting compute power to cut costs. The next chapter of the cloud, noted Rometty, requires shifting business applications to hybrid cloud, extracting more data, and optimizing every part of the business.

Nobody has a lock on this market yet. Not IBM, not Red Hat, not VMware, but one thing seems clear: whoever wins will involve open source. Red Hat, with $3 billion in open source revenue, has proven that open source can pay. The only question is how quickly it can pay back IBM’s $34 billion bet.

What’s needed is something that promotes data portability and applications across multiple clouds, data security in a multi-cloud environment, and consistent cloud management. This is the Red Hat and IBM party line.  Both believe they will be well positioned to address these issues to accelerate hybrid multi-cloud adoption. To succeed at this, the new entity will have to tap their leadership in Linux, containers, Kubernetes, multi-cloud management, and automation.

IBM first brought Linux to the Z 20 years ago, making IBM an early advocate of open source, collaborating with Red Hat to help grow enterprise-class Linux.  More recently the two companies worked to bring enterprise Kubernetes and hybrid cloud solutions to the enterprise. These innovations have become core technologies within IBM’s $19 billion hybrid cloud business.

The initial announcement made the point Red Hat will join IBM’s Hybrid Cloud team as a distinct unit, as IBM described, preserving the independence and neutrality of Red Hat’s open source development heritage and commitment, current product portfolio, go-to-market strategy, and unique development culture. Also Red Hat will continue to be led by Jim Whitehurst and Red Hat’s current management team.

That camaraderie lasted until the Q&A following the announcement, when a couple of disagreements arose following different answers on relatively trivial points. Are you surprised? Let’s be clear: nobody spends $34 billion on a $3 billion asset and gives it a completely free hand. You can bet IBM will be calling the shots on everything it feels is important. Would you do less?

Dharmesh Thakker, a contributor to Forbes, focused more on Red Hat’s OpenShift family of development software. These tools make software developers more productive and are helping transform how software is created and implemented across most enterprises today. So “OpenShift is likely the focus of IBM’s interest in Red Hat” he observes.

A few years ago, he continued, the pendulum seemed to shift from companies deploying more-traditional, on-premises datacenter infrastructure to using public cloud vendors, mostly Amazon. In the last few years, he continued, we’ve seen most mission-critical apps inside companies continue to run on a private cloud but modernized by agile tools and microservices to speed innovation. Private cloud represents 15-20% of datacenter spend, Thakker reports, but the combo of private plus one or more public clouds – hybrid cloud—is here to stay, especially for enterprises. Red Hat’s OpenShift technology enables on-premises, private cloud deployments, giving IBM the ability to play in the hybrid cloud.

IBM isn’t closing this deal until well into 2019; expect to hear more about this in the coming months.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at technologywriter.com.

IBM Refreshes its Storage for Multi-Cloud

October 26, 2018

IBM has refreshed almost its entire storage portfolio virtually end to end, from storage services to infrastructure and cloud to storage hardware, especially flash, to management. The announcement, made Oct. 23, covers a wide array of storage products.

IBM Spectrum Discover

Among the most interesting of the announcements was IBM Spectrum Discover. The product automatically enhances and then leverages metadata to augment discovery capabilities. It pulls insight from unstructured data to improve and accelerate large-scale analytics, improve data governance, and enhance storage economics. At a time when data is growing at 30 percent per year, finding the right data fast for analytics and AI can be slow and tedious. IBM Spectrum Discover rapidly ingests, consolidates, and indexes metadata for billions of files and objects from your data, enabling you to more easily gain insights from such massive amounts of unstructured data.
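For a sense of what metadata indexing buys you, here is a toy sketch of the kind of cataloging a product like Spectrum Discover performs at vastly larger scale. It is not Spectrum Discover’s API, just a small illustration; the /data mount point is a hypothetical placeholder.

```python
# Toy metadata catalog: walk a file tree once, record basic metadata, and
# answer "find the right data fast" questions from the index instead of
# re-scanning storage. Not Spectrum Discover's API; /data is hypothetical.
import os
from collections import defaultdict

def build_catalog(root: str):
    """Index basic metadata (size, mtime, extension) for every file under root."""
    by_extension = defaultdict(list)
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            stat = os.stat(path)
            ext = os.path.splitext(name)[1].lower() or "<none>"
            by_extension[ext].append(
                {"path": path, "bytes": stat.st_size, "mtime": stat.st_mtime}
            )
    return by_extension

catalog = build_catalog("/data")  # hypothetical mount point
# e.g. the largest CSV files first, without touching the storage again:
csvs = sorted(catalog[".csv"], key=lambda f: f["bytes"], reverse=True)
print(csvs[:5])
```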

As important as Spectrum Discover is, NVMe may attract more attention, in large part due to the proliferation of flash storage and the insatiable demand for increasingly faster performance. NVMe (non-volatile memory express) is the latest host controller interface and storage protocol created to accelerate the transfer of data between enterprise and client systems and solid-state drives (SSDs) over a computer’s high-speed Peripheral Component Interconnect Express (PCIe) bus.

According to IBM, NVMe addresses one of the hottest segments of the storage market. This is being driven by new solutions that, as IBM puts it, span the lifecycle of data from creation to archive.

Specifically, it is fueling major expansion of lower latency and higher throughput for NVMe fabric support across IBM’s storage portfolio. The company’s primary NVMe products introduced include:

  • New NVMe-based Storwize V7000 Gen3
  • NVMe over Fibre Channel across the flash portfolio
  • NVMe over Ethernet across the flash portfolio in 2019
  • NVMe support for IBM Cloud Object Storage in 2019

The last two are an IBM statement of direction, which is IBM’s way of saying it may or may not happen when or as expected.

Ironically, the economics of flash have dramatically reversed. Flash storage now reduces cost as well as boosts performance. Until fairly recently, flash was considered too costly for routine storage needs, something to be used selectively only when the increased performance or efficiency justified the cost. Thank you, Moore’s Law and the economics of mass scale.

Maybe of greater interest to DancingDinosaur readers managing mainframe data centers are the improvements to the DS8000 storage lineup. The IBM DS8880F is designed to deliver extreme performance, uncompromised availability, and deep integration with IBM Z. It remains the primary storage system supporting mainframe-based IT infrastructure. Furthermore, the new custom flash provides up to double the maximum flash capacity in the same footprint. An update to the zHyperLink solution also speeds application performance by significantly reducing both write and read latency.

In addition, the DS8880F offers:

  • Up to 2x maximum flash capacity
  • New 15.36TB custom flash
  • Up to 8 PB of physical capacity in the same physical space
  • Improved performance for zHyperLink connectivity
  • 2X lower write latency than High Performance FICON
  • 10X lower read latency

Also included is the next generation of High-Performance Flash Enclosures (HPFE Gen2). The DS8880F family delivers extremely low application response times, which can accelerate core transaction processes while expanding business operations into nextgen applications using AI to extract value from data. (See above, Spectrum Discover.)

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at technologywriter.com.

GAO Blames Z for Government Inefficiency

October 19, 2018

Check out the GAO report from May 2016 here. The Feds spent more than 75 percent of the total amount budgeted for information technology (IT) for fiscal year 2015 on operations and maintenance (O&M). In a related report, the IRS reported it used assembly language code and COBOL, both developed in the 1950s, for IMF (the Individual Master File) and IDRS (the Integrated Data Retrieval System). Unfortunately, the GAO uses the word “mainframe” to lump outdated UNISYS mainframes together with the modern, supported, and actively developed IBM Z, notes Ross Mauri, IBM general manager, Z systems.

Mainframes-mobile in the cloud courtesy of Compuware

The GAO repeatedly used “mainframe” to refer to outdated UNISYS mainframes alongside the latest advanced IBM Z mainframes. COBOL, too, is backed by active skills and training programs at many institutions and receives investment across many industries. In addition to COBOL, the IBM z14 also runs Java, Swift, Go, Python, and other open languages to enable modern application enhancement and development. Does the GAO know that?

In a recent report, the GAO recommends moving to supported modern hardware. IBM agrees. The Z, however, does not expose mainframe investments to a rise in procurement and operating costs, nor to skilled staff issues, Mauri continued.

Three investments the GAO reviewed in the operations and maintenance category clearly appear as legacy investments facing significant risks due to their reliance on obsolete programming languages, outdated hardware, and a shortage of staff with critical skills. For example, the IRS reported that it used assembly language code and COBOL (both developed in the 1950s) for IMF and IDRS. What are these bureaucrats smoking?

The GAO also seems confused over the Z and the cloud. IBM Cloud Private is designed to run on Linux-based Z systems to take full advantage of the cloud through open containers while retaining the inherent benefits of Z hardware—security, availability, scalability, reliability; all the ‘ities’ enterprises have long relied on the Z for. The GAO seems unaware that the Z’s automatic pervasive encryption immediately encrypts everything at rest or in transit. Furthermore, the GAO routinely treats COBOL as a deficiency while ISVs and other signatories of the Open Letter consider it a modern, optimized, and actively supported programming language.

The GAO apparently isn’t even aware of IBM Cloud Private. IBM Cloud Private is compatible with leading IT systems manufacturers and has been optimized for IBM Z. All that you need to get started with the cloud is the starter kit available for IBM OpenPOWER LC (Linux) servers, enterprise Power Systems, and Hyperconverged Systems powered by Nutanix. You don’t even need a Z; just buy a low cost OpenPOWER LC (Linux) server online and configure it as desired.

Here is part of the letter that Compuware sent to the GAO, Federal CIOs, and members of Congress. It’s endorsed by several dozen members of the IT industry. The full letter is here:

In light of a June 2018 GAO report to the Internal Revenue Service suggesting the agency’s mainframe- and COBOL-based systems present significant risks to tax processing, we the mainframe IT community—developers, scholars, influencers and inventors—urge the IRS and other federal agencies to:

  • Reinvest in and modernize the mainframe platform and the mission-critical applications which many have long relied upon.
  • Prudently consider the financial risks and opportunity costs associated with rewriting and replacing proven, highly dependable mainframe applications, for which no “off-the-shelf” replacement exists.
  • Understand the security and performance requirements of these mainframe applications and data and the risk of migrating to platforms that were never designed to meet such requirements.

The Compuware letter goes on to state: In 2018, the mainframe is still the world’s most reliable, performant and securable platform, providing the lowest cost high-transaction system of record. Regarding COBOL, it notes that since 2017 the IBM z14 has supported COBOL V6.2, which is optimized bi-monthly.

Finally, about attracting new COBOL workers: COBOL is as easy to work with as any other language. In fact, open source Zowe has demonstrated appeal to young techies, providing solutions for development and operations teams to securely manage, control, script, and develop on the mainframe like any other cloud platform. What don’t they get?

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at technologywriter.com.

Secure Containers for the Z

October 11, 2018

What’s all this talk about secure containers? Mainframe data center managers have long used secure containers, only they call them logical partitions (LPARs). Secure service containers must be some x86 thing.

Courtesy: Mainframe Watch Belgium

Writing the first week in Oct., Ross Mauri, General Manager IBM Z, observes: Today’s executives in a digitally empowered world want IT to innovate and deliver outstanding user experiences. But, as you know, this same landscape increases exposure and scrutiny around the protection of valuable and sensitive data. IBM’s answer: new capabilities for the IBM z14 and LinuxONE platforms that handle digital transformation while responding to immediate market needs and delivering effective solutions.

The Secure Service Container hosts container-based applications for hybrid and private cloud workloads on IBM LinuxONE and Z servers as an IBM Cloud Private software solution. This secure computing environment for microservices-based applications can be deployed without requiring code changes to exploit inherent security capabilities. In the process, it provides:

  • Tamper protection during installation time
  • Restricted administrator access to help prevent the misuse of privileged user credentials
  • Automatic encryption of data both in flight and at rest

This differs from an LPAR. According to IBM, LPARs, or logical partitions, are, in practice, equivalent to separate mainframes. This is not trivial power. Each LPAR runs its own operating system. This can be any mainframe operating system; there is no need to run z/OS, for example, in each LPAR. The installation planners also may elect to share I/O devices across several LPARs, but this is a local decision.

The system administrator can assign one or more system processors for the exclusive use of an LPAR. Alternately, the administrator can allow all processors to be used on some or all LPARs. Here, the system control functions (often known as microcode or firmware) provide a dispatcher to share the processors among the selected LPARs. The administrator can specify a maximum number of concurrent processors executing in each LPAR. The administrator can also provide weightings for different LPARs; for example, specifying that LPAR1 should receive twice as much processor time as LPAR2. If the code in one LPAR crashes, it has no effect on the other LPARs. Not sure this is the case with the new microservices containers.
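To make the weighting idea concrete, here is a minimal sketch of the arithmetic the hypervisor targets when dividing a shared processor pool by weight. It is not PR/SM’s actual dispatcher; the LPAR names, weights, and pool size are hypothetical.

```python
# Minimal sketch of LPAR weight arithmetic: each partition's target share of
# the shared processor pool is its weight divided by the sum of all active
# weights. Names, weights, and pool size are hypothetical examples.
lpar_weights = {"LPAR1": 600, "LPAR2": 300, "LPAR3": 100}
shared_processors = 10

total_weight = sum(lpar_weights.values())
for name, weight in lpar_weights.items():
    share = weight / total_weight
    print(f"{name}: {share:.0%} of the pool, about {share * shared_processors:.1f} CPs")
# LPAR1 gets twice LPAR2's processor time, matching the example above.
```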

Mauri tries to make the case for the new containers. These containers allow applications and data to inherit a layer of security with Secure Service Containers that, in turn, inherit the embedded capabilities at the core of IBM Z and LinuxONE to help hyper protect your data, guard against internal and external threats, and simplify your data compliance initiatives. DancingDinosaur does not know what “hyper protect” means in this context. Sounds like marketing-speak.

Also Mauri explains that IBM Secure Service Containers help protect the privacy of sensitive company data and customer data from administrators with elevated credentials. At the same time they allow development teams to use cutting-edge container-based technologies to deploy new or existing containerized applications.

In fact, IBM continues the explanation by saying it selected this unique and class-leading data privacy assurance technology to allow applications and data to inherit yet another layer of security through Secure Service Containers. “We’ve embedded capabilities at the core of IBM Z and LinuxONE that help hyper protect your data, guard against internal and external threats, and simplify your data compliance initiatives.” IBM does like the hyper protect phrase; wish DancingDinosaur knew what it meant. A Google search comes up with Hyper Protect Crypto Services, which IBM concedes is still in an experimental phase, so, in fact, it doesn’t mean anything yet. Maybe in the future.

IBM Secure Service Containers help protect the privacy of sensitive company and customer data from administrators with elevated credentials—a serious risk—while, at the same time, allowing development teams to use cutting-edge container-based technologies to deploy new or existing containerized applications. OK, DancingDinosaur can accept this but it seems only marginally different from what you can do with good ole LPARs. Maybe the difference only becomes apparent when you attempt to build the latest generation microservices-based apps.

If your choice comes down to secure service containers or LPARs, guess you need to look at what kind of apps you want to deploy. All DancingDinosaur can add is LPARs are powerful, known, and proven technology.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at technologywriter.com.

