Posts Tagged ‘LinuxONE’

IBM’s Multicloud Manager for 2nd Gen Hybrid Clouds

November 15, 2018

A sign that IBM is serious about hybrid cloud is its mid-October announcement of its new Multicloud Manager, which promises an operations console for companies as they increasingly incorporate public and private cloud capabilities with existing on-premises business systems. Meanwhile, research from Ovum suggests that 80 percent of mission-critical workloads and sensitive data are still running on business systems located on-premises.

$1 Trillion or more hybrid cloud market by 2020

Still, the potential of the hybrid cloud market is huge; IBM projects $1 trillion or more within just a few years. If IBM found itself crowded out by the big hyperscalers—AWS, Google, Microsoft—in the initial rush to the cloud, it is hoping to leapfrog into the top ranks with the next generation of cloud: hybrid clouds.

And this is exactly what Red Hat and IBM hope to achieve together. Both believe they will be well positioned to accelerate hybrid multi-cloud adoption by tapping each company’s leadership in Linux, containers, Kubernetes, multi-cloud management, and automation, as well as by leveraging IBM’s core of large enterprise customers and bringing them into the hybrid cloud.

The result should be a mixture of on-premises, off-premises, and hybrid clouds, based on open standards, flexible modern security, and consistent hybrid management across all of them.

The company’s new Multicloud Manager runs on its IBM Cloud Private platform, which is based on Kubernetes container orchestration technology, described as an open-source approach for ‘wrapping’ apps in containers, and thereby making them easier and cheaper to manage across different cloud environments – from on-premises systems to the public cloud. With Multicloud Manager, IBM is extending those capabilities to interconnect various clouds, even from different providers, creating unified systems designed for increased consistency, automation, and predictability. At the heart of the new solution is a first-of-a-kind dashboard interface for effectively managing thousands of Kubernetes applications and spanning huge volumes of data regardless of where in the organization they are located.
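
To make the multi-cloud management idea concrete, here is a minimal Python sketch of the kind of multi-cluster housekeeping a tool like Multicloud Manager automates at scale: it walks several Kubernetes cluster contexts from a local kubeconfig and inventories the deployments in each. This is a generic illustration using the open source Kubernetes client, not IBM’s code, and the context names are assumptions.

  # Generic sketch: inventory Kubernetes deployments across several
  # cluster contexts (e.g., a private cloud plus two public clouds).
  # Context names are hypothetical; this is not Multicloud Manager code.
  from kubernetes import client, config

  CONTEXTS = ["onprem-icp", "public-cloud-a", "public-cloud-b"]

  def inventory(context_name):
      # Build an API client for one cluster from the local kubeconfig.
      api = config.new_client_from_config(context=context_name)
      apps = client.AppsV1Api(api_client=api)
      for dep in apps.list_deployment_for_all_namespaces().items:
          print(context_name, dep.metadata.namespace, dep.metadata.name,
                dep.status.ready_replicas or 0, "of", dep.spec.replicas)

  for ctx in CONTEXTS:
      inventory(ctx)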

Adds Arvind Krishna, Senior Vice President, IBM Hybrid Cloud: “With its open source approach to managing data and apps across multiple clouds,” an enterprise can move beyond the productivity economics of renting computing power to fully leveraging the cloud to invent new business processes and enter new markets.

This new solution should become a driver for modernizing businesses. As IBM explains: if a car rental company uses one cloud for its AI services, another for its bookings system, and continues to run its financial processes using on-premises computers at offices around the world, IBM Multicloud Manager can span the company’s multiple computing infrastructures, enabling customers to book a car more easily and quickly using the company’s mobile app.

Notes IDC’s Stephen Elliot, Program Vice President:  “The old idea that everything would move to the public cloud never happened.” Instead, you need multicloud capabilities that reduce the risks and deliver more automation throughout these cloud journeys.

Just last month IBM announced that a number of companies are starting down the hybrid cloud path by adopting IBM Cloud Private. These include:

  • New Zealand Police (NZP), which is exploring how IBM Cloud Private and Kubernetes containers can help modernize its existing systems as well as quickly launch new services.
  • Aflac Insurance, which is adopting IBM Cloud Private to enhance the efficiency of its operations and speed up the development of new products and services.
  • Kredi Kayıt Bürosu (KKB), which provides the national cloud infrastructure for Turkey’s finance industry. Using IBM Cloud Private, KKB expects to drive innovation across its financial services ecosystem.

Operating in a multi-cloud environment is becoming the new reality for most organizations, while vendors rush to sell multi-cloud tools: not just IBM’s Multicloud Manager but HPE OneSphere, RightScale Multi-Cloud Platform, Datadog Cloud Monitoring, Ormuco Stack, and more.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at technologywriter.com.

BMC’s AMI Brings Machine Learning to Z

November 9, 2018

On Oct. 18 BMC announced AMI (Automated Mainframe Intelligence), a capability that promises higher-performing, self-managing mainframe environments to meet the growing demands created by digital business growth, and to do it through AI-like capabilities.

AMI delivers a self-managing mainframe

BMC’s AMI solutions combine built-in domain expertise, machine learning, intelligent automation, and predictive analytics to help enterprises automatically manage, diagnose, heal, secure, and optimize mainframe processes. BMC doesn’t actually call it AI, but it attributes all the AI buzzwords to it.

BMC cited Gartner saying that by 2020, thirty percent of data centers that fail to apply artificial intelligence and machine learning effectively in support of enterprise business will cease to be operationally and economically viable. BMC is tapping machine learning, in conjunction with its analysis of dozens of KPIs and millions of metrics a day, to proactively identify, predict, and fix problems before they become an issue. In the process, BMC intends to relieve the burden on enterprise teams and free up IT staff to work on high-value initiatives by removing manual processes through intelligent automation. Ultimately, the company hopes to keep its customers, as Gartner put it, operationally and economically viable.
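
BMC has not published AMI’s algorithms, but the flavor of “identify and predict problems before they become an issue” can be suggested with something as simple as a rolling z-score over a KPI stream. A minimal, purely illustrative Python sketch with invented thresholds:

  # Illustrative only: flag KPI anomalies with a rolling z-score.
  # BMC has not published AMI's actual models; thresholds are invented.
  from collections import deque
  from statistics import mean, stdev

  def watch(metric_stream, window=60, threshold=3.0):
      history = deque(maxlen=window)
      for value in metric_stream:
          if len(history) >= 10 and stdev(history) > 0:
              z = (value - mean(history)) / stdev(history)
              if abs(z) > threshold:
                  yield value, z  # hand off to automated remediation
          history.append(value)

  # Synthetic CPU-utilization stream with one spike at the end.
  data = [50 + (i % 5) for i in range(100)] + [95]
  for value, z in watch(iter(data)):
      print(f"anomaly: value={value}, z={z:.1f}")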

In effect, mainframe-based organizations can benefit from BMC’s expertise in collecting deep and broad z/OS operational metrics from a variety of industry data sources, built-in world-class domain expertise, and multivariate analysis.

A lot of this already is available in the Z itself through a variety of tools, particularly zAware, described by IBM as a firmware feature consisting of an integrated set of analytic applications that monitor software running on z/OS and model normal system behavior. Its pattern recognition techniques identify unexpected messages, providing rapid diagnosis of problems caused by system changes.

But BMC is adding two new ingredients that should take this further: Autonomous Solutions and Enterprise Connectors.

Autonomous Solutions promise to enable IT operations that automatically anticipate and repair performance degradations and disruptive outages before they occur, without manual intervention. This set of intelligent, integrated solutions encompasses BMC AMI for Security Management, BMC AMI for DevOps, BMC AMI for Performance and Availability Management, and BMC AMI Cost and Capacity Management.

Enterprise Connectors move business-critical data from the mainframe to the entire enterprise and simplify the enterprise-wide management of business applications. The connectors promise a complete view of enterprise data by streaming mainframe metrics and related information in real time to a variety of data receivers, including leading Security Information and Event Management (SIEM) solutions such as Splunk, IBM QRadar, ArcSight, LogRhythm, McAfee Enterprise Security Manager, and others. Note: BMC’s AMI Data Extractor for IMS solution is available now; additional extractors will be available early in 2019.
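
What “streaming metrics to a SIEM” typically looks like on the wire can be sketched against Splunk’s standard HTTP Event Collector (HEC) interface; the host, token, and metric fields below are placeholders, not details of BMC’s connectors:

  # Sketch: post a mainframe metric event to Splunk's HTTP Event
  # Collector. Host, token, and fields are placeholders, not BMC code.
  import requests

  HEC_URL = "https://splunk.example.com:8088/services/collector/event"
  HEC_TOKEN = "00000000-0000-0000-0000-000000000000"  # placeholder

  def send_event(source, metrics):
      payload = {"event": metrics, "source": source, "sourcetype": "_json"}
      resp = requests.post(HEC_URL, json=payload,
                           headers={"Authorization": f"Splunk {HEC_TOKEN}"},
                           timeout=10)
      resp.raise_for_status()

  send_event("zos.ims", {"region": "IMSP", "tran_rate": 1240, "cpu_pct": 78})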

To bolster its mainframe business further, BMC in early October announced the acquisition of the assets of CorreLog, Inc., which provides real-time security management to mainframe customers. Combined with BMC’s offerings in systems, data, and cost management, the merged operation enables end-to-end solutions to ensure the availability, performance, and security of mission-critical applications and data residing on today’s modern mainframe. CorreLog brings capabilities for security and compliance auditing professionals who need more advanced network and system security and improved adherence to key industry standards for protecting data.

The combination of CorreLog’s security offerings with BMC’s mainframe capabilities provides organizations with enhanced security capabilities including:

  • Real-time visibility into security events from mainframe environments, delivered directly into SIEM/SOC systems
  • A wide variety of security alerts, including for IBM IMS and Db2
  • Event log correlation, which provides up-to-the-second security notifications for faster remediation in the event of a breach
  • A 360-degree view of mainframe threat activity

The CorreLog deal is expected to close later this quarter.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at technologywriter.com.

IBM Takes Red Hat for $34 Billion

November 2, 2018

“The acquisition of Red Hat is a game-changer. It changes everything about the cloud market,” declared Ginni Rometty, IBM Chairman. At a cost of $34 billion, 10x Red Hat’s gross revenue, it had better be a game changer. See IBM’s announcement earlier this week here.

IBM Multicloud Manager Dashboard

IBM has been hot on the tail of the top three cloud hyperscalers—AWS, Google, and Microsoft/Azure. Will this change the game? Your guess is as good as anyone’s.

The hybrid cloud market appears to be IBM’s primary target. As the company put it: “IBM will become the world’s #1 hybrid cloud provider, offering companies the only open cloud solution that will unlock the full value of the cloud for their businesses.” IBM projects the value of the hybrid cloud market at $1 trillion within a few years!

Most companies today are only 20 percent along their cloud journey, renting compute power to cut costs. The next chapter of the cloud, noted Rometty, requires shifting business applications to hybrid cloud, extracting more data, and optimizing every part of the business.

Nobody has a lock on this market yet. Not IBM, not Red Hat, not VMware. But one thing seems clear: whoever wins will involve open source. Red Hat, with $3 billion in open source revenue, has proven that open source can pay. The only question is how quickly it can pay back IBM’s $34 billion bet.

What’s needed is something that promotes data portability and applications across multiple clouds, data security in a multi-cloud environment, and consistent cloud management. This is the Red Hat and IBM party line.  Both believe they will be well positioned to address these issues to accelerate hybrid multi-cloud adoption. To succeed at this, the new entity will have to tap their leadership in Linux, containers, Kubernetes, multi-cloud management, and automation.

IBM first brought Linux to the Z 20 years ago, making IBM an early advocate of open source, collaborating with Red Hat to help grow enterprise-class Linux.  More recently the two companies worked to bring enterprise Kubernetes and hybrid cloud solutions to the enterprise. These innovations have become core technologies within IBM’s $19 billion hybrid cloud business.

The initial announcement made the point that Red Hat will join IBM’s Hybrid Cloud team as a distinct unit, as IBM described it, preserving the independence and neutrality of Red Hat’s open source development heritage and commitment, current product portfolio, go-to-market strategy, and unique development culture. Also, Red Hat will continue to be led by Jim Whitehurst and Red Hat’s current management team.

That camaraderie lasted until the Q&A following the announcement, when a couple of disagreements arose over different answers on relatively trivial points. Are you surprised? Let’s be clear: nobody spends $34 billion on a $3 billion asset and gives it a completely free hand. You can bet IBM will be calling the shots on everything it feels is important. Would you do less?

Dharmesh Thakker, a contributor to Forbes, focused more on Red Hat’s OpenShift family of development software. These tools make software developers more productive and are helping transform how software is created and implemented across most enterprises today. So “OpenShift is likely the focus of IBM’s interest in Red Hat,” he observes.

A few years ago, he continued, the pendulum seemed to shift from companies deploying more-traditional, on-premises datacenter infrastructure to using public cloud vendors, mostly Amazon. In the last few years, we’ve seen most mission-critical apps inside companies continue to run on a private cloud, but modernized by agile tools and microservices to speed innovation. Private cloud represents 15-20% of datacenter spend, Thakker reports, but the combo of private plus one or more public clouds (hybrid cloud) is here to stay, especially for enterprises. Red Hat’s OpenShift technology enables on-premises, private cloud deployments, giving IBM the ability to play in the hybrid cloud.

IBM isn’t closing this deal until well into 2019; expect to hear more about this in the coming months.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at technologywriter.com.

IBM Refreshes its Storage for Multi-Cloud

October 26, 2018

IBM has refreshed almost its entire storage lineup virtually end to end: storage services to infrastructure and cloud to storage hardware, especially flash, to management. The Oct. 23 announcement covers a wide array of storage products.

IBM Spectrum Discover

Among the most interesting of the announcements was IBM Spectrum Discover. The product automatically enhances and then leverages metadata to augment discovery capabilities. It pulls insight from unstructured data for analytics, governance, and optimization to improve and accelerate large-scale analytics, improve data governance, and enhance storage economics. At a time when data is growing at 30 percent per year, finding the right data fast for analytics and AI can be slow and tedious. IBM Spectrum Discover rapidly ingests, consolidates, and indexes metadata for billions of files and objects from your data, enabling you to more easily gain insights from such massive amounts of unstructured data.
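
Spectrum Discover is a packaged product, but the underlying pattern, harvesting metadata into a searchable catalog so analytics never has to crawl the data itself, is easy to sketch. A toy Python version over a local directory tree (the path and query are invented examples):

  # Toy sketch of the metadata-cataloging pattern: walk a tree, index
  # file attributes in SQLite, then query the index instead of the data.
  # The directory and query are invented examples.
  import os
  import sqlite3

  db = sqlite3.connect("catalog.db")
  db.execute("""CREATE TABLE IF NOT EXISTS files
                (path TEXT PRIMARY KEY, bytes INTEGER, mtime REAL)""")

  for root, _dirs, names in os.walk("/data/unstructured"):
      for name in names:
          path = os.path.join(root, name)
          st = os.stat(path)
          db.execute("INSERT OR REPLACE INTO files VALUES (?, ?, ?)",
                     (path, st.st_size, st.st_mtime))
  db.commit()

  # Newest files over 1 GB, found without touching the data itself.
  for row in db.execute("""SELECT path, bytes FROM files
                           WHERE bytes > 1e9
                           ORDER BY mtime DESC LIMIT 10"""):
      print(row)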

As important as Spectrum Discover is, NVMe may attract more attention, in large part due to the proliferation of flash storage and the insatiable demand for ever-faster performance. NVMe (non-volatile memory express) is the latest host controller interface and storage protocol, created to accelerate the transfer of data between enterprise and client systems and solid-state drives (SSDs) over a computer’s high-speed Peripheral Component Interconnect Express (PCIe) bus.

According to IBM, NVMe addresses one of the hottest segments of the storage market. This is being driven by new solutions that, as IBM puts it, span the lifecycle of data from creation to archive.

Specifically, it is fueling major expansion of lower latency and higher throughput for NVMe fabric support across IBM’s storage portfolio. The company’s newly introduced NVMe products include:

  • New NVMe-based Storwize V7000 Gen3
  • NVMe over Fibre Channel across the flash portfolio
  • NVMe over Ethernet across the flash portfolio in 2019
  • IBM Cloud Object Storage support in 2019

The last two are an IBM statement of direction, which is IBM’s way of saying it may or may not happen when or as expected.

Ironically, the economics of flash have dramatically reversed: flash storage now reduces cost as well as boosts performance. Until fairly recently, flash was considered too costly for routine storage needs, something to be used selectively only when the increased performance or efficiency justified its cost. Thank you, Moore’s Law and the economics of mass scale.

Maybe of greater interest to DancingDinosaur readers managing mainframe data centers are the improvements to the DS8000 storage lineup. The IBM DS8880F is designed to deliver extreme performance, uncompromised availability, and deep integration with IBM Z. It remains the primary storage system supporting mainframe-based IT infrastructure. Furthermore, the new custom flash provides up to double the maximum flash capacity in the same footprint. An update to the zHyperLink solution also speeds application performance by significantly reducing both write and read latency.

Designed to provide top performance for mission-critical applications, the DS8880F is based on the same fundamental system architecture as IBM Watson and, IBM explains, forms a three-tiered architecture that balances system resources for optimal throughput.

In addition, the DS8880F offers:

  • Up to 2x maximum flash capacity
  • New 15.36TB custom flash
  • Up to 8 PB of physical capacity in the same physical space
  • Improved performance for zHyperLink connectivity
  • 2X lower write latency than High Performance FICON
  • 10X lower read latency

And, with the next generation of High-Performance Flash Enclosures (HPFE Gen2) included, the DS8880F family delivers extremely low application response times, which can accelerate core transaction processes while expanding business operations into next-gen applications that use AI to extract value from data (see Spectrum Discover above).

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at technologywriter.com.

GAO Blames Z for Government Inefficiency

October 19, 2018

Check out the GAO report from May 2016 here. The Feds spent more than 75 percent of the total amount budgeted for information technology (IT) for fiscal year 2015 on operations and maintenance (O&M). In a related report, the IRS reported it used assembly language code and COBOL, both developed in the 1950s, for IMF and IDRS. Unfortunately, the GAO’s use of the word “mainframe” conflates outdated UNISYS mainframes with the modern, supported, and actively developed IBM Z, notes Ross Mauri, IBM general manager, Z systems.

Mainframes-mobile in the cloud courtesy of Compuware

The GAO repeatedly used “mainframe” to refer to outdated UNISYS mainframes alongside the latest advanced IBM Z mainframes. COBOL, too, remains the subject of active skills and training programs at many institutions and receives investment across many industries. In addition to COBOL, the IBM z14 also runs Java, Swift, Go, Python, and other open languages to enable modern application enhancement and development. Does the GAO know that?

In a recent report, the GAO recommends moving to supported modern hardware. IBM agrees. The Z, however, does not expose mainframe investments to a rise in procurement and operating costs, nor to skilled staff issues, Mauri continued.

Three investments the GAO reviewed in operations and maintenance clearly appear as legacy investments facing significant risks due to their reliance on obsolete programming languages, outdated hardware, and a shortage of staff with critical skills. For example, the IRS reported that it used assembly language code and COBOL (both developed in the 1950s) for IMF and IDRS. What are these bureaucrats smoking?

The GAO also seems confused over the Z and the cloud. IBM Cloud Private is designed to run on Linux-based Z systems to take full advantage of the cloud through open containers while retaining the inherent benefits of Z hardware—security, availability, scalability, reliability; all the -ities enterprises have long relied on the Z for. The GAO seems unaware that the Z’s automatic pervasive encryption immediately encrypts everything at rest or in transit. Furthermore, the GAO routinely treats COBOL as a deficiency, while ISVs and other signatories of the Open Letter consider it a modern, optimized, and actively supported programming language.

The GAO apparently isn’t even aware of IBM Cloud Private. IBM Cloud Private is compatible with leading IT systems manufacturers and has been optimized for IBM Z. All that you need to get started with the cloud is the starter kit available for IBM OpenPOWER LC (Linux) servers, enterprise Power Systems, and Hyperconverged Systems powered by Nutanix. You don’t even need a Z; just buy a low cost OpenPOWER LC (Linux) server online and configure it as desired.

Here is part of the letter that Compuware sent to the GAO, Federal CIOs, and members of Congress. It’s endorsed by several dozen members of the IT industry. The full letter is here:

In light of a June 2018 GAO report to the Internal Revenue Service suggesting the agency’s mainframe- and COBOL-based systems present significant risks to tax processing, we the mainframe IT community—developers, scholars, influencers and inventors—urge the IRS and other federal agencies to:

  • Reinvest in and modernize the mainframe platform and the mission-critical applications which many have long relied upon.
  • Prudently consider the financial risks and opportunity costs associated with rewriting and replacing proven, highly dependable mainframe applications, for which no “off-the-shelf” replacement exists.
  • Understand the security and performance requirements of these mainframe applications and data and the risk of migrating to platforms that were never designed to meet such requirements.

The Compuware letter goes on to state: In 2018, the mainframe is still the world’s most reliable, performant, and securable platform, providing the lowest cost high-transaction system of record. Regarding COBOL, it notes that since 2017 the IBM z14 has supported COBOL V6.2, which is optimized bi-monthly.

Finally, about attracting new COBOL workers: COBOL is as easy to work with as any other language. In fact, open source Zowe has demonstrated appeal to young techies, providing solutions for development and operations teams to securely manage, control, script, and develop on the mainframe like any other cloud platform. What don’t they get?
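
Zowe’s own CLI is Node-based, but the “script the mainframe like any other cloud platform” point can be shown with plain Python against the z/OSMF REST jobs interface that Zowe builds on. A hedged sketch; the host and credentials are placeholders:

  # Sketch: submit JCL through the z/OSMF REST jobs API, the kind of
  # REST plumbing Zowe builds on. Host and credentials are placeholders.
  import requests

  ZOSMF = "https://zosmf.example.com"
  JCL = "//HELLO   JOB (ACCT),'SAMPLE'\n//STEP1  EXEC PGM=IEFBR14\n"

  resp = requests.put(f"{ZOSMF}/zosmf/restjobs/jobs",
                      data=JCL,
                      headers={"Content-Type": "text/plain",
                               "X-CSRF-ZOSMF-HEADER": "true"},
                      auth=("ibmuser", "secret"))  # placeholder
  resp.raise_for_status()
  print(resp.json()["jobid"])  # e.g., JOB00123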

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at technologywriter.com.

Secure Containers for the Z

October 11, 2018

What’s all this talk about secure containers? Mainframe data center managers have long used secure containers, only they call them logical partitions (LPARs). Secure service containers must be some x86 thing.

Courtesy: Mainframe Watch Belgium

Writing the first week in Oct., Ross Mauri, General Manager IBM Z, observes: Today’s executives in a digitally empowered world want IT to innovate and deliver outstanding user experiences. But, as you know, this same landscape increases exposure and scrutiny around the protection of valuable and sensitive data. IBM’s answer: new capabilities for the IBM z14 and LinuxONE platforms that handle digital transformation while responding to immediate market needs and delivering effective solutions.

The Secure Service Container hosts container-based applications for hybrid and private cloud workloads on IBM LinuxONE and Z servers as an IBM Cloud Private software solution. This secure computing environment for microservices-based applications can be deployed without requiring code changes to exploit the inherent security capabilities. In the process, it provides:

  • Tamper protection during installation time
  • Restricted administrator access to help prevent the misuse of privileged user credentials
  • Automatic encryption of data both in flight and at rest

This differs from an LPAR. According to IBM, LPARs (logical partitions) are, in practice, equivalent to separate mainframes. This is not trivial power. Each LPAR runs its own operating system, which can be any mainframe operating system; there is no need to run z/OS, for example, in each LPAR. The installation planners also may elect to share I/O devices across several LPARs, but this is a local decision.

The system administrator can assign one or more system processors for the exclusive use of an LPAR. Alternatively, the administrator can allow all processors to be used on some or all LPARs. Here, the system control functions (often known as microcode or firmware) provide a dispatcher to share the processors among the selected LPARs. The administrator can specify a maximum number of concurrent processors executing in each LPAR. The administrator can also provide weightings for different LPARs; for example, specifying that LPAR1 should receive twice as much processor time as LPAR2. If the code in one LPAR crashes, it has no effect on the other LPARs. Not sure this is the case with the new microservices containers.
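
The weighting arithmetic itself is simple: each LPAR’s share of the shared processor pool is its weight divided by the sum of all active weights. A toy sketch (the LPAR names and weights are invented) that reproduces the LPAR1-gets-twice-LPAR2 example:

  # Toy sketch of LPAR weight-based sharing: share = weight / total.
  # LPAR names and weights are invented for illustration.
  def entitlements(weights, shared_processors):
      total = sum(weights.values())
      return {lpar: shared_processors * w / total
              for lpar, w in weights.items()}

  # LPAR1 weighted twice as heavily as LPAR2, per the example above.
  print(entitlements({"LPAR1": 200, "LPAR2": 100, "LPAR3": 100}, 8))
  # -> {'LPAR1': 4.0, 'LPAR2': 2.0, 'LPAR3': 2.0}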

Mauri tries to make the case for the new containers: applications and data inherit a layer of security from Secure Service Containers, which, in turn, inherit the capabilities embedded at the core of IBM Z and LinuxONE to help hyper protect your data, guard against internal and external threats, and simplify your data compliance initiatives. DancingDinosaur does not know what “hyper protect” means in this context. Sounds like marketing-speak.

Also Mauri explains that IBM Secure Service Containers help protect the privacy of sensitive company data and customer data from administrators with elevated credentials. At the same time they allow development teams to use cutting-edge container-based technologies to deploy new or existing containerized applications.

In fact, IBM continues the explanation by saying it selected this unique and class-leading data privacy assurance technology to allow applications and data to inherit yet another layer of security through Secure Service Containers. “We’ve embedded capabilities at the core of IBM Z and LinuxONE that help hyper protect your data, guard against internal and external threats, and simplify your data compliance initiatives.” IBM does like the hyper protect phrase; wish DancingDinosaur knew what it meant. A Google search comes up with Hyper Protect Crypto Services, which IBM concedes is still in an experimental phase, so, in fact, it doesn’t mean anything yet. Maybe in the future.

IBM Secure Service Containers help protect the privacy of sensitive company and customer data from administrators with elevated credentials—a serious risk—while, at the same time, allowing development teams to use cutting-edge container-based technologies to deploy new or existing containerized applications. OK, DancingDinosaur can accept this but it seems only marginally different from what you can do with good ole LPARs. Maybe the difference only becomes apparent when you attempt to build the latest generation microservices-based apps.

If your choice comes down to secure service containers or LPARs, guess you need to look at what kind of apps you want to deploy. All DancingDinosaur can add is LPARs are powerful, known, and proven technology.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at technologywriter.com.

Z Acceptance Grows in BMC 2018 Survey

September 27, 2018

Did Zowe, introduced publicly just a few weeks ago, arrive in the nick of time, like the cavalry rescuing the mainframe from an aging workforce? In the latest BMC annual mainframe survey, released in mid-September, 95% of millennials are positive about the mainframe’s long-term prospects for supporting new and legacy applications, and 63% of respondents were under the age of 50, up ten points from the previous year.

The mainframe veterans, those with 30 or even 40 years of experience, are finally moving out. DancingDinosaur itself has been writing about the mainframe for about 35 years. With two recently married daughters, even a hint of a grandchild on the way will be the signal for me to stop. In the meantime, read on.

Quite interesting from the BMC survey was the very high confidence among executives in the long-term viability of the mainframe. More interesting to DancingDinosaur, however, was the interest in and willingness to use new mainframe technology like Linux and Java, which are not exactly new arrivals to the mainframe world; as we know, change takes time.

For example, 28% of respondents cited the availability of new technology on the mainframe, and their high level of confidence in that new technology, as a strength. And this was before word about Zowe and what it could do to expand mainframe development got out. A little over a quarter of the respondents also cited using legacy apps to create new apps. Organizations are finally waking up to leveraging mainframe assets.

Also interesting was that both executives and technical staff cite application modernization among the top priorities. No complaints there. Similarly, BMC notes executive perception of the mainframe as a long-term solution is the highest in three years, a six point increase over 2016! While cost still remains a concern, BMC continues, the relative merits of the Z outweigh the costs and this perception continues to shift positively year after year.

The mainframe regularly has been slammed over the years as too costly. Yet IBM has steadily lowered the cost of the mainframe in terms of price/performance. Now IBM is talking about applying AI to boost the efficiency, management, and operation of the mainframe data center.

This past May, Gartner published a report confirming the value gains of the latest z14 and LinuxONE machines: The z14 ZR1 delivers an approximately 13% total capacity improvement over the z13’s maximum capacity for traditional z/OS environments. This is due to an estimated 10% boost in processor performance, as well as system design enhancements that improve the multiprocessor ratio. In the same report Gartner recommends including IBM’s LinuxONE Rockhopper II in RFPs for highly scalable, highly secure, Linux-based server solutions.

Several broad trends are coming together to feed the growing positive feelings the mainframe has experienced in recent years as revealed in the latest survey responses. “Absolute security and 24×7 availability have never been more important than now,” observes BMC’s John McKenny, VP of Strategy for ZSolutions Optimization. Here the Z itself plays a big part with pervasive encryption and secure containers.

Other trends, particularly digitization and mobility are “placing incredible pressure on both IT and mainframes to manage a greater volume, variety, and velocity of transactions and data, with workloads becoming more volatile and unpredictable,” said Bill Miller, president of ZSolutions at BMC. The latest BMC mainframe survey confirms executive and IT concerns in that area and the mainframe as an increasingly preferred response.

Bottom line: expect the mainframe to hang around for another decade or two at least. Long before then, DancingDinosaur will be a dithering grandfather playing with grandchildren and unable to get myself off the floor.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com.

LinuxONE is a Bargain

September 21, 2018

LinuxONE may be the best bargain you’ll ever find this season, and you don’t have to wait until Santa brings it down your chimney. Think instead about transformation and digital disruption.  Do you want to be in business in 3 years? That is the basic question that faces every organization that exists today, writes Kat Lind, Chief Systems Engineer, Solitaire Interglobal Ltd, author of the white paper Scaling the Digital Mountain.

Then there is the Robert Frances Group’s Top 10 Reasons to Choose LinuxONE. DancingDinosaur won’t rehash all ten. Instead, let’s selectively pick a few, starting with the first one, Least Risk Solution, which pretty much encapsulates the LinuxONE story. It reduces business, compliance, financial, operations, and project risks. Its availability, disaster recovery, scalability, and security features minimize the business and financial exposures. In addition to pervasive encryption it offers a range of security capabilities often overlooked or downplayed, including logical partition (LPAR) isolation and secure containers.

It is a Z dedicated to Linux, unlike the z13 or z14 z/OS machines, which also run Linux but not as easily or efficiently. As the Robert Frances Group noted, it also handles Java, Python, and other languages, and tools like Hadoop, Docker and other containers, Chef, Puppet, KVM, multiple Linux distributions, open source, and more. It can be used in a traditional legacy environment or as the platform of choice for cloud hosting. LinuxONE supports tools that enable DevOps similar to those on x86 servers.

And LinuxONE delivers world class performance. As the Robert Frances Group puts it: LinuxONE is capable of driving processor utilization to virtually 100% without a latency impact, performance instabilities, or performance penalties. In addition, LinuxONE uses the fastest commercially available processors, running at 5.2GHz, offloads I/O to separate processors enabling the main processors to concentrate on application workloads, and enables much more data in memory, up to 32TB.

In addition, you can run thousands of virtual machine instances on a single LinuxONE server. The cost benefit of this is astounding compared to managing the equivalent number of x86 servers. The added labor cost alone would break your budget.

In terms of security, LinuxONE is a no brainer. Adds Lind from Solitaire: Failure in this area erodes an organization’s reputation faster than any other factor. The impact of breaches on customer confidence and follow-on sales has been tracked, and an analysis of that data shows that after a significant incursion, the average customer fall-off exceeds 41%, accompanied by a long-running drop in revenues. Recovery involves a significant outlay of service, equipment, and personnel expenses to reestablish a trusted position, as much as 18.6x what it cost to get the customer initially. And Lind doesn’t even begin to mention the impact when the compliance regulators and lawyers start piling on. Anything but the most minor security breach will put you out of business faster than the three years Lind asked about at the top of this piece.

But all the above is just talking in terms of conventional data center thinking. DancingDinosaur has put his children through college doing TCO studies around these issues. Lind now turns to something mainframe data centers are just beginning to think about: digital disruption. The strategy and challenges of successfully navigating the chaos of cyberspace translate into a need for information on both business and security and how they interact.

Digital business and security go hand in hand, so any analysis has to include extensive correlation between the two. Using data from volumes of customer experience responses, IT operational details, business performance, and security, Solitaire examined the positioning of IBM LinuxONE in the digital business market. The results of that examination boil down to three areas: security, agility, and cost. These areas incorporate the primary objectives that organizations operating in cyberspace today regard as the most relevant. And guess who wins any comparative platform analysis, Lind concludes: LinuxONE.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com.

Hybrid Cloud to Streamline IBM Z

June 27, 2018

2020 is the year, according to IDC,  when combined IT infrastructure spending on private and public clouds will eclipse spending on traditional data centers. The researcher predicts the public cloud will account for 31.68 percent of IT infrastructure spending in 2020, while private clouds will take a 19.82 percent slice of the spending pie, totaling more than half (51.5 percent) of all infrastructure spending for the first time, with the rest going to traditional data centers.

Source: courtesy of IBM

There is no going back. By 2021 IDC expects the balance to continue tilting further toward the cloud, with combined public and private cloud dollars making up 53.15 percent of infrastructure spending. Enterprise spending on cloud, according to IDC, will grow to over $530 billion as over 90 percent of enterprises use a mix of multiple cloud services and platforms, both on and off premises.

Technology customers want choices. They want to choose their access device, interface, deployment options, cost and even their speed of change. Luckily, today’s hybrid age enables choices. Hybrid clouds and multi-cloud IT offer the most efficient way of delivering the widest range of customer choices.

For Z shops, this shouldn’t come as a complete surprise. IBM has been preaching the hybrid gospel for years, at least since x86 machines began making significant inroads into its platform business. The basic message has always been the same: center the core of your business on the mainframe and then build around it, using x86 if you must, but now try LinuxONE and hybrid clouds, both public and on-premises.

For many organizations a multi-cloud strategy using two or more different clouds, public or on-premises, offers the fastest and most efficient way of delivering the maximum in choice, regardless of your particular strategy. For example, one cloud might serve compute while another serves storage. Or, an organization might use different clouds for different functions—a cloud for finance, another for R&D, and yet another for DevOps.

The reasoning behind a multi-cloud strategy can also vary. Reasons can range from risk mitigation, to the need for specialized functionality, to cost management, analytics, security, flexible access, and more.

Another reason for a hybrid cloud strategy, which should resonate with DancingDinosaur readers, is modernizing legacy systems. According to Gartner, by 2020, every dollar invested in digital business innovation will require enterprises to spend at least three times that to continuously modernize the legacy application portfolio. In the past, such legacy application portfolios have often been viewed as a problem subjected to large-scale rip-and-replace efforts in desperate, often unsuccessful attempts to salvage them.

With the growth of hybrid clouds, data center managers instead can manage their legacy portfolio as an asset by mixing and matching capabilities from various cloud offerings to execute business-driven modernization. This will typically include microservices, containers, and APIs to leverage maximum value from the legacy apps, which will no longer be an albatross but a valuable asset.

While the advent of multi-clouds or hybrid clouds may appear to complicate an already muddled situation, they actually provide more options and choices as organizations seek the best solution for their needs at their price and terms.

With the Z this may be easier done than it initially sounds. “Companies have lots of records on Z, and the way to get to these records is through APIs, particularly REST APIs,” explains Juliet Candee, IBM Systems Business Continuity Architecture. Start with the IBM Z Hybrid Cloud Architecture. Then, begin assembling catalogs of APIs and leverage z/OS Connect to access popular IBM middleware like CICS. By using z/OS Connect and APIs through microservices, you can break monolithic systems into smaller, more composable and flexible pieces that contain business functions.
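
From the consuming side, “getting to Z records through REST APIs” looks like any other REST call. A hedged Python sketch against a hypothetical z/OS Connect-style endpoint fronting a CICS transaction; the URL, fields, and credentials are invented:

  # Sketch: a microservice calling a hypothetical z/OS Connect-style
  # REST API that fronts a CICS transaction. URL and fields invented.
  import requests

  BASE = "https://zosconnect.example.com:9443"

  def get_account(account_id):
      resp = requests.get(f"{BASE}/banking/accounts/{account_id}",
                          auth=("apiuser", "secret"),  # placeholder
                          timeout=5)
      resp.raise_for_status()
      return resp.json()

  print(get_account("1000123"))  # e.g., {"accountId": ..., "balance": ...}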

Don’t forget LinuxONE, another Z but optimized for Linux and available at a lower cost. With the LinuxONE Rockhopper II, the latest slimmed down model, you can run 240 concurrent MongoDB databases executing a total of 58 billion database transactions per day on a single server. Accelerate delivery of your new applications through containers and cloud-native development tools, with up to 330,000 Docker containers on a single Rockhopper II server. Similarly, lower TCO and achieve a faster ROI with up to 65 percent cost savings over x86. And the new Rockhopper II’s industry-standard 19-inch rack uses 40 percent less space than the previous Rockhopper while delivering up to 60 percent more Linux capacity.
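
The container mechanics behind those numbers are the ordinary Docker ones, just at much larger scale on LinuxONE. A trivial sketch with the Docker SDK for Python (three containers, not 330,000; the names are invented):

  # Trivial sketch: launch a few MongoDB containers via the Docker SDK.
  # On LinuxONE the mechanics are identical; only the scale differs.
  import docker

  client = docker.from_env()
  containers = [
      client.containers.run("mongo:4", detach=True, name=f"mongo-{i}")
      for i in range(3)
  ]
  for c in containers:
      print(c.name, c.status)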

This results in what Candee describes as a new style of building IT that involves much smaller components, which are easier to monitor and debug. Then, connect it all to IBM Cloud on Z using secure Linux containers. This could be a hybrid cloud combining IBM Cloud Private and an assortment of public clouds along with secure zLinux containers as desired.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.

Please note: DancingDinosaur will be away for the first 2 weeks of July. The next piece should appear the week of July 16 unless the weather is unusually bad.

