Posts Tagged ‘Red Hat’

IBM Pushes Hybrid Cloud

December 14, 2018

Between quantum computing, blockchain, and hybrid cloud, IBM is pursuing a pretty ambitious agenda. Of the three, hybrid cloud promises the most immediate payback. Cloud computing is poised to become a “turbocharged engine powering digital transformation around the world,” states a new Forrester report, Predictions 2019: Cloud Computing.

Of course, IBM didn’t wait until 2019. It announced its acquisition of Red Hat at the end of October 2018. DancingDinosaur covered it here a few days later. At that time IBM Chairman Ginni Rometty called the acquisition of Red Hat a game-changer. “It changes everything about the cloud market,” she noted. At a cost of $34 billion, 10x Red Hat’s gross revenue, it had better be a game changer.

Forrester continues, predicting that in 2019 the cloud will reach its more interesting young adult years, bringing innovative development services to enterprise apps rather than just serving up cheaper, temporary servers and storage, which is how it has primarily grown over the past decade. Who hasn’t turned to one or another cloud provider to augment its IT resources as needed, whether for backup, server capacity, or network bandwidth?

As Forrester puts it: The six largest hyperscale cloud leaders — Alibaba, Amazon Web Services [AWS], Google, IBM, Microsoft Azure, and Oracle — will all grow larger in 2019, as service catalogs and global regions expand. Meanwhile, the global cloud computing market, including cloud platforms, business services, and SaaS, will exceed $200 billion in 2019, expanding at more than 20%, the research firm predicts.

Hybrid clouds, which combine two or more cloud providers or platforms, are emerging as the preferred way for enterprises to go. Notes IBM: The digital economy is forcing organizations to a multi-cloud environment. Three of every four enterprises have already implemented more than one cloud. The growth of cloud portfolios in enterprises demands an agnostic cloud management platform — one that not only provides automation, provisioning, and orchestration, but that also monitors trends and usage to prevent outages.

Of course, IBM also offers a solution for this: the company’s Multicloud Manager runs on its IBM Cloud Private platform, which is based on the open source Kubernetes container orchestration technology. Kubernetes ‘wraps’ apps in containers, making them easier and cheaper to manage across different cloud environments, from on-premises systems to the public cloud.
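For readers who want a concrete sense of what that ‘wrapping’ looks like, here is a minimal, hedged sketch using the Kubernetes Python client. The image name, labels, and namespace are hypothetical, and the point is simply that the same deployment object can be submitted unchanged to an on-premises cluster or a public cloud cluster.

```python
# Minimal sketch: deploying a containerized app with the Kubernetes Python client.
# The image, labels, and namespace are hypothetical; any conformant Kubernetes
# cluster (on premises or in a public cloud) accepts the same object.
from kubernetes import client, config

def deploy_sample_app():
    # Loads credentials from the local kubeconfig; which cluster that points to
    # (private or public cloud) does not change the deployment definition.
    config.load_kube_config()

    container = client.V1Container(
        name="sample-app",
        image="registry.example.com/sample-app:1.0",  # hypothetical image
        ports=[client.V1ContainerPort(container_port=8080)],
    )
    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": "sample-app"}),
        spec=client.V1PodSpec(containers=[container]),
    )
    deployment = client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name="sample-app"),
        spec=client.V1DeploymentSpec(
            replicas=3,
            selector=client.V1LabelSelector(match_labels={"app": "sample-app"}),
            template=template,
        ),
    )
    client.AppsV1Api().create_namespaced_deployment(
        namespace="default", body=deployment
    )

if __name__ == "__main__":
    deploy_sample_app()
```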

Along with hybrid clouds, containers are huge in Forrester’s view. Powered by cloud-native open source components and tools, companies will start rolling out their own digital application platforms that will span clouds, include serverless and event-driven services, and form the foundation for modernizing core business apps for the next decade, the researchers observed. Next year’s hottest trend, according to Forrester, will be making containers easier to deploy, secure, monitor, scale, and upgrade. “Enterprise-ready container platforms from Docker, IBM, Mesosphere, Pivotal, Rancher, Red Hat, VMware, and others are poised to grow rapidly,” the researchers noted.

This may not be as straightforward as the researchers imply. Each organization must select for itself which private cloud strategy is most appropriate, they note, and they anticipate greater private cloud structure emerging in 2019. Organizations face three basic private cloud paths: building internally using vSphere sprinkled with developer-focused tools and software-defined infrastructure; having the cloud environment custom-built with converged or hyperconverged software stacks to minimize the tech burden; or building the cloud infrastructure internally with OpenStack, relying on the hard work of their own tech-savvy teams. Am sure there are any number of consultants, contractors, and vendors eager to step in and do this for you.

If you aren’t sure, IBM is offering a number of free trials that you can play with.

As Forrester puts it: Buckle up; for 2019 expect the cloud ride to accelerate.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at technologywriter.com.

IBM Takes Red Hat for $34 Billion

November 2, 2018

“The acquisition of Red Hat is a game-changer. It changes everything about the cloud market,” declared Ginni Rometty, IBM Chairman. At a cost of $34 billion, 10x Red Hat’s gross revenue, it had better be a game changer. See IBM’s announcement earlier this week here.

IBM Multicloud Manager Dashboard

IBM has been hot on the tail of the top three cloud hyperscalers—AWS, Google, and Microsoft/Azure. Will this change the game? Your guess is as good as anyone’s.

The hybrid cloud market appears to be IBM’s primary target. As the company put it: “IBM will become the world’s #1 hybrid cloud provider, offering companies the only open cloud solution that will unlock the full value of the cloud for their businesses.” IBM projects the value of the hybrid cloud market at $1 trillion within a few years!

Most companies today are only 20 percent along their cloud journey, renting compute power to cut costs. The next chapter of the cloud, noted Rometty, requires shifting business applications to hybrid cloud, extracting more data, and optimizing every part of the business.

Nobody has a lock on this market yet. Not IBM, not Red Hat, not VMware. But one thing seems clear: whoever wins, open source will be central. Red Hat, with $3 billion in open source revenue, has proven that open source can pay. The only question is how quickly it can pay back IBM’s $34 billion bet.

What’s needed is something that promotes portability of data and applications across multiple clouds, data security in a multi-cloud environment, and consistent cloud management. This is the Red Hat and IBM party line. Both believe they will be well positioned to address these issues and accelerate hybrid multi-cloud adoption. To succeed at this, the new entity will have to tap the two companies’ leadership in Linux, containers, Kubernetes, multi-cloud management, and automation.

IBM first brought Linux to the Z 20 years ago, making IBM an early advocate of open source, collaborating with Red Hat to help grow enterprise-class Linux.  More recently the two companies worked to bring enterprise Kubernetes and hybrid cloud solutions to the enterprise. These innovations have become core technologies within IBM’s $19 billion hybrid cloud business.

The initial announcement made the point that Red Hat will join IBM’s Hybrid Cloud team as a distinct unit, as IBM described, preserving the independence and neutrality of Red Hat’s open source development heritage and commitment, current product portfolio, go-to-market strategy, and unique development culture. Also, Red Hat will continue to be led by Jim Whitehurst and Red Hat’s current management team.

That camaraderie lasted until the Q&A following the announcement, when a couple of disagreements arose over different answers on relatively trivial points. Are you surprised? Let’s be clear: nobody spends $34 billion on a $3 billion asset and gives it a completely free hand. You can bet IBM will be calling the shots on everything it feels is important. Would you do less?

Dharmesh Thakker, a contributor to Forbes, focused more on Red Hat’s OpenShift family of development software. These tools make software developers more productive and are helping transform how software is created and implemented across most enterprises today. So “OpenShift is likely the focus of IBM’s interest in Red Hat,” he observes.

A few years ago, he continued, the pendulum seemed to shift from companies deploying more traditional, on-premises datacenter infrastructure to using public cloud vendors, mostly Amazon. In the last few years, we’ve seen most mission-critical apps inside companies continue to run on a private cloud, modernized by agile tools and microservices to speed innovation. Private cloud represents 15-20% of datacenter spend, Thakker reports, but the combo of private plus one or more public clouds (hybrid cloud) is here to stay, especially for enterprises. Red Hat’s OpenShift technology enables on-premises, private cloud deployments, giving IBM the ability to play in the hybrid cloud.

IBM isn’t closing this deal until well into 2019; expect to hear more about this in the coming months.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at technologywriter.com.


IBM Expands and Enhances its Cloud Offerings

June 15, 2018

IBM announced 18 new availability zones in North America, Europe, and Asia Pacific to bolster its IBM Cloud business and try to keep pace with AWS, the public cloud leader, and Microsoft. The new availability zones are located in Europe (Germany and UK), Asia-Pacific (Tokyo and Sydney), and North America (Washington, DC and Dallas).

IBM cloud availability zone, Dallas

In addition, organizations will be able to deploy multi-zone Kubernetes clusters across the availability zones via the IBM Cloud Kubernetes Service. This will simplify how they deploy and manage containerized applications and add further consistency to their cloud experience. Furthermore, deploying multi-zone clusters will have minimal impact on performance, with about 2 ms of latency between availability zones.
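As a rough illustration of what “multi-zone” means in practice, the hedged Python sketch below lists a cluster’s worker nodes and groups them by the zone label Kubernetes applies to each node. The label key shown is an assumption (it was the conventional key in this era; newer clusters use topology.kubernetes.io/zone), as is the local kubeconfig context.

```python
# Hedged sketch: inspect which availability zones a cluster's worker nodes span.
# The zone label key is an assumption; adjust it for the cluster's Kubernetes version.
from collections import defaultdict
from kubernetes import client, config

ZONE_LABEL = "failure-domain.beta.kubernetes.io/zone"  # assumed label key

def nodes_by_zone():
    config.load_kube_config()               # reads the local kubeconfig
    nodes = client.CoreV1Api().list_node()  # all nodes in the cluster
    zones = defaultdict(list)
    for node in nodes.items:
        labels = node.metadata.labels or {}
        zones[labels.get(ZONE_LABEL, "unlabeled")].append(node.metadata.name)
    return zones

if __name__ == "__main__":
    for zone, names in sorted(nodes_by_zone().items()):
        print(f"{zone}: {len(names)} node(s)")
```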

An availability zone, according to IBM, is an isolated instance of a cloud inside a data center region. Each zone brings independent power, cooling, and networking to strengthen fault tolerance. While IBM Cloud already operates in nearly 60 locations, the new zones add even more capacity and capability in these key centers. This global cloud footprint becomes especially critical as clients look to gain greater control of their data in the face of tightening data regulations, such as the European Union’s new General Data Protection Regulation (GDPR). See DancingDinosaur June 1, IBM preps z world for GDPR.

In its Q1 earnings IBM reported cloud revenue of $17.7 billion over the past year, up 22 percent over the previous year, but that includes two quarters of outstanding Z revenue that is unlikely to be sustained, at least until the next Z comes out, which is at least a few quarters away. AWS meanwhile reported quarterly revenue up 49 percent to $5.4 billion, while Microsoft recently reported 93 percent growth for Azure revenue.

That leaves IBM trying to catch up the old-fashioned way: by adding new cloud capabilities, enhancing existing cloud capabilities, and attracting more clients to its cloud capabilities however they may be delivered. For example, IBM announced it is the first cloud provider to let developers run managed Kubernetes containers directly on bare metal servers with direct access to GPUs to improve the performance of machine-learning applications, which is critical to any AI effort. Along the same lines, IBM will extend its IBM Cloud Private and IBM Cloud Private for Data and middleware to Red Hat’s OpenShift Container Platform and Certified Containers. Red Hat already is a leading provider of enterprise Linux to Z shops.

IBM has also expanded its cloud offerings to support the widest range of platforms. Not just Z, LinuxONE, and Power9 for Watson, but also x86 and a variety of non-IBM architectures and platforms. Similarly, notes IBM, users have gotten accustomed to accessing corporate databases wherever they reside, but proximity to cloud data centers still remains important. Distance to data centers can have an impact on network performance, resulting in slow uploads or downloads.

Contrary to simplifying things, the proliferation of more and different types of clouds and cloud strategies complicates an organization’s cloud approach. Already today, companies are managing complex, hybrid public-private cloud environments. At the same time, eighty percent of the world’s data is sitting on private servers. It just is not practical or even permissible in some cases to move all the data to the public cloud. Other organizations run very traditional workloads that they’re looking to modernize over time as they acquire new cloud-native skills. The new IBM cloud centers can host data in multiple formats and databases including DB2, SQLBase, PostgreSQL, or NoSQL, all exposed as cloud services, if desired.

The IBM cloud centers, the company continues, also promise common logging and services between the on-prem environment and IBM’s public cloud environment. In fact, IBM will make all its cloud services, including the Watson AI service, consistent across all its availability zones and offer multi-cluster support, in effect enabling workloads and backups to run across availability zones.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.

Meltdown and Spectre Attacks Require IBM Mitigation

January 12, 2018

The chip security threats dubbed Meltdown and Spectre, revealed earlier this month, apparently will require IBM threat mitigation in the form of code and patching. IBM has been reticent to make a major public announcement, but word finally is starting to percolate publicly.


Since January 4, one day after researchers disclosed the Meltdown and Spectre attack methods against Intel, AMD, and ARM processors, the Internet has been buzzing. As Eduard Kovacs wrote on Wednesday, Jan. 10, IBM informed customers that it had started analyzing the impact on its own products. The day before, IBM revealed that its POWER processors are affected.

A published report from Virendra Soni on January 11, covering the Consumer Electronics Show (CES) 2018 in Las Vegas, described how Nvidia CEO Jensen Huang revealed that technology leaders are scrambling to find patches for the Spectre and Meltdown attacks. These attacks enable hackers to steal private information from users’ CPUs running processors from Intel, AMD, and ARM.

For DancingDinosaur readers, that puts the latest POWER chips and systems at risk. At this point, it is not clear how far beyond POWER systems the problem reaches. “We believe our GPU hardware is immune. As for our driver software, we are providing updates to help mitigate the CPU security issue,” Nvidia wrote in its security bulletin.

Nvidia also reports releasing updates for its software drivers that interact with vulnerable CPUs and operating systems. The vulnerabilities come in three variants: Variant 1, Variant 2, and Variant 3. Nvidia has released driver updates for Variants 1 and 2. The company notes none of its software is vulnerable to Variant 3. Nvidia reported providing security updates for these products: GeForce, Quadro, NVS Driver Software, Tesla Driver Software, and GRID Driver Software.

IBM has made no public comment on which of its systems are affected, but Red Hat is saying more. According to Soni: “Red Hat last week reported that IBM’s System Z and POWER platforms are exploited by Spectre and Meltdown.”

So what is a data center manager with a major investment in these systems to do? Meltdown and Spectre “obviously are a very big problem,” reports Timothy Prickett Morgan, a leading analyst at The Next Platform, an authoritative website following the server industry. “Chip suppliers and operating system and hypervisor makers have known about these exploits since last June, and have been working behind the scenes to provide corrective countermeasures to block them… but rumors about the speculative execution threats forced the hands of the industry, and last week Google put out a notice about the bugs and then followed up with details about how it has fixed them in its own code.” Read it here.

Chipmakers AMD and ARM put out statements saying they are affected only by Variant 1 of the speculative execution exploits (a Spectre variant known as bounds check bypass) and by Variant 2 (also a Spectre exploit, known as branch target injection). AMD, reports Morgan, also emphasized that it has absolutely no vulnerability to Variant 3, a speculative execution exploit called rogue data cache load and known colloquially as Meltdown. This is due, he noted, to architectural differences between Intel’s X86 processors and AMD’s clones.

As for IBM, Morgan noted that its Power chips are affected, at least back to the Power7 from 2010 and continuing forward to the brand new Power9. In its statement, IBM said that it would have firmware patches out for Power machines using Power7+, Power8, Power8+, and Power9 chips on January 9, a date that has already passed, along with Linux patches for those machines; patches for the company’s own AIX Unix and proprietary IBM i operating systems will not be available until February 12. The System z mainframe processors also use speculative execution, so they should, in theory, be susceptible to Spectre but maybe not Meltdown.

That still leaves a question about the vulnerability of the IBM LinuxONE and the processors spread throughout the z systems. Ask your IBM rep when you can expect mitigation for those too.

Just patching these costly systems should not be entirely satisfying. There is a performance price that data centers will pay. Google noted a negligible impact on performance after it deployed one fix on its millions of Linux systems, said Morgan. There has been speculation, Google continued, that the deployment of KPTI (a mitigation fix) causes significant performance slowdowns. As far as is known, there is no blanket fix for Spectre Variant 1 attacks, which have to be addressed on a binary-by-binary basis, according to Google.
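For Linux systems, one quick way to see which mitigations a kernel reports is to read the sysfs vulnerability files that the patched kernels of early 2018 introduced. The hedged sketch below assumes a kernel recent enough to expose that directory and simply prints whatever status strings the kernel provides.

```python
# Hedged sketch: report the kernel's own view of Meltdown/Spectre mitigation.
# Assumes a Linux kernel recent enough to expose the sysfs vulnerability files;
# older, unpatched kernels will not have this directory at all.
import os

VULN_DIR = "/sys/devices/system/cpu/vulnerabilities"

def mitigation_status():
    if not os.path.isdir(VULN_DIR):
        return {}
    status = {}
    for name in sorted(os.listdir(VULN_DIR)):
        with open(os.path.join(VULN_DIR, name)) as f:
            status[name] = f.read().strip()  # e.g. "Mitigation: PTI" or "Vulnerable"
    return status

if __name__ == "__main__":
    report = mitigation_status()
    if not report:
        print("Kernel does not expose vulnerability status (too old or unpatched).")
    for vuln, state in report.items():
        print(f"{vuln}: {state}")
```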

Red Hat went further and actually ran benchmarks. The company tested its Enterprise Linux 7 release on servers using Intel’s “Haswell” Xeon E5 v3, “Broadwell” Xeon E5 v4, and “Skylake,” the upcoming Xeon SP processors, and showed impacts that ranged from 1-19 percent. You can demand these impacts be reflected in reduced system prices.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.


IBM Gets Serious About Open Data Science (ODS) with Anaconda

April 21, 2017

As IBM rapidly ramps up cognitive systems in various forms, its two remaining platforms, z System and POWER, get more and more interesting. This week IBM announced it was bringing the Anaconda Open Data Science (ODS) platform to its Cognitive Systems and PowerAI.

Anaconda, Courtesy Pinterest

Specifically, Anaconda will integrate with the PowerAI software distribution for machine learning (ML) and deep learning (DL). The goal: make it simple and fast to take advantage of Power performance and GPU optimization for data-intensive cognitive workloads.

“Anaconda on IBM Cognitive Systems empowers developers and data scientists to build and deploy deep learning applications that are ready to scale,” said Bob Picciano, senior vice president of IBM Cognitive Systems. Added Travis Oliphant, co-founder and chief data scientist, Continuum Analytics, which introduced the Anaconda platform: “By optimizing Anaconda on Power, developers will also gain access to the libraries in the PowerAI Platform for exploration and deployment in Anaconda Enterprise.”

With more than 16 million downloads to date, Anaconda has emerged as the Open Data Science platform leader. It is empowering leading businesses across industries worldwide with tools to identify patterns in data, uncover key insights, and transform basic data into the intelligence required to solve the world’s most challenging problems.

As one of the fastest growing fields of AI, DL makes it possible to process enormous datasets with millions or even billions of elements and extract useful predictive models. DL is transforming the businesses of leading consumer Web and mobile application companies, and it is catching on with more traditional businesses.

IBM developed PowerAI to accelerate enterprise adoption of open-source ML and DL frameworks used to build cognitive applications. PowerAI promises to reduce the complexity and risk of deploying these open source frameworks for enterprises on the Power architecture and is tuned for high performance, according to IBM. With PowerAI, organizations also can realize the benefit of enterprise support on IBM Cognitive Systems HPC platforms used in the most demanding commercial, academic, and hyperscale environments.

For POWER shops, getting into Anaconda, which is based on Python, is straightforward. You need a Power8 with IBM GPU hardware or a Power8 combined with an Nvidia GPU, in effect a Minsky machine. It’s essentially a developer’s tool, although ODS proponents see it more broadly, bridging the gap between traditional IT and lines of business, shifting traditional roles, and creating new roles. In short, they envision scientists, mathematicians, engineers, business people, and more getting involved in ODS.
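As a hedged illustration of what “getting into Anaconda” looks like on such a box, the short Python check below confirms the interpreter is running on a little-endian Power system and whether the NVIDIA driver can see a GPU. The /proc path used for the GPU check is populated only when the NVIDIA driver is loaded, and the snippet degrades gracefully if it is absent; none of this is an official PowerAI procedure, just a sanity check.

```python
# Hedged sketch: sanity-check an Anaconda environment on a Power system.
# The GPU check relies on the NVIDIA driver's /proc interface being present;
# if it is not, the check is simply skipped.
import os
import platform
import sys

def describe_environment():
    print(f"Python      : {sys.version.split()[0]}")
    print(f"Architecture: {platform.machine()}")  # 'ppc64le' on little-endian Power
    print(f"Byte order  : {sys.byteorder}")

    try:
        import numpy as np                         # part of the Anaconda base stack
        print(f"NumPy       : {np.__version__}")
    except ImportError:
        print("NumPy not installed in this environment")

    gpu_dir = "/proc/driver/nvidia/gpus"           # populated when the NVIDIA driver is loaded
    if os.path.isdir(gpu_dir):
        print(f"GPUs visible: {len(os.listdir(gpu_dir))}")
    else:
        print("NVIDIA driver not detected; skipping accelerator check")

if __name__ == "__main__":
    describe_environment()
```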

The technology is designed to run on the user’s desktop but is packaged and priced as a cloud subscription with a base package of 20 users. User licenses range from $500 per year to $30,000 per year depending on which bells and whistles you include. The number of options is pretty extensive.

According to IBM, this started with PowerAI to accelerate enterprise adoption of open source ML/DL frameworks used to build cognitive applications. Overall, the open Anaconda platform brings capabilities for large-scale data processing, predictive analytics, and scientific computing to simplify package management and deployment. Developers using open source ML/DL components can use Power as the deployment platform and take advantage of Power optimization and GPU differentiation with NVIDIA.

Not to be left out, IBM noted growing support for the OpenPOWER Foundation, which recently announced the OpenPOWER Machine Learning Work Group (OPMLWG). The new OPMLWG includes members like Google, NVIDIA, and Mellanox to provide a forum for collaboration that will help define frameworks for the productive development and deployment of ML solutions using OpenPOWER ecosystem technology. The foundation has also surpassed 300 members, with new participants such as Kinetica, Red Hat, and Toshiba. For traditional enterprise data centers, the future increasingly is pointing toward cognitive in one form or another.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.


IBM z System and Power Fuel Hybrid Cloud

September 30, 2016

DancingDinosaur has been cheerleading the z as a cloud player since 2011 and even before, mainly as an ideal vehicle for private clouds. So it should be no surprise to readers that last week IBM announced z and Power cloud-ready systems, services, and solutions for the hybrid cloud environment. My only question: What took them so long?


Power and z accelerate transformation to hybrid cloud

The world, indeed, is changing fast, and IT data centers especially have to scramble to keep up as they try to blend public cloud, private cloud, and traditional IT data centers and integrate it all seamlessly. “Today’s business environment is very dynamic and filled with disruption. A hybrid cloud model enables clients to continuously adapt while also optimizing on-premises investments and delivering the flexibility clients need across IBM Systems and the cloud,” according to Tom Rosamilia, senior vice president, IBM Systems.

At the heart of IBM’s systems for what it calls the hybrid cloud era are three technologies with which we should already be generally familiar:

  • z Systems for cloud. IBM z Systems Operational Insights is a new SaaS-based offering that provides analytic insights on cloud operations for new levels of efficiency and application performance. It allows users to make better business and application decisions based on trends and embedded expertise on performance data through a GUI dashboard. This accompanies IBM OMEGAMON Application Performance Management, which provides quick identification of problem components through end-to-end visibility for clients’ hybrid workloads that include z Systems applications.  In addition, the newly available IBM Common Data Provider enables z Systems operational data to be efficiently consumed in near real time by the clients’ cloud or local enterprise operational analytics platform. An OK start, but you can quibble with OMEGAMON as IBM’s performance analysis and management choice. At its core, this is old technology. DancingDinosaur would prefer, say, Watson.
  • Power Systems for cloud. With integrated OpenStack-based cloud management and elastic consumption models, according to IBM, these new enterprise-class IBM Power Systems enable organizations to transform their IT infrastructure to a local cloud for AIX, IBM i and Linux workloads and extend them with rapid access to compute services in the IBM Cloud. DancingDinosaur covered the new LC here.
  • IBM Spectrum Copy Data Management and Protect. This brings a new solution that drives operational and development agility and efficiency across new and traditional applications by allowing detailed, easy management of data copies. Additionally, IBM Spectrum Protect has expanded its extensive hybrid cloud solution integration with cloud object storage options for use in hybrid cloud deployments.

About the only thing missing above is LinuxONE but that will come up below when IBM gets to openness, which is critical to hybrid clouds. In its announcement, IBM also promised a series of new and expanded collaborations with IBM Systems for hybrid cloud environments, including:

  • Canonical: Canonical and IBM are extending their ongoing alliance to make Ubuntu OpenStack available today on LinuxONE, z Systems, Power Systems, and OpenPOWER-based systems, including the new line of LC servers. This enables organizations to leverage Canonical’s portfolio across the three platforms with simplified and automated OpenStack management.
  • Hortonworks: IBM and Hortonworks, a Hadoop platform provider, are jointly entering the marketplace to make the Hortonworks Hadoop distribution available on POWER. Whoopee, Hadoop already runs natively on z.
  • Mirantis: Mirantis and IBM are collaborating to develop reference architectures enabling Mirantis OpenStack to manage compute nodes hosted on IBM Power Systems servers, and to validate a host of core applications to run its OpenStack private cloud. With this integration, Mirantis will now bring its OpenStack based private cloud management to the POWER platform. This enables organizations to leverage the efficiency of IBM Power Systems for data-driven workloads in a seamless and compatible way for their data center through Mirantis’ OpenStack cloud management.
  • NGINX: NGINX’s application delivery platform now supports servers based on IBM’s POWER architecture with the latest release of its commercial load balancer and web accelerator software, NGINX Plus R10. The combination of NGINX Plus and POWER brings new agility to enterprises, allowing them to scale their infrastructure and application delivery solutions across any environment – public, private, and hybrid cloud; containers; and bare metal – providing a consistent user experience.
  • Red Hat: Red Hat and IBM are expanding their long-standing alliance to better help organizations embrace hybrid cloud. Through joint engineering and deeper product collaboration, the two companies plan to deliver solutions built on key components of Red Hat’s portfolio of open source products, including Red Hat Enterprise Linux (RHEL). By the way, RHEL is the #2 Linux distro on the z. RHEL also enables IBM Power Systems as a featured component of Red Hat’s hybrid cloud strategy spanning platform infrastructure located both on and off an enterprise’s premises.

There is enough openness to enable you to do what you need so DancingDinosaur has no real complaints with IBM’s choices above. Just wish it was more z- and LinuxONE-centric.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer and mainframe bigot. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.


Ubuntu Linux (beta) for the z System is Available Now

April 8, 2016

As recently as February, DancingDinosaur has been lauding IBM’s bolstering of the z System for Linux and support for the latest styles of app dev. As part of that it expected Ubuntu Linux for z by the summer. It arrived early. You can download it for LinuxONE and the z now, here.

Of course, the z has run Linux for over a decade. That was a customized version that required a couple of extra steps, mainly recompiling, if x86 Linux apps were to run seamlessly. This time Canonical and the Ubuntu community have committed to work with IBM to ensure that Ubuntu works seamlessly with IBM LinuxONE, z Systems, and Power Systems. The goal is to enable IBM’s enterprise platforms to play nicely with the latest app dev goodies, including NFV, containers, KVM, OpenStack, big data analytics, DevOps, and even IoT. To that end, all three parties (Canonical, the Ubuntu community, and IBM) commit to provide reference architectures, supported solutions, and cloud offerings, now and in the future.

Ubuntu is emerging as the platform of choice for organizations running scale-out, next-generation workloads in the cloud. According to Canonical, Ubuntu dominates public cloud guest volume and production OpenStack deployments with up to 70% market share. Global brands running Ubuntu at scale in the cloud include AT&T, Walmart, Deutsche Telecom, Bloomberg, Cisco and others.

The z and LinuxONE machines play right into this. They can support thousands of Linux images with no-fail high availability, security, and performance. When POWER9 processors come to market it gets even better. At a recent OpenPOWER gathering the POWER9 generated tremendous buzz, with Google discussing its intention to build a new data center server based on an open POWER9 design that conforms to Facebook’s Open Compute Project server design.

These systems will be aimed initially at hyperscale data centers. OpenPOWER processors combined with acceleration technology have the potential to fundamentally change server and data center design today and into the future.

According to Aaron Sullivan, Open Compute Project Incubation Committee Member and Distinguished Engineer at Rackspace: “OpenPOWER provides a great platform for the speed and flexibility needs of hyperscale operators as they demand ever-increasing levels of scalability.” This is true today, and with POWER9, a reportedly 14nm processor coming around 2017, it will be even more so. This particular roadmap looks out to 2020, when POWER10, a 10nm processor, is expected with the task of delivering extreme analytics optimization.

But for now, what is available for the z isn’t exactly chopped liver. Ubuntu is delivering scale-out capabilities for the latest development approaches to run on the z and LinuxONE. As Canonical promises: Ubuntu offers the best of open source for IBM’s enterprise customers along with unprecedented performance, security, and resiliency. The latest Ubuntu version, Ubuntu 16.04 LTS, is in beta and available to all IBM LinuxONE and z Systems customers. See the link above. Currently SUSE and Red Hat are the leading Linux distributions among z data centers. SUSE also just announced a new distro of openSUSE Linux for the z, to be called openSUSE Factory.

Also this week the OpenPOWER Foundation held its annual meeting, where it introduced technology to boost data center infrastructures with more choices, essentially allowing increased data workloads and analytics to drive better business results. Am hoping that the Open Mainframe Project will emulate the OpenPOWER group and in a year or two start introducing technology to boost mainframe computing along the same lines.

For instance OpenPOWER introduced more than 10 new OpenPOWER servers, offering expanded services for high performance computing and server virtualization. Or this: IBM, in collaboration with NVIDIA and Wistron, revealed plans to release its second-generation OpenPOWER high performance computing server, which includes support for the NVIDIA Tesla Accelerated Computing platform. The server will leverage POWER8 processors connected directly to the new NVIDIA Tesla P100 GPU accelerators via the NVIDIA NVLink, a high-speed interconnect technology.

In the same batch of announcements TYAN announced its GT75-BP012, a 1U, POWER8-based server solution built on the ppc64 architecture. The ppc64 architecture is optimized for 64-bit big-endian PowerPC and Power Architecture processors. Also of interest to DancingDinosaur readers may be the variation of ppc64 that enables a pure little-endian mode on POWER8, allowing x86 Linux-based software to be ported with minimal effort. BTW, the OpenPOWER-based platform reportedly offers exceptional capability for in-memory computing in a 1U implementation, part of the overall trend toward smaller, denser, and more efficient systems. The latest TYAN offerings will only drive more of it.
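For anyone hazy on what that endianness distinction means in practice, here is a small illustration (plain Python standard library, nothing platform-specific) of how the same 32-bit integer is laid out in big-endian versus little-endian byte order, which is exactly the difference classic ppc64 and its little-endian variant expose to ported software.

```python
# Minimal illustration of the big-endian vs. little-endian byte layouts that
# distinguish classic ppc64 from its little-endian variant (and from x86).
import struct
import sys

value = 0x0A0B0C0D

big    = struct.pack(">I", value)   # big-endian layout (classic ppc64)
little = struct.pack("<I", value)   # little-endian layout (x86 and the LE variant)
native = struct.pack("=I", value)   # whatever this host actually uses

print("big-endian bytes   :", big.hex())     # 0a0b0c0d
print("little-endian bytes:", little.hex())  # 0d0c0b0a
print("this host is       :", sys.byteorder, "-endian")
assert native in (big, little)
```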

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

Variety of System Vendors at IBM Edge2015

May 7, 2015

An interesting set of vendor sponsors and exhibitors are lined up for IBM Edge2015 in Las Vegas next week. For the past few weeks DancingDinosaur has focused on a small selection of program sessions. Now let’s take a look at some of the vendors that will be there.

DancingDinosaur loves the vendors because they’re usually the ones underwriting the free entertainment, food, and drinks as well as giving out the nifty stuff. (My daughters used to love going off to school with what they considered cool multi-colored pens, Day-Glo bouncing balls, folding Frisbees, and more, which I picked up free at different vendors’ booths.)


IBM enterprise cloud platform (click to enlarge)

Let’s start with Rocket Software. DancingDinosaur thinks of them mainly as a mainframe software provider with products for data management, performance optimization, catalog and system management, disaster recovery, storage management, and security. They also offer a bunch of interesting free utilities. At the end of April Rocket announced Rocket Discover, a self-service, intuitive data preparation and discovery solution that lets business managers and executives easily access, manipulate, prepare, and visualize data.

Both Brocade and Cisco will be there. In April, for instance, Brocade announced innovations for its campus LAN switch family. The switch is intended to help organizations easily scale to meet increasing campus bandwidth demands. For instance, it will deliver the industry’s highest 10 Gigabit Ethernet (GbE) port density for any switch in its class to accommodate what it refers to as the onslaught of user video and wireless traffic that is taxing campus networks.

In early May Cisco announced that Eletrobras, a Brazilian electric utility, would use Cisco’s technology for a smart metering initiative. The project is expected to improve operational efficiency by improving service quality and control of non-technical losses, which, according to the company, reach 22% of required energy in the North and 10% in the Northeast of Brazil.

Of course Red Hat and SUSE, currently the leading Linux providers for the mainframe, will be there. DancingDinosaur has gotten some of his favorite baseball hats from each of these companies at previous IBM Edge conferences.

Red Hat introduced a new business resource planner as part of the latest releases of Red Hat JBoss BPM Suite and Red Hat JBoss BRMS. The planner, based on the open source OptaPlanner JBoss community project, is designed to help enterprises address complex scheduling and resource planning challenges. It also promises to increase operational adaptability in the face of rapidly changing and unpredictable business environments.

In late April SUSE announced the upcoming availability of SUSE Linux Enterprise Server for SAP Applications based on SUSE Linux Enterprise 12. New features, such as full operating system rollback, live kernel patching, and installation automation, should help simplify deployment and can increase uptime of mission-critical SAP solution-based workloads on Linux. SUSE customers should save time and resources as they experience improved performance and reliability.

Since the topic is Linux, let’s not forget that Canonical’s Ubuntu, usually regarded as a desktop Linux distribution, is moving onto server platforms. At present Ubuntu is supported on POWER8 but not z. Ubuntu is included in numerous program sessions at Edge2015, for example, Ubuntu on Power – Using PowerKVM, presented by James Nash. The session covers various aspects to consider when moving to Ubuntu on the Power platform running in a PowerKVM environment.

In the exhibition area, where most people congregate for free food and drink after the program sessions, there are over 30 exhibitors, including a handful of IBM units. For example, H&W Computer Systems provides a handful of mainframe tools that enable you to run batch jobs during the business day without impacting CICS, automatically convert JES2 output to PDF or other formats, or use ISPF-like features to manage mainframe datasets. This is hardcore mainframe stuff.

An interesting exhibitor is ownCloud, an enterprise file sync and share system that is hosted in your data center, on your servers, using your storage. ownCloud provides Universal File Access through a single front-end to all of your disparate systems. Users can access company files on any device, anytime, from anywhere while IT can manage, control and audit file sharing activity to ensure security and compliance measures are met. (DancingDinosaur could actually use something like this—make note to check out this exhibitor.)

Recommend you spend a couple of late afternoons grazing through the exhibitor space, enjoying the food and drink, catching some demos, and collecting a new wardrobe of t-shirts and baseball caps.  And don’t forget to pick up some of the other funky stuff for your kids.

Of course, plan to save time for the free evening entertainment. In addition to Penn & Teller, a pair of magicians, and rocker Grace Potter (see here), there will also be a weird but terrific group, 2Cellos.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing on Technologywriter.com and here. If you are attending IBM Edge2015—now sold out—please look for me hanging out wherever people gather around available power outlets to recharge mobile devices.

Red Hat Summit Challenges IBM and zEnterprise

June 29, 2012

The Red Hat Summit in Boston this week showed off a slew of new products, some of which are sure to challenge IBM although none included hardware. As for the zEnterprise there was little beyond RHEL specifically aimed its way.

Although IBM has a long relationship with Red Hat, it is becoming clear that in some cases the two companies compete.  Despite moments of discomfort, IBM managers still are generally amenable to working closely with Red Hat. Coopetition, a mix of cooperation and competition, increasingly is the norm for all technology companies, not just IBM and Red Hat.  Both companies, for instance, have a strong interest in promoting KVM, which provides a low price hypervisor option. But as one IBM manager noted, price is only a door opener for a bigger discussion.  From that bigger discussion, IBM still holds more and stronger cards when it comes to delivering a complete customer solution.

The centerpiece of the conference was Red Hat’s announcement of four open hybrid solutions; hybrid here refers to the blend of public and private clouds. Find the announcement here. The four open hybrid cloud solutions consist of:

  • OpenShift Enterprise PaaS Solution combines Red Hat CloudForms, Red Hat Enterprise Linux, Red Hat Enterprise Virtualization and JBoss Enterprise Middleware. It aims to deliver the speed and agility of PaaS desired by enterprise developers while addressing the governance and operational requirements of enterprise IT in an open and hybrid cloud.
  • Red Hat Hybrid Infrastructure-as-a-Service (IaaS) Solution, described by Red Hat as the industry’s first open hybrid cloud solution for enterprises, it includes the software needed to deploy and manage a hybrid cloud, including virtualization management with Red Hat Enterprise Virtualization; cloud management, governed self-service and systems management with Red Hat CloudForms; and guest operating system with Red Hat Enterprise Linux.
  • Red Hat Cloud with Virtualization Bundle promises to move enterprises to the cloud for the price of virtualization and consists of Red Hat Enterprise Virtualization and Red Hat CloudForms, effectively combining virtualization and cloud management into the same project cycle.
  • Red Hat Storage, a software-only offering, provides open source scale-out storage software for the management of unstructured data. It is generally available with Red Hat Storage Server 2.0 today, but the company has big ambitions for this.

All the above are software-only solutions; Red Hat is not getting into either the server or storage hardware business. Of these, the OpenShift PaaS product, which promises an easy on-ramp to open hybrid cloud computing with reduced complexity, sounds a little like IBM PureSystems, particularly the IBM PureApplication System, which provides a fully integrated hardware/software PaaS offering in a box.

Red Hat Storage is built around Red Hat’s recent acquisition of Gluster, an open source software company that maintained GlusterFS, an open source, distributed file system capable of scaling to petabytes and beyond while handling thousands of clients. GlusterFS brings together storage building blocks over InfiniBand RDMA or TCP/IP interconnects to aggregate disk and memory resources and manage data within a single global namespace.

GlusterFS supports standard clients running standard applications over any standard IP network.  Red Hat’s GlusterFS gives users the ability to deploy scale-out, virtualized storage through a centrally managed and commoditized pool of storage while freeing them from monolithic legacy storage platforms.
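For a concrete sense of how those building blocks get aggregated into a single namespace, here is a hedged sketch that drives the gluster command line from Python to create and start a simple replicated volume. The hostnames and brick paths are hypothetical, and the exact CLI options should be checked against the GlusterFS release actually in use.

```python
# Hedged sketch: aggregate two servers' bricks into one replicated GlusterFS
# volume by driving the gluster CLI. Hostnames and brick paths are hypothetical;
# verify option names against the deployed GlusterFS version.
import subprocess

BRICKS = ["server1:/data/brick1", "server2:/data/brick1"]  # hypothetical bricks
VOLUME = "demo_vol"

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def create_replicated_volume():
    # Two-way replication: every file is stored on both bricks.
    run(["gluster", "volume", "create", VOLUME, "replica", "2"] + BRICKS)
    run(["gluster", "volume", "start", VOLUME])
    # Clients then mount the single global namespace, e.g.:
    #   mount -t glusterfs server1:/demo_vol /mnt/demo_vol

if __name__ == "__main__":
    create_replicated_volume()
```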

The poster child for Red Hat Storage today is Pandora, which delivers music on demand over the net. Pandora put Red Hat Storage across 100 NAS nodes, producing what amounts to NAS in the cloud.  Where organizations have IBM storage, Red Hat Storage will virtualize and manage it as it does any other storage.

The most interesting discussion DancingDinosaur had revolved around Red Hat Enterprise Linux (RHEL). RHEL has experienced generally steady growth, and the RHEL team is cocky enough to target selected Microsoft workloads to drive further growth. RHEL appears to be a dominant Linux distribution everywhere but on the z. Among z shops running Linux, RHEL is treading water at around 33% share. SUSE is the dominant Linux distribution on z. Given the upheaval SUSE has experienced, you’d expect the zEnterprise to be a RHEL growth opportunity, especially with growing interest in multi-platform hybrid computing involving the z. Red Hat clearly needs to pay more attention to the z.

IBM Expands Red Hat Partnership

May 9, 2011

IBM further embraced open source last week at Red Hat’s user conference in Boston around virtualization and cloud initiatives. The relationship, however, has been growing for over a decade as Red Hat Enterprise Linux (RHEL) becomes increasingly popular on the System z. The arrival of x and Power blades for the zBX should only increase the presence of RHEL on the System z.

Last year IBM selected Red Hat Enterprise Virtualization (RHEV) as a platform option for its development and test cloud service. Dev and test has emerged as a natural for cloud computing given its demands for quick setup and take down.

Although there weren’t any major System z-specific announcements, almost all System z shops run a mix of platforms, including System x for Linux and Windows and the Power platform for AIX and Linux, and are making forays into private, public, and hybrid clouds. So there was plenty coming out of the conference that will interest mainframe shops even if it wasn’t System z-specific.

With that in mind, here are three new Red Hat initiatives that will be of interest in mainframe shops:

First, open virtualization based on Red Hat’s open source KVM hypervisor. This enables an organization to create multiple virtual versions of Linux and Windows environments on the same server. This will help save money through the consolidation of IT resources and without the expense and limitations of proprietary technology. RHEV, an open source option, delivers datacenter virtualization by combining its centralized virtualization management system with the KVM hypervisor, which has emerged as a top hypervisor behind VMware.

According to Red Hat, RHEV delivers 45% better consolidation capacity than its competitors in a recent Spec 1 virtualization benchmark and brings architectural support for up to 4,096 processor cores and up to 64TB of memory in the host, 32 virtual CPUs in the guest, and 1TB of RAM. This exceeds the abilities of proprietary hypervisors for Linux and Windows. Red Hat also reports RHEV Virtualization Manager can enable savings of up to 80% relative to comparable proprietary virtualization products in the first year (initial acquisition cost) and up to 66% over the course of three years. Finally, support for such security capabilities as multi-tenancy, combined with its scalability, makes it a natural for cloud computing.

Second, Red Hat introduced a platform-as-a-service (PaaS) initiative, called OpenShift, to simplify cloud development and deployment and reduce risk. It is aimed at open source developers and provides them with a flexible platform for developing cloud applications using a choice of development frameworks for Java, Python, PHP and Ruby, including Spring, Seam, Weld, CDI, Rails, Rack, Symfony, Zend Framework, Twisted, Django and Java EE. It is based on a cloud interoperability standard, Deltacloud, and promises to end PaaS lock-in, allowing developers to choose not only the languages and frameworks they use but the cloud provider upon which their application will run.

By building on the Deltacloud cloud interoperability standard, OpenShift allows developers to run their applications on any supported Red Hat Certified Public Cloud Provider, eliminating the lock-in associated with first-generation PaaS vendors. In addition it brings the JBoss middleware services to the PaaS experience, such as the MongoDB services and other RHEL services.

Third, Red Hat introduced CloudForms, a product for creating and managing IaaS in private and hybrid clouds. It allows users to create integrated clouds consisting of a variety of computing resources and still be portable across physical, virtual and cloud computing resources.  CloudForms addresses key problems encountered in first-generation cloud products: the cost and complexity of virtual server sprawl, compliance nightmares and security concerns.

What will make CloudForms of particular interest to heterogeneous mainframe shops is its ability to create hybrid clouds using existing computing resources: virtual servers from different vendors, such as Red Hat and VMware; different cloud vendors, such as IBM and Amazon; and conventional in-house or hosted physical servers, both racks and blades. This level of choice helps to eliminate lock-in and the need to undergo migration from physical to virtual servers in order to obtain the benefits of cloud.

Open source is not generally a mainframe consideration, but open source looms large in the cloud. It may be time for System z shops to add some of Red Hat’s new technologies to their System z RHEL, virtualization, and cloud strategies as they move forward.

