Posts Tagged ‘private cloud’

IBM On-Premises Cognitive Means z Systems Only

February 16, 2017

Just in case you missed the incessant drumbeat coming out of IBM, the company is committed to cognitive computing. That works well for z data centers, since IBM’s cognitive system is available on-premises only on the z. Another z first: IBM just introduced Machine Learning (key for cognitive) for the private cloud, starting with the z.


There are three ways to get IBM cognitive computing solutions: the IBM Cloud, Watson, or the z System, notes Donna Dillenberger, IBM Fellow, IBM Enterprise Solutions. The z, however, is the only platform IBM supports for cognitive computing on premises (sorry, no Power). As such, the z represents the apex of programmatic computing, at least as IBM sees it. It also is the only IBM platform that supports cognitive natively, mainly in the form of Hadoop and Spark, both of which remain programmatic tools.

What if your z told you that a given strategy had a 92% chance of success? It couldn’t do that until now, with IBM’s recently released cognitive system for z.

Your z system today represents the peak of programmatic computing. That’s what everyone working in computers grew up with, going all the way back to Assembler, COBOL, and FORTRAN. Newer languages and operating systems have arrived since; today your mainframe runs Java and Linux, and now Python and Anaconda. Still, all are based on the programmatic computing model.

IBM believes the future lies in cognitive computing. Cognitive has become the company’s latest strategic imperative, apparently trumping its previous strategic imperatives: cloud, analytics, big data, and mobile. Maybe only security, which quietly slipped in as a strategic imperative sometime in 2016, can rival cognitive, at least for now.

Similarly, IBM describes itself as a cognitive solutions and cloud platform company. IBM’s infatuation with cognitive starts with data. Only cognitive computing, the company argues, will enable organizations to understand the flood of data pouring in: not just structured, local data but the world of global unstructured data. From there, organizations can move beyond decision tree-driven, deterministic applications to probabilistic systems that co-evolve with their users by learning along with them.

You need cognitive computing. It is the only way, as IBM puts it, to move beyond the constraints of programmatic computing. In the process, cognitive can take you past keyword-based search, which merely returns a list of locations where an answer might be found, to an intuitive, conversational means of discovering a set of confidence-ranked possibilities.
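To make that distinction concrete, here is a toy Python contrast between the two search models. The corpus, question, and word-overlap scoring are invented stand-ins; a real cognitive system would use a trained scorer, not this.

```python
# Toy contrast: keyword search returns places an answer might hide;
# a cognitive-style answerer returns candidates ranked by confidence.
DOCS = {
    "doc1": "The z13 supports up to 10 TB of memory for large workloads.",
    "doc2": "Spark on z/OS lets analytics run where the data resides.",
    "doc3": "ETL moves data off platform before analysis can begin.",
}

def keyword_search(term):
    """Programmatic model: list every location containing the term."""
    return [doc_id for doc_id, text in DOCS.items()
            if term.lower() in text.lower()]

def ranked_answers(question, candidates):
    """Probabilistic model: score candidates and rank by confidence."""
    q_words = set(question.lower().split())
    scored = []
    for answer in candidates:
        overlap = len(q_words & set(answer.lower().split()))
        scored.append((overlap / len(q_words), answer))  # crude stand-in scorer
    return sorted(scored, reverse=True)

print(keyword_search("data"))   # locations only, no answer
print(ranked_answers("where does analytics run on the data",
                     list(DOCS.values())))  # confidence-ranked possibilities
```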

Dillenberger suggests it won’t be difficult to get to the IBM cognitive system on z. You don’t even program a cognitive system. At most, you train it, and even then the cognitive system will do the heavy lifting by finding the most appropriate training models. If you don’t have preexisting training models, “just use what the cognitive system thinks is best,” she adds. Then the cognitive system will see what happens and learn from it, tweaking the models as necessary based on the results and new data it encounters. This also is where machine learning comes in.
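For readers who want a feel for that train-don’t-program loop, here is a minimal sketch in Python using scikit-learn. To be clear, this is not IBM’s Machine Learning for z/OS API; the candidate models and data are placeholders for whatever your system would evaluate.

```python
# A minimal "train it, don't program it" sketch: try candidate models,
# keep whichever validates best, then fold in new data and retrain.
# Illustrative only -- not IBM's Machine Learning for z/OS API.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

candidates = [LogisticRegression(max_iter=1000),
              DecisionTreeClassifier(max_depth=5),
              SGDClassifier()]

# "Finding the most appropriate training model": plain cross-validation
# over the candidates stands in for the system's model search.
best = max(candidates, key=lambda m: cross_val_score(m, X, y, cv=5).mean())
best.fit(X, y)

# "Sees what happens and learns from it": as results and new data arrive,
# fold them in and retrain so the model keeps adjusting.
X_new, y_new = make_classification(n_samples=200, n_features=20, random_state=1)
best.fit(np.vstack([X, X_new]), np.concatenate([y, y_new]))
print(type(best).__name__, "score on new data:", best.score(X_new, y_new))
```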

IBM has yet to document payback and ROI data. Dillenberger, however, has spoken with early adopters. The big promised payback, of course, will come from the new insights uncovered, and that payback will be as astronomical or as meager as your execution on those insights.

But there also is the promise of a quick technical payback for z data center managers. When the data resides on the z—a huge advantage for the z—you just run the analytics where the data is. In such cases you can realize up to 3x the performance, Dillenberger noted. Even if you have to pull data from some other location too, you still run faster, maybe 2x faster. Other z advantages include large amounts of memory, multiple levels of cache, and multiple I/O processors that get at data without impacting CPU performance.

When the data and IBM’s cognitive system reside on the z, you can save significant money. “ETL consumed huge amounts of MIPS. But when the client did it all on the z, it completely avoided the costly ETL process,” Dillenberger noted. As a result, that client reported savings of $7-8 million a year by completely bypassing the x86 layer and ETL and running Spark natively on the z.
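A rough sketch of what “run analytics where the data is” looks like in practice, using PySpark with a generic JDBC read against DB2. The host, table, and credentials are placeholders, and Spark on z/OS also offers its own data service for reaching mainframe data, so treat this as the pattern, not the product:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("analytics-in-place").getOrCreate()

claims = (spark.read.format("jdbc")
          .option("url", "jdbc:db2://zhost:446/DB2LOC")    # placeholder host/location
          .option("dbtable", "INS.CLAIMS")                 # placeholder table
          .option("user", "analyst")
          .option("password", "secret")
          .option("driver", "com.ibm.db2.jcc.DB2Driver")   # DB2 JDBC driver jar required
          .load())

# The aggregation runs next to the data; no extract/transform/load step
# stages a copy on another platform first.
claims.groupBy("REGION").avg("PAID_AMOUNT").show()
```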

As Dillenberger describes it, cognitive computing on the z is here now, able to deliver a payback fast, and an even bigger payback going forward as you execute on the insights it reveals. And you already have a z, the only on-premises way to IBM’s Cognitive System.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.


IBM Leads in TBR Private and Hybrid Cloud Surveys

August 4, 2016

IBM has been named number one in private clouds by independent technology market research firm Technology Business Research (TBR) as well as number one in TBR’s hybrid cloud environments survey. Ironically, as fast as IBM has been trying to distance itself from its legacy platform heritage, that heritage brings an advantage when it comes to clouds for some customers. “A footprint in legacy IT solutions and management is a strong predictor of private cloud vendor success, as private cloud solutions are typically the first step toward hybrid IT environments,” wrote TBR Cloud Senior Analyst Cassandra Mooshian.

Courtesy of IBM: 1800FLOWERS Taps IBM Commerce Cloud

Coming out on top in IBM’s 2Q16 financials, reported here, were the company’s strategic initiatives, mainly cloud, analytics, and mobile, which generated positive revenue results. The TBR reports provide a welcome rebuttal to IBM strategy doubters. As reported by IBM, the annual run rate for cloud as-a-service revenue, a subset of total cloud revenue, increased to $6.7 billion from $4.5 billion in the second quarter of 2015. Revenue from analytics increased 5 percent. Revenue from mobile increased 43 percent, while security revenue increased 18 percent.

The TBR report also noted IBM’s leadership in overall vendor adoption for private cloud and in select private cloud segments, due to its broad cloud and IT services portfolio, its variety of deployment options, and accompanying integration and optimization support. As a result, the company’s expertise and knowledge of both cloud and legacy technology make it easier for customers to opt for an IBM migration path to both private and hybrid clouds.

TBR also specifically called out IBM’s cloud-friendly capabilities, including the comprehensive portfolio of cloud and hardware assets with security; cloud professional services that can span a customer’s entire IT environment; and a vertical approach to cloud combined with Watson technology. As for hybrid clouds, Kelsey Mason, Cloud Analyst at TBR, noted in the announcement: “Hybrid integration is the next stage in cloud adoption and will be the end state for many enterprise IT environments.” Enterprise hybrid adoption, TBR observed, now matches public adoption of a year ago, which it interprets as signaling a new level of maturity in companies’ cloud strategies.

What really counts, however, are customers who vote with their checkbooks. Here IBM has been racking up cloud wins. For example, Pratt & Whitney, a United Technologies Corp. company, announced in July that it will move its business, engineering, and manufacturing enterprise systems to a fully managed and supported environment on the IBM Cloud infrastructure.

Said Brian Galovich, vice president and chief information officer, Pratt & Whitney, in the published announcement: “Working with IBM and moving our three enterprise systems to a managed cloud service will give us the ability to scale quickly and meet the increased demands for computing services, data processing and storage based on Pratt & Whitney’s forecasted growth over the next decade.”

Also in July, Dixons Carphone Group, Europe’s largest telecommunications retail and services company as the result of a 2014 merger, announced plans to migrate to the IBM Cloud, delivered from IBM datacenters in the United Kingdom, to integrate two distinct infrastructures and enable easy scaling to better manage the peaks and valleys of seasonal shopping trends. Specifically, the company expects to migrate about 2,500 server images, with supporting database and middleware components from both infrastructures, to an IBM hybrid cloud platform comprising a private IBM Cloud with bare metal servers for production workloads and a public IBM Cloud platform for non-production workloads.

As a merged company it saw an opportunity to consolidate the infrastructures by leveraging cloud solutions for flexibility, performance and cost savings. After assessing the long-term values and scalability of multiple cloud providers, the company turned to IBM Cloud for a smooth transition to a hybrid cloud infrastructure. “We can trust IBM Cloud to seamlessly integrate the infrastructures of both companies into one hybrid cloud that will enable us to continue focusing on other parts of the business,” said David Hennessy, IT Director, Dixons Carphone, in the announcement.
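For what it’s worth, the placement rule described above is simple enough to express in a few lines. This hypothetical Python sketch just mirrors the split Dixons Carphone describes; the image records and environment names are invented:

```python
# Hypothetical placement rule for the hybrid split described above:
# production images land on private bare metal, the rest on public cloud.
def target_environment(image):
    return ("private-cloud-baremetal" if image["env"] == "production"
            else "public-cloud")

images = [
    {"name": "web-prod-01", "env": "production"},
    {"name": "db-prod-01",  "env": "production"},
    {"name": "qa-web-01",   "env": "qa"},
]

for img in images:                      # ~2,500 images in the real migration
    print(f'{img["name"]:<12} -> {target_environment(img)}')
```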

As IBM’s 2Q16 report makes clear, once both these companies might have bought new IBM hardware platforms, but that’s not the world today. At least they didn’t opt for AWS or Azure.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.


POWER Systems for Cloud & Linux at IBM Edge2015

April 23, 2015

In October, IBM introduced a new range of POWER systems capable of handling massive amounts of computational data faster, at nearly 20 percent better price/performance than comparable Intel Xeon v3 processor-based systems, delivering a superior alternative to closed, commodity-based data center servers. DancingDinosaur covered it last October here. Expect this theme to play out big at IBM Edge2015 in Las Vegas, May 10-15. Just a sampling of a few of the many POWER sessions makes that clear:

Courtesy of Studio Stence: IBM Power S824L

(lCV1655) Linux on Power and Linux on Intel: Side By Side, IT Economics Positioning; presenter Susan Proietti Conti

Based on real cases studied by the IBM Eagle team for many customers in different industries and geographies, this session explains where and when Linux on Power provides a competitive alternative to Linux on Intel. The session also highlights the IT economic value of architecture choices provided by the Linux/KVM/Power stack, based on open technologies brought by POWER8 and managed through OpenStack. DancingDinosaur periodically covers studies like these here and here.

(lCV1653) Power IT Economics Advantages for Cloud Service Providers and Private Cloud Deployment; presenter Susan Proietti Conti

Since the announcement of POWER8 and building momentum of the OpenPOWER consortium, there are new reasons for cloud service providers to look at Power technology to support their offerings. As an alternative open-based technology to traditional proprietary technologies, Power offers many competitive advantages that can be leveraged for cloud service providers to deliver IaaS services and other types of service delivery. This session illustrates what Power offers by highlighting client examples and the results of IT economics studies performed for different cloud service providers.

(lSY2653) Why POWER8 Is the Platform of Choice for Linux; presenter Gary Andrews

Linux is the platform of choice for running next generation workloads. With POWER8, IBM is investing heavily in Linux and is adding major enhancements to the Power platform to make it the server of choice for running Linux workloads. This session discusses the new features and how they can help run business faster and at lower cost on the Power platform. Andrews also points out many advanced features of Linux on Power that you can’t get with Linux on x86. He shows how competitive comparisons and performance tests demonstrate that POWER8 increases the lead over the latest x86 processor family. In short, attend this session to understand the competitive advantages that POWER8 on Linux can deliver compared to Linux on x86.

(pBA1244) POWER8: Built for Big Data; presenter William Starke

Starke explains how IBM technologies from semiconductors through micro-architecture, system design, system software, and database and analytic software culminate in the POWER8 family of products optimized around big data analytics workloads. He shows how the optimization across these technologies delivers order-of-magnitude improvements via several example scenarios.

(pPE1350) Best Practices Guide to Get Maximum Performance from IBM POWER8; presenter Archana Ravindar

This session presents a set of best practices that have been tried and tested in various application domains to get the maximum performance of an application on a POWER8 processor. Performance improvement can be gained at various levels: the system level, where system parameters can be tuned; the application level, where some parameters can be tuned because there is no one-size-fits-all scenario; and the compiler level, where options for every kind of application have been shown to improve performance. Some options are unique to IBM and give an edge over the competition in gaming applications. In cases where applications are still under development, Ravindar presents guidelines to ensure the code runs fastest on Power.

DancingDinosaur supports strategies that enable data centers to reuse existing resources, like this one:

(pCV2276) Developing a POWERful Cloud Strategy; presenter Susan Schreitmueller

Here you get to examine decision points for how and when to use an existing Power infrastructure in a cloud environment. This session covers on-premises and off-premises, single vs. multi-tenant hosting, and security concerns. You also review IaaS, PaaS, and hybrid cloud solutions incorporating existing assets into a cloud infrastructure. Discover provisioning techniques to go from months to days and then to hours for new instances.
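Since the Power cloud sessions above lean on OpenStack, here is a hedged sketch of what hours-not-months provisioning can look like with the openstacksdk Python library. The cloud name, image, flavor, and network are placeholders for your own environment:

```python
# Hedged sketch of self-service provisioning via OpenStack, the
# management layer the Linux/KVM/Power sessions reference. All names
# below are placeholders for your own cloud configuration.
import openstack

conn = openstack.connect(cloud="power-cloud")   # defined in clouds.yaml

server = conn.compute.create_server(
    name="linux-on-power-01",
    image_id=conn.compute.find_image("ubuntu-ppc64le").id,
    flavor_id=conn.compute.find_flavor("m1.medium").id,
    networks=[{"uuid": conn.network.find_network("private").id}],
)
server = conn.compute.wait_for_server(server)   # minutes, not months
print(server.status)
```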

One session DancingDinosaur hasn’t found yet would address whether it is less costly for an enterprise to virtualize a couple of thousand Linux virtual machines on one of the new IBM Power servers pictured above or on the z13 as an Enterprise Linux server purchased under the System z Solution Edition Program. Hmm, will have to ask around about that. But either way you’d end up with very low-cost VMs compared to x86.
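Pending a real answer, the comparison is simple arithmetic once you have the quotes. A back-of-envelope Python calculator, with every number an invented placeholder:

```python
# Back-of-envelope VM cost comparison the post wishes it had. All the
# inputs are invented placeholders; plug in real quotes to answer it.
def cost_per_vm(acquisition, annual_opex, years, vms):
    return (acquisition + annual_opex * years) / (vms * years)

# Hypothetical inputs: a Power server vs. a z13 Enterprise Linux server
# under a Solution Edition deal, each hosting ~2,000 Linux VMs.
print("Power:", round(cost_per_vm(500_000, 60_000, 5, 2000), 2))
print("z13:  ", round(cost_per_vm(750_000, 40_000, 5, 2000), 2))
```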

Of course, save time for the free evening entertainment. In addition to Penn & Teller, a pair of magicians, and rocker Grace Potter, there will be a weird but terrific group, 2Cellos, as well.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing on Technologywriter.com and here. Please join DancingDinosaur at IBM Edge2015. You will find me hanging out wherever people gather around available power outlets to recharge mobile devices.

IBM z Systems as a Cloud Platform

February 13, 2015

DancingDinosaur wrote a long paper for an audience of x86 users. The premise of the paper: the z Systems in many cases could be a better and even lower cost alternative to x86 for a private or hybrid cloud. The following is an excerpt from that paper.


BTW, IBM earlier this month announced it signed a 10-year, large-scale services agreement with Shop Direct to move the multi-brand digital retailer to a hybrid cloud model to increase flexibility and quickly respond to changes in demand as it grows, one of many such IBM wins recently. The announcement never mentioned Shop Direct’s previous platform. But it or any company in a similar position could have opted to build its own hybrid (private/public) cloud platform.

A hybrid cloud a company builds today probably runs on the x86 platform and the Windows OS; other x86-based clouds run Linux. As demand for the organization’s hybrid cloud grows and new capabilities are added, traffic increases. The conventional response is to scale out or scale up, adding more or faster x86 processors to handle more workloads for more users.

So, why not opt for a hybrid cloud running on the z? As a platform, x86 is far from perfect: too unstable and insecure, for starters. By adopting a zEC12 or a z13 to host your hybrid cloud you get one of the fastest general commercial processors in the market and the highest security rating for commercial servers (EAL 5+). But most x86-oriented data centers would balk. Way too expensive would be their initial reaction. Even if they took a moment to look at the numbers, their IT staff would be in open revolt and give you every reason it couldn’t work.

The x86 platform, however, is not nearly as inexpensive as is commonly believed, and there are many ways to make the z cost competitive. Due to the eccentricities of Oracle licensing on z Systems, for instance, organizations often can justify the entire cost of the mainframe from the annual Oracle software license savings alone. This can amount to hundreds of thousands of dollars or more each year. And the entry-level mainframe has a list price of $75,000, not much more than an x86 system of comparable MIPS. And that’s before you start calculating the cost of the x86 redundancy, failover, and zero downtime that come built into the mainframe, or consider security. Plus, with the z Systems Solution Edition program, IBM is almost giving the mainframe away for free.
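The Oracle arithmetic is easy to sanity-check yourself. A hedged sketch: Oracle prices per core times a per-platform core factor, so consolidating a large x86 estate onto a few IFLs shrinks the license count. The core counts, factors, and list price below are placeholders; check Oracle’s current core factor table and your own contract:

```python
# Hedged sketch of the Oracle consolidation argument. All numbers are
# placeholders -- verify against Oracle's core factor table and pricing.
def oracle_license_cost(cores, core_factor, price_per_license):
    return cores * core_factor * price_per_license

x86 = oracle_license_cost(cores=200, core_factor=0.5, price_per_license=47_500)
ifl = oracle_license_cost(cores=16,  core_factor=1.0, price_per_license=47_500)
print(f"x86 estate: ${x86:,.0f}  z IFLs: ${ifl:,.0f}  savings: ${x86 - ifl:,.0f}")
```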

Some x86 shops could think of the mainframe as a potent Linux machine that can handle thousands of Linux instances without breaking a sweat. The staff wouldn’t even have to touch z/OS. It also runs Java and Hadoop. And it delivers an astonishingly fast and efficient Linux environment, providing a level of performance that would require a much greater number of x86 cores to match. And if you want to host an on-premises or hybrid cloud at enterprise scale, it takes a lot of cores. The cost of acquiring all those x86 cores, deploying them, and managing them will break almost any budget.

Just ask Jim Tussing, Chief Technology Officer for infrastructure and operations at Nationwide Insurance (DancingDinosaur has covered Tussing before): “We had literally 3000 x86 servers deployed that were underutilized,” which is common in the x86 environment even with VMware or Hyper-V virtualization. At a time when Nationwide was seeking to increase the pace of innovation across its products and channels, rolling out new environments took weeks or months to provision and deploy, again not unheard of in the x86 world. The x86 environment was choking the company.

So, Nationwide consolidated and virtualized as many x86 servers on a mainframe as possible, creating what amounted to an on-premises and hybrid cloud. The payoff: Nationwide reduced power, cooling, and floor space requirements by 80 percent. And it finally reversed the spiraling expenditure on its distributed server landscape, saving an estimated $15 million over the first three years, money it could redirect into innovation and new products. It also could provision new virtual server instances fast and tap the hybrid cloud for new capabilities.

None of this should be news to readers of DancingDinosaur. However, some mainframe shops still face organizational resistance to mainframe computing. Hope this might help reinforce the z case.

DancingDinosaur is Alan Radding, a long-time IT analyst/writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of my IT writing at Technologywriter.com and here.

System z Clouds Pay Off

January 9, 2013

Since its introduction last August, IBM has aimed the zEC12 at cloud use cases, especially private clouds. The zEC12’s massive virtualization capabilities make it possible to handle private cloud environments consisting of thousands of distributed systems running Linux on the zEC12.

One zEC12, notes IBM, can encompass the capacity of an entire multi-platform data center in a single system. The newest z also enables organizations to run conventional IT workloads and private cloud applications on one system.  If you are looking at a zEC12 coupled with the zBX you can have a hybrid private cloud running Linux, Windows, and AIX workloads.

There are three main reasons why z-based data centers should consider a private cloud:

  1. The z does it so naturally and seamlessly
  2. It boosts IT efficiency, mainly through user self service
  3. It increases enterprise agility, especially when it comes to provisioning and deploying IT resources and applications fast

Organizations everywhere are adopting private clouds (probably because C-level execs are more comfortable with private cloud security).  The Open Data Center Alliance reports faster private cloud adoption than originally predicted. Over half its survey respondents will be running more than 40% of their IT operations in private clouds by 2015.

Mainframes make a particularly good private cloud choice. Nationwide, the insurance company, consolidated 3000 distributed servers to Linux virtual servers running on a variety of z mainframes, creating a multi-platform private mainframe cloud optimized for its different workloads. The goal was to improve efficiency.

Nationwide initially intended to isolate its Linux and z/OS workloads on different physical mainframes. This resulted in a total of seven machines – a mixture of z9 and z10 servers – of which two were dedicated to Linux. To optimize this footprint, however, Nationwide ended up consolidating all workloads onto four IBM zEnterprise 196 servers and two z10 servers, putting Linux and z/OS workloads on the same machines; its confidence with Linux on the mainframe and the maturity of the platform made the Nationwide IT team comfortable mixing workloads.

The key benefit of this approach was higher utilization and better economies of scale, effectively making the mainframes into a unified private cloud—a single set of resources, managed with the same tools but optimized for a variety of workloads. The payback: elimination of both capital and operational expenditures, expected to save about $15 million over three years. The more compact and efficient zEnterprise landscape also means lower costs going forward. Specifically, Nationwide is realizing an 80% reduction in power, cooling, and floor space despite an application workload that is growing 30% annually, with practically all of it handled through the provisioning of new virtual servers on the existing mainframe footprint.

Another z cloud was built by the City and County of Honolulu. It needed to increase government transparency by providing useful, timely data to its citizens. The goal was to boost citizen involvement, improve delivery of services, and increase the efficiency of city operations.

Honolulu built its cloud using an IFL engine running Linux on the city’s z10 EC machine. Between Linux and IBM z/VM the city created a customized cloud environment. This provided a scalable self-service platform on which city employees could develop open source applications, and it empowered the general public to create and deploy citizen-centric applications. Other components included IBM XIV storage, IBM Maximo Asset Management, IBM Tivoli OMEGAMON, Tivoli Workload Scheduler, and Tivoli Storage Manager.

The results: reduction in application deployment time from one week to only hours, 68% lower licensing costs for one database, and a new property tax appraisal system that increased tax revenue by $1.4 million in just three months.

There are even more examples of z clouds. For z shops a private cloud should be pretty straightforward; you’re probably over half-way there already. All you need are a few more components and a well-defined business case.  Give me a call, and I’ll even help you pull the business case together.

Predictive Analysis on zEnterprise

January 9, 2012

IBM has been positioning the System z for a key role in data analysis.  To that end, it acquired SPSS and Cognos and made sure they ran on the z. More recently, growing interest in Big Data and real-time data analytics only affirm IBM’s belief that as far as data analytics goes the zEnterprise is poised to take the spotlight. This is not completely new; DancingDinosaur addressed it in October 2009.

Over the last several decades people would have laughed if you suggested a mainframe for data analysis beyond the standard canned system reporting. For ad hoc querying, multi-dimensional analysis, and data visualization you needed distributed systems running a variety of specialized GUI tools. In addition, you’d want a small army of business analysts, PhDs, and various quants to handle the heavy lifting. The resulting queries could take days to run.

In a recent analyst briefing, Alan Meyer, senior manager for Data Warehousing on z, built the case for a different style of data analysis on the zEnterprise. He drew a picture of companies needing to make better informed decisions at the point of engagement while applications and business users are demanding the latest data faster than ever. At the same time there is no letup in pressure to lower cost, reduce complexity, and improve efficiency.

So what’s stopping companies from doing near real-time analytics and the big data thing? The culprits, according to Meyer, are duplicate data infrastructures, the complexity of integrating multiple IT environments, insufficient and inconsistent security, and insufficient processing power, especially when having to handle large volumes of data fast. The old approach clearly is too slow and costly.

The zEnterprise, it turns out, is the ideal vehicle for today’s demanding analytics. It is architected for on-demand processing through pre-installed capacity paid for only when activated, and it lets you add processors, disk, and memory without taking the system offline. Virtualized top to bottom, the zEnterprise delivers the desired isolation, while prioritization controls let you define critical queries and workloads. Its industry-leading processors ensure the most complex queries run fast, and low latency enables near real-time analysis. Finally, multiple deployment options mean you can start with a low-end z114 and grow to a fully configured z196 combined with a zBX loaded with blades, especially the IBM DB2 Analytics Accelerator (IDAA), a revamped version of the Smart Analytics Optimizer.

Last October IBM unveiled the IDAA and a host of other analytics tools under the smarter computing banner. But the IDAA is the zEnterprise’s analytic jewel. The IDAA incorporates Netezza, which speeds complex analytics through in-memory processing and a highly intelligent query optimizer. When run in conjunction with DB2 on the z, the results can be astonishing: queries that normally require a few hours complete in just a few seconds, 1,000 times faster according to some early users.

Netezza, when deployed as an appliance, streamlines database performance through hardware acceleration and optimization for deep analytics, multifaceted reporting, and complex queries. When embedded in the zEnterprise, it delivers the same kind of performance for mixed workloads—operational transaction systems, data warehouse, operational data stores, and consolidated data marts—but with the z’s extremely high availability, security, and recoverability. As a natural extension of the zEnterprise, where the data already resides in DB2 and OLTP systems, the z is able to deliver pervasive analytics across the organization while further speeding performance and ease of deployment and administration.
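Mechanically, routing work to the accelerator is modest from the application side. Here is a minimal sketch using the ibm_db Python driver; the connection string and table are placeholders, and your DB2 for z/OS configuration determines which queries are actually eligible for offload:

```python
# Hedged sketch: the CURRENT QUERY ACCELERATION special register asks
# DB2 for z/OS to route eligible queries to the accelerator (IDAA).
# Connection string and table names are placeholders.
import ibm_db

conn = ibm_db.connect(
    "DATABASE=DB2LOC;HOSTNAME=zhost;PORT=446;PROTOCOL=TCPIP;"
    "UID=analyst;PWD=secret;", "", "")

# Send eligible work to the accelerator rather than the base engine.
ibm_db.exec_immediate(conn, "SET CURRENT QUERY ACCELERATION = ENABLE")

stmt = ibm_db.exec_immediate(conn,
    "SELECT REGION, SUM(REVENUE) FROM SALES.FACTS GROUP BY REGION")
row = ibm_db.fetch_assoc(stmt)
while row:
    print(row)
    row = ibm_db.fetch_assoc(stmt)
```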

Already companies are reporting valuable results. Marriott Hotels reports using the system to book inventory down to the last available room to maximize yield. Chartis Insurance turned to it to meet SLAs that allow for no downtime while requiring high performance and fast time to market. In the process, it achieved what it reports as seamless 99.99% uptime, the fastest performance available, and time to market measured in days. Swiss Re turned to IDAA to put the right answers into the hands of decision makers across the business.

Today, IDAA and Netezza are just two components of a comprehensive zEnterprise data analytics portfolio that includes Cognos, SPSS, InfoSphere, Guardium, Optim, QMF, MDM, and more. IBM offers powerful data analytics capabilities for its Power and System x platforms, but the IDAA, which is the heart of the fast, near real time predictive analytics, is available only for the zEnterprise, either the z114 or the z196. Might be ideal for a private analytics cloud.

zEnterprise Private Cloud ROI

December 20, 2011

Many mainframe veterans think the System z has long acted as a private cloud, at least since SOA appeared on the System z, allowing users to access data and logic residing on the System z through nothing more than their browser. And they are right.

The distributed world, unlike the mainframe world, sees private clouds as something new and radical because it is not straightforward there to virtualize, service-enable, and integrate all the piece parts that make up a private cloud. The System z learned these tricks years ago, and the zEnterprise with x86 and p-blades in an attached zBX makes it even easier.

With the z114 and the System z Solution Edition for Cloud Computing program a mainframe-based private cloud becomes that much less expensive to acquire, especially since most of the piece parts already are included and optimized from the start. The System z Solution Edition for Cloud includes the z hardware, Tivoli software, and IBM services to deliver the foundation for the private cloud.

A private cloud, whether distributed or mainframe-based, does not come cheap. The payback, however, still is there; it just comes in a different form. The private cloud restructures IT around a services delivery model. Applications and users tap IT-based data and business logic as services. Cost savings are generated from the ensuing operational efficiency enabled through the standardization, automation and virtualization of IT services. When the organization progresses to the point where users can self-provision and self-configure the needed IT services through private cloud automation and management, the real efficiencies kick in.

According to IDC many of today’s private cloud business cases are being anchored by savings from application rationalization and IT staff productivity improvements in addition to expected optimization of hardware assets. But unlike the public cloud, which promises to shift IT spending from CAPEX to OPEX, private clouds actually drive increases in CAPEX since the organization is likely to invest in new hardware and software optimized for virtualized cloud services delivery and management automation.

With a mainframe private cloud, much of the investment in virtualized, optimized, and integrated hardware assets has already been made. The private cloud initially becomes more of an exercise in partitioning and reallocating those assets as a private cloud. Still, given the appeal of the IT services model, it is likely that the organization will boost its hardware assets to accommodate increasing demand and new services.

The greatest ROI of the private cloud, whether mainframe-based or distributed, comes from the business agility it enables. The virtualized pool of IT resources that makes up the private cloud can be easily reallocated as services to meet changing business needs. Instead of requiring weeks if not months to assemble and deploy the IT hardware and software resources necessary to support a new business initiative, those resources can be allocated from the pooled virtual resources in minutes or hours (provided, of course, sufficient resources are available). With a private cloud you can, in effect, change the business almost on-the-fly and with no additional investment.

As CIO, how are you going to put a value on this sudden agility? If it lets the organization effectively counter competitive challenges, seize new business opportunities, or satisfy new customer demands it could deliver astounding value. It all depends on the business leadership. If they aren’t terribly agile thinkers, however, the value might be minimal.

Other benefits from a private cloud include increased IT productivity and efficiency, the ability of business users to self-provision the desired IT resources (with appropriate policy-based automation controlling the provisioning behind the scenes), and an increased ability to monitor and measure IT consumption for purposes of chargeback or, as is more likely, showback. Such monitoring and measurement of IT consumption has long been a hallmark of the mainframe, private cloud or not.
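Showback itself need not be elaborate. A toy Python report of the kind described, with invented metering records and a placeholder rate:

```python
# Toy showback report: meter consumption per department, multiply by a
# rate, and show (rather than charge) the result. All data is invented.
from collections import defaultdict

usage = [  # (department, cpu_hours) from hypothetical metering records
    ("claims", 120.0), ("marketing", 35.5), ("claims", 64.25),
]
RATE_PER_CPU_HOUR = 0.42  # placeholder rate

totals = defaultdict(float)
for dept, hours in usage:
    totals[dept] += hours

for dept, hours in sorted(totals.items()):
    print(f"{dept:<10} {hours:8.2f} cpu-h  ${hours * RATE_PER_CPU_HOUR:8.2f}")
```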

Even with a mainframe-based private cloud the organization will likely make additional investments, particularly in management automation to ensure efficient service delivery, monitoring, measurement, chargeback, self-provisioning, and orchestration. IBM Tivoli along with other mainframe ISVs like CA and BMC provide tools to do this.

In the end, the value of private cloud agility when matched with agile thinking business leadership should more than offset the additional investments required. And with a zEnterprise-based private hybrid cloud, which comes highly virtualized already, you have a head start on any distributed private cloud.

Storage Hypervisors and the zEnterprise

October 4, 2011

Does the IBM zEnterprise or even the System z need a storage hypervisor? I would have thought no, but a recent briefing by IBM’s Ron Riffe has changed my thinking. Riffe also has posted a very informative three-part blog on the topic of storage virtualization here.

Mainframes running z/OS already are thoroughly virtualized and don’t need a storage hypervisor. Many mainframe shops today, however, run other operating environments beyond z/OS, such as z/VM and Linux, and for those environments a storage hypervisor can add value. As mainframe shops move into the hybrid zEnterprise environment a storage hypervisor can play an even bigger role.

In general storage hypervisors serve the same function as server hypervisors in the distributed computing environment. They facilitate the pooling of shared physical resources among virtual machines by playing traffic cop to resolve contention for resources. The hypervisor also provides management capabilities. The use of a hypervisor usually improves resource utilization, saving money and increasing flexibility in the process. It is the server hypervisor that allows distributed shops to scale out their servers in an attempt to match the high levels of utilization and management efficiency that are taken for granted with the mainframe.

The storage hypervisor performs a similar job, enabling the sharing of storage resources and streamlining the management of diverse storage resources. IBM has long had the key components of a storage hypervisor in the IBM SAN Volume Controller (SVC), a storage virtualization platform, and in the Tivoli Storage Productivity Center, which provides the storage virtualization management capabilities that enable non-disruptive data mobility and management across heterogeneous storage tiers. The combination is an IBM storage hypervisor that allows data access and mobility between two physical data centers as a stretched cluster, up to 300km apart for synchronous data movement. If used with VMware vMotion or PowerVM Live Partition Mobility, Riffe noted, you can achieve transparent migration of virtual machines and their corresponding applications and data, a necessity for disaster avoidance.

The real interest in a storage hypervisor for the mainframe is being driven by private clouds. The System z and, better yet, the zEnterprise can be a prime candidate for the core of a private cloud. In today’s heterogeneous environment, however, platforms, systems, and resources beyond the reach of z/OS’s inherent virtualization capabilities also must be addressed in any private cloud scenario. IBM’s storage hypervisor would be a useful addition to the private cloud, especially where mixed storage is sure to be involved.

With SVC as a component of the storage hypervisor, the organization can pool physical storage resources from almost any disk array vendor and move virtual volumes between any of the resources, either as a snapshot or a mirror copy. From there, it is straightforward to add things like I/O caching, thin provisioning, automated tiering, snapshots and data mirrors, and mobility for disaster avoidance. With the Tivoli Storage Productivity Center you can then add centralized management, health visualization, capacity and performance management, and a storage services catalog combined with automated provisioning and pay-per-use chargeback.
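Of the features named above, thin provisioning is the easiest to picture in code. A toy Python model, mine and not IBM’s: a virtual volume advertises its full size, but the pool backs extents only on first write:

```python
# Toy model of thin provisioning: extents are backed on first write,
# so virtual capacity can exceed physical. Sizes in GB; illustrative only.
class ThinPool:
    def __init__(self, physical_gb):
        self.physical_gb = physical_gb
        self.backed = set()            # (volume, extent_index), 1 GB each

    def write(self, volume, extent):
        if (volume, extent) not in self.backed:
            if len(self.backed) >= self.physical_gb:
                raise RuntimeError("pool exhausted: add physical capacity")
            self.backed.add((volume, extent))  # back this extent now

    def used_gb(self):
        return len(self.backed)

pool = ThinPool(physical_gb=100)
# Two "500 GB" virtual volumes can oversubscribe 100 GB of real disk;
# only the extents actually written consume physical space.
for extent in range(10):
    pool.write("vol1", extent)
pool.write("vol2", 0)
print(pool.used_gb(), "GB physically allocated of", pool.physical_gb)
```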

Other vendors are starting to roll out technology to facilitate private clouds. EMC, HP, and Hitachi all have private cloud offerings that include some capabilities of a storage hypervisor. Most recently, the latest versions of Symantec’s Veritas Storage Foundation, Cluster Server, and Operations Manager promise to get organizations to a private cloud and do so without needing to rip and replace existing resources. That’s nice, but the Symantec technology is focused on distributed systems and platforms.

If the goal is an efficient, transparent private cloud, virtualizing the servers is the easy part. It is storage that complicates things because data mobility is essential. Without highly virtualized storage across the wide range of storage devices and tiers the organization can’t move and manage the data as quickly or efficiently as it moves virtual servers. That’s why it needs a storage hypervisor.

For mainframe shops the idea of a storage hypervisor seems strange. As soon as you start to think of the mainframe as a core component of a private cloud that combines distributed and multi-vendor resources it makes more sense. At that point, the IBM storage hypervisor is the way to go.

IBM z196 takes to the cloud

November 29, 2010

IBM has fully embraced the cloud, and the z196 is expected to play a big role, especially with private clouds. IBM introduced a System z Solution Edition for Cloud Computing that bundles an Enterprise Linux z server and z/VM with Tivoli automated management. There are no Solution Edition packages for the z196 yet, but IBM says they will be coming in 2011.

Organizations, however, don’t have to wait for a Solution Edition for the z196 to play in the cloud. The machine comes ready for cloud computing. IBM apparently thinks application development and testing will be among the first workloads.

On the other hand, one of the first organizations using the z for cloud computing is the University of Bari in Italy, which built a cloud-based fish auction to streamline the go-to-market process for the local fishing industry. Now the university is looking at doing the same for other commodities, like wine.

No doubt cloud computing is hot. You’ll find it in one form or another on each of the lists IT analysts compile of technologies to watch in the coming year. If DancingDinosaur compiled such a list, it certainly would include private cloud. In October Gartner predicted that the next three years will see the delivery of a range of cloud service approaches that fall between open public clouds and closed private clouds. That seems like a pretty safe bet.

IBM is hedging its cloud bets to cover every possibility. It found 60% of its customers using or planning to use some type of private cloud computing in the next 12 months. By comparison, 20-30% expect to tap public cloud capabilities, mainly for web conferencing, email, and CRM/sales force automation. Given those workloads, the only surprise is that the public cloud numbers aren’t substantially higher.

DancingDinosaur questions the long-term economics of public clouds, especially for large enterprises. The mainframe, however, promises to be an ideal platform for private clouds, which make more sense economically. The reason comes down to three things the mainframe already does well: virtualization, scalability, and security.

Through z/VM the mainframe can rapidly provision thousands of virtual Linux servers and share resources across the system. With the z196 and the zBX extension cabinet, the mainframe literally can encompass the capacity and platforms of an entire heterogeneous data center in a single system and efficiently manage it all through the Unified Resource Manager.

IBM takes the idea even further:  With the z’s share-all design for system components, an organization can reduce the components in the data center by 90%, resulting in massive simplification. Correspondingly, total operating costs can be significantly reduced.

Similarly, as IBM also points out, the mainframe is a proven foundation for secure multi-tenant business workloads from the application layer to the data source and all points between. It is a trusted repository for highly secure information and an open platform supporting everything from Web 2.0 agile development environments using REST and other browser interfaces to enterprise class middleware.

Finally, IBM notes that the mainframe is designed for business resiliency and support for workloads that require high service levels. It is that reason—lack of SLA support—which often drives managers to opt for private clouds where they can define, customize, and control services and service delivery levels.

IBM points to Electronics Corporation of Tamil Nadu Limited (ELCOT), an Indian government-owned provider of information and communications services to government agencies in Tamil Nadu, as an early example of a z private cloud. To deliver the services and reduce costs, ELCOT turned to a z9 as a consolidation server. The z9 has the capacity to run a workload equivalent to 250 Linux/x86 server workloads and support Web services, SOA, Linux, and Eclipse infrastructure, delivering it all as a private cloud.

Just imagine how much more workload a z196 private cloud could handle.


