Automated System z Workload Capping Can Save Big Bucks

June 20, 2014

IBM’s Monthly License Charge (MLC) pricing can be a powerful tool to significantly lower the cost of software licensing for a mainframe shop. The problem: it is frightfully complicated. DancingDinosaur has attended conferences that scheduled multi-part sessions just to cover the basic material. Figuring out which pricing program you qualify for is itself a challenge and you probably want a lawyer looking over your shoulder. Find IBM’s System z pricing page here.

One particularly galling challenge is estimating and capping the four-hour rolling average utilization for each LPAR. You can easily find yourself in a situation where you exceed the cap on one LPAR, resulting in a surcharge, while you have headroom to spare on other LPARs. The trick is to stay on top of this by constantly monitoring workloads and shifting activity among LPARs to ensure you don't exceed a cap.
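
To make the mechanics concrete, here is a minimal Python sketch of the basic bookkeeping: compute a rolling four-hour average of MSU consumption and flag any point where it crosses the defined capacity. This is not any vendor's tool; the interval length, sample data, and the 100-MSU cap are illustrative assumptions.

```python
# Minimal sketch: compute a 4-hour rolling average of MSU consumption for an
# LPAR and flag any interval where it exceeds the defined capacity (the cap).
# Interval length, sample data, and the cap value are illustrative assumptions.
from collections import deque

INTERVAL_MINUTES = 5                      # one sample every 5 minutes
WINDOW = (4 * 60) // INTERVAL_MINUTES     # 48 samples = 4 hours

def rolling_average_exceeds(msu_samples, defined_capacity):
    """Yield (index, rolling_avg) for every sample where the 4-hour
    rolling average of MSU usage exceeds the defined capacity."""
    window = deque(maxlen=WINDOW)
    for i, msu in enumerate(msu_samples):
        window.append(msu)
        avg = sum(window) / len(window)
        if avg > defined_capacity:
            yield i, round(avg, 1)

# Example: a workload spike pushes the rolling average past a 100-MSU cap.
samples = [80] * 40 + [160] * 30 + [70] * 20
for idx, avg in rolling_average_exceeds(samples, defined_capacity=100):
    print(f"sample {idx}: 4-hour average {avg} MSU exceeds cap")
```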

This requires a skilled mainframe staffer with a high level of z/OS skill and familiarity with z workloads and LPARs. While you're at it, throw in knowledge of data center operations and the organization's overall business direction. Such an expert is costly to find and not easily spared for constant monitoring. It's a task that lends itself to automation.

And that’s exactly what BMC did earlier this week when it introduced Intelligent Capping (iCap) for zSeries mainframes. On average, according to BMC, companies that actively manage and effectively prioritize their mainframe workloads save 10-15 percent more on their monthly license charges than those who use a more passive approach. Furthermore, instead of assigning a costly mainframe workload guru to manually monitor and manage this, BMC promises that the costs can be reduced while also diminishing risk to the business through the use of its intelligent iCap software that understands workloads, makes dynamic adjustments, and automates workload capping.

The savings, according to BMC, can add up fast. In one example, BMC cited saving 161 MSUs, which translated for that organization to over $55k that month. Given that a mainframe shop spends anywhere from a few hundred thousand to millions of dollars per month on MLC charges, savings of just a few percent can be significant. One BMC customer reportedly expects intelligent capping to save it 12% each month. Caveat: DancingDinosaur has not yet been able to speak with any BMC iCap customer to verify these claims.

But assuming they are true, iCap is a no-brainer for any mainframe shop paying anything but the most minimal MLC. BMC charges for iCap based on the customer's capacity. It is willing to discuss a shared-gain model in which the iCap charges are based on how much is saved, but none of those deals apparently has been finalized.

This seems like a straightforward challenge for a mainframe management tool vendor, but DancingDinosaur has found only a few actually doing it—BMC, Softwareonz, and IBM. Softwareonz offers AutoSoftCapping, which promises to maximize software cost efficiency for IBM zSeries platforms, specifically z/OS. It does so by automatically adjusting defined capacity by LPAR based on workload while maintaining a consistent overall defined capacity for your CPC.

Softwareonz, Seattle, estimates it saves 2% on monthly charges, on the low end. At the high end, it has run simulations suggesting 20% savings.  AutoSoftCapping only works for datacenters running their z on the VWLC pricing model. Customers realistically can save 8-10%. Again, DancingDinosaur has not yet validated any savings with an actual Softwareonz customer.

Without automation, you have to do this manually by adjusting defined capacity based on actual workloads. Too often that leaves the organization with the choice of constraining workloads and thereby inhibiting performance, or over-provisioning the cap and thereby driving up costs through wasted capacity. The sketch below illustrates the basic adjustment logic.
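
For illustration only, here is the kind of arithmetic an automated capper performs: move unused defined capacity from LPARs with headroom to an LPAR pressing against its cap, while holding the CPC total constant. The LPAR names, MSU figures, donor threshold, and simple proportional rule are assumptions; products like iCap and AutoSoftCapping layer real workload intelligence and policy on top of this.

```python
# Illustrative sketch only: shift unused defined capacity from LPARs with
# headroom to LPARs approaching their caps, keeping the total defined
# capacity for the CPC constant. Names and MSU figures are made up.
def rebalance(defined, four_hour_avg, donor_floor=0.7):
    """Return new per-LPAR defined capacities with the same total.
    LPARs running below donor_floor of their cap donate spare MSUs to
    LPARs running above their cap, proportionally to need."""
    need = {l: max(0, four_hour_avg[l] - defined[l]) for l in defined}
    spare = {l: max(0, defined[l] * donor_floor - four_hour_avg[l]) for l in defined}
    total_need, total_spare = sum(need.values()), sum(spare.values())
    moved = min(total_need, total_spare)
    new = dict(defined)
    if moved == 0:
        return new
    for l in defined:
        new[l] += moved * need[l] / total_need      # receivers gain MSUs
        new[l] -= moved * spare[l] / total_spare    # donors give MSUs back
    return {l: round(v) for l, v in new.items()}

defined = {"PROD": 300, "TEST": 200, "DEV": 100}
usage   = {"PROD": 340, "TEST": 90,  "DEV": 40}
print(rebalance(defined, usage))   # PROD gains MSUs; TEST and DEV donate; total stays 600
```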

So, if automatic MLC capping is a no brainer, why isn’t everybody doing it? Softwareonz sees several reasons, the primary one being the fear of the cap negatively impacting the VWLC four-hour rolling average. Nobody wants to impact their production workloads. Of course, the whole reason to apply intelligence to the automation is to reduce software costs without impacting production workloads. BMC offers several ways to ease the organization into this as they become more comfortable and confident in the tool.

Another reason suggested is that the System z operational team is protecting its turf from the inroads of automation. A large z shop might have a team of half a dozen or more people dedicated to monitoring and managing workloads manually. Bring in automation like iCap or AutoSoftCapping and they expect pink slips to follow.

Of course, IBM offers its own z/OS Capacity Provisioning tool (z/OS v1.9 and above), which can be used to add and remove capacity through a Capacity Provisioning Manager (CPM) policy. The policy can automatically control the defined capacity limit or the group capacity limits. The user interface for defining CPM policies is z/OSMF.

If you are subject to MLC pricing, consider an automated tool. BTW, there also are consultants who will do this for you.

A note: IBM Enterprise Cloud System, covered by DancingDinosaur a few weeks ago here, is now generally available. It is an OpenStack-based converged offering that includes compute, storage, software, and services built around the zBC12. Check out the most recent details here.

Also take note: IBM Enterprise2014 is coming to Las Vegas in early October; details here. The conference combines System z University and Power Systems University plus more. You can bet there will be multiple sessions on MLC pricing in its various permutations and workload capping.

DancingDinosaur is Alan Radding. You can follow him on Twitter, @mainframeblog. Or visit his website, www.technologywriter.com

 

Industrial Strength SDS for the Cloud

June 12, 2014

The hottest thing in storage today is software defined storage (SDS). Every storage vendor is jumping on the SDS bandwagon.

The presentation titled Industrial-Strength SDS for the Cloud, by Sven Oehme, IBM Senior Research Scientist, drew a packed audience at Edge 2014 and touched on many of the sexiest acronyms in IBM's storage portfolio. These included not just GPFS but also GSS (also called GPFS Storage Server), GNR (GPFS Native RAID), LROC (local read-only cache), and even the Linear Tape File System (LTFS).

The session promised to outline the customer problems SDS solves and show how to deploy it in large scale OpenStack environments with IBM GPFS.  Industrial strength generally refers to large-scale, highly secure and available multi-platform environments.

The session abstract explained that the session would show how GPFS enables resilient, robust, reliable, storage deployed on low-cost industry standard hardware delivering limitless scalability, high performance, and automatic policy-based storage tiering from flash to disk to tape, further lowering costs. It also promised to provide examples of how GPFS provides a single, unified, scale-out data plane for cloud developers across multiple data centers worldwide. GPFS unifies OpenStack VM images, block devices, objects, and files with support for Nova, Cinder, Swift and Glance (OpenStack components), along with POSIX interfaces for integrating legacy applications. C’mon, if you have even a bit of IT geekiness, doesn’t that sound tantalizing?

One disclaimer before jumping into some of the details: despite having written white papers on SDS and cloud, your blogger can only hope to approximate the rich context provided at the session.

Let's start with the simple stuff: the expectations and requirements for cloud storage:

  • Elasticity, within and across sites
  • Secure isolation between tenants
  • Non-disruptive operations
  • No degradation by failing parts as components fail at scale
  • Different tiers for different workloads
  • Converged platform to handle boot volumes as well as file/object workload
  • Locality awareness and acceleration for exceptional performance
  • Multiple forms of data protection

Of course, affordable hardware and maintenance is expected as is quota/usage and workload accounting.

Things start getting serious with IBM's General Parallel File System (GPFS). This is what IBMers really mean when they refer to Elastic Storage: a single namespace provided across individual storage resources, platforms, and operating systems. Add in different classes of storage devices (fast or slow disk, SSD, flash, even LTFS tape), storage pools, and policies to control data placement and you've got the ability to do storage tiering. You can even geographically distribute the data through IBM's Active Cloud Engine, initially a SONAS capability sometimes referred to as Active File Manager. Now users can access data by the same name regardless of where it is located. And since the system keeps distributed copies of the latest data, it can handle a temporary loss of connectivity between sites.

To protect the data add in declustered software RAID, aka GNR or even GSS (GPFS Storage Server). The beauty of this is it reduces the space overhead of replication through declustered parity (80% vs. 33% utilization) while delivering extremely fast rebuild.  In the process you can remove hardware storage controllers from the picture by doing the migration and RAID management in software on your commodity servers.
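
The utilization figures are easy to verify. Assuming 3-way replication on one side and an 8 data + 2 parity declustered layout on the other (actual GNR/GSS geometries vary), the quick check below reproduces the 33% vs. 80% numbers.

```python
# Back-of-the-envelope check of the utilization figures above. Assumptions:
# 3-way replication for the replicated case and an 8 data + 2 parity
# declustered layout for the parity case; real deployments vary.
def usable_fraction_replication(copies):
    return 1 / copies

def usable_fraction_parity(data_strips, parity_strips):
    return data_strips / (data_strips + parity_strips)

print(f"3-way replication : {usable_fraction_replication(3):.0%} usable")   # ~33%
print(f"8+2 declustered   : {usable_fraction_parity(8, 2):.0%} usable")     # 80%
```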

[Graphic: industrial-strength SDS software stack]

In the above graphic, focus on everything below the elongated blue triangle. Since it is being done in software, you can add an Object API for object storage. Throw in encryption software. Want Hadoop? Add that too. The power of SDS. Sweet.

The architecture Oehme lays out utilizes generic servers with direct-attached switched JBOD (SBOD). It also makes ample use of LROC, which provides a large read cache that benefits many workloads, including SPECsfs, VMware, OpenStack, other virtualization, and database workloads.

A key element in Oehme’s SDS for the cloud is OpenStack. From a storage standpoint OpenStack Cinder, which provides access to block storage as if it were local, enables the efficient sharing of data between services. Cinder supports advanced features, such as snapshots, cloning, and backup. On the back end, Cinder supports Linux servers with iSCSI and LVM; storage controllers; shared filesystems like GPFS, NFS, GlusterFS; and more.
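
As a rough illustration of what consuming Cinder block storage looks like from the client side, here is a minimal sketch using the openstacksdk Python client. It assumes a clouds.yaml entry named mycloud and a block-storage backend (GPFS, Storwize, XIV, or anything else) already configured by the cloud operator; the volume name and size are made up, and exact SDK calls may vary by release.

```python
# Minimal sketch of driving Cinder through the openstacksdk Python client.
# Assumes a clouds.yaml entry named "mycloud"; volume name and size are
# illustrative only.
import openstack

conn = openstack.connect(cloud="mycloud")

# Create a 10 GB block volume; the Cinder scheduler picks the backend.
vol = conn.block_storage.create_volume(name="demo-vol", size=10)
conn.block_storage.wait_for_status(vol, status="available")

# Snapshot it - one of the advanced features mentioned above.
snap = conn.block_storage.create_snapshot(volume_id=vol.id, name="demo-snap")

# List volumes visible to this project.
for v in conn.block_storage.volumes():
    print(v.name, v.size, v.status)
```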

Since Oehme's goal is to produce industrial-strength SDS for the cloud, it needs to protect data. Data protection is delivered through backups, snapshots, cloning, replication, file-level encryption, and declustered RAID, which spans all disks in the declustered array and results in faster RAID rebuild (because more disks are available for the rebuild).

The result is highly virtualized, industrial-strength SDS for deployment in the cloud. Can you bear one more small image that promises to put this all together? Will try to leave it as big as will fit. Notice it includes a lot of OpenStack components connecting the storage elements. Here it is.

[Graphic: SDS for the cloud, with OpenStack components connecting the storage elements]

DancingDinosaur is Alan Radding. Follow DancingDinosaur on Twitter @mainframeblog

Learn more about Alan Radding at technologywriter.com

Expanding Mainframe Linux and Cloud Computing

June 9, 2014

In case you wondered if IBM is seriously committed to both mainframe Linux and cloud computing on the System z platform you need only look at the June 2 announcement that the company is opening the first dedicated System z Linux and cloud computing competency center in Beijing.  According to the announcement, the new center is specifically intended to help organizations there take advantage of Linux and cloud computing solutions on the mainframe, and help accelerate adoption of Linux on System z in China.

This is just the most recent of a number of developments that boosted the System z profile. Even the recent IBM Edge2014 conference, which was not about the System z at all (a System z and Power conference, Enterprise2014, is coming up in October), still managed to slip in some System z sessions and content, including one about protecting DB2 data on z/OS using tape and other sessions that included the System z and Power enterprise servers in discussions of various aspects of cloud computing or the use of flash.

Following the Mainframe50 announcement earlier in the spring, IBM introduced more System z enhancements, including the IBM Enterprise Cloud System, an OpenStack-based converged offering that includes compute, storage, software, and services built around the zBC12; IBM Wave for z/VM, which simplifies z/VM virtualization management and expedites an organization's path to the cloud; and a new IBM Cloud Management Suite for System z, which handles dynamic provisioning and performance monitoring.

An interesting aspect of this announcement is IBM's focus on Linux. It has taken a decade for Linux to gain traction in System z data centers, but the patience is finally paying off. Linux has proven instrumental in bringing new mainframe users to the platform (DancingDinosaur previously reported on Algar, a Brazilian telco); according to IBM, more than 50% of all new mainframe accounts since 2010 run Linux. To that end, DancingDinosaur has long recommended the Enterprise Linux Server Solution Edition program, a deeply discounted package of hardware, middleware, and software. It represents the best and maybe the only bargain IBM regularly offers.

Linux itself has proven remarkably robust and has achieved widespread acceptance among enterprises running a variety of platforms. According to IDC, Linux server demand is rising due to demand from cloud infrastructure deployments, and the researcher expects that demand to continue. In the first quarter of 2014, Linux server revenue accounted for 30 percent of overall server revenue, an increase of 15.4 percent.

Along with cloud computing, collaborative development appears to be contributing to the continued growth and adoption of Linux. According to the Linux Foundation, a new business model has emerged in which companies are joining together across industries to share development resources and build common open source code bases on which they can differentiate their own products and services. This collaborative approach promises to transform a number of industries, especially those involved with cloud computing, social and mobile. Apparently it provides a fast way to create the next generation of technology products.

In its latest survey, the Linux Foundation identified three drivers of the recent Linux growth:

  1. Collaborative software development—ninety-one percent of business managers and executives surveyed rated collaborative software development somewhat to very important to their business, while nearly 80 percent say collaborative development practices have been seen as more strategic to their organization over the past three years.
  2. Growing investments in collaborative software development—44 percent of business managers said they would increase their investments in collaborative software development in the next six months.
  3. The benefits of collaboration—more than 77 percent of managers said collaborative development practices have benefited their organizations through a shorter product development cycle/faster time to market.

The bulk of the world’s critical transaction processing and production data continue to reside on the mainframe, around 70 percent, according to IBM. Similarly, 71% of all Fortune 500 companies have their core businesses on a mainframe. And this has remained remarkably steady over the past decade despite the rise of cloud computing. Of course, all these organizations have extensive multi-platform data centers and are adding growing numbers of on-premise and increasingly hybrid cloud systems.

Far from relying on its core production processing to carry the mainframe forever, the new Beijing mainframe Linux-cloud center demonstrates IBM’s intent to advance the mainframe platform in new markets. It is opening the mainframe up in a variety of ways; from z/OS in the cloud to Hadoop for z to new cloud-like pay-for-use pricing models. Watch DancingDinosaur for an upcoming post on the new pricing discounts for mobile transactions on z/OS.

DancingDinosaur is Alan Radding and can be followed on Twitter, @mainframeblog

New IBM Flash Storage for the Mainframe

June 2, 2014

IBM is serious about flash storage and is enabling just about everything for flash—the DS8000 family, SAN Volume Controller, EasyTier, Real-time Compression (RtC), and more. Of particular interest to DancingDinosaur readers should be the recently announced DS8870 all-flash enclosure.

Storage in general is changing fast. Riding Moore’s Law for the past two decades, storage users could assume annual drops in the cost per gigabyte. It was as predictable as passing go in Monopoly and collecting $200. But with that ride coming to an end companies like IBM are looking elsewhere to engineer the continued improvements everyone assumed and benefited from. For example, IBM is combining SVC, RtC, and flash to get significantly more performance out of less actual storage capacity.

The DS8870 is particularly interesting. In terms of reliability, for instance, it delivers not five-nines (99.999%) availability but six-nines (99.9999%) availability. That works out to be about 30 seconds of downtime each year. It works with all IBM servers, not just the z, and it protects data through full disk encryption and advanced access control. With the new flash enclosure packed with IBM’s enhanced flash the DS8870 delivers 4x faster flash performance in 50% less space. That translates into a 3.2x improvement in database performance.

Flash is not cheap when viewed through the traditional cost/gigabyte metric, but the above performance data suggests a different way to gauge the cost of flash, which continues to steadily fall in price. The 3.2x increase in database performance, for example, means you can handle more than three times as many transactions.

Let’s start with the assumption that more transactions ultimately translate into more revenue. The same for that extra 9 in availability. The high-performance all flash DS8870 configuration with the High Performance Flash Enclosure also reduces the footprint by 50% and reduces power consumption by 12%, which means lower space and energy costs. It also enables you to shrink batch times by 10%, according to IBM. DancingDinosaur will be happy to help you pull together a TCO analysis for an all-flash DS8870 investment.

The sheer specs of the new system are impressive. IBM reports that up to 8 PCIe flash enclosures populated with 400GB flash cards provide 73.6TB of usable capacity. For I/O capacity, the 8 I/O bays installed in the base frame provide up to 128 8Gb FC ports. Depending on the internal server you install in the DS8870, you can also get up to 1TB of cache.

[Graphic: DS8870 rack with all-flash enclosures]

The Flash Enclosure itself is a 1U drawer that can take up to 30 flash cards. By opting for thirty 400GB flash cards you will end up with 9.2TB usable (12TB raw). Since the high-performance all-flash DS8870 can take up to 8 Flash Enclosures, you can get 96TB raw (73.6TB usable) flash capacity per system.
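
The arithmetic behind those figures is straightforward; the short sketch below just multiplies it out. The 9.2TB usable-per-enclosure figure is IBM's; the raw numbers follow from 30 cards at 400GB each.

```python
# Quick arithmetic behind the DS8870 flash capacity figures quoted above.
# The raw-to-usable difference reflects IBM's quoted numbers (internal
# formatting/RAID overhead), not a calculation done here.
CARD_GB = 400
CARDS_PER_ENCLOSURE = 30
MAX_ENCLOSURES = 8

raw_per_enclosure_tb = CARD_GB * CARDS_PER_ENCLOSURE / 1000        # 12 TB raw
usable_per_enclosure_tb = 9.2                                      # per IBM
print(f"per enclosure : {raw_per_enclosure_tb:.0f} TB raw, {usable_per_enclosure_tb} TB usable")
print(f"full system   : {raw_per_enclosure_tb * MAX_ENCLOSURES:.0f} TB raw, "
      f"{usable_per_enclosure_tb * MAX_ENCLOSURES:.1f} TB usable")  # 96 / 73.6
```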

A hybrid DS8870 system, as opposed to the high-performance all-flash version, will allow up to 120 flash cards in 4 Flash Enclosures for 48TB raw (36.8TB usable), along with 1,536 2.5" HDDs/SSDs. Then connect it all to the DS8870 internal PCIe fabric for impressive performance: 200,000 IOPS (100% read) and 130,000 IOPS (100% write). From there, you can connect it to flash-enabled SVC and EasyTier.

Later this year, reports Clod Barrera, IBM’s storage CTO, you will be able to add 4 more enclosures in hybrid configurations for boosting flash capacity up to 96TB raw.  Together you can combine the DS8870, flash, SVC, RtC, and EasyTier for a lightning fast and efficient storage infrastructure.

Even the most traditional System z shop will soon find itself confronting mixed workloads consisting of traditional and non-traditional work. You probably already are, as mobile devices initiate requests for mainframe data. Pretty soon you will be incorporating both traditional and new workloads, and when that happens you will want a fast, efficient, flexible infrastructure like the DS8870.

DancingDinosaur is Alan Radding. Follow him on Twitter, @mainframeblog

IBM Edge2014: It’s All About the Storage

May 22, 2014

When your blogger, as a newbie programmer, published his first desktop application in the pre-historic desktop computing era, it had to be distributed on consumer tape cassette. When buyers complained that it didn't work, the problem was quickly traced to imprecise and inconsistent consumer cassette storage. Since the dawn of the computer era, it has always been about storage.

It still is. Almost every session at IBM Edge2014 seemed to touch on storage in one way or another, whether the topic was cloud, analytics, or mobile. Kicking it all off was Tom Rosamilia, Senior Vice President, IBM Systems & Technology Group, who elaborated on IBM's main theme not just for Edge2014 but for IBM at large: Infrastructure Matters Because Business Outcomes Matter. And by infrastructure IBM mainly is referring to storage.

To reinforce his infrastructure-matters point, Rosamilia cited a recent IBM study showing that 70% of top executives now recognize infrastructure as an enabler. However, just 10% reported their infrastructure was ready for the challenge. As an interesting aside, the study found 91% of the respondents' customer-facing applications were using the System z, which only emphasizes another theme at IBM Edge2014—that companies need to connect systems of record with systems of engagement if they want to be successful.

In fact, IBM wants to speed up computing overall, starting with flash and storage. A study by the Aberdeen Group found that a 1-second delay in page load resulted in a 7% loss in conversions, 11% fewer page views, and a 16% decrease in customer satisfaction. IBM's conclusion: in dollar terms, if your site typically earns $100,000 a day, this year you could lose $2.5 million in sales. Expect all IBM storage to be enabled for some form of flash going forward.
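
The $2.5 million figure follows directly from a 7% conversion loss on $100,000 a day over a year, as the quick check below shows.

```python
# Sanity check on the $2.5 million figure: a 7% conversion loss applied to
# $100,000/day in sales over a full year.
daily_sales = 100_000
conversion_loss = 0.07
print(f"annual loss ~ ${daily_sales * conversion_loss * 365:,.0f}")   # ~ $2,555,000
```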

First announced at IBM Edge2014 were the FlashSystem 840 and the IBM FlashSystem V840, which includes integrated data virtualization through IBM’s SVC and its various components. It also boasts a more powerful controller capable of rich capabilities like compression, replication, tiering, thin provisioning, and more. Check out the details here.

Also at Edge2014 there was considerable talk about Elastic Storage. This is the storage you have always imagined. You can manage mixed storage pools of any device. Integrate with any OS. Write policies to it. It seems infinitely scalable. Acts as a universal cloud gateway. And even works with tape.

Sounds magical, doesn't it? According to IBM, Elastic Storage provides automated tiering to move data across different storage media types. Infrequently accessed files can be migrated to tape and automatically recalled back to disk when required—it sounds like EasyTier built in. Unlike traditional storage, it allows you to smoothly grow or shrink your storage infrastructure without application disruption or outages. And it can run on a cluster of x86 and POWER-based servers and can be used with internal disk, commodity storage, or advanced storage systems from IBM or other vendors. Half the speakers at the conference glowed about Elastic Storage. It clearly exists, but it is not yet an actual named product. Watch for it; it is going to have a different name when finally released, probably later this year. No hint at what that name will be.

IBM, at the conference, identified the enhanced XIV as the ideal cloud infrastructure. XIV eliminates complexity. It enables high levels of resiliency and ensures service levels. As one speaker said: “It populates LUNs and spreads the workload evenly. You don’t even have to load balance it.” Basically, it is grid storage that is ideal for the cloud.

LTFS (Linear Tape File System) was another storage technology that came up surprisingly often. Don't assume that tape has no future, not judging from IBM Edge2014. LTFS provides a GUI that enables you to automatically move infrequently accessed data from disk to tape without the need for proprietary tape applications. Implementing LTFS Enterprise Edition allows you to replace disk with tape for tiered storage and lower your storage TCO by over 50%. Jon Toigo, a leading storage analyst, has some good numbers on tape economics that may surprise you.

Another sometimes overlooked technology is EasyTier, IBM’s storage tiering tool.  EasyTier has evolved into a main way for IBM storage users to capitalize on the benefits of Flash. EasyTier already has emerged as an effective tool for both the DS8000 and the Storwize V7000.  With EasyTier small amounts of Flash can deliver big performance improvements.

In the coming weeks DancingDinosaur will look at other IBM Edge 2014 topics.  It also is time to start thinking about IBM Enterprise 2014, which combines the System z and Power platforms. It will be at the Venetian in Las Vegas, Oct 6-10. IBM Enterprise 2014 is being billed as the premier enterprise infrastructure event.

BTW, we never effectively solved the challenge of distributing desktop programs until the industry came out with 5.25" floppy disks. Years later my children used the unsold floppies as little Frisbees.

Follow Alan Radding and DancingDinosaur on Twitter, @mainframeblog

IBM Edge2014 Executive Track Hits Critical Issues

May 14, 2014

Until now this blog has looked at the technology issues being addressed at the upcoming IBM Edge2014 conference starting in Las Vegas on Monday. There also, however, is a business focus through the Edge2014 Executive Track. You can find Executive Track session details here.

DancingDinosaur, a mediocre programmer in a previous life, still gravitates toward technical sessions in the hopes of understanding this amazing new technology. But for the sake of some balance your blogger will hit a few Executive Track sessions. The following look particularly interesting:

Infrastructure Matters Because Business Outcomes Matter—to be given by Tom Rosamilia, Senior Vice President, IBM Systems & Technology Group and IBM Integrated Supply Chain. Your blogger has heard Rosamilia talk on this topic before and he is spot on. He explains why the right IT infrastructure choices are critical for business success and how the demands created by cloud, big data and analytics, mobile, and social are fueling an explosion of data and spawning new workloads and business models. This has drastically changed expectations for IT infrastructures, which now are expected to be software defined, open, and more secure than ever before. The result is what IBM calls Composable Business, which optimizes technology by building a robust, highly agile infrastructure that can adjust quickly and efficiently to change for better business outcomes.

Along the same lines is Rethinking Your Business Model with Cloud. Here Robert LeBlanc, Senior Vice President, Software and Cloud Solutions, IBM Software Group, describes how new disruptive technologies are pushing companies to rethink how they do business. As a result, a wide range of companies, both new and well established, ranging from Tangerine (formerly ING Direct), OnFarm (farm management) and Kiwi (wearable technology) to Pitney Bowes (reinventing the postage meter business) are redefining themselves and their industries. Not surprisingly business leaders, IT executives, and developers are embracing cloud technology across their organizations to promote innovation, drive business growth, and gain a competitive advantage.  Enterprises are tapping into IBM cloud technologies, such as the IBM Cloud Marketplace and BlueMix, to quickly build applications that transform their businesses.

Improved Economics with Data Virtualization—Jeff Barber, Vice President, Mid-Range/Low End Disk Business Line Executive, IBM Systems & Technology Group discusses how data is the next natural resource; the new currency of business. In today’s mobile and social environment, users expect data to be available anytime, anywhere, and on any device. This has placed great financial pressure on enterprise data fabrics. With budgets under constant scrutiny, companies need to manage storage costs while delivering faster business insights. IBM Storage provides data virtualization that helps enable business agility while reducing both infrastructure and operational costs. Data virtualization also provides the foundation for software defined storage and cloud storage.

Big Data Insights in a Flash–Michael Kuhn, Vice President and Business Line Executive, Flash Systems, IBM Systems & Technology Group explains how the massive amounts of data being generated each day are making it difficult to capture insights from the data. Through this data explosion businesses can achieve their competitive edge by making data-driven decisions in real time. However, the velocity at which data is growing has made it increasingly difficult for businesses to manage and extract value from their data. Kuhn shows how adding flash storage to a data fabric offers a fast, low-cost way to create business value and extract greater insights from data. Flash also enables data analytics in near real time so workers can take appropriate actions even as customers are making buying decisions.

Much of IBM Edge2014 will touch on storage in various ways, which makes sense since data lies at the heart of technology’s value, and storage is how organizations capture, preserve, protect, and leverage data for its maximum impact.

Flexible Storage for Data in the Cloud by Vincent Hsu, IBM Fellow and Chief Technical Officer, Storage Systems, and Sidney Chow, Vice President and Business Line Executive, High End Disk, IBM Systems & Technology Group. Hsu and Chow start with the unpredictability of workloads in cloud environments, which is driving an ever increasing need for a flexible and scalable infrastructure. Your cloud services, whether provided on-premises or off-site, are only as good as the elasticity and control they can provide. The right data infrastructure matters when creating cloud environments that can optimize data for faster business performance and lower total costs. Software Defined Storage takes elasticity to the next level by simplifying interfaces and automating tasks.

Not sure which of these sessions your blogger will attend—all look good. In fact, there is so much at IBM Edge2014 that this blogger would need another week to catch all he likes. And, still need to make time for the Sheryl Crow concert.  Decisions, decisions, decisions…

Look for this blogger 5/19 through 5/22 at Edge2014. You’ll find me in the social media lounge when not attending a session. And follow me on Twitter, @mainframeblog or @IBMEdge.

IBM Edge2014 as Coming out Party for OpenStack

May 7, 2014

IBM didn't invent OpenStack (Rackspace and NASA did), but IBM's embrace of OpenStack in March 2013 as its standard for cloud computing made it a legit standard for enterprise computing. Since then IBM has made clear its intention to enable its product line, from the System z on down, for the OpenStack set of open source technologies. Judging from the number of sessions at IBM Edge2014 (Las Vegas, May 19-23 at the Venetian) that address one or another aspect of OpenStack, you might think of IBM Edge2014 almost as a coming out celebration for OpenStack and enterprise cloud computing.

OpenStack is a collection of open source technologies, the goal of which is to provide a scalable computing infrastructure for both public and private clouds. As such it has become the foundation of IBM's cloud strategy, which is another way of saying it has become what IBM sees as its future. An excellent mini-tutorial on OpenStack, IBM, and the System z can be found at mainframe-watch-Belgium here.

At IBM Edge2014 OpenStack is frequently included in sessions on storage, cloud, and storage management.  Let’s take a closer look at a few of those sessions.

IBM Storage and Cloud Technologies

Presenter Christopher Vollmar offers an overview of the IBM storage platforms that contain cloud technologies or provide a foundation for creating a private storage cloud for block and file workloads. This overview includes IBM’s SmartCloud Virtual Storage Center, SmartCloud Storage Access, Active Cloud Engine, and XIV’s Hyper-Scale as well as IBM storage products’ integration with OpenStack.

OpenStack and IBM Storage

Presenters Michael Factor and Funda Eceral explain how OpenStack is rapidly emerging as the de facto platform for Infrastructure as a Service. IBM is working fast to pin down the integration of its storage products with OpenStack. This talk presents a high level overview of OpenStack, with a focus on Cinder, the OpenStack block storage manager. They also will explain how IBM is leading the evolution of Cinder by improving the common base with features such as volume migration and ability to change the SLAs associated with the volume in the OpenStack cloud. Already IBM storage products—Storwize, XIV, DS8000, GPFS and TSM—are integrated with OpenStack, enabling self-provisioning access to features such as EasyTier or Real-time Compression via standard OpenStack interfaces. Eventually, you should expect virtually all IBM products, capabilities, and services to work with and through OpenStack.

IBM XIV and VMware: Best Practices for Your Cloud

Presenters Peter Kisich and Carlos Lizarralde argue that IBM Storage continues to lead in OpenStack integration and development. They then introduce the core services of OpenStack while focusing on how IBM storage provides open source integration with Cinder drivers for Storwize, DS8000, and XIV. They also include key examples and a demonstration of the automation and management IBM Storage offers through the OpenStack cloud platform.

IBM OpenStack Hybrid Cloud on IBM PureFlex and SoftLayer

Presenter Eric Kern explains how IBM’s latest version of OpenStack is used to showcase a hybrid cloud environment. A pair of SoftLayer servers running in IBM’s public cloud are matched with a PureFlex environment locally hosting the OpenStack controller. He covers the architecture used to set up this environment before diving into the details around deploying workloads.

Even if you never get to IBM Edge2014 it should be increasingly clear that OpenStack is quickly gaining traction and destined to emerge as central to Enterprise IT, any style of cloud computing, and IBM. OpenStack will be essential for any private, public, and hybrid cloud deployments. Come to Edge2014 and get up to speed fast on OpenStack.

Alan Radding/DancingDinosaur will be there. Look for me in the bloggers lounge between and after sessions. Also watch for upcoming posts on DancingDinosaur about OpenStack and the System z and on OpenStack on Power Systems.

Please follow DancingDinosaur on Twitter, @mainframeblog.

Best System z TCO in Cloud and Virtualization

May 1, 2014

IBM recently analyzed various likely customer workload scenarios and found that the System z as an enterprise Linux server could consistently beat x86 machines in terms of TCO.  The analysis, which DancingDinosaur will dig into below, was reasonably evenhanded although, like automobile mileage ratings, your actual results may vary.

DancingDinosaur has long contended that the z Enterprise Linux Server acquired under the deeply discounted IBM System z Solution Edition program could beat comparable x86 based systems not only in terms of TCO but even TCA. Algar, a Brazilian telecom, acquired its initial z Enterprise Linux server to consolidate a slew of x86 systems and lay a foundation for scalable growth. It reports cutting data center costs by 70%. Nationwide Insurance, no newcomer to mainframe computing, used the zEnterprise to consolidate Linux servers, achieving $46 million in savings.

The point: the latest IBM TCO analyses confirm what IBM and the few IT analysts who talk to z customers have been saying for some time. TCO advantage, IBM found, switches to the z Enterprise Linux Server at around 200 virtual machines compared to the public cloud and a bit more VMs compared to x86 machines.

IBM further advanced its cause in the TCO/TCA battle with the recent introduction of the IBM Enterprise Cloud System. This is a factory-built and integrated system—processor, memory, network, IFLs, virtualization management, cloud management, hypervisor, disk orchestration, Linux OS—priced (discounted) as a single solution. IBM promises to deliver it in 45 days and have it production ready within hours of hitting the organization’s loading dock. Of course, it comes with the scalability, availability, security, manageability, etc. long associated with the z, and IBM reports it can scale to 6000 VMs. Not sure how this compares in price to a Solution Edition Enterprise Linux Server.

The IBM TCO analysis compared the public cloud, an x86 cloud, and the Enterprise Cloud System in terms of power and space, labor, software/middleware, and hardware costs when running 48 diverse (a range of low, medium, and high I/O) workloads. In general it found an advantage for the z Enterprise Cloud System of 34-73%. The z cost considerably more in terms of hardware but it more than made up for that in software, labor, and power. Overall, the TCO examined more than 30 cost variables, ranging from blade/IFL/memory/storage amounts to hypervisor/cloud management/middleware maintenance. View the IBM z TCO presentation here.

In terms of hardware, the z included the Enterprise Linux Server, storage, z/VM, and IBM Wave for z/VM. Software included WebSphere Application Server middleware, Cloud Management Suite for z, and Tivoli for z/VM. The x86 cloud included HP hardware with a hypervisor, WebSphere Application Server, SmartCloud Orchestrator, SmartCloud Monitoring, and Tivoli Storage Manager EE. Both analyses included labor to manage both hardware and VMs, power and space costs, and SUSE Linux.

The public cloud assumptions were a little different. Each workload was deployed as a separate instance. The pricing model was for AWS reserved instances. Hardware costs were based on instances in east US region with SUSE, EBS volume, data in/out, support (enterprise), free and reserved tier discounts applied. Software costs included WebSphere Application Server ND (middleware) costs for instances. A labor cost was included for managing instances.

When IBM applied its analysis to 398 I/O-diverse workloads the results were similar: 49-75% lower cost with the Cloud System on z. Again, z hardware was considerably more costly than either x86 or the public cloud, but z software and labor were far less than the others. In terms of 3-year TCO, the public cloud was the highest at $37M, x86 came in at $18.3M, and the Cloud on z cost $9.4M. With 48 workloads, the z again came in with the lowest TCO at $1M, compared to $1.6M for x86 systems and $3.9M for the public cloud.
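
Those percentages square with the 3-year TCO figures just cited; the quick check below derives them (dollar figures as quoted above, in millions).

```python
# The savings range IBM quotes falls straight out of the 3-year TCO figures
# cited above for the 398-workload case (all figures in $M).
tco = {"public cloud": 37.0, "x86 cloud": 18.3, "cloud on z": 9.4}
z = tco["cloud on z"]
for platform in ("x86 cloud", "public cloud"):
    saving = 1 - z / tco[platform]
    print(f"z vs {platform}: {saving:.0%} lower TCO")   # ~49% and ~75%
```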

IBM kept the assumptions equivalent across the platforms. If you make different software and middleware choices or a different mix of high-mid-low I/O workloads your results will be different but the overall comparative rankings probably won’t change all that much.

Still time to register for IBM Edge2014 in Las Vegas, May 19-23. This blogger will be there hanging around the bloggers lounge when not attending sessions. Please join me there.

Follow Alan Radding/DancingDinosaur on Twitter: @mainframeblog

3 Big Takeaways at IBM POWER8 Intro

April 24, 2014

POWER8 did not disappoint. IBM unveiled its latest generation of systems built on its new POWER8 technology on Wednesday, April 23.

DancingDinosaur sees three important takeaways from this announcement:

First, the OpenPOWER Foundation. It was introduced months ago and almost immediately forgotten. DancingDinosaur covered it at the time here. It had a handful of small partners. Only one was significant, Google, and it was hard to imagine Google bringing out open source POWER servers. Now the Foundation has several dozen members and it still is not clear what Google is doing there, but the Foundation clearly is gaining traction. You can expect more companies to join the Foundation in the coming weeks and months.

With the Foundation IBM swears it is committed to a true open ecosystem; one where even competitors can license the technology and bring out their own systems. At some point don't be surprised to see white box Power systems priced below IBM's. More likely in the short term will be specialized Power appliances. What you get as a foundation member is the Power SoC design, bus specifications, reference designs, and open source firmware, OS, and hypervisor. It also includes access to little-endian Linux, which will ease the migration of software to POWER. BTW, Google is listed as a member focusing on open source firmware and on the cloud and high performance computing.

Second, the POWER8 processor itself and the new family of systems. The processor, designed for big data, will run more concurrent queries and run them up to 50x faster than x86, with 4x more threads per core than x86. Its I/O bandwidth is 5x faster than POWER7. It can handle 1TB of memory with 4-6x more memory bandwidth and more than 3x more on-chip cache than an x86. The processor itself utilizes 22nm circuits and runs at 2.5-5 GHz.

POWER8 sports an eight-threaded processor. That means each of the 12 cores in the CPU will coordinate the processing of eight sets of instructions (threads) at a time, for a total of 96. By designating sections of an application that can run as separate threads and coordinating the results, a chip can accomplish more work than a single-threaded chip, IBM explains. By comparison, IBM reports Intel's Ivy Bridge E5 Xeon CPUs are double-threaded, with up to eight cores handling 16 threads at a time (compared to 96 with POWER8). Yes, there is some coordination overhead incurred as more threads are added. Still, the POWER8 chip should attract interest among white box manufacturers and users of large numbers of servers processing big data.
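
The thread math is simple enough to check, as the snippet below shows (core and thread counts as cited above).

```python
# The thread arithmetic behind the comparison above.
power8_threads = 12 * 8        # 12 cores x SMT8      = 96 concurrent threads
ivy_bridge_threads = 8 * 2     # 8 cores x 2-way SMT  = 16 concurrent threads
print(power8_threads, ivy_bridge_threads, power8_threads / ivy_bridge_threads)  # 96 16 6.0
```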

Third is CAPI, your newest acronym. If something is going to be a game-changer, this will be it; the key is to watch for adoption. The Coherent Accelerator Processor Interface (CAPI) sits directly on the POWER8 and works with the same memory addresses that the processor uses; pointers are de-referenced the same as in the host application. CAPI, in effect, removes OS and device driver overhead by presenting an efficient, robust, durable interface. In the process, it offloads complexity.

CAPI can reduce the typical seven-step I/O model flow to three steps (shared memory/notify accelerator, acceleration, and shared memory completion). The advantages revolve around virtual addressing and data caching through shared memory and reduced latency for highly referenced data. [see accompanying graphic] It also enables an easier, natural programming model with traditional thread level programming and eliminates the need to restructure the application to accommodate long latency I/O.  Finally it enables apps otherwise not possible, such as those requiring pointer chasing.

[Graphic: CAPI's three-step flow compared with the traditional seven-step I/O model]

It’s too early to determine if CAPI is a game changer but IBM has already started to benchmark some uses. For example, it ran NoSQL on POWER8 with CAPI and achieved a 5x cost reduction. When combined with IBM’s TMI flash it found it could:

  • Attack problem sets otherwise too big for the memory footprint
  • Deliver fast access to small chunks of data
  • Achieve high throughput for data or simplify object addressing through memory semantics.

CAPI brings programming efficiency and simplicity. It uses the PCIe physical interface for the easiest programming and fastest, most direct I/O performance. It enables better virtual addressing and data caching. Although it was intended for acceleration it works well for I/O caching. And it has been shown to deliver a 5x cost reduction with equivalent performance when attaching to flash.  In summary, CAPI enables you to regain infrastructure control and rein in costs to deliver services otherwise not feasible.

It will take time for CAPI to catch on. Developers will need to figure out where and how best to use it. But with CAPI as part of the OpenPOWER Foundation, expect to see work taking off in a variety of directions. At a pre-briefing a few weeks ago, DancingDinosaur was able to walk through some very interesting CAPI demos.

As for the new POWER8 systems lineup, IBM introduced six one- or two-socket systems, some for Linux only and others for all supported operating systems. The systems, reportedly, will start below $8,000.

You can follow Alan Radding/DancingDinosaur on Twitter: @mainframeblog. Also, please join me at IBM Edge2014, this May 19-23 at the Venetian in Las Vegas.  Find me in the bloggers lounge.

IBM Edge2014 Explores Software Defined Everything in Depth

April 18, 2014

IBM Edge2014 is coming up fast, May 19-23 in Las Vegas, at the Venetian. Plus there is the Sheryl Crow concert you don’t want to miss.

IBM Edge2014 is bringing over 400 program sessions across more than a dozen topic tracks; choices for everyone from the geekiest techie to the most buttoned-down business executive. One track that crosses both camps—technical and business—is Software Defined Environments (SDE), which DancingDinosaur thinks of as software-defined everything.

SDE takes the abstraction, or the decoupling, of software and hardware to the next level. It takes virtualization and moves it higher up in the stack.  There you can virtualize not only servers but network switches, storage, and more. The benefit: efficiency, speed, and flexibility. You can allocate and deploy system resources quickly and easily through software. And add or move capabilities as needed, again through software.

Through software defined virtualization the IT industry can virtualize nearly every resource and capability. If you can encapsulate a set of capabilities as software you have created a virtualized component that can run on almost any network-attached device capable of hosting software.  In short, you have effectively decoupled those capabilities usually embedded as firmware from whatever underlying physical device previously hosted them.

IBM Edge2014 offers numerous software defined sessions. Let’s look at a few:

Software Defined Storage – Storage for Software Defined Environments

Presented by Clodoaldo Barrera

As described in the program, Software Defined Environments (SDE) have become the preferred approach for modern IT operations, combining the values of automation, policy enforcement, and efficiency. Within these SDE operations, storage must be properly configured to support the expected user experience and cost benefits, while still providing performance, availability, and data protection. In this session Barrera will explain how storage is configured and then managed through a stack of virtualization, provisioning, and orchestration software.

IBM Software Defined Storage Vision and Roadmap

Presented by Vincent Hsu, Tom Clark

This session introduces the core technology for IBM Software Defined Storage (SDS) vision and the SDS product roadmap. This includes the control plane technology as well as the data plane technology to address the future software defined data center.

But it is not only about storage.

IBM's Strategy for Software Defined Networking

Presented by Andy Wright

Software Defined Networking (SDN) is an emerging framework designed for virtual, dynamic, and flexible networking that allows organizations to easily modify, control, and manage physical and virtual networks. IBM already is a leader in this space with its SDN VE offerings, and its roadmap tells you where it is headed. Wright's session examines IBM's vision, network virtualization (overlay) capabilities for existing networks, and the capabilities of OpenFlow networks. These technologies promise to improve the working lives of system, virtualization, cloud, and network administrators. If you fill one of these roles, you probably don't want to miss this.

Continuity Requirements for Software Defined Data Centers

Presented by Jon Toigo

One of the benefits of software defined resources is the ability to spin up additional resources virtually. That leads to the assumption of an agile and dynamic data center that can turn on a dime in response to business requirements. That is true in theory. Rarely discussed, however, are the inherent risks of combining physical and virtual resources, both locally deployed and sourced from remote cloud-based providers. This presentation will identify some disaster recovery and business continuity requirements to keep in mind as you plan your software defined data center and need to rein in management's wishful thinking.

And a related topic: It's All About the Object! IBM Customer Experiences with Object Storage

Presented by Manuel Avalos, Dan Lucky

Travis County in Austin, TX is an early adopter of IBM object storage. This case study positions IBM’s Statement Of Direction (SOD) about object storage based on OpenStack Swift. Here you can learn from six months of Travis County user experiences, values, and next steps.

And there is another entire session on SDE in the federal government. Overall, IBM Edge2014 is delivering considerable real world experiences direct from the user.

Look for DancingDinosaur at IBM Edge2014, Mon-Wed. at sessions or in the bloggers lounge.

And follow DancingDinosaur on Twitter, @mainframeblog

