Posts Tagged ‘TCO’

Under the Covers of Z Container Pricing

December 1, 2017

Along with the announcement of the z14, now known simply as Z, last July, IBM also introduced container pricing, an upcoming capability intended to make the machine both flexible and price competitive. Container pricing is expected to arrive by the end of this year.

A peek into the IBM z14

Container pricing implied overall cost savings and also simplified deployment. At the announcement IBM suggested competitive economics too, especially when benchmarked against public clouds and on-premises x86 environments.

By now you should realize that IBM has difficulty talking about price. They have lots of excuses relating to their global footprint and such. Funny, other systems vendors that sell globally don’t seem to have that problem. After two decades of covering IBM and the mainframe as a reporter, analyst, and blogger, I’ve finally figured out the reason for the reticence: the company’s pricing is almost always higher than the competition’s.

If you haven’t realized it yet, the only way IBM will talk price is around a 3-year TCO analysis. (Full disclosure: as an analyst, I have developed such TCO analyses and am quite familiar with how to manipulate them.) And even then you will have to swallow a number of assumptions and caveats to get the numbers to work.

For example, there is no doubt that IBM is targeting the x86 (Intel) platform with its LinuxONE lineup, especially its newest machine, the Emperor II. IBM reports it can scale a single MongoDB database to 17TB on the Emperor II while running it at scale with less than 1ms response time, saving up to 37% compared to x86 on a 3-year TCO analysis. The TCO picture gets even better when you look at per-core-priced data serving infrastructures: IBM reports it can consolidate thousands of x86 cores on a single LinuxONE server and reduce costs by up to 40%.

So, let’s see what the Z’s container pricing can do for you. IBM is introducing container pricing to allow new workloads to be added to z/OS without impacting an organization’s rolling four-hour average, while supporting the deployment options that make the most sense for the organization’s architecture and offering competitive pricing for that workload.

For example, one of the initial use cases for container pricing revolves around payments workloads, particularly instant payments. That workload will be charged not to any capacity marker but to the number of payments processed. The payment workload pricing grid promises to be highly competitive, with the price per payment starting at $0.0021 and dropping to $0.001 with volume. “That’s a very predictable, very aggressive price,” says Ray Jones, vice president, IBM Z Software and Hybrid Cloud. You can do the math and decide how competitive this is for your organization.
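
For a rough feel of what that grid implies, here is a minimal sketch using only the two published endpoints; the monthly payment volumes, and the assumption that every payment bills at a single rate, are hypothetical, since IBM has not published the intermediate tiers:

    # Rough per-payment cost sketch using only the two published endpoints
    # of IBM's payment pricing grid ($0.0021 at low volume, $0.001 at volume).
    # The volume figures below are hypothetical.

    HIGH_RATE = 0.0021   # $ per payment at low volume
    LOW_RATE  = 0.001    # $ per payment at high volume

    def monthly_charge(payments_per_month, rate):
        """Monthly charge if every payment were billed at one rate."""
        return payments_per_month * rate

    for volume in (1_000_000, 10_000_000, 100_000_000):  # hypothetical volumes
        print(f"{volume:>11,} payments: "
              f"${monthly_charge(volume, HIGH_RATE):>10,.2f} at the top rate, "
              f"${monthly_charge(volume, LOW_RATE):>10,.2f} at the volume rate")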

Container pricing applies to various deployment options—including co-located workloads in an existing LPAR—that present line-of-sight pricing to a solution. The new pricing promises simplified software pricing for qualified solutions. It even offers the possibility, IBM adds, of different pricing metrics within the same LPAR.

Container pricing, however, requires the use of IBM’s software for payments, Financial Transaction Manager (FTM). FTM counts the number of payments processed, which drives the billing from IBM.

To understand container pricing you must realize IBM is not talking about Docker containers. To IBM, a container is simply an address space, or a group of address spaces, supporting a particular workload. An organization can have multiple containers in an LPAR, can have as many containers as it wants, and can change the size of containers as needed. This is where the flexibility comes in.

The fundamental advantage of IBM’s container pricing comes from co-locating workloads to gain improved performance and lower latency. The new pricing excludes whatever runs inside the containers from the MLC (monthly license charge) calculations.
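
A minimal sketch of why that exclusion matters, assuming illustrative hourly MSU figures; the rolling four-hour average (R4HA) is what drives monthly license charges, and under container pricing the container’s MSUs simply drop out of that average:

    # Illustrative only: hourly MSU consumption for an LPAR, split between
    # a traditional workload and a new containerized workload. All numbers
    # are made up; the point is that container MSUs drop out of the R4HA.

    traditional = [400, 420, 450, 480, 500, 470]   # MSUs per hour
    container   = [  0,  50, 120, 200, 250, 220]   # MSUs per hour (new workload)

    def peak_r4ha(hourly):
        """Peak rolling four-hour average over the series."""
        windows = [hourly[i:i+4] for i in range(len(hourly) - 3)]
        return max(sum(w) / 4 for w in windows)

    combined = [t + c for t, c in zip(traditional, container)]
    print("Peak R4HA, container MSUs included:", peak_r4ha(combined))
    print("Peak R4HA, container MSUs excluded:", peak_r4ha(traditional))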

To get container pricing, however, you have to qualify. The company is setting up pricing agents around the world. Present your container plans and an agent will determine if you qualify and at what price. IBM isn’t saying anything about how you should present your container plans to qualify for the best deal. Just be prepared to negotiate as hard as you would with any IBM deal.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

Revamped IBM Power Systems LC Takes on x86

September 9, 2016

To hear IBM tell it, its revamped and refreshed Power Systems LC lineup will undermine HPE, Dell/EMC, and any other purveyor of x86 (Intel) based systems. Backed by accelerators provided by OpenPOWER community members, IBM appears ready to extend the x86 battle to on-premises, cloud, and hybrid cloud deployments. It promises to deliver better performance at lower cost for all the hot workloads too: artificial intelligence, deep learning, high performance data analytics, and compute-heavy workloads.

IBM Power Systems S821LC: two POWER8 processors in a 1U config, priced 30% less than an x86 server

Almost a year ago, in October 2015, DancingDinosaur covered IBM’s previous Power Systems LC announcement here. The LC designation stands for Linux Community, and the company is tapping accelerators and more from the OpenPOWER community, just as it did with its recent announcement of POWER9, expected in 2017, here.

The new Power LC systems feature a set of community-delivered technologies IBM has dubbed POWERAccel, a family of I/O technologies designed to deliver composable system performance enabled by accelerators. For GPU acceleration, NVIDIA NVLink delivers nearly 5x better integration between POWER processors and NVIDIA GPUs. For FPGA acceleration, IBM tapped its own CAPI architecture to integrate accelerators that run natively as part of the application.

This week’s Power Systems LC announcement features three new machines:

  • S821LC (pictured above)—includes 2 POWER8 sockets in a 1U enclosure, intended for environments requiring dense computing.
  • S822LC for Big Data—brings 2 POWER8 sockets for big data workloads and adds big data acceleration through CAPI and GPUs.
  • S822LC for High Performance Computing—incorporates the new POWER8 processor with NVIDIA NVLink to deliver 2.8x the bandwidth to GPU accelerators and up to 4 integrated NVIDIA Pascal GPUs.

POWER8 with NVLink delivers 2.8x the bandwidth of a PCIe data pipe. According to figures provided by IBM comparing the price-performance of the Power S822LC for HPC (20-core, 256 GB, 4x Pascal) with a Dell C4130 (20-core, 256 GB, 4x K80), measured in total queries per hour (qph), the Power system delivered 2.1x better price-performance. The Power Systems server cost more ($66,612 vs. $57,615 for the Dell) but delivered 444 qph vs. Dell’s 185 qph.
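
Working the arithmetic behind that 2.1x figure, using only the prices and query rates reported above:

    # Price-performance arithmetic from IBM's comparison (figures as reported).
    power_cost, power_qph = 66_612, 444   # Power S822LC for HPC
    dell_cost,  dell_qph  = 57_615, 185   # Dell C4130

    power_per_qph = power_cost / power_qph   # ~$150 per query/hour
    dell_per_qph  = dell_cost / dell_qph     # ~$311 per query/hour

    print(f"Power: ${power_per_qph:.0f}/qph, Dell: ${dell_per_qph:.0f}/qph, "
          f"advantage: {dell_per_qph / power_per_qph:.1f}x")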

The story plays out similarly for big data workloads running MongoDB on the IBM Power S822LC for Big Data (20-core, 128 GB) vs. an HP DL380 (20-core, 128 GB). Here the system cost (server, OS, MongoDB annual subscription) came to $24,870 for the IBM Power and $29,915 for the HP. Power provided 40% more performance at a 31% lower hardware/maintenance cost.

When it comes to the cloud, the new IBM Power Systems LC offerings get even more interesting from a buyer’s standpoint. IBM declared the cloud a strategic imperative about two years ago and needs to demonstrate adoption that can rival the current cloud leaders: AWS, Google, and Microsoft (Azure). To that end IBM has started to tack on free cloud usage.

For example, during the industry analyst launch briefing IBM declared: modernize your Power infrastructure for the cloud, get access to IBM Cloud for free, and cut your current operating costs by 50%. Whether you’re talking an on-premises cloud or a hybrid infrastructure, the freebies keep coming. The free built-in cloud deployment service options include:

  • Cloud Provisioning and Automation
  • Infrastructure as a Service
  • Cloud Capacity Pools across Data Centers
  • Hybrid Cloud with Bluemix
  • Automation for DevOps
  • Database as a Service

These cover both on-premises infrastructure, where you can transform your traditional environment with automation, self-service, and elastic consumption models, and hybrid infrastructure, where you can securely extend to the public cloud with rapid access to compute services and API integration. Other freebies include open source automation, installation and configuration recipes, cross-data-center inventory, performance monitoring via the IBM Cloud, optional DR as a service for Power, and free access and capacity flexibility with SoftLayer (a 12-month starter pack).

Will the new LC line and its various cloud freebies get the low cost x86 monkey off IBM’s back? That’s the hope in Armonk. The new LC servers can be acquired at a lower price and can deliver 80% more performance per dollar spent over x86-based systems, according to IBM. This efficiency enables businesses and cloud service providers to lower costs and combat data center sprawl.

DancingDinosaur has developed TCO and ROI analyses comparing mainframe and Power systems to x86 for a decade, maybe more.  A few managers get it, but most, or their staff, have embedded bias and will never accept non-x86 machines. To them, any x86 system always is cheaper regardless of the specs and the math. Not sure even free will change their minds.

The new Power Systems LC lineup is price-advantaged over comparably configured Intel x86-based servers, costing 30% less in some configurations. Online LC pricing begins at $5,999. Additional models with smaller configurations sport lower pricing through IBM Business Partners. All but the HPC machine are available immediately; the HPC machine will ship Sept. 26.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

New IBM Flash Storage for the Mainframe

June 2, 2014

IBM is serious about flash storage and is enabling just about everything for flash—the DS8000 family, SAN Volume Controller (SVC), Easy Tier, Real-time Compression (RtC), and more. Of particular interest to DancingDinosaur readers should be the recently announced DS8870 all-flash enclosure.

Storage in general is changing fast. Riding Moore’s Law for the past two decades, storage users could assume annual drops in the cost per gigabyte. It was as predictable as passing go in Monopoly and collecting $200. But with that ride coming to an end companies like IBM are looking elsewhere to engineer the continued improvements everyone assumed and benefited from. For example, IBM is combining SVC, RtC, and flash to get significantly more performance out of less actual storage capacity.

The DS8870 is particularly interesting. In terms of reliability, for instance, it delivers not five-nines (99.999%) availability but six-nines (99.9999%) availability. That works out to be about 30 seconds of downtime each year. It works with all IBM servers, not just the z, and it protects data through full disk encryption and advanced access control. With the new flash enclosure packed with IBM’s enhanced flash the DS8870 delivers 4x faster flash performance in 50% less space. That translates into a 3.2x improvement in database performance.
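
That 30-second figure follows directly from the availability arithmetic; a quick check, assuming a 365.25-day year:

    # Downtime per year implied by an availability percentage.
    SECONDS_PER_YEAR = 365.25 * 24 * 3600

    for nines, availability in (("five nines", 0.99999), ("six nines", 0.999999)):
        downtime = (1 - availability) * SECONDS_PER_YEAR
        print(f"{nines}: about {downtime:.0f} seconds of downtime per year")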

Flash is not cheap when viewed through the traditional cost-per-gigabyte metric, but the above performance data suggests a different way to gauge the cost of flash, which continues to fall steadily in price. The 3.2x improvement in database performance, for example, means you can handle more than three times as many transactions.

Let’s start with the assumption that more transactions ultimately translate into more revenue. The same goes for that extra 9 of availability. The high-performance all-flash DS8870 configuration with the High Performance Flash Enclosure also reduces the footprint by 50% and power consumption by 12%, which means lower space and energy costs. It also enables you to shrink batch times by 10%, according to IBM. DancingDinosaur will be happy to help you pull together a TCO analysis for an all-flash DS8870 investment.

The sheer specs of the new system are impressive. IBM reports that up to 8 PCIe enclosures populated with 400 GB flash cards provide 73.6TB of usable capacity. For I/O capacity, the 8 I/O bays installed in the base frame provide up to 128 8Gb FC ports. Depending on the internal server you install in the DS8870 you can also get up to 1TB of cache.

The DS8870 all-flash rack enclosure

The Flash Enclosure itself is a 1U drawer that can take up to 30 flash cards. By opting for thirty 400GB flash cards you end up with 9.2TB usable (12TB raw). Since the high-performance all-flash DS8870 can take up to 8 Flash Enclosures, you can get 96TB raw (73.6TB usable) flash capacity per system.

A hybrid DS8870 system, as opposed to the high-performance all-flash version, allows up to 120 flash cards in 4 Flash Enclosures for 48TB raw (36.8TB usable), along with 1,536 2.5-inch HDDs/SSDs. It all connects to the DS8870 internal PCIe fabric for impressive performance: 200,000 IOPS (100% read) and 130,000 IOPS (100% write). From there, you can connect it to flash-enabled SVC and Easy Tier.
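
The capacity arithmetic behind those raw and usable figures is straightforward; here is a quick sketch using the card size and enclosure counts cited above, with the usable-to-raw ratio taken as the one implied by IBM’s 9.2TB usable per 12TB raw:

    # DS8870 High Performance Flash Enclosure capacity arithmetic,
    # based on the figures in the text (400 GB cards, 30 cards per enclosure).
    CARD_GB = 400
    CARDS_PER_ENCLOSURE = 30

    def capacity_tb(enclosures, usable_ratio=9.2 / 12):  # ratio implied by 9.2TB/12TB
        raw = enclosures * CARDS_PER_ENCLOSURE * CARD_GB / 1000  # TB raw
        return raw, raw * usable_ratio

    for label, n in (("single enclosure", 1), ("hybrid (4 enclosures)", 4),
                     ("all-flash (8 enclosures)", 8)):
        raw, usable = capacity_tb(n)
        print(f"{label}: {raw:.0f} TB raw, {usable:.1f} TB usable")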

Later this year, reports Clod Barrera, IBM’s storage CTO, you will be able to add 4 more enclosures to hybrid configurations, boosting flash capacity to 96TB raw. Together you can combine the DS8870, flash, SVC, RtC, and Easy Tier for a lightning fast and efficient storage infrastructure.

Even the most traditional System z shop will soon find itself confronting mixed workloads consisting of traditional and non-traditional work; you probably already are, as mobile devices initiate requests for mainframe data. When that happens you will want a fast, efficient, flexible infrastructure like the DS8870.

DancingDinosaur is Alan Radding. Follow him on Twitter, @mainframeblog

Best System z TCO in Cloud and Virtualization

May 1, 2014

IBM recently analyzed various likely customer workload scenarios and found that the System z as an enterprise Linux server could consistently beat x86 machines in terms of TCO.  The analysis, which DancingDinosaur will dig into below, was reasonably evenhanded although, like automobile mileage ratings, your actual results may vary.

DancingDinosaur has long contended that the z Enterprise Linux Server acquired under the deeply discounted IBM System z Solution Edition program could beat comparable x86 based systems not only in terms of TCO but even TCA. Algar, a Brazilian telecom, acquired its initial z Enterprise Linux server to consolidate a slew of x86 systems and lay a foundation for scalable growth. It reports cutting data center costs by 70%. Nationwide Insurance, no newcomer to mainframe computing, used the zEnterprise to consolidate Linux servers, achieving $46 million in savings.

The point: the latest IBM TCO analyses confirm what IBM and the few IT analysts who talk to z customers have been saying for some time. The TCO advantage, IBM found, switches to the z Enterprise Linux Server at around 200 virtual machines compared to the public cloud, and at slightly more VMs compared to x86 machines.

IBM further advanced its cause in the TCO/TCA battle with the recent introduction of the IBM Enterprise Cloud System. This is a factory-built and integrated system—processor, memory, network, IFLs, virtualization management, cloud management, hypervisor, disk orchestration, Linux OS—priced (discounted) as a single solution. IBM promises to deliver it in 45 days and have it production ready within hours of hitting the organization’s loading dock. Of course, it comes with the scalability, availability, security, manageability, etc. long associated with the z, and IBM reports it can scale to 6000 VMs. Not sure how this compares in price to a Solution Edition Enterprise Linux Server.

The IBM TCO analysis compared the public cloud, an x86 cloud, and the Enterprise Cloud System in terms of power and space, labor, software/middleware, and hardware costs when running 48 diverse workloads (a range of low, medium, and high I/O). In general it found an advantage for the z Enterprise Cloud System of 34-73%. The z cost considerably more in terms of hardware but more than made up for it in software, labor, and power. Overall, the TCO analysis examined more than 30 cost variables, ranging from blade/IFL/memory/storage amounts to hypervisor/cloud management/middleware maintenance. View the IBM z TCO presentation here.

In terms of hardware, the z included the Enterprise Linux Server, storage, z/VM, and IBM Wave for z/VM. Software included WebSphere Application Server middleware, Cloud Management Suite for z, and Tivoli for z/VM. The x86 cloud included HP hardware with a hypervisor, WebSphere Application Server, SmartCloud Orchestrator, SmartCloud Monitoring, and Tivoli Storage Manager EE. Both analyses included labor to manage both hardware and VMs, power and space costs, and SUSE Linux.

The public cloud assumptions were a little different. Each workload was deployed as a separate instance. The pricing model was for AWS reserved instances. Hardware costs were based on instances in the US East region with SUSE, EBS volumes, data in/out, enterprise support, and free and reserved tier discounts applied. Software costs included WebSphere Application Server ND (middleware) costs for instances. A labor cost was included for managing instances.

When IBM applied its analysis to 398 I/O-diverse workloads the results were similar: 49-75% lower cost with the Cloud System on z. Again, z hardware was considerably more costly than either x86 or the public cloud, but z software and labor were far less than the others. In terms of 3-year TCO, the public cloud was the highest at $37 million, x86 came in at $18.3 million, and the Cloud on z cost $9.4 million. With 48 workloads, the z again came in with the lowest TCO at $1 million, compared to $1.6 million for x86 systems and $3.9 million for the public cloud.
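
Those percentage ranges fall straight out of the reported totals; a quick check using only the dollar figures above (small differences from IBM’s published 34-73% range presumably reflect rounding in the reported totals):

    # 3-year TCO totals as reported by IBM (millions of dollars).
    tco = {
        "398 workloads": {"public cloud": 37.0, "x86": 18.3, "Cloud on z": 9.4},
        "48 workloads":  {"public cloud": 3.9,  "x86": 1.6,  "Cloud on z": 1.0},
    }

    for scenario, costs in tco.items():
        z = costs["Cloud on z"]
        for rival in ("x86", "public cloud"):
            saving = 1 - z / costs[rival]
            print(f"{scenario}: z saves {saving:.0%} vs {rival}")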

IBM kept the assumptions equivalent across the platforms. If you make different software and middleware choices or a different mix of high-mid-low I/O workloads your results will be different but the overall comparative rankings probably won’t change all that much.

Still time to register for IBM Edge2014 in Las Vegas, May 19-23. This blogger will be there hanging around the bloggers lounge when not attending sessions. Please join me there.

Follow Alan Radding/DancingDinosaur on Twitter: @mainframeblog

Expect Flash to be Big at Edge 2014

March 26, 2014

You can almost hear the tom-toms thumping as IBM picks up the beat for flash storage and its FlashSystem, and for good reason. Almost everything companies want to do these days requires fast, efficient storage. Everything waits for data—applications, servers, algorithms, virtually any IT resource. And fast data, of course, depends on the responsiveness of the storage. Flash’s time has arrived.

To get the responsiveness they need, companies previously loaded up on conventional DASD: spinning disks that top out at 15K RPM, or cheaper DASD at 5400 RPM. To coax sufficient I/O operations per second (IOPS) out of them, they ganged together massive amounts of DASD just to get more parallel spindles to compensate for the low per-disk IOPS. Sure, the disks were cheap, but the cost per IOPS was still sky high, especially considering all the overhead and inefficiency they had to absorb.
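
To see why the cost per IOPS was sky high, consider a rough sizing exercise; the per-disk IOPS and price figures below are ballpark assumptions for illustration, not vendor specs:

    # Rough spindle-count sizing: how many disks just to hit an IOPS target?
    # Per-disk IOPS and prices are ballpark assumptions for illustration.
    TARGET_IOPS = 100_000

    disks = {
        "15K RPM HDD":  {"iops": 180, "price": 300},   # assumed figures
        "5400 RPM HDD": {"iops": 60,  "price": 100},   # assumed figures
    }

    for name, d in disks.items():
        spindles = -(-TARGET_IOPS // d["iops"])          # ceiling division
        cost = spindles * d["price"]
        print(f"{name}: ~{spindles} spindles, ~${cost:,} "
              f"(~${cost / TARGET_IOPS:.2f} per IOPS)")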

But in this era of big data analytics, where an organization’s very competitiveness depends on absorbing massive amounts of data fast, that old approach doesn’t work anymore. You can’t aggregate enough spindles to handle the huge amounts of machine-generated, sensor, or meter data, not to mention the data created by millions, possibly even billions, of people on Facebook, Twitter, and everywhere else, and still keep up with the data flow. You can’t possibly come up with meaningful results fast enough to be effective. Opportunities will fly past you.

Furthermore, traditional high performance storage comes at a high price, not just in the acquisition cost of large volumes of spinning disk but also in the inefficiency of its deployment. Sure, the cost per gigabyte may be low, but aggregating spindles by the ton while leaving large chunks of capacity unused quickly offsets any gains from a low cost per gigabyte. In short, traditional storage, especially high performance storage, imposes economic limits on the usefulness and scalability of many analytics environments.

Since data access depends on the response of storage, flash has emerged as the way to achieve high IOPS at low cost, and with the cost of flash storage dropping steadily it will only become a better deal going forward. Expect to hear a lot about IBM FlashSystem storage at Edge 2014. As IBM points out, it can eliminate wait times and accelerate critical applications for faster decision making, which translates into faster time to results.

Specifically, IBM reports its FlashSystem delivers:

  • 45x performance improvement with 10x more durability
  • 115x better energy efficiency with 315x superior density
  • 19x more efficient $/IOPS.

Here’s how: according to IBM, both the initial acquisition costs and the ongoing operational costs of FlashSystem storage, such as staffing and environmental costs, can be lower than those of performance-optimized hard drive storage solutions and of emerging hybrid or all-flash solutions. In short, IBM FlashSystem delivers the three key attributes data analytics workloads require: compelling data economics, enterprise resiliency, and easy infrastructure integration, along with high performance.

As proof, IBM cites a German transport services company that deployed FlashSystem storage to support a critical SAP e-business analytics infrastructure and realized a 50% TCO reduction versus competing solutions.

On top of that, IBM reports FlashSystem storage unlocks additional value from many analytics environments by turbo-charging response times with its MicroLatency technology, effectively multiplying the amount of data that can be analyzed. MicroLatency enables a streamlined high performance data path to accelerate critical applications. The resulting faster response times can yield more business agility and quicker time to value from analytics.

In fact, recent IBM research has found that IBM InfoSphere Identity Insight entity analytics processes can be accelerated by over 6x using FlashSystem storage instead of traditional disk. More data analyzed at once means more potential value streams.

Data has long been considered a valuable asset. For some, data has become the most important commodity of all. The infrastructure supporting the analytics environment that converts data as a commodity into valuable business insights must be designed for maximum resiliency. FlashSystem brings a set of data protection features that can help enhance reliability, availability, and serviceability while minimizing the impact of failures and downtime due to maintenance. In short, it protects what for many organizations is their most valuable asset: their data.

DancingDinosaur is looking forward to attending Edge 2014 sessions that will drill down into the specifics of how IBM FlashSystem storage works under the covers. It is being held May 19-23 in Las Vegas at the Venetian. Register now and get a discount. And as much as DancingDinosaur is eager to delve into the details of FlashSystem storage, the Sheryl Crow concert is very appealing too. When not in sessions or at the concert, look for DancingDinosaur in the bloggers lounge. Please join me.

Follow DancingDinosaur on Twitter: @mainframeblog

Lessons from IBM Eagle zEnterprise TCO Analyses

March 18, 2013


A company running an obsolete z890 2-way machine with what amounted to 0.88 processors (332 MIPS) planned a migration to a distributed system consisting of 36 UNIX servers. The production workload consisted of applications, database, testing, development, security, and more. Five years later the company was running the same workload in the 36-server, multi-core (41x more cores than the z890) distributed environment, only its 4-year TCO had gone from $4.9 million to $17.9 million, based on an IBM Eagle study. The lesson, the Eagle team notes: cores drive platform costs in distributed systems.

Then there is the case of a 3500 MIPS shop that budgeted $10 million for a 1-year migration to a distributed environment. Eighteen months into the project, already 6 months late, the company had spent $25 million and managed to offload only 350 MIPS. In addition, it had to increase staff to cover the overrun, implement steps to replace mainframe automation, acquire more distributed capacity than initially predicted (to support only 10% of the total MIPS offloaded), and extend the dual-running period at even more cost due to the schedule overrun. Not surprisingly, the executive sponsor is gone.

If the goal of a migration to the distributed environment is cost savings, the IBM Eagle team has concluded after 3 years of doing such analyses, most migrations are a failure. Read the Eagle FAQ here.

The Eagle TCO team was formed in 2007 and since then reports completing over 300 user studies. Often its studies are used to determine the best platform among IBM’s various choices for a given set of workloads, usually as part of a Fit for Purpose analysis. In other cases, the Eagle analysis aims to enable a System z shop to avoid a migration to a distributed platform or to secure a new opportunity for the z. Since 2007, the team reports that its TCO studies have secured wins amounting to over $1.6 billion in revenue.

Along the way, the Eagle team has learned a few lessons. For example, re-hosting projects tend to be larger than anticipated; the typical one-year projection will likely turn into a two- or three-year project.

The Eagle team also offers the following tips, which can help existing z shops that aren’t necessarily looking to migrate but just want to minimize costs:

  • Update hardware and software; for example, one bank upgraded from z/OS 1.6 to 1.8 and reduced each LPAR’s MIPS by 5% (monthly software cost savings paid for the upgrade almost immediately)
  • Take advantage of sub-capacity, which may produce free workloads
  • Consolidate Linux on System z, which invariably saves money; many IT people don’t realize how many Linux virtual servers can run on a single z core. (A debate raging on LinkedIn focused on how many virtual instances can run on an IFL, with quite a few suggesting a max of 20. The official IBM figure: consolidate up to 60 distributed cores or more on a single System z core, thousands on a single footprint; a single System z core = an IFL. A rough sizing sketch follows this list.)
  • Changing the database can impact capacity requirements and therefore costs
  • Workloads amenable to specialty processors, like the IFL, zIIP, and zAAP, reduce mainframe costs through lower cost/MIPS and fewer general processor cycles
  • Consider the System z Solution Edition (DancingDinosaur has long viewed the Solution Edition program as the best System z deal going, although you absolutely must be able to operate within the strict usage constraints the deal imposes.)
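
As promised in the Linux consolidation item above, here is a rough sizing sketch; the 60:1 ratio is IBM’s cited upper bound, and the distributed core counts are hypothetical:

    # Rough IFL sizing using the consolidation ratio cited above
    # (up to 60 distributed cores per System z IFL). The workload core
    # counts below are hypothetical; real ratios vary by workload.
    import math

    CORES_PER_IFL = 60   # IBM's cited upper bound

    for distributed_cores in (120, 500, 2000):
        ifls = math.ceil(distributed_cores / CORES_PER_IFL)
        print(f"{distributed_cores} distributed cores -> about {ifls} IFL(s)")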

The Eagle team also suggests other things to consider, especially when the initial cost of a distributed platform looks attractive to management. To begin with, the System z responds flexibly to unforeseen business events; a distributed system may have to be augmented or the deployment re-architected, both of which drive up cost and slow responsiveness. Also, the cost of adding incremental workloads to the System z is less than linear. Similarly, the cost of administrative labor is lower on the System z, and the System z cost per unit of work is much lower than with distributed systems.

DancingDinosaur generally is skeptical of TCO analyses from vendors. To be useful the analysis needs context, technical details (components, release levels, and prices), and specific verifiable quantitative results.  In addition, there are soft costs that must be considered.  In the end, the lowest acquisition cost or even the lowest TCO isn’t necessarily the best platform choice for a given situation or workload. Determining the right platform requires both quantifiable analysis and judgment.

Linux on System z TCO for BI

May 24, 2010

Since its acquisition of Cognos in 2007, IBM has revived the notion of BI on the mainframe. DancingDinosaur first took this up here last October. With Cognos 8 running on Linux on System z the idea finally became viable. The introduction of the System z Solution Edition dubbed the Enterprise Linux Server made BI on z practical from a cost standpoint.

In 2009, IBM began promoting the idea of BI on the z by citing an earlier study by Nucleus Research, in which the researchers declared that Cognos on z “makes it easy for companies to deploy BI to a broader user population, while minimizing the resulting workload for IT departments.” IBM touts the study here.

This May IBM was ready to brief industry analysts on its own comprehensive comparative TCO analysis of BI using Cognos 8 on the System z versus x86. This is a consolidation scenario that projected the results over five years and adjusted for typical growth over that time. The study was pretty exhaustive, looking at BI use cases ranging from 100 users to 50,000 users. It included the standard hardware, system, and application software costs as well as power/cooling, floor space, maintenance, connectivity, system administration, and high availability costs, while accounting for periodic tech refresh during the five-year period.

Since this was a total cost of ownership (TCO) study, not a cost of acquisition study, the result was what anyone who has been following DancingDinosaur would expect. At 100 users, the 5-year TCO for the x86 environment was $3 million compared to just over $1 million for the System z. By years four and five, the costs leveled out with the z maintaining a nearly $200,000 annual advantage.

At 10,000; 20,000; and 50,000 users an interesting change started to appear. The System z continued to have the lowest TCO; at 50,000 users the z cost $17 million versus $30 million for the x86 environment, but the cost difference greatly narrowed by the fourth and fifth years. The cost difference in those later years, however, did not narrow nearly as much when it came to the high availability x86 environment. High availability in the x86 world continued to cost about $3 million more a year.
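
Dividing those totals by the user counts makes the scale effect easy to see; a quick sketch using only the approximate figures reported above:

    # Approximate 5-year TCO figures from the study (z at 100 users
    # was "just over" $1M; $1M is used here for simplicity).
    scenarios = {
        "100 users":    {"users": 100,    "x86": 3_000_000,  "z": 1_000_000},
        "50,000 users": {"users": 50_000, "x86": 30_000_000, "z": 17_000_000},
    }

    for label, s in scenarios.items():
        x86_per_user = s["x86"] / s["users"]
        z_per_user = s["z"] / s["users"]
        print(f"{label}: x86 ${x86_per_user:,.0f}/user vs z ${z_per_user:,.0f}/user")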

This shouldn’t be too surprising. The z comes with extensive redundancy and built-in high availability. With the x86 environment you have to build those capabilities in initially and then maintain them going forward. This adds to the hardware and software costs and to the administration costs. Without high availability, the costs of the z and x86 platforms reach near parity over time; with high availability, which you definitely need with thousands of users on the system, the z maintains its advantage with no hint of convergence in the trend lines.

With every IT budget cycle someone suggests eliminating the mainframe in favor of a set of distributed, highly virtualized x86 servers. For some it will be an appropriate move. However, they had better figure out how they are going to deliver the high availability they automatically get from the mainframe. In the x86 world, you have to buy it, build it, and maintain it yourself.

They also don’t think about the cost of moving their applications to the distributed platform. DancingDinosaur will take that up another time.

