Posts Tagged ‘LPAR’

Under the Covers of Z Container Pricing

December 1, 2017

Along with the announcement of the z14 (now just Z) last July, IBM also introduced container pricing, an upcoming capability intended to make the machine both flexible and price competitive. Container pricing is expected to be available by the end of this year.

A peek into the IBM z14

Container pricing implied overall cost savings and also simplified deployment. At the announcement IBM suggested competitive economics too, especially when benchmarked against public clouds and on-premises x86 environments.

By now you should realize that IBM has difficulty talking about price. It offers lots of excuses relating to its global footprint and such. Funny, other systems vendors that sell globally don’t seem to have that problem. After two decades of covering IBM and the mainframe as a reporter, analyst, and blogger, I’ve finally figured out the reason for the reticence: the company’s pricing is almost always high compared to the competition.

If you haven’t realized it yet, the only way IBM will talk price is around a 3-year TCO cost analysis. (Full disclosure: as an analyst, I have developed such TCO analyses and am quite familiar with how to manipulate them.) And even then you will have to swallow a number of assumptions and caveats to get the numbers to work.

There is no doubt that IBM is targeting the x86 (Intel) platform with its LinuxONE lineup, especially its newest machine, the Emperor II. For example, IBM reports it can scale a single MongoDB database to 17TB on the Emperor II while running it at scale with less than 1ms response time, which it says will save up to 37% compared to x86 on a 3-year TCO analysis. The TCO analysis gets even better when you look at priced-per-core data serving infrastructures: IBM reports it can consolidate thousands of x86 cores on a single LinuxONE server and reduce costs by up to 40%.

So, let’s see what the Z’s container pricing can do for you. IBM is introducing container pricing to allow new workloads to be added to z/OS in a way that doesn’t impact an organization’s rolling four-hour average, while supporting the deployment options that make the most sense for the organization’s architecture and facilitating competitive pricing for that workload.

For example, one of the initial use cases for container pricing revolves around payments workloads, particularly instant payments. That workload will be charged not against any capacity marker but against the number of payments processed. The payment workload pricing grid promises to be highly competitive, with the price per payment starting at $0.0021 and dropping to $0.001 with volume. “That’s a very predictable, very aggressive price,” says Ray Jones, vice president, IBM Z Software and Hybrid Cloud. You can do the math and decide how competitive this is for your organization.
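To get a feel for what that pricing grid might mean at volume, here is a back-of-the-envelope sketch in Python. Only the endpoint prices ($0.0021 and $0.001) come from IBM; the tier boundaries and the middle rate are hypothetical, purely for illustration.

```python
def payment_cost(n_payments):
    """Estimate the monthly charge under tiered per-payment pricing.

    Only the endpoint prices come from IBM's announced grid; the
    tier boundaries and middle rate below are hypothetical
    placeholders for illustration.
    """
    tiers = [
        (10_000_000, 0.0021),   # first 10M payments (boundary is a guess)
        (40_000_000, 0.0015),   # next 40M (hypothetical middle tier)
        (float("inf"), 0.001),  # everything beyond
    ]
    cost, remaining = 0.0, n_payments
    for size, price in tiers:
        batch = min(remaining, size)
        cost += batch * price
        remaining -= batch
        if remaining <= 0:
            break
    return cost

print(payment_cost(5_000_000))    # all payments land in the first tier
print(payment_cost(100_000_000))  # the volume discount kicks in
```

Plug in your own transaction counts and tier guesses to see whether the per-payment model beats your current capacity-based charges.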

Container pricing applies to various deployment options—including co-located workloads in an existing LPAR—that present line-of-sight pricing to a solution. The new pricing promises simplified software pricing for qualified solutions. It even offers the possibility, IBM adds, of different pricing metrics within the same LPAR.

Container pricing, however, requires the use of IBM’s software for payments, Financial Transaction Manager (FTM). FTM counts the number of payments processed, which drives the billing from IBM.

To understand container pricing you must realize IBM is not talking about Docker containers. A container to IBM is simply an address space, or group of address spaces, in support of a particular workload. An organization can have as many containers in an LPAR as it wants and change the size of containers as needed. This is where the flexibility comes in.

The fundamental advantage of IBM’s container pricing comes from the co-location of workloads to get improved performance and lower latency. The new pricing eliminates what goes on in containers from consideration in the MLC calculations.

To get container pricing, however, you have to qualify. The company is setting up pricing agents around the world. Present your container plans and an agent will determine if you qualify and at what price. IBM isn’t saying anything about how you should present your container plans to qualify for the best deal. Just be prepared to negotiate as hard as you would with any IBM deal.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

Syncsort’s 2015 State of the Mainframe: Little Has Changed

November 30, 2015

Syncsort’s annual survey of almost 200 mainframe shops found that 83 percent of respondents cited security and availability as key strengths of the mainframe. Are you surprised? You can view the detailed results here for yourself.

Syncsort survey: the mainframe’s role in the Big Data ecosystem (courtesy: Syncsort)

Security and availability have been hallmarks of the z for decades. Even Syncsort’s top mainframe executive, Harvey Tessler, could point to little unexpected in the latest results: “Nothing surprising. At least no big surprises. Expect the usual reliability, security,” he noted. BTW, in mid-November Clearlake Capital Group, L.P. (Clearlake) announced that it had completed the acquisition of Syncsort Incorporated. Apparently no immediate changes are planned.

The 2015 study also confirmed a few more recent trends that DancingDinosaur has long suspected. More than two-thirds (67 percent) of respondents cited integration with other standalone computing platforms such as Linux, UNIX, or Windows as a key strength of the mainframe.

Similarly, the majority (79 percent) analyze real-time transactional data from the mainframe with a tool that resides directly on the mainframe. That, in fact, may be the most surprising response. Mainframe shops (or more likely the line-of-business managers they work with) are notorious for moving data off the mainframe for analytics, usually to distributed x86 platforms. The study showed respondents are also turning to platforms such as Splunk (11.8 percent), Hadoop (8.6 percent), and Spark (1.6 percent) to supplement their real-time data analysis.

Many of the respondents no doubt will continue to do so, but it makes little sense in 2015 with a modern z System running a current configuration. In truth, it makes little sense from either a performance or a cost standpoint to move data off the z to perform analytics elsewhere. The z runs Hadoop and Spark natively. With your data and key analytics apps already on the z, why bother incurring both the high overhead and high latency entailed in moving data back and forth to run on what is probably a slower platform anyway?

The only possible reason might be that the mainframe shop doesn’t run Linux on the mainframe at all. That can be easily remedied, however, especially now with the introduction of Ubuntu Linux for the z. C’mon, it’s late 2015; modernize your z for the cloud-mobile-analytics world and stop wasting time and resources jumping back and forth to distributed systems when those workloads will run natively on the z today.

More encouraging is the interest of the respondents in big data and analytics. “The survey demonstrates that many big companies are using the mainframe as the back-end transaction hub for their Big Data strategies, grappling with the same data, cost, and management challenges they used it to tackle before, but applying it to more complex use cases with more and dauntingly large and diverse amounts of data,” said Denny Yost, associate publisher and editor-in-chief for Enterprise Systems Media, which partnered with Syncsort on the survey. The results show the respondents’ interest in mainframe’s ability to be a hub for emerging big data analytics platforms also is growing.

On other issues, almost one-quarter of respondents ranked as very important the ability of the mainframe to run other computing platforms such as Linux on an LPAR or z/VM virtual machines as a key strength of the mainframe at their company. Over one-third of respondents ranked as very important the ability of the mainframe to integrate with other standalone computing platforms such as Linux, UNIX, or Windows as a key strength of the mainframe at their company.

Maybe more surprising: only 70% of the respondents ranked as very important their organization’s use of the mainframe for performing large-scale transaction processing or hosting mission-critical applications. Given that the respondents appeared to come from large, traditional mainframe shops you might have expected those numbers to be closer to 85-90%. Go figure.

When asked to rank their organization’s use of the mainframe to supplement or replace non-mainframe servers (i.e., RISC or x86-based servers), just 10% of the respondents considered it very important. Clearly the hybrid mainframe-based data center is not a priority with these respondents.

So, what are they looking to improve in the next 12 months? The respondents’ top three initiatives are:

  1. Meeting Security and Compliance Requirements
  2. Reducing CPU usage and related costs
  3. Meeting Service Level Agreements (SLAs)

These aren’t the most ambitious goals DancingDinosaur has ever encountered but they should be quite achievable in 2016.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

BMC and Compuware to Drive Down Mainframe Costs

February 20, 2015

This year jumped off to an active start for the mainframe community. The introduction of the z13 in January got things going. Now Compuware and BMC are partnering to integrate some of their mainframe tools to deliver cost-aware workload and performance management. The combined tools promise to reduce mainframe OPEX even as z systems shops try to leverage their high-value mainframe applications, data, and processing capacity to meet rapidly evolving business challenges.


Not that things had been quiet before, especially if you consider IBM scrambling to reverse successive quarters of poor financial performance with a slew of initiatives. Compuware went private last fall; about a year earlier BMC went private. Now the two companies are collaborating to deliver tools that will help mainframe shops reduce their software costs. DancingDinosaur has covered previous cost-saving and efficiency initiatives from each of these companies here and here.

Driving this collaboration is the incessant growth of new mainframe workloads, which will likely accelerate with the new z13. Such workload growth is continually driving up the Monthly License Charge (MLC) for IBM mainframe software, which for sub-capacity environments is generally driven by the highest rolling four-hour average (R4HA) of mainframe utilization across all applications on each LPAR, as measured in MSUs. IBM is helping with discounts for mobile workloads and its new ICAP and country multiplex pricing, which DancingDinosaur covered here, but more is needed.
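To see why the R4HA looms so large, consider a simplified sketch of the arithmetic. Real sub-capacity reporting works from SMF data via SCRT at finer granularity; this toy version just shows how one busy afternoon can set the MLC bill for the whole month.

```python
def r4ha_peak(hourly_msus):
    """Peak rolling four-hour average from hourly MSU samples.

    Simplified sketch: real sub-capacity reporting (SCRT) works
    from SMF data, but the principle is the same -- the billable
    number is the highest four-hour average, not total usage.
    """
    window = 4
    averages = [
        sum(hourly_msus[i:i + window]) / window
        for i in range(len(hourly_msus) - window + 1)
    ]
    return max(averages)

# Mostly quiet month, one busy four-hour stretch:
samples = [100] * 20 + [400, 420, 410, 390] + [100] * 20
print(r4ha_peak(samples))  # (400+420+410+390)/4 = 405.0
```

Even though the machine idles at 100 MSUs most of the time, that single spike sets the peak, which is exactly why tuning and workload orchestration around the peak hours pays off.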

The trick requires continually managing those workloads. In effect, IT can most effectively reduce its sizable IBM z Systems software costs by both 1) tuning each application to minimize its individual consumption of mainframe resources and 2) orchestrating application workloads to minimize the LPAR utilization peaks they generate collectively at any given time.  Good idea but not easy to implement in practice. You need automated tools.

According to Frank DeSalvo, former research director at Gartner: “The partnership between BMC and Compuware launches an integrated opportunity for mainframe customers to manage workload inefficiencies in a manner that has not been achievable to-date.” The partnership “helps organizations leverage their IT budgets by enabling them to continuously optimize their mainframe workloads, resulting in cost effective decisions for both current and future spending,” DeSalvo added in the initial announcement.

Specifically, the Compuware-BMC collaboration brings together three products: BMC Cost Analyzer, BMC MainView, and Compuware Strobe.

  • BMC Cost Analyzer for zEnterprise brings a financially intelligent workload management tool that enables z data centers to identify MLC cost drivers and take appropriate measures to reduce those costs.
  • BMC MainView provides real-time identification of application performance issues, enabling customers to quickly eliminate wasteful MSU consumption.
  • Compuware Strobe delivers deep, granular and highly actionable insight into the behavior of application code in the z systems environment.

The partners integrated the products so they actually work together. One integration, for instance, allows BMC Cost Analyzer to call Compuware Strobe for a detailed analysis of the specific application component driving peak MLC periods, enabling customers to proactively tune the applications that have the greatest impact on their monthly software licensing costs. A second integration with BMC MainView allows customers to invoke Strobe performance analysis either automatically or manually, empowering mainframe staffs to perform cost-saving tuning tasks more quickly, efficiently, and consistently.


BTW, at the same time Compuware introduced the latest version of Strobe, v5.2. It promises deep insight into how application code—including DB2, COBOL 5.1, IMS, and MQ processes—consumes resources in z environments. By providing these insights, and by making it easy for multi-discipline mainframe ops teams to collaborate around them, Strobe 5.2 enables IT to further drive down mainframe costs while improving application responsiveness.

Besides the software licensing savings, the organization also benefits from performance gains for these applications. These gains can be valuable since they positively impact end-user productivity and, more importantly, customer experience.

DancingDinosaur feels that any technology you can use to automate and streamline your systems operations will benefit you because people are always more expensive and less efficient than technology.

Alan Radding is DancingDinosaur. Follow this blog on Twitter, @mainframeblog. View my other IT writing at Technologywriter.com and here.

New Software Pricing for IBM z13

February 6, 2015

Every new mainframe causes IBM to rethink its pricing. This makes sense because mainframe software licensing is complex. The z13 enables different workloads and combinations of uses that merit reexamining the software licensing. But overall, IBM is continuing its strategy to enhance software price/performance with each generation of hardware. This has been the case for as long as DancingDinosaur has been covering the mainframe.

IBM z13 technology update pricing table

DancingDinosaur, along with other mainframe analysts, recently listened to Ray Jones, IBM Vice President, z Systems Sales, go through the new z13 software pricing. In short, expect major new structural enhancements in the first half of 2015. Of particular interest are two changes IBM is instituting:

  1. IBM Collocated Application Pricing (ICAP), which lets you run your systems the way that makes sense for your organization
  2. Country Multiplex Pricing, an evolution of Sysplex pricing that allows for greater flexibility and simplicity by treating all your mainframes in one country as a single sysplex

Overall, organizations running the z under AWLC should see a 5% discount on average.

But first let’s take a moment to review AWLC (Advanced Workload License Charge). From the start this monthly program has been intended to allow you to grow hardware capacity without necessarily increasing software charges. In general you’ll experience a low cost of incremental growth, and you can manage software cost by managing workload utilization and deployment across LPARs and peak hours.

A brief word about MSUs. Officially, MSU stands for millions of service units, although DancingDinosaur thinks of it as a mainframe service unit. It is the measurement of the processing capacity of your mainframe. IBM determines the MSU rating of a particular mainframe configuration by some arcane process invisible to most of us. The table above starts with MSUs; just use the number IBM has assigned your z configuration.

OK, now we’re ready to look at ICAP pricing. IBM describes ICAP as the next evolution of z Systems sub-capacity software pricing. ICAP allows workloads to be priced as if in a dedicated environment even though you have technically integrated them with other workloads. In short, you can deploy your ICAP workloads the way you want to run them. For example, you might want to run a new anti-fraud app or a new instance of MQ on the same LPAR where you’re running some other workload.

ICAP is for new workloads you’re bringing onto the z. You have to define the workloads, and you are responsible for collecting and reporting the CPU time. It can be as simple as creating a text file to report it. However, don’t rush; IBM has suggested that an ICAP enhancement to the MWRT sub-capacity reporting tool is coming.
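As a rough idea of what such a hand-rolled report might look like, here is a hypothetical sketch. IBM has not published a required format, so the column layout below is invented for illustration; the promised MWRT enhancement would presumably supersede anything like this.

```python
import csv

def write_icap_report(path, records):
    """Write hourly CPU time for an ICAP-defined workload to a CSV.

    The column layout here is purely illustrative -- IBM has not
    published a format for this, and the MWRT enhancement IBM has
    hinted at would likely replace hand-rolled files entirely.
    """
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["lpar", "hour", "workload", "cpu_seconds"])
        writer.writerows(records)

# Hypothetical LPAR and workload names, for illustration only:
write_icap_report("icap_report.csv", [
    ("LPAR01", "2015-02-06T09", "ANTIFRAUD", 512.4),
    ("LPAR01", "2015-02-06T10", "ANTIFRAUD", 488.9),
])
```

The point is simply that the reporting burden is modest: one row per hour per defined workload is all the bookkeeping involved.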

In terms of ICAP impact on pricing, IBM reports no effect on the reported MSUs for other sub-capacity middleware programs (ICAP adjusts MSUs like an offload engine, similar to Mobile Workload Pricing for z/OS). z/OS shops could see 50% of the ICAP defining-program MSUs removed, which can result in real savings. IBM reports that ICAP provides a price benefit similar to zNALC for z/OS, but without the requirement for a separate LPAR. Remember, with ICAP you can deploy your workloads where you see fit.

For Country Multiplex Pricing, a country multiplex is IBM’s term for the collection of all zEnterprise and later machines in a country (all z196, z114, zEC12, zBC12, and z13 machines); they are measured like one machine for sub-capacity reporting. It amounts to a new way of measuring and pricing MSUs, as opposed to aggregating under current rules. The result should be the flexibility to move and run work anywhere, the elimination of Sysplex pricing rules, and the elimination of duplicate peaks when workloads move between machines.

In the end, the cost of growth is reduced with one price per product based on growth anywhere in the country. Hardware and software migrations also become greatly simplified because Single Version Charging (SVC) and Cross Systems Waivers (CSW) will no longer be relevant.  And as with ICAP, a new Multiplex sub-capacity reporting tool is coming.

Other savings also remain in play, especially the z/OS mobile pricing discounts, which significantly reduce the level at which mobile activity is calculated for peak load pricing. With mobile activity expected to grow substantially going forward, these savings could become quite large.

DancingDinosaur is Alan Radding, a veteran mainframe and IT writer and analyst. Follow DancingDinosaur on Twitter, @mainframeblog. See more of my writing at Technologywriter.com and here.

IBM-Apple Deal Enhanced by New z/OS Pricing Discounts

July 25, 2014

In the spring, IBM announced, almost as an aside, new pricing discounts for z/OS mobile transactions. At the time, it didn’t seem like a big deal. But IBM’s more recent announcement of its exclusive mobile partnership with Apple, covered by DancingDinosaur here, suddenly gives it much bigger potential.

The plan is to create apps that can transform specific aspects of how businesses and employees work using iPhone and iPad, allowing companies to achieve new levels of efficiency, effectiveness and customer satisfaction. At the backend will be the mainframe.

Already zEnterprise shops, especially banks and financial services firms, are reporting growth in the volume of transactions that originate from mobile devices. The volume of these mobile-originated transactions in some cases is getting large enough to impact the four-hour peak loads that are used in calculating monthly costs.

Here’s the problem: you put out a mobile app and want people to use it. They do, but much of the workload being generated does not directly produce revenue. Rather, users are requesting data or checking invoices and balances. Kind of a bummer to drive up monthly charges with non-revenue-producing work.

That’s where the new pricing discounts for z/OS mobile workloads come in. The new pricing reduces the impact of these mobile transactions on reported LPAR MSUs. Specifically, the Mobile Workload Pricing Reporting Tool (MWRT) will subtract 60% of the reported Mobile MSUs from a given LPAR in each hour, adjusting the total LPAR MSU value for that hour. Think of this as just a standard SCRT report with a discount built in to adjust for mobile workload impact.

So, what does that translate into in terms of hard dollar savings? DancingDinosaur had a private briefing with two IBMers who helped build the tool and asked that question. They are only in the earliest stages of getting actual numbers from users in the field; the tool only became available June 30.  Clearly the results depend on how many mobile transactions you are handling in each reporting hour and how you are handling the workloads.

There is a little work involved, but the process won’t seem intimidating to mainframe shops accustomed to IBM’s monthly reporting process:

  1. Record mobile program transaction data, including CPU seconds, on an hourly basis per LPAR.
  2. Load the resulting data file into the new tool, MWRT, each month using the IBM-specified CSV format.
  3. Run MWRT and submit the results to IBM each month.

It replaces the SCRT process.

The MWRT will function like a partial off-load from a software pricing perspective. When an LPAR value is adjusted, all software running in the LPAR will benefit from lower MSUs. The tool will calculate the monthly MSU peak for a given machine using the adjusted MSU values.
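The adjustment arithmetic is simple enough to sketch. This toy version applies the 60% mobile subtraction to hourly values for a single LPAR and takes the peak; the real MWRT works across all LPARs against the rolling four-hour average, so treat the numbers here as illustrative only.

```python
def adjusted_peak(hourly):
    """Apply the Mobile Workload Pricing adjustment, return the peak.

    `hourly` is a list of (total_msus, mobile_msus) pairs for one
    LPAR. Per the announcement, 60% of the mobile MSUs are
    subtracted from each hour before the peak is taken. Simplified
    sketch: the real MWRT operates on R4HA values across all LPARs.
    """
    adjusted = [total - 0.6 * mobile for total, mobile in hourly]
    return max(adjusted)

hours = [
    (1000, 0),    # quiet hour, no mobile traffic
    (1200, 500),  # mobile-heavy hour: adjusts to 1200 - 300 = 900
    (1100, 300),  # adjusts to 1100 - 180 = 920
]
print(adjusted_peak(hours))  # prints 1000.0
```

Note what happened: without the discount the billing peak would have been the 1200-MSU mobile-heavy hour; with it, that hour adjusts down to 900 and the quiet 1000-MSU hour sets the peak instead.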

This brings us back to the hard dollar savings question. The answer: probably not much initially, unless your mobile apps already generate a sizeable proportion of your peak transaction volume. But jump ahead six months or a year, when the IBM-Apple partnership’s new iOS made-for-business apps are gaining traction, and your mobile transaction volume could be climbing substantially each month. At that point, savings of hundreds of thousands of dollars or more seem quite possible.

Of course, the new applications or the entire partnership could be a bust. In that case, you will have burned some admin time for a one-time set up. You’ll still experience whatever normal transaction growth your current mobile apps generate and collect your discounted MSU charges. Unless the big IT analysis firms are dead wrong, however, mobile transactions are not going away. To the contrary, they will only increase. The bottom line: negligible downside risk while the upside gain could be huge.

Hope to see you at IBM Enterprise 2014 in Las Vegas, Oct. 6-10. DancingDinosaur is Alan Radding. Follow him on Twitter, @mainframeblog and at Technologywriter.com

 

 

Automated System z Workload Capping Can Save Big Bucks

June 20, 2014

IBM’s Monthly License Charge (MLC) pricing can be a powerful tool to significantly lower the cost of software licensing for a mainframe shop. The problem: it is frightfully complicated. DancingDinosaur has attended conferences that scheduled multi-part sessions just to cover the basic material. Figuring out which pricing program you qualify for is itself a challenge and you probably want a lawyer looking over your shoulder. Find IBM’s System z pricing page here.

One particularly galling challenge is estimating and capping the 4-hour utilization for each LPAR. You can easily find yourself in a situation where you exceed the cap on one LPAR, resulting in a surcharge, while you have headroom to spare on other LPARs. The trick is to stay on top of this by constantly monitoring workloads and shifting activity among LPARs to ensure you don’t exceed a cap.

This requires a skilled mainframe staffer with both a high level of z/OS skill and familiarity with z workloads and LPARs. While you’re at it throw in knowledge of data center operations and the organization’s overall business direction. Finding such an expert is costly and not easily spared for constant monitoring. It’s a task that lends itself to automation.

And that’s exactly what BMC did earlier this week when it introduced Intelligent Capping (iCap) for zSeries mainframes. On average, according to BMC, companies that actively manage and effectively prioritize their mainframe workloads save 10-15 percent more on their monthly license charges than those who use a more passive approach. Furthermore, instead of assigning a costly mainframe workload guru to manually monitor and manage this, BMC promises that the costs can be reduced while also diminishing risk to the business through the use of its intelligent iCap software that understands workloads, makes dynamic adjustments, and automates workload capping.

The savings, according to BMC, can add up fast. In one example, BMC cited saving 161 MSUs, which translated for that organization to over $55k that month. Given that a mainframe shop spends anywhere from a few hundred thousand to millions of dollars per month on MLC charges, savings of just a few percent can be significant. One BMC customer reportedly expects intelligent capping to save it 12% each month. Caveat: DancingDinosaur has not yet been able to speak with any BMC iCap customer to verify these claims.

But assuming they are true, iCap is a no-brainer for any mainframe shop paying anything but the most minimal MLC. BMC charges for iCap based on the customer’s capacity. It is willing to discuss a shared gain model by which the iCap charges are based on how much is saved but none of those deals apparently have been finalized.

This seems like a straightforward challenge for mainframe management tool vendors, but DancingDinosaur has found only a few actually doing it: BMC, Softwareonz, and IBM. Softwareonz offers AutoSoftCapping. The product promises to maximize software cost efficiency for IBM zSeries platforms, and specifically z/OS. It does so by automatically adjusting defined capacity by LPAR based upon workload while maintaining a consistent overall defined capacity for your CPC.
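The core idea behind such tools can be sketched in a few lines: shift defined-capacity headroom toward the busy LPAR while holding the CPC total constant. To be clear, this is a toy illustration of the concept, not the vendors' actual logic, and the LPAR names and numbers are made up.

```python
def rebalance_caps(caps, usage):
    """Shift defined-capacity headroom toward busy LPARs.

    `caps` and `usage` map LPAR name -> MSUs. The total defined
    capacity for the CPC stays constant; capacity moves from the
    LPAR with the most headroom to the one with the least. A toy
    illustration of the idea -- not AutoSoftCapping's or iCap's
    actual algorithm.
    """
    headroom = {lpar: caps[lpar] - usage[lpar] for lpar in caps}
    donor = max(headroom, key=headroom.get)   # most spare capacity
    needy = min(headroom, key=headroom.get)   # closest to its cap
    transfer = max(0, (headroom[donor] - headroom[needy]) // 2)
    caps[donor] -= transfer
    caps[needy] += transfer
    return caps

caps = {"PROD": 400, "TEST": 400}
usage = {"PROD": 390, "TEST": 100}
print(rebalance_caps(caps, usage))  # prints {'PROD': 545, 'TEST': 255}
```

PROD was 10 MSUs from a surcharge while TEST had 300 MSUs to spare; after rebalancing, the CPC total is still 800 MSUs but the headroom sits where the work is.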

Softwareonz, Seattle, estimates it saves 2% on monthly charges at the low end; at the high end, it has run simulations suggesting 20% savings. AutoSoftCapping only works for data centers running their z under the VWLC pricing model. Realistically, customers can save 8-10%. Again, DancingDinosaur has not yet validated any savings with an actual Softwareonz customer.

Without automation, you have to adjust defined capacity manually based on actual workloads. Too often that leaves the organization with a choice between constraining workloads, thereby inhibiting performance, or over-provisioning the cap, thereby driving up costs through wasted capacity.

So, if automatic MLC capping is a no-brainer, why isn’t everybody doing it? Softwareonz sees several reasons, the primary one being fear of the cap negatively impacting the VWLC four-hour rolling average. Nobody wants to impact their production workloads. Of course, the whole reason to apply intelligence to the automation is to reduce software costs without impacting production workloads. BMC offers several ways to ease the organization into this as it becomes more comfortable and confident in the tool.

Another reason suggested is that the System z operational team is protecting its turf from the inroads of automation. A large z shop might have a team of half a dozen or more people dedicated to monitoring and managing workloads manually. Bring in automation like iCap or AutoSoftCapping and they expect pink slips to follow.

Of course, IBM brings the Capacity Provisioning tool for z/OS (v1.9 and above), which can be used to add and remove capacity through a Capacity Provisioning Manager (CPM) policy. This can automatically control the defined capacity limit or the group capacity limits. The user interface for defining CPM policies is z/OSMF.

If you are subject to MLC pricing, consider an automated tool. BTW, there also are consultants who will do this for you.

A note: IBM Enterprise Cloud System, covered by DancingDinosaur a few weeks ago here, is now generally available. It is an OpenStack-based converged offering that includes compute, storage, software, and services built around the zBC12. Check out the most recent details here.

Also take note: IBM Enterprise 2014 is coming to Las Vegas in early October; details here. The conference combines System z University and Power System University plus more. You can bet there will be multiple sessions on MLC pricing in its various permutations and workload capping.

DancingDinosaur is Alan Radding. You can follow him on Twitter, @mainframeblog. Or visit his website, www.technologywriter.com

 

Latest in System z Software Pricing—Value Unit Edition

December 5, 2013

Some question how sensitive IBM is to System z costs and pricing. Those who attended any of David Chase’s several briefings on System z software pricing at Enterprise 2013 this past October, however, would realize the convulsions the organization goes through for even the most trivial of pricing adjustments. So it is no small deal that IBM is introducing something called Value Unit Edition (VUE) pricing for System z software.

VUE began with DB2. The purpose is to give z data centers greater pricing flexibility while encouraging new workloads on the z. VUE specifically is aimed at key business initiatives such as SOA, Web-based applications, pureXML, data warehousing and operational business intelligence (BI), and commercial (packaged) applications such as SAP, PeopleSoft, and Siebel. What started as a DB2 initiative has now been extended to WebSphere MQ, CICS, and IMS workloads.

In short, VUE pricing gives you a second pricing option for eligible (meaning new) z workloads. BTW, this eligibility requirement isn’t unusual with the z; it applies to the System z Solution Edition deals too. Specifically, VUE allows you to opt to pay for the particular software as a one-time capital expenditure (CAPEX) in the form of a one-time charge (OTC) rather than as a monthly license charge (MLC), which falls into the OPEX category.

Depending on your organization’s particular circumstances the VUE option could be very helpful. Whether it is more advantageous for you, however, to opt for OTC or MLC with any eligible workload is a question only your corporate accountant can answer (and one, hopefully, that is savvy about System z software pricing overall).  This is not something z data center managers are likely to answer on their own.

Either way you go, IBM in general has set the pricing to be cost neutral with a five-year breakeven. Under some circumstances you can realize discounts around the operating systems; in those cases you may do better than a five-year breakeven. But mainly this is more about how you pay, not how much you pay. VUE pricing is available for every System z model, even older ones. Software running under VUE will have to run in its own LPAR so IBM can check its activity as it does with other software under SCRT.

In summary, the main points of VUE are:

  • One-time-charge (OTC) pricing option across key middleware and packaged applications
  • The ability to consolidate or grow new workloads without increasing operational expense
  • Deployment on a z New Application License Charge (zNALC) LPAR, which, as expected, runs under the zNALC terms and conditions
  • Of course, new applications must be qualified; the workload really has to be new
  • Allows a reduced price for the z/OS operating system
  • Runs as a mixed environment, some software MLC, some OTC
  • Selected ISV offerings qualify for VUE

Overall, System z software pricing can be quite baffling. There is nothing really comparable in the distributed world. The biggest benefit of VUE comes from the flexibility it allows, OPEX or CAPEX, not from any small discount on z/OS. Given the set of key software and middleware VUE applies to, the real opportunity lies in using it to bring on new projects that expand the footprint of the z in your organization. As DancingDinosaur has pointed out before, the more workloads you run on the z the lower your cost-per-workload.
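That last point is simple amortization arithmetic, sketched below with made-up numbers: the fixed monthly cost of the box gets spread across however many workloads run on it.

```python
def cost_per_workload(fixed_monthly_cost, marginal_cost, n_workloads):
    """Average monthly cost per workload when a large fixed cost is
    amortized across everything running on the machine.

    All dollar figures are made up for illustration; plug in your
    own MLC and infrastructure numbers.
    """
    return (fixed_monthly_cost + marginal_cost * n_workloads) / n_workloads

# The fixed cost dominates, so each added workload drives the average down:
for n in (5, 20, 50):
    print(n, cost_per_workload(500_000, 2_000, n))
```

With a hypothetical $500k fixed monthly cost and $2k marginal cost per workload, the per-workload average falls from $102k at 5 workloads to $12k at 50, which is the whole economic argument for growing the z footprint rather than shrinking it.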

Follow DancingDinosaur on Twitter, @mainframeblog
