A Maturity Model for the New Mainframe Normal

February 3, 2014

Last week Compuware introduced its new mainframe maturity model designed to address what is emerging as the new mainframe normal. DancingDinosaur played a central role in the creation of this model.

A new mainframe maturity model is needed because the world of the mainframe is changing rapidly.  Did your data center team ever think they would be processing mainframe transactions from mobile phones? Did your development team ever imagine they would be architecting compound workloads across the mainframe and multiple distributed systems running both Windows and Linux? What about the prospect of your mainframe serving up millions or even billions of customer-facing transactions a day?  But that’s the mainframe story today.

Even IBM, the most stalwart of the mainframe vendors, repeats the driving trends—cloud, mobile, social, big data, analytics, Internet of things—like a mantra. As the mainframe celebrates its 50th anniversary year, it is fitting that a new maturity model be introduced because there is, indeed, a new mainframe normal rapidly evolving.

Things certainly are changing in ways most mainframe data center managers wouldn’t have anticipated 10 years ago, probably not even five years ago. Of those, perhaps the most disconcerting change for traditional mainframe shops is the need to accommodate distributed, open systems (systems of engagement) alongside the traditional mainframe environment (systems of record).

Since the rise of distributed systems two decades ago, there has existed both a technical and a cultural gap between the mainframe and distributed teams. The emergence of technologies like hybrid computing, middleware, and the cloud has gone far to alleviate the technical gap. The cultural gap is not so amenable to immediate fixes. Still, navigating that divide is no longer optional – it has become a business imperative.  Crossing the gap is what the new maturity model addresses.

Many factors contribute to the gap, the largest of which appears to be that most organizations still approach the mainframe and distributed environments as separate worlds. One large financial company, for example, recently reported that to its distributed developers the mainframe is simply a source of MQ messages.

The new mainframe maturity model can be used as a guide to bridging both the technical and cultural gaps.  Specifically, the new model defines five levels of maturity. In the process, it incorporates distributed systems alongside the mainframe and recognizes the new workloads, processes and challenges that will be encountered. The five levels are:

  1. Ad-hoc:  The mainframe runs core systems and applications; these represent the traditional mainframe workloads and the green-screen approach to mainframe computing.
  2. Technology-centric:  An advanced mainframe is focused on ever-increasing volumes, higher capacity, and complex workload and transaction processing while keeping a close watch on MIPS consumption.
  3. Internal services-centric:  The focus shifts to mainframe-based services through a service delivery approach that strives to meet internal service level agreements (SLAs).
  4. External services-centric:  Mainframe and non-mainframe systems interoperate through a services approach that encompasses end-user expectations and tracks external SLAs.
  5. Business revenue-centric:  Business needs and the end-user experience are addressed through interoperability with cloud and mobile systems, services- and API-driven interactions, and real-time analytics to support revenue initiatives revolving around complex, multi-platform workloads.

Complicating things is the fact that most IT organizations will likely find themselves straddling different maturity levels. For example, although many have achieved levels 4 and 5 when it comes to technology, the IT culture remains at levels 1 or 2. Such disconnects mean IT still faces many obstacles preventing it from reaching optimal levels of service delivery and cost management. And this doesn’t just impact IT; there can be ramifications for the business itself, such as decreased customer satisfaction and slower revenue growth.

DancingDinosaur’s hope is that as the technical worlds come closer through technologies like Java, Linux, SOA, REST, hybrid computing, and mobile, organizations will begin to close the cultural gap too.

Follow DancingDinosaur on Twitter: @mainframeblog

SOA Software Enables New Ways to Tap Mainframe Code

January 30, 2014

Is the core enterprise processing role handled by the mainframe enough? Enterprises today often are running different types of workloads built using different app dev styles. These consist of compound applications encompassing the mainframe and a variety of distributed systems (Linux, UNIX, Windows) and different programming models, data schema, services, and more. Pieces of these workloads may be running on the public cloud, a partner’s private cloud, and a host of other servers. The pieces are pulled together at runtime to support the particular workload.  Mainframe shops should want to play a big role in this game too.

“Mainframe applications still sit at the heart of enterprise operations, but mainframe managers also want to take advantage of these applications in new ways,” says Brent Carlson, SVP at SOA Software. The primary way of doing this is through SOA services, and mainframes have been playing in the SOA arena for years. But it has never been as seamless, easy, and flexible as it should be. And as social and mobile and other new types of workloads get added to the services mix, the initial mainframe SOA approach has started to show its age. (Over the years, DancingDinosaur has written considerably on mainframe SOA and done numerous SOA studies.)

That’s why DancingDinosaur welcomes SOA Software’s Lifecycle Manager to the mainframe party.  It enables what the company calls a “RESTful Mainframe” through governance of REST APIs that front z/OS-based web services. This amounts to a unified platform from a governance perspective to manage both APIs as well as existing SOA assets. As Carlson explained: applying development governance to mainframe assets helps mainframe shops overcome the architectural challenges inherent in bringing legacy systems into the new API economy, where mobile apps need rapid, agile access to backend systems.

The company is aiming to make Lifecycle Manager into the system-of-record for all enterprise assets including mainframe-based SOAP services and RESTful APIs that expose legacy software functionality. The promise: seamless access to service discovery and impact analysis whether on mainframe, distributed systems, or partner systems. Both architects and developers should be able to map dependencies between APIs and mainframe assets at the development stage and manage those APIs across their full lifecycles.

Lifecycle Manager integrates with SOA’s Policy Manager to work either top down or bottom up.  The top-down approach relies on a service wrapping of existing mainframe programs; think of this as the WSDL-first approach of designing web services and then developing programs on the mainframe to implement them.  The bottom-up approach starts with the copybook.  Either way, it is automated and intended to be seamless. It also promises to guide services developers on best practices like encryption, assign and enforce correct policies, and more.

“Our point: automate whatever we can, and guide developers into good practices,” said Carlson.  In the process, Lifecycle Manager simplifies the task of exposing mainframe capabilities to a broader set of applications while not interfering with mainframe developers.  To distributed developers the mainframe is just another service endpoint that is accessed as a service or API.  Nobody has to learn new things; it’s just a browser-based IDE using copybooks.
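
For readers on the distributed side, here is a minimal sketch of what that looks like in practice: an ordinary Java HTTP client posting JSON to a REST endpoint that a copybook-aware gateway (such as the Lifecycle Manager runtime described above) could map onto a mainframe program. The endpoint URL, JSON fields, and copybook layout are hypothetical illustrations, not SOA Software’s actual interface.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class AccountBalanceClient {
    public static void main(String[] args) throws Exception {
        // Hypothetical request body; a copybook-aware gateway would map these
        // JSON fields onto the fixed-length COBOL record the backend program
        // expects, e.g. 01 ACCT-REQ. 05 ACCT-ID PIC X(10). 05 REQ-TYPE PIC X(3).
        String json = "{\"acctId\":\"0012345678\",\"reqType\":\"BAL\"}";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.example.com/accounts/balance")) // hypothetical endpoint
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(json))
                .build();

        // To this code the mainframe is just another service endpoint:
        // no 3270 screens, no copybook knowledge required on the client side.
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}
```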

For performance, the Lifecycle Manager-based runtime environment is written in assembler, which makes it fast while minimizing MIPS consumption. It also comes with the browser-based IDE, copybook tool, and import mappings.

The initial adopters have come from financial services and the airlines.  The expectation is that usage will expand beyond that as mainframe shops and distributed developers seek to leverage core mainframe code for a growing array of workloads that weren’t on anybody’s radar screen even a few years ago.

There are other ways to do this on the mainframe, starting with basic SOA and web services tools and protocols, like WSDL. Many mainframe SOA efforts leverage CICS, and IBM offers additional tools, most recently SoftLayer, that address the new app dev styles.

This is healthy for mainframe data centers. If nothing else, SOA- and API-driven services workloads that include the mainframe help lower the cost per workload of the mainframe. It also puts the mainframe at the center of today’s IT action.

Follow DancingDinosaur on Twitter: @mainframeblog

Goodbye X6 and IBM System x

January 24, 2014

Seems just last week IBM was touting the new X6-based systems, the latest in its x86 System x server lineup.  Now the X6 and the entire System x line is going to Lenovo, which will acquire IBM’s x86 server business.  Rumors had been circulating about the sale for the last year, so often that you stopped paying attention to them.

The sale includes System x, BladeCenter and Flex System blade servers and switches, x86-based Flex integrated systems, NeXtScale and iDataPlex servers and associated software, and blade networking and maintenance operations. The purchase price is approximately US $2.3 billion, about two billion of which will be paid in cash and the balance in Lenovo stock.

Definitely NOT part of the sale are the System z, Power Systems, Storage Systems, Power-based Flex servers, and PureApplication and PureData appliances.  These are considered part of the IBM Enterprise Systems portfolio.  This commitment to the z and other enterprise systems is encouraging, especially in light of the latest IBM quarterly financial statement in which all the system hardware platforms did poorly, including System x.

DancingDinosaur had planned a follow-up to last week’s X6 column in anticipation of a reported upcoming February briefing on X6 speeds and feeds; that now looks unlikely, as IBM PR folks say no such briefing is planned.

Most of the System x team appears to be departing with the products. Approximately 7,500 IBM employees around the world, including those based at major locations such as Raleigh, Shanghai, Shenzhen and Taipei, are expected to be offered employment by Lenovo, according to the announcement.

IBM, however, may become more active than ever.  Recently, IBM announced that it will invest more than $1 billion in the new IBM Watson Group, and $1.2 billion to expand its global cloud computing footprint to 40 data centers worldwide in 15 countries across five continents.  It also announced bolstering the SoftLayer operation, sort of a combined IaaS and global content delivery network, plus earlier investments in Linux, OpenStack, and various other initiatives. DancingDinosaur will try to follow it for you along with the System z and other enterprise IBM platforms.

 Please follow DancingDinosaur on Twitter: @mainframeblog

IBM Leverages High End Server Expertise in New X6 Systems

January 17, 2014

If you hadn’t noticed how x86 systems have been maturing over the past decade you might be surprised at the introduction yesterday of IBM’s newest entry in the x86 world, the X6. The X6 is the latest rev of IBM’s eX5. If you didn’t already think the eX5 was enterprise-class, here’s what IBM says of the X6:  support for demanding mission and business critical workloads, better foundation for virtualization of enterprise applications, infrastructure that facilitates a private or hybrid cloud model. Sound familiar? IBM has often said the same things about its Power Systems and, of course, the zEnterprise.

As the sixth generation of IBM’s EXA x86 technology, it promises speed (although the actual speeds and feeds won’t be revealed for another month), 3x the memory, high availability features that increase reliability, use of flash to boost on-board memory, and lower cost. IBM hasn’t actually said anything specific about pricing; published reports put X6 systems starting at $10k.

More specifically, the flash boost consists of integrated eXFlash memory-channel storage that provides DIMM-based storage up to 12.8 terabytes in the form of ultrafast flash storage close to the processor.  This should increase application performance by providing the lowest system write latency available, and X6 can enable significantly lower latency for database operations, which can lower licensing costs and reduce storage costs by reducing or eliminating the need for external SAN/NAS storage units. This should deliver almost in-memory performance (although again, we have to wait for the actual speeds and feeds and benchmarks).

The new X6 also borrows from the System z in its adoption of compute book terminology to describe its packaging, adding a storage book too.  The result: a modular, scalable compute book design that supports multiple generations of CPUs that, IBM promises, can reduce acquisition costs, up to 28% in comparison to one competitive offering.  (Finally some details: 28% acquisition cost savings based on pricing of x3850 X6 at announcement on 2/18 vs. current pricing of a comparable x86 based system that includes 2 x Intel Xeon E7-4820 [v1] processors, 1TB of memory [16GB RDIMMs] 3.6TB of HDD storage, and Dual Port 10GBe SFP+ controller. x3850 X6 includes 2 Compute Books, 2 x Intel Xeon E7 processors, 1TB of memory [16GB RDIMMs], 3.6TB of HDD storage, and Dual Port 10GBe SFP+ controller.)

X6 also provides stability and flexibility through forthcoming technology developments, allowing users to scale up now and upgrade efficiently in the future based on the compute/storage book design that makes it easy to snap books into the chassis as you require more resources. Fast set-up and configuration patterns simplify deployment and life-cycle management.

In short, the book design, long a hallmark of the System z, brings a number of advantages.  For starters, you can put multiple generations of technology in the same chassis, no need to rip-and-replace or re-configure. This lets you stretch and amortize costs in a variety of ways.  IBM also adds RAS capabilities, another hallmark of the z. In the case of X6 it includes features like memory page retire; advanced double chip kill; the IBM MEH algorithm; multiple storage controllers; and double, triple, or quadruple memory options.

Server models supported by the X6 architecture currently include the System x3850 X6 four-socket system, System x3950 X6 eight-socket system, and the IBM Flex System x880 scalable compute nodes. IBM also is introducing the System x3650 M4 BD storage server, a two-socket rack server supporting up to 14 drives delivering up to 56 terabytes of high-density storage — the largest available in the industry, according to IBM.  (More tidbits from the speeds and feeds to come: compared to HP two-socket servers supporting a maximum of 48 TB storage with 12 x 3.5″ drives and Dell two-socket servers supporting a maximum of 51.2 TB storage with 12 x 3.5″ and 2 x 2.5″ drives, X6 delivers 46% greater performance—based on Intel Internal Test Report #1310, using SPECjbb*2013 benchmark, July 2013.) IBM’s conclusion: X6 is ideally suited for distributed scale-out of big data workloads.

The X6 systems come with a reference architecture that simplifies deployment. To make it even simpler, maybe even bullet-proof, IBM also is introducing the X6 as a set of packaged solutions. These include:

  • IBM System x Solution for SAP HANA on X6
  • IBM System x Solution for SAP Business Suite on X6
  • IBM System x Solution for VMware vCloud Suite on X6
  • IBM System x Solution for Microsoft SQL Data Warehouse on X6
  • IBM System x Solution for Microsoft Hyper-V on X6
  • IBM System x Solution for DB2 with BLU Acceleration on X6

These are optimized and tuned in advance for database, analytics, and cloud workloads.

So, the X6 bottom line according to IBM: More performance at  40%+ lower cost, multiple generations in one chassis; 3X more memory and higher system availability; expanded use of flash and more storage options; integrated solutions for easy and worry-free deployment; and packaged solutions to address data analytics, virtualization, and cloud.

IBM packed a lot of goodies into the X6. DancingDinosaur will take it up again when IBM presents the promised details. Stay tuned.

Follow DancingDinosaur on Twitter: @mainframeblog

IBM Commits $1B to Drive Watson into the Mainstream

January 10, 2014

IBM is ready to propel Watson beyond Jeopardy, its initial proof-of-concept, and into mainstream enterprise computing. To that end, it announced plans to spend more than $1 billion on the recently formed Watson business unit, an amount that includes $100 million in venture investments to build an ecosystem of entrepreneurs developing Watson-powered apps.

In addition, companies won’t need racks of Power servers to run Watson. With a series of announcements yesterday IBM unveiled plans to deliver Watson capabilities as business-ready cloud services. The announcement focused on three Watson services: 1)  Watson Discovery Advisor for research and development projects in industries such as pharmaceutical, publishing and biotechnology; 2) Watson Analytics to deliver visualized big data insights based on questions posed in natural language by any business user; and 3) IBM Watson Explorer to more easily uncover and share data-driven insights across the enterprise.

DancingDinosaur has been following Watson since its Jeopardy days and has long since gotten over the disappointment that Watson didn’t run on the Power side of a hybrid zEnterprise. In the meantime, IBM has managed to shrink Watson considerably.  Today Watson runs 24x faster (a 2,400% improvement in performance) and is 90% smaller.  IBM has shrunk Watson from the size of a master bedroom to three stacked pizza boxes, and you don’t even need to locate it in your data center; you can run it in the cloud.

Following the introduction of Watson, IBM was slow to build on that achievement. It focused on healthcare and financial services, use cases that appeared to be no-brainers.  Eventually it experienced success, particularly in healthcare, but the initial customers came slowly and the implementations appeared to be cumbersome.

Watson, at least initially, wasn’t going to be a simple deployment. It needed a ton of Power processors. It also needed massive amounts of data; in healthcare IBM collected what amounted to the entire library of the world’s medical research and knowledge. And it needed applications that took advantage of Watson’s formidable yet unusual capabilities.

The recent announcements of delivering Watson via the cloud and committing to underwrite application developers definitely should help. And yesterday’s announcement of what amounts to three packaged Watson services should speed deployment.

For example, Watson Analytics, according to IBM, removes common impediments in the data discovery process, enabling business users to quickly and independently uncover new insights in their data. Using sophisticated analytics and aided by Watson’s natural language interface, Watson Analytics automatically prepares the data, finds the most important relationships, and presents the results in an easy to interpret interactive visual format. As a result, business users are no longer limited to predefined views or static data models. Better yet, they can feel empowered to apply their own knowledge of the business to ask and answer new questions as they emerge. They also will be able to quickly understand and make decisions based on Watson Analytics’ data-driven visualizations.

Behind the new Watson services lies IBM Watson Foundations, described as a comprehensive, integrated set of big data and analytics capabilities that enable enterprises to find and capitalize on actionable insights. Basically, it amounts to a set of user tools and capabilities to tap into all relevant data – regardless of source or type – and run analytics to gain fresh insights in real-time. And it does so securely across any part of an enterprise, including revenue generation, marketing, finance, risk, and operations.  Watson Foundations also includes business analytics with predictive and decision management capabilities, information management with in-memory and stream computing, and enterprise content management packaged into modular offerings. As such it enables organizations of any size to address immediate needs for decision support, gain sustainable value from their initial investments, and grow from there.

This apparently sounded good to Singapore’s DBS Bank, which will deploy Watson cognitive computing capabilities to deliver a next-generation client experience.  For starters, DBS intends to apply Watson to its wealth management business to improve the advice and experience delivered to affluent customers.  The bank is counting on cloud-based Watson to process enormous amounts of information with the ability to understand and learn from each interaction at unprecedented speed. This should greatly increase the bank’s ability to quickly analyze, understand and respond to the vast amounts of data it is accumulating.

Specifically, DBS will deploy IBM’s cloud-based Watson Engagement Advisor solution, to be rolled out in the second half of the year. From there the bank reportedly plans to progressively deploy these capabilities to its other businesses over time.

For fans of cognitive computing and Watson, the announcements represent a much awaited evolution in IBM’s strategy. It promises to make cognitive computing and the natural language power of Watson usable for mainstream enterprises. How excited fans should get, however, depends on the specifics of IBM’s pricing and packaging for these offerings.  Still, faced with having to recoup a $1 billion investment, don’t expect loss-leader pricing from IBM.

Follow DancingDinosaur on Twitter: @mainframeblog

IBM on How Computing Will Change in 5 Years

December 31, 2013

Since 2006, at about this time each year, IBM has come out with five predictions, dubbed 5-in-5, about how technology will affect the world within five years.  Each year the predictions look at how technology innovations will change the way people work, live, and play. They are based on market and social trends combined with ideas from the thousands of biologists, engineers, mathematicians, and medical physicians in IBM research labs around the world.

Last year the 5-in-5 predictions focused on how systems would augment human senses. It looked at sight, hearing, smell, touch, and taste. For example, a machine that experiences flavor could determine the precise chemical structure of food and why people like it. Or, computers might smell for chemicals in urban environments to monitor pollution or analyze the soil.

IBM’s 5-in-5 predictions for 2013 go in a different direction. This year the researchers looked at how innovations in computing allow us to interact with the meaning that lies in data. The researchers, taking a distinctly benign view, suggest that systems will emerge that treat us as individuals, adapt to us, and look out for our interests. Others, of course, might see this as the tyranny of Big Brother.

Here is this year’s 5-in-5:

  1. The classroom will learn you.  Teachers will work with a device that can monitor and interact with the student and ultimately create a unique persona for each student. Teachers will use that persona, which changes over time, to guide the student on his or her learning path. They will know, through the student’s device, what the particular student is struggling to learn and will provide the right help at the right time.
  2. Buying local beats online.  The combination of cloud technology and in-store display will enable local stores to act as a showroom for the wide variety of products available online and enable customers to interact with a product. Of course the store will recognize you and know your preferences. In short, IBM is predicting the convergence of online and brick and mortar retail.
  3. Doctors will use your DNA to keep you well. This already is happening now. But it goes beyond DNA to using the data analytic power of computers to diagnose patient ills and guide doctors in treatment. IBM’s Watson is doing some of this today. How quickly this will evolve remains to be seen; healthcare is a minefield of conflicting interests, most of which have nothing to do with patient care and successful outcomes. You can, for instance, have your personal genome assessed and analyzed today, but few have opted to do so. Do you want to know you have a strong genetic inclination toward a disease for which there is no cure?
  4. Your city will help you live in it. Sitting at consoles in operations centers connected to myriad sensors generating massive amounts of real-time data, city administrators will be able to, say, manage traffic lights interactively as traffic flows or dynamically adjust the arrival and departure of various transportation services. All things we as citizens probably want. The city also could become a huge social network where policies are developed based on clicking likes. Big Brother, anyone?
  5. A digital guardian will protect you online. The retailer Target just had tens of millions of personal identifications compromised at the end of the year. We truly need an effective digital guardian. As IBM notes, this requires continuous, sophisticated analytics to identify whatever activity in your digital life varies from the norm and flag any sudden deviation of behavior. This guardian needs to shut down bad things proactively before they reach you and also provide a private, safe fortress for your data and online persona. As one whose email was recently hacked, DancingDinosaur is ready to sign up today. My apologies to any readers who received a desperate message purportedly from me saying I was stranded in Turkey, my wallet destroyed (in a hotel fire), and in immediate need of money to get home. Hope you didn’t send any.

 Best wishes for a peaceful and prosperous New Year.

 New Year Resolution #1: Follow DancingDinosaur on Twitter: @mainframeblog

Meet the Power 795—the RISC Mainframe

December 16, 2013

The IBM Power 795 could be considered a RISC mainframe. A deep-dive session on the Power 795 at Enterprise 2013 in early October presented by Patrick O’Rourke didn’t call the machine a mainframe. But when he walked attendees through the specifications, features, capabilities, architecture, and design of the machine, it certainly looked like a RISC mainframe.

Start with the latest enhancements to the POWER7 chip:

  • Eight processor cores, each with:
      12 execution units
      4-way SMT – up to 4 threads per core, 32 threads per chip
      L1: 32 KB instruction cache / 32 KB data cache
      L2: 256 KB per core
      L3: 32 MB of shared on-chip eDRAM
  • Dual DDR3 memory controllers delivering 100 GB/s of memory bandwidth per chip
  • Scalability up to 32 sockets, with 360 GB/s SMP bandwidth per chip and 20,000 coherent operations in flight
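
Those per-chip numbers compound quickly at full scale. As a back-of-the-envelope illustration (simple multiplication of the figures above, not an IBM benchmark), a fully configured 32-socket system works out to the following:

```java
public class Power795Envelope {
    public static void main(String[] args) {
        int sockets = 32;           // maximum SMP scalability
        int coresPerChip = 8;       // POWER7 cores per socket
        int threadsPerCore = 4;     // 4-way SMT
        int memBwPerChipGBs = 100;  // memory bandwidth per chip, GB/s

        System.out.println("Cores:            " + sockets * coresPerChip);                    // 256
        System.out.println("Hardware threads: " + sockets * coresPerChip * threadsPerCore);   // 1,024
        System.out.println("Aggregate memory bandwidth (GB/s): " + sockets * memBwPerChipGBs); // 3,200
    }
}
```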

Built on POWER7 and slated to be upgraded to POWER8 by the end of 2014, the Power 795 boasts a number of new features:

  • New Memory Options
  • New 64GB DIMMs enable up to 16TB of memory
  • New hybrid I/O adapters will deliver Gen2 I/O connections
  • No-charge Elastic processor and memory days
  • PowerVM will enable up to 20 LPARs per core

And running at 4.2 GHz, the Power 795 clock speed starts to approach the zEC12 at 5.5 GHz while matching the clock speed of the zBC12.

IBM has also built increased flexibility into the Power 795, starting with turbo mode, which allows users to turn cores on and off as they manage power consumption and performance. IBM also has enhanced the concept of Power pools, which allows users to group systems into compute clusters by setting up and moving processor and memory activations within a defined pool of systems, at the user’s convenience. With the Power 795, pool activations can be moved at any time by the user without contacting IBM, and the movement of the activations is instant, dynamic, and non-disruptive. Finally, there is no limit to the number of times activations can be moved. Enterprise pools can include the Power 795, 780, and 770, and systems with different clock speeds can coexist in the same pool. The activation assignment and movement is controlled by the HMC, which also determines the maximum number of systems in any given pool.
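
For those who think in code, here is a tiny conceptual model of an enterprise pool (DancingDinosaur’s own sketch, not the HMC interface): activations are simply bookkeeping entries assigned to member systems, which is why reassigning them can be instant, dynamic, and non-disruptive. The system names and core counts are invented for illustration.

```java
import java.util.HashMap;
import java.util.Map;

// Conceptual model only: an enterprise pool tracks how many processor
// activations each member system currently holds; moving them is just a
// bookkeeping change, which is why it can be instant and non-disruptive.
public class EnterprisePool {
    private final Map<String, Integer> activations = new HashMap<>();

    public void addSystem(String name, int activatedCores) {
        activations.put(name, activatedCores);
    }

    public void moveActivations(String from, String to, int cores) {
        if (activations.getOrDefault(from, 0) < cores)
            throw new IllegalArgumentException("Not enough activations on " + from);
        activations.merge(from, -cores, Integer::sum);
        activations.merge(to, cores, Integer::sum);
    }

    public static void main(String[] args) {
        EnterprisePool pool = new EnterprisePool();
        pool.addSystem("Power795-A", 64);   // hypothetical systems and counts
        pool.addSystem("Power770-B", 16);

        // Quarter-end crunch: shift 16 core activations to the 795; no call to IBM needed.
        pool.moveActivations("Power770-B", "Power795-A", 16);
        System.out.println(pool.activations);  // new allocation, e.g. {Power795-A=80, Power770-B=0}
    }
}
```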

The Power 795 provides three flavors of capacity on demand (CoD). One flavor is for organizations that know they will need the extra capacity and can turn it on through easy activation over time. Another is intended for organizations that know they will need extra capacity at predictable times, such as the end of the quarter, and want to pay for the added capacity on a daily basis. Finally, there is a flavor for organizations that experience unpredictable short bursts of activity and prefer to pay for the additional capacity by the minute. Actually, there are more than the three basic flavors of CoD above, but these three will cover the needs of most organizations.
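
Which flavor makes sense is mostly arithmetic once you know your usage pattern. The sketch below compares daily versus per-minute billing for a short, unpredictable burst; the rates are invented purely for illustration and are not IBM prices.

```java
public class CoDComparison {
    public static void main(String[] args) {
        // Entirely hypothetical rates for one extra activated core.
        double perCoreDayRate = 75.0;      // $ per core-day (predictable, quarter-end flavor)
        double perCoreMinuteRate = 0.20;   // $ per core-minute (unpredictable burst flavor)

        int cores = 8;
        double burstHours = 6.0;           // a short, unpredictable spike

        double dailyBilled  = cores * perCoreDayRate * Math.ceil(burstHours / 24.0);
        double minuteBilled = cores * perCoreMinuteRate * burstHours * 60.0;

        System.out.printf("Daily-billed CoD:      $%.2f%n", dailyBilled);   // $600.00
        System.out.printf("Per-minute-billed CoD: $%.2f%n", minuteBilled);  // $576.00
    }
}
```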

And like a mainframe, the Power 795 comes with extensive hardware redundancy.  OK, the Power 795 isn’t a mainframe. It doesn’t run z/OS and it doesn’t do hybrid computing. But if you don’t run z/OS workloads and you’re not planning on running hybrid workloads, yet still want the scalability, flexibility, reliability, and performance of a System z, the Power 795 might prove very interesting indeed. And when the POWER8 processor is added to the mix the performance should go off the charts. This is a worthy candidate for enterprise systems consolidation.

Latest in System z Software Pricing—Value Unit Edition

December 5, 2013

Some question how sensitive IBM is to System z costs and pricing.  Those who attended any of David Chase’s several briefings on System z software pricing at Enterprise 2013 this past October, however, would realize the convulsions the organization goes through for even what seems like the most trivial of pricing adjustments. So it is not a small deal that IBM is introducing something called Value Unit Edition (VUE) pricing for System z software.

VUE began with DB2. The purpose is to give z data centers greater pricing flexibility while encouraging new workloads on the z. VUE specifically is aimed at key business initiatives such as SOA, Web-based applications, pureXML, data warehousing and operational business intelligence (BI), and commercial (packaged) applications such as SAP, PeopleSoft, and Siebel. What started as a DB2 initiative has now been extended to WebSphere MQ, CICS, and IMS workloads.

In short, VUE pricing gives you a second pricing option for eligible (meaning new) z workloads. BTW, this eligibility requirement isn’t unusual with the z; it applies to the System z Solution Edition deals too. Specifically, VUE allows you to opt to pay for the particular software as a one-time capital expenditure (CAPEX) in the form of a one-time charge (OTC) rather than as a monthly license charge (MLC), which falls into the OPEX category.

Depending on your organization’s particular circumstances the VUE option could be very helpful. Whether it is more advantageous for you, however, to opt for OTC or MLC with any eligible workload is a question only your corporate accountant can answer (and one, hopefully, that is savvy about System z software pricing overall).  This is not something z data center managers are likely to answer on their own.

Either way you go, IBM in general has set the pricing to be cost neutral with a five-year breakeven. Under some circumstances you can realize discounts around the operating systems; in those cases you may do better than a five-year breakeven. But mainly this is more about how you pay, not how much you pay. VUE pricing is available for every System z model, even older ones. Software running under VUE will have to run in its own LPAR so IBM can check its activity as it does with other software under SCRT.
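
The breakeven logic is simple enough to sketch. The numbers below are invented for illustration (actual MLC and OTC figures depend on your configuration and IBM’s quote); the point is that cost-neutral pricing with a five-year breakeven implies an OTC of roughly 60 monthly payments, which makes the choice about cash flow and accounting treatment rather than total spend.

```java
public class VueBreakeven {
    public static void main(String[] args) {
        // Hypothetical figures for one eligible workload.
        double monthlyLicenseCharge = 10_000.0;            // MLC (OPEX), $ per month
        double oneTimeCharge = monthlyLicenseCharge * 60;  // cost-neutral OTC at a 5-year breakeven

        for (int years : new int[] {3, 5, 7}) {
            double mlcTotal = monthlyLicenseCharge * 12 * years;
            System.out.printf("After %d years: MLC total $%,.0f vs. OTC $%,.0f%n",
                    years, mlcTotal, oneTimeCharge);
        }
        // After 3 years the MLC route has cost less, at 5 years they match, beyond 5 the OTC wins.
    }
}
```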

In summary, the main points of VUE are:

  • One-time-charge (OTC) pricing option across key middleware and packaged applications
  • The ability to consolidate or grow new workloads without increasing operational expense
  • Deployment on a z New Application License Charge (zNALC) LPAR, which, as expected, runs under the zNALC terms and conditions
  • Of course, new applications must be qualified; the workload really has to be new
  • Allows a reduced price for the z/OS operating system
  • Runs as a mixed environment, some software MLC, some OTC
  • Selected ISV offerings qualify for VUE

Overall, System z software pricing can be quite baffling. There is nothing really comparable in the distributed world. The biggest benefit of VUE comes from the flexibility it allows, OPEX or CAPEX, not from any small discount on z/OS. Given the set of key software and middleware VUE applies to, the real opportunity lies in using its availability to bring on new projects that expand the footprint of the z in your organization. As DancingDinosaur has pointed out before, the more workloads you run on the z the lower your cost-per-workload.

Follow DancingDinosaur on Twitter, @mainframeblog

CA Technologies Joins System z and Cloud for Cheaper Storage

December 2, 2013

A pair of CA Technologies announcements at the AWS re:Invent conference in mid-November aimed to combine System z with the cloud. The first addressed how to unite the z with the cloud through new tools that support storage, virtualized environments, and application delivery, with the goal of meeting the management demands of what CA refers to as dynamic data centers by blending mainframe and cloud capabilities.

The idea here is to blend the z with cloud infrastructures that offer greater flexibility to manage enterprise data centers and balance workloads across platforms. CA cited a Forrester Consulting study commissioned by IBM that noted how organizations, by including the mainframe in cloud infrastructures, can enable a broader mix of infrastructure service options. The idea, for example, is to enable a mix of Linux virtual machines, drawn from the mainframe for data needing to meet high SLAs and from commodity infrastructure when SLA requirements are less stringent. The study also pointed out that the z can better accommodate high densities of very small workloads with resource guarantees — something very difficult to achieve on commodity resources. CA is supporting System z and the cloud with several new software releases that bring improved efficiencies and cost savings.

The second announcement is similar to the first except it looks specifically at cloud storage for the z, particularly when backing up data through Amazon Web Services and Riverbed Technology. The promise here is to streamline storage management operations while cutting storage costs to pennies per gigabyte. Essentially, z shops can use the CA tools to back up their data and archive it very cheaply in the cloud.

CA Cloud Storage for System z, when used with Amazon Web Services (AWS) cloud storage and the Riverbed Whitewater cloud storage appliance, gives mainframe data centers greater storage agility at lower storage costs. The upshot: disaster recovery readiness is improved and AWS cloud storage is accessed without changing the existing backup infrastructure.

The product not only lets organizations reduce data storage costs by taking advantage of low cloud storage prices but also delivers elastic capacity and flexibility.  CA also insists the product eliminates purpose-built robots and disks, but that doesn’t seem to be entirely the case.

Rather, it incorporates Riverbed Whitewater, itself a purpose-built storage appliance that helps integrate cloud storage infrastructures to securely deliver instant recovery and cost-effective storage for backup and data archiving. By using CA Cloud Storage for System z and the Riverbed appliance, z shops can back up IBM System z storage data to Amazon S3, a storage infrastructure designed for mission-critical and primary data storage, or to Amazon Glacier, an extremely low-cost storage service for which retrieval times of several hours are suitable. Both services are highly secure and scalable and designed for 99.999999999 percent durability, according to CA.
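
To put “pennies per gigabyte” in rough perspective, here is a simple cost sketch; the archive volume and the per-gigabyte rate are assumptions for illustration, not AWS, Riverbed, or CA pricing.

```java
public class ArchiveCostSketch {
    public static void main(String[] args) {
        double archiveTB = 50.0;        // assumed mainframe backup/archive volume
        double ratePerGBMonth = 0.01;   // assumed "pennies per gigabyte" archive rate, $/GB-month

        double gigabytes = archiveTB * 1024;
        double monthlyCost = gigabytes * ratePerGBMonth;

        System.out.printf("Archiving %.0f TB at $%.2f per GB-month: about $%,.0f per month%n",
                archiveTB, ratePerGBMonth, monthlyCost);   // roughly $512 per month
    }
}
```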

Apparently CA is deploying the software with AWS and Riverbed for itself. The company expects to achieve scalable storage while reducing the cost of its own backups. In addition, it picks up the benefits of elastic storage, which should improve its disaster recovery and ensure faster response to business needs without having to depend on offsite tape recalls, according to the company.

Both CA offerings, in effect, blend the System z with the cloud to increase flexibility and reduce cost. “The growth of System z and the increased adoption of Linux on the mainframe make it an optimal platform for reliably and cost-effectively delivering IT services that support opportunities around cloud, big data, and mobile,” said Joe Clabby, president, Clabby Analytics commenting on the CA announcements. In short, he noted the product combination enables IT workers to bridge the IT on-premise/cloud gap and manage the cross-enterprise and cross-platform operations of today’s dynamic data center.

Of course, for z data centers there are other ways to bridge the gap. IBM, for example, has been nudging the z toward the cloud for some time, as DancingDinosaur reported here. IBM also has its own relationship involving the Riverbed Whitewater appliance and Tivoli Storage Manager. Whatever approach you choose, it is time for z shops to explore how they can leverage the cloud.

You can follow DancingDinosaur on Twitter, @mainframeblog.

The zEnterprise as a Hybrid Data Center

November 21, 2013

There is no doubt that the zEnterprise enables hybrid computing. Just attach a zBX to it and start plugging in Linux and x86 blades; presto, you’ve got hybrid computing.  You can manage this entire hybrid infrastructure via the Unified Resource Manager.

The zEnterprise also has a sister hybrid computing platform, IBM PureSystems. Here, too, you can add in System x and Linux or even Power and System i and do hybrid computing. You can also manage the hybrid environment through a single console, albeit a different console—the Flex System Manager—and manage this second IBM hybrid platform as a unified environment.  DancingDinosaur has noted the irony of IBM having two different, incompatible hybrid systems; IBM has reassured this blogger several times that it is trying to converge the two. Whenever it happens DancingDinosaur will be the first to report it.

The zEnterprise or even PureSystems as a hybrid computing platform, however, is not the same as a hybrid data center.  Apparently there is no definition of a hybrid data center despite all the talk about hybrid computing, hybrid clouds, and hybrid systems.  As best DancingDinosaur can piece it together, the hybrid data center is multiplatform like the zEnterprise, but it also is multi-location, often using co-location facilities or factory-built containerized data centers (IBM calls them Portable Modular Data Centers, PMDC). More often, however, hybrid data centers are associated with cloud computing as the third of the three flavors of cloud (private, public, hybrid).

Gartner recently described some architecture options for a hybrid data center. In one case you could have a zEnterprise acting as, say, a private cloud using a co-location facility as a DMZ between the private cloud and a public cloud like Amazon. Not sure, however, you would need the DMZ if your private cloud was running on the highly secure zEnterprise but Gartner included it. Go figure.

Hybrid showed up in numerous Enterprise 2013 sessions this past October. You can catch some video highlights from it here. The conference made frequent mention of hybrid, some of it noted in previous DancingDinosaur posts, such as Exploring the World of zEnterprise Hybrid: How Does It Work and What’s the Point? The session introduced the Unified Resource Manager and described how it would allow an IT shop to manage a collection of one or more zEnterprise nodes, including any optionally attached zBX cabinets, as a single logical virtualized system through a Hardware Management Console (HMC). In short, it was about providing a single point of control through which data center personnel can deploy, configure, monitor, manage and maintain the integrated System z and zBX blades based on heterogeneous architectures in a unified manner. But it wasn’t talking about the hybrid enterprise data center described in the previous paragraph.

Similarly, Application Performance Management and Capacity Planning for the IBM zEnterprise Hybrid Workload focused on extending the Unified Resource Manager to goal-oriented performance management for both traditional System z and BladeCenter applications. It was about applying WLM, RMF, and Platform Performance Management to cross-platform hybrid applications. Again, this really wasn’t about the hybrid data center described above.

BTW, plans apparently already are underway for Enterprise 2014. Looks like it will be Oct. 6-10 at the Venetian in Las Vegas. It should be quite an event given that IBM will be celebrating the 50th anniversary of the mainframe in 2014.

And there is much more on z hybrid computing and hybrid clouds. The zEnterprise has its own page on cloud computing here, and last month the zEnterprise zBC12 won the CRN Tech Innovator Award for most innovative cloud solution.  You can also click here to see how a dozen IBM customers used various IBM platforms to build hybrid clouds.

IBM has already used the zEnterprise to consolidate over 30,000 servers around the world for an 84% improvement in data center efficiency and a 50% reduction in power and cooling. This effectively freed $1 billion to spend on innovative new projects that drive business growth across the company. And IBM is about as hybrid a data center as you can find.

Follow DancingDinosaur on Twitter, @mainframeblog

