Posts Tagged ‘z10’

Enterprise 2013—System z Storage, Hybrid Computing, Social and More

October 10, 2013

The abstract for the Enterprise 2013 System z program runs 43 pages. Haven’t tallied the number of sessions offered but there certainly are enough to keep you busy for the entire conference (Oct. 20-25, in Orlando, register now) and longer.

Just the storage-related sessions are wide-ranging, from DFSM, which DancingDinosaur covered a few weeks back following the SHARE Boston event here, to the IBM Flash portfolio, System z Flash Express, dynamically provisioning Linux on z storage, capacity management, and more. For storage newcomers, there even is a two-part session on System z Storage Basics.

A storage session titled the Evolution of Space Management looks interesting. After the advent of System Managed Storage (SMS), the mainframe went decades without much change in the landscape of space management processing. Space management consisted of the standard three-tier hierarchy of Primary Level 0 and the two migration tiers, Migration Level 1 (disk) and Migration Level 2 (tape). This session examines recent advances in both tape and disk technologies that have dramatically changed that landscape and provided new opportunities for managing data on the z. Maybe they will add a level above primary called flash next year. The session also will cover how these advances are evolving the space management hierarchy and what to consider when determining which solutions are best for your environment.
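The three-tier hierarchy the session describes can be sketched as a simple placement policy. The function name and day thresholds below are hypothetical, purely for illustration; real DFSMShsm policies are far more configurable.

```python
# Hypothetical sketch of the classic three-tier space management decision:
# data ages off Primary (Level 0) to Migration Level 1 (disk) and then to
# Migration Level 2 (tape). Thresholds are illustrative, not DFSMShsm defaults.

def placement_tier(days_since_last_reference: int) -> str:
    """Return the storage tier for a dataset based on its reference age."""
    if days_since_last_reference < 15:       # recently used: keep on primary disk
        return "Primary Level 0"
    if days_since_last_reference < 60:       # cooling off: compressed disk tier
        return "Migration Level 1 (disk)"
    return "Migration Level 2 (tape)"        # cold: archive to tape

print(placement_tier(3))    # Primary Level 0
print(placement_tier(400))  # Migration Level 2 (tape)
```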

IBM has been going hog-wild with flash, with the TMS acquisition no doubt playing no small part. Any number of sessions deal with flash storage. This one, IBM’s Flash Portfolio and Futures, seems particularly appealing. It looks at how IBM has acquired and improved upon flash technology over what amounts to eight generations of technology refinements. The session will examine how flash will play a major role not only across IBM’s storage products but across IBM’s overall solution portfolio. Flash technology is changing the way companies manage their data today, and it is changing the way they understand and manage the economics of technology. This session also will cover how IBM plans to leverage flash in its roadmap moving forward.

Hybrid computing is another phenomenon that has swept over the z in recent years. For that reason this session looks especially interesting, Exploring the World of zEnterprise Hybrid: How Does It Work and What’s the Point? The IBM zEnterprise hybrid system introduces the Unified Resource Manager, allowing an IT shop to manage a collection of one or more zEnterprise nodes, including an optionally attached zBX loaded with blades for different platforms, as a single logical virtualized system through a single mainframe console. The mainframe can now act as the primary point of control through which data center personnel can deploy, configure, monitor, manage, and maintain the integrated System z and zBX blades based on heterogeneous architectures but in a unified manner. It amounts to a new world of blades and virtual servers with the z at the center of it.

Maybe one of the hardest things for traditional z data center managers to get their heads around is social business on the mainframe. But here it is: IBM DevOps Solution: Accelerating the Delivery of Multiplatform Applications looks at social business and mobile along with big data, and cloud technologies as driving the demand for faster approaches to software delivery across all platforms, middleware, and devices. The ultimate goal is to push out more features in each release and get more releases out the door with confidence, while maintaining compliance and quality. To succeed, some cultural, process, and technology gaps must be addressed through tools from Rational.

IBM has even set itself up as a poster child for social business in another session, Social Business and Collaboration at IBM, which features the current deployment within IBM of its social business and collaboration environments. Major core components are currently deployed on System z. The session will look at what IBM is doing, how it does it, and the advantages and benefits it experiences.

Next week, the last DancingDinosaur posting before Enterprise 2013 begins will look at some other sessions, including software defined everything and Linux on z.

When DancingDinosaur first started writing about the mainframe over 20 years ago it was a big, powerful (for the time), solid performer that handled a few core tasks, did them remarkably well, and still does so today. At that time even the mainframe’s most ardent supporters didn’t imagine the wide variety of things it does now as can be found at Enterprise 2013.

Please follow DancingDinosaur and its sister blogs on Twitter, @mainframeblog.

New zEnterprise Business Class Entry Model—zBC12

July 23, 2013

IBM introduced its new zEnterprise Business Class machine, the equivalent of the z114 for the zEC12, the zEnterprise BC12 (zBC12).  It offers significantly more power than its predecessor but the $75,000 base price hasn’t changed.

The company has been hinting at the arrival of this machine for months (and DancingDinosaur has been passing along those hints as quickly as they came). Of particular interest is that the System z Solution Edition pricing applies to the zBC12. Solution Edition pricing should make the machine quite competitive with x86-based systems, especially when running multiple Linux instances.

IBM isn’t being coy about its intentions to discount this machine. The initial announcement touted a new Linux-only based version of the zBC12, the Enterprise Linux Server (ELS). The ELS includes hardware, the z/VM Hypervisor, and three years of maintenance at a deeply discounted price. Besides over 3,000 Linux applications it includes two new capabilities, ELS for Analytics and Cloud-Ready for Linux on System z, each acting as an onramp for analytics or cloud computing.

DancingDinosaur has been a big fan of the Solution Edition program as the only way to get serious discounts on a mainframe. The big caveat is the constraints IBM puts on the use of the discounted machine. Each Solution Edition deal is negotiated, so make sure you fully understand the constraints and all the fine print so you can live with them for several years. Of course, a zBC12 can be used for anything you would use a mainframe for, although enterprise Linux serving seems an ideal use.

Besides its faster processor, the zBC12 also offers 156 capacity settings on each model so you can choose just the right capacity setting for your needs, along with a new pay-as-you-grow approach. When integrated with the IBM DB2 Analytics Accelerator, the zBC12 can perform business analytics workloads with 10x better price performance and 14% lower total cost of acquisition than the closest competitor, according to IBM.

Out of the box the zBC12 specs look good:

  • 4.2 GHz processor designed to deliver up to a 36% performance increase per core to help boost software performance for business-critical workloads
  • Up to six general purpose processors designed to deliver up to 58% more capacity compared to the z114, which had five general purpose processors
  • Up to a 2x increase in available memory (496 GB) compared to the z114 for improved performance of memory-demanding workloads such as DB2, IBM WebSphere, and Linux on System z

The zBC12 comes in two models, the H06 and H13. Both are air cooled, single frame, and support 30 LPARs. The H06 has one processor drawer with 9 processor units, which can be divided among SAPs, CPs, IFLs/ICFs, zIIPs and zAAPs, plus 1 IFP. The Model H13 has two processor drawers to handle 18 processor units. It allows the same mix of processor types but in larger quantities, plus 2 dedicated spares. Some configurations will require the second processor drawer. The entry processing level is 50 MIPS, up from 26 MIPS with the z114, with no change in the base price.

As for other pricing, the zBC12 essentially follows an extension of the z114 stack pricing, with a 27% price/performance improvement over the z114 for specialty engine pricing, which translates into 36% greater performance for the money. Pricing for maintenance remains the same. Software follows the same pricing curve with a 5% discount applied. The price of Flash Express for the zBC12 remains at $125,000.
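As a quick sanity check on how those two percentages relate, a 27% price/performance improvement means each unit of capacity costs 27% less, which works out to roughly 36-37% more performance per dollar:

```python
# If the cost per unit of performance drops 27%, performance per dollar rises
# by 1/(1 - 0.27) - 1, roughly 37%, in line with the quoted ~36% figure
# (the small difference comes from rounding in the quoted percentages).
price_perf_improvement = 0.27
extra_perf_per_dollar = 1 / (1 - price_perf_improvement) - 1
print(round(extra_perf_per_dollar * 100))  # → 37
```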

IBM has provided a straightforward upgrade path from the z10 or zEnterprise to the zBC12 as well as from the zBC12 to the zEC12. It also can be connected to the zBX (Model 003) to seamlessly manage workloads across a hybrid computing environment consisting of multiple architectures (Linux, AIX, and Intel/Windows).

The announcement of the zBC12 was accompanied by a slew of other new z announcements, including the new IBM zEnterprise Analytics System 9710 and native JSON support to bridge the gap between mobile devices and enterprise data and services, delivered through the new CICS Transaction Server Feature Pack for Mobile Extensions V1.0 and DB2 11 for z/OS (ESP). Plus there is the new z/VM v6.3 and enhancements to the z/OS Management Facility.

As DancingDinosaur noted last week, expect z sales to get a boost in the next quarter or two as organizations choose the new zBC12. With its improved price/performance, low entry pricing, and the Solution Edition deal for the zBC12 ELS, the z should see a nice bounce.

System z Clouds Pay Off

January 9, 2013

From its introduction last August, IBM has aimed the zEC12 at cloud use cases, especially private clouds. The zEC12’s massive virtualization capabilities make it possible to handle private cloud environments consisting of thousands of distributed systems running Linux on zEC12.

One zEC12, notes IBM, can encompass the capacity of an entire multi-platform data center in a single system. The newest z also enables organizations to run conventional IT workloads and private cloud applications on one system.  If you are looking at a zEC12 coupled with the zBX you can have a hybrid private cloud running Linux, Windows, and AIX workloads.

There are three main reasons why z-based data centers should consider a private cloud:

  1. The z does it so naturally and seamlessly
  2. It boosts IT efficiency, mainly through user self service
  3. It increases enterprise agility, especially when it comes to provisioning and deploying IT resources and applications fast

Organizations everywhere are adopting private clouds (probably because C-level execs are more comfortable with private cloud security).  The Open Data Center Alliance reports faster private cloud adoption than originally predicted. Over half its survey respondents will be running more than 40% of their IT operations in private clouds by 2015.

Mainframes make a particularly good private cloud choice. Nationwide, the insurance company, consolidated 3000 distributed servers to Linux virtual servers running on a variety of z mainframes, creating a multi-platform private mainframe cloud optimized for its different workloads. The goal was to improve efficiency.

Nationwide initially intended to isolate its Linux and z/OS workloads on different physical mainframes. This resulted in a total of seven machines (a mixture of z9 and z10 servers), of which two were dedicated to Linux. To optimize this footprint, however, Nationwide ended up consolidating all workloads to four IBM zEnterprise 196 servers and two z10 servers, putting Linux and z/OS workloads on the same machines; its confidence level with Linux on the mainframe and the maturity of the platform made the Nationwide IT team comfortable mixing workloads.

The key benefit of this approach was higher utilization and better economies of scale, effectively making the mainframes into a unified private cloud: a single set of resources, managed with the same tools but optimized for a variety of workloads. The payback: elimination of both capital and operational expenditures, expected to save about $15 million over three years. The more compact and efficient zEnterprise landscape also means lower costs in the future. Specifically, Nationwide is realizing an 80% reduction in power, cooling, and floor space despite an application workload that is growing 30% annually, practically all of it handled through the provisioning of new virtual servers on the existing mainframe footprint.

Another z cloud was built by the City and County of Honolulu. It needed to increase government transparency by providing useful, timely data to its citizens. The goal was to boost citizen involvement, improve delivery of services, and increase the efficiency of city operations.

Honolulu built its cloud using an IFL engine running Linux on the city’s z10 EC machine. Between Linux and IBM z/VM the city created a customized cloud environment. This provided a scalable self-service platform on which city employees could develop open source applications, and it empowered the general public to create and deploy citizen-centric applications. Other components included IBM XIV storage, IBM Maximo Asset Management, IBM Tivoli OMEGAMON, Tivoli Workload Scheduler, and Tivoli Storage Manager.

The results: reduction in application deployment time from one week to only hours, 68% lower licensing costs for one database, and a new property tax appraisal system that increased tax revenue by $1.4 million in just three months.

There are even more examples of z clouds. For z shops a private cloud should be pretty straightforward; you’re probably over half-way there already. All you need are a few more components and a well-defined business case.  Give me a call, and I’ll even help you pull the business case together.

El Al Lowers System z Costs with GUI Tools

February 3, 2012

When DancingDinosaur looked at the cost per workload based on comparative workload analysis by IBM’s John Shedletsky the zEnterprise came out very well in most cases.  It wasn’t, however, the zEnterprise’s better price performance that necessarily made the difference. As Shedletsky noted, it was the lower cost of labor due to the more efficient management capabilities of the z, much of it resulting from the efficiency enabled by the Unified Resource Manager, which comes with the zEnterprise.

CA Technologies latched onto the idea that companies could lower their cost of mainframe computing by simplifying management tasks through a GUI and automation of routine tasks. To that end it launched the Mainframe 2.0 initiative a couple of years ago and has been steadily revamping its mainframe management tools for simplicity and efficiency. Back in May 2009, DancingDinosaur first looked at the CA initiative.

That, in fact, is what brought El Al, the Israeli airline, to CA Mainframe Chorus. “Cutbacks in budget and manpower make it hard for us to manage our growing processing power and cope with the intensifying complexity of our computing environment,” according to Arieh Berger, Operation System and Information Security Manager at EL AL. Like almost every other IT shop, the El Al system group is under pressure to do more with less.

For example, El Al deploys most of its core systems (ticket sales, reservations, the cargo system, financials, and more) on DB2 v8 running on a pair of z10 machines, each with a zIIP. The company plans to upgrade to DB2 10 in six months, noted Ita Israeli, El Al’s DB2 administrator. A long-time El Al database manager, Israeli was trained by her predecessor over a considerable period of time. With Chorus’ new GUI and the information it makes easily available, such a lengthy orientation to the job won’t be necessary, even if the replacement is a mainframe newcomer.

“CA Chorus sits above all the other CA products we have and gathers information from them. It keeps a history in a way I couldn’t do before, at least not without a lot of extra work,” Israeli explained.  Without Chorus, the DBA administrator would have to search through each of the products, culling the information she wanted. “Now I can see it all in one screen through a browser. We don’t even have to install any code,” she added.

According to CA, EL AL has improved its IT operations by leveraging CA Mainframe Chorus’ enhanced reporting tools, more efficient handling of z/OS events, and more flexible automated monitoring of storage levels. The airline has also taken advantage of faster problem detection and troubleshooting, which is accelerated and simplified through a graphical display of complex business data, automated analysis of historical trends, and the generation of comparative diagrams to aid problem-solving.

“Now I would like our developers to start using it. For example, they will be able to see everything that connects to a table they are working on—all the rules—without having to navigate through a lot of screens,” Israeli said. The airline has only had Chorus for a few months, so uptake is just starting. At this point only Israeli and one other person work with the tool daily.

Although the tool enables the DBA team to function more efficiently every day, it may deliver its biggest payback when veteran DBAs like Israeli retire in a few years. “It makes it very easy for people who don’t know the mainframe, who don’t know green screen, to see what’s going on,” she noted.

As the zEnterprise delivers increasingly better price/performance the cost of labor for ongoing operations and management becomes a bigger and more noticeable concern. Labor costs steadily increase, and the only way to rein in the cost is through automated, simplified tools that make possible greater efficiency and higher productivity.

IBM vs. Oracle HP Itanium

June 24, 2011

It was mind-boggling enough when Oracle bought Sun Microsystems in April 2009, ostensibly to create “the only company that can engineer an integrated system – applications to disk – where all the pieces fit and work together,” as Oracle CEO Larry Ellison declared at the time. Huh, had he ever heard of IBM?

Since then, Oracle has done nothing with Sun or SPARC while IBM has been converting Sun customers to System z and Power platforms by the hundreds. So, what could Oracle possibly have been thinking when it recently declared Intel’s Itanium chip dead before Intel or HP did, other than to hand IBM another banquet of system conversions to feast on? It’s not as if Oracle can offer new Sun/SPARC servers as a viable replacement for Itanium. To the contrary, Oracle reported Q4 hardware sales down 6% year-over-year.

Let the IBM feasting begin. The company has identified several thousand potential HP Itanium prospects and started rolling out its strategy to bring them over to System z, Power, or System x.

In truth, Itanium already wasn’t doing well before Oracle stuck a knife in it. As IBM interprets the IDC market data, it extended its lead in UNIX servers in Q4 2010 by capturing a 53.9% revenue share of that segment, gaining 5.9 points of share over competitors and leading the second-place vendor, HP, by more than 30%. If you then consider the huge gains made by the System z in the high end server market following the introduction of the zEnterprise, IBM clearly dominates the high end server market over both HP and Oracle/Sun. The latest IDC server market report, May 2011, can be found here.

For the high end server market, the choices are clear: after two years under Oracle, Sun/SPARC is going nowhere and, despite what Intel may still say, Itanium isn’t going anywhere either, especially after what Oracle just did to it and HP. The only platforms capable of running enterprise-class UNIX or Linux applications are IBM System z (z10, z196, zEnterprise/zBX) and IBM Power Systems with Power7. At some point Intel may rev up Xeon, but that would be well into the future.

Of course, IBM also provides high end x86 systems, the eX5 platform that uses the Xeon processor. And if you need a more enterprise-capable version of x86, the zEnterprise will be sporting x86 blades running Linux before the end of the year and ultimately will run Windows.

Meanwhile, IBM is ramping up a worldwide full court press to win nervous, disgruntled, or frustrated Itanium customers over to one or another IBM platform. To date, it has had its biggest success with conversions to the Power platforms. In 2009, it reports migrating over 600 customers to Power, 85% of which came from Sun or HP. Over the past four years, IBM has migrated over 2,000 companies to Power, again most coming from Sun or HP. Earlier this year IBM reported that 61 companies adopted the System z platform, either for the first time or returning after a long absence, and some came from HP systems. You will find case studies on several of those HP refugees here and here.

To facilitate the exodus from Itanium to an IBM platform, IBM is offering help and incentives. The Stop & Think program brings a variety of technical and financial assessments to help reduce data and application management costs by up to 30%. If the organization wants to shift from the Oracle database to DB2, it can save one-third of the cost and cut the number of FTEs needed to administer the database by more than half. Similarly, the Breakfree program offers a three-year 50% TCA savings. Along the way, IBM will discount or bundle in Migration Factory to actually get you there.

IBM recently compared an HP Itanium Superdome with 32 cores running the Oracle database against an IBM Power 770 with 16 cores, migrated to DB2 via IBM Migration Factory with the Breakfree discount applied. The three-year HP cost, including the usual Oracle discounting, ends up 50% more than the three-year IBM Power/DB2/Breakfree deal. Then IBM’s financing group will work the numbers to smooth the transition costs over the three years.

Most of IBM’s migration customers go with the Power platform. For those that want the advantages of the System z or zEnterprise there likely will be some very good deals cropping up this summer with the widely expected introduction of lower cost business class z196 machines and, probably, deeply discounted hardware/software/maintenance System z Solution Edition bundles. Since those are usually reserved for new workloads, a migration customer should automatically qualify.

If you were an HP Itanium customer or a Sun/SPARC customer pondering your organization’s IT roadmap, what would you do?

Dynamic Data Warehousing and System z

June 20, 2011

Data warehousing should be an ideal workload for the System z. It already houses the production data that mostly populates the data warehouse. It can run Cognos on Linux on z for BI and with a zEnterprise (z196 and zBX) it can run the Smart Analytics Optimizer, either as a zBX blade or as an appliance. And do it all with scalability, reliability, and performance.

But IBM is moving beyond conventional data warehousing, which entails an enterprise data store surrounded by myriad special purpose data marts. Data warehousing as it is mainly practiced today in the distributed environment is too complex, too difficult to deploy, requires too much tuning, and is too inefficient when it comes to bringing in analytics, which delays delivering the answers business managers need. And without fast analytics, well, what’s the point? In addition, such data warehousing requires too many people to maintain and administer, which makes it too costly.

On top of these problems, the world of data has changed dramatically since organizations began building enterprise data warehouses. Now a data warehouse should accommodate new types of data and rapidly changing forms of data.

IBM’s recommendation: evolve the traditional enterprise data warehouse into what it calls the enterprise data hub. This entails consolidating the infrastructure and reducing data mart sprawl. It also will simplify analytics, mainly by deploying analytics appliances like IBM’s Netezza. Finally, organizations will need to address data governance and lifecycle management, probably through automated policy-based controls. The result should be better information delivered faster and in a more flexible and cost-effective way.

Ultimately, IBM wants to see organizations evolve the enterprise data warehouse into an enterprise data hub with a variety of BI and analytics engines connected to it, along with engines tuned for analyzing streamed data and the vast amounts of unstructured data Hadoop has been shown to be particularly good at. DancingDinosaur wrote about Hadoop on the z196 back in November.

The payback from all of this, according to IBM, will be increased enterprise agility and faster deployment of analytics, which should result in increased business performance. The consolidated enterprise data warehouse also should lower TCO and speed time to value for both the data warehouse and analytics. All desirable things, no doubt, but for many organizations this will require a gradual process and a significant investment in new tools and technologies, from appliances to analytics.

Case in point: Florida Hospital, Orlando, deployed a z10 with DB2 10, which provides enhanced temporal data capabilities, with the primary goal of converting its 15 years of clinical patient data into an analytical data warehouse for use in leading edge medical and genetics research. DancingDinosaur referenced the hospital’s plans recently.

The hospital’s plan calls for getting the data up and running on DB2 10 this year and attaching the Smart Analytics Optimizer as an appliance connected to the z10 in Q1 2012. Then it can begin cranking up the research analytics. Top management has bought into this plan for now, but a lot can change in the next year, the earliest the first fruits of the hospital’s z-based analytical medical data exploration are likely to hit.

IBM does not envision the enterprise data hub exclusively as a System z effort. To the contrary, its Power platform is as likely to be the preferred platform as any. Still, a zEnterprise loaded with Smart Analytics Optimizer blades might make a pretty good choice too. Florida Hospital probably would have gone with the z196 if it had known the machine was coming when it was upgrading from the z9 to the z10.

The point here: existing data warehouses probably are obsolete. In a recent IBM study, half the business managers complained that they don’t have the information they need to do their jobs and 60% of CEOs admitted they need to do a better job of capturing and understanding information rapidly in order to make swift business decisions. That should be a signal to evolve your existing data warehouse into an enterprise data hub now and the z you have sitting there is just the vehicle for doing that.

zEnterprise and the Private Cloud

May 20, 2011

At a dinner following an IBM Systems & Technology Group (STG) analyst briefing a few weeks ago, IBM Senior Vice President Rod Adkins responded to a question from DancingDinosaur about IBM’s Tuned for the Task strategy, which supplants Fit for Purpose. Tuned for the Task, he suggested, better addresses the new workloads and new challenges of Smarter Computing.

As Adkins noted, with IBM’s rich capabilities in systems, middleware, and analytics, enterprises can build a roadmap for Smarter Computing infrastructures that are tuned to the task, designed for data, and managed in the cloud. More often than not that cloud will be a private cloud.

At the STG analyst briefing itself private clouds drew considerable attention with one study noting that 60% of enterprises planned to implement private clouds. In his session, Andy Wachs, IBM System Software Manager, gave a presentation here laying out a simple progression for any company’s journey to a private cloud. It starts with server, storage, and network virtualization. To achieve the efficiency and flexibility inherent in a private cloud those IT resources must be virtualized. Without that you can’t move forward.

Wachs focused primarily on IBM’s Power platform, but the System z, particularly the zEnterprise (z196 and zBX), is ideal for private clouds. As IBM puts it, the mainframe’s leading virtualization capabilities make it an obvious choice for cloud computing workloads, especially with the ability to rapidly provision thousands of virtual Linux servers and share resources across the entire system. One zEnterprise can encompass the capacity of an entire multi-platform data center in a single system.

In his presentation, Wachs makes the point that server/storage/network virtualization is the starting point of any private cloud. Without being fully virtualized, you go nowhere with the cloud. Well, the System z is fully virtualized from the start, and its share-all design principle for system components enables component reductions of as much as 90% for massive simplification, which significantly reduces TCO. Add to that the security and reliability of the z, which make it a secure platform for multi-tenant business workloads from the application layer to the data source and all points between.

In the end, the z is a trusted repository for highly secure information and an open platform supporting anything from Web 2.0 agile development environments with REST interfaces to enterprise middleware. It also can deliver the variety of management capabilities (provisioning, monitoring, workflow orchestration, tracking/metering resource usage) that Wachs identifies as essential. More importantly, all this management must be automated. Here again, the z already has the right tools in Systems Director and the Tivoli suite of management tools, along with the Unified Resource Manager for the hardware.

Properly managed and automated private clouds can enable efficient self-service, on-demand IT provisioning. Requested resources, ideally, can be selected from a catalog with the click of a browser and, after automatic governance review, materialize in the private cloud properly configured and ready for use within hours if not minutes.
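The catalog-plus-governance flow just described can be sketched in a few lines. Everything here (the catalog entries, the quota rule, the function names) is hypothetical and illustrative, not any specific IBM provisioning product's API:

```python
# Hypothetical sketch of catalog-based self-service provisioning: a user picks
# an offering from a catalog, an automated governance check approves or rejects
# the request, and an approved request yields a configured virtual server.

CATALOG = {
    "linux-small": {"vcpus": 1, "memory_gb": 4},
    "linux-large": {"vcpus": 4, "memory_gb": 16},
}

QUOTA_VCPUS = 8  # governance rule: total vCPUs a single user may hold

def provision(user_usage: int, offering: str) -> dict:
    """Validate a catalog request against governance rules, then 'provision' it."""
    if offering not in CATALOG:
        raise ValueError(f"{offering} is not in the catalog")
    spec = CATALOG[offering]
    if user_usage + spec["vcpus"] > QUOTA_VCPUS:   # automated governance review
        raise PermissionError("request exceeds vCPU quota")
    return {"offering": offering, **spec, "state": "running"}

server = provision(user_usage=2, offering="linux-large")
print(server["state"])  # running
```

In a real private cloud the final step would call out to the virtualization layer (z/VM, in the System z case) rather than return a dictionary, but the request-validate-provision shape is the same.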

In a recent report, IDC observes that private clouds present an opportunity to accelerate the shift to this kind of more automated, self-service form of computing. It not only enables organizations to reduce costs and boost IT utilization but to better match the IT resources provisioning process with the speed at which businesses need to move these days. Click here and scroll down to access the IDC report.

When Adkins talks about Smarter Computing, he’s not talking only about System z and zEnterprise private clouds. That is, however, a good place to start with Smarter Computing, and when lower cost zEnterprise machines roll out later this year even more organizations will be able to join this party.

BMC Tools for DB2 10 Drive z/OS Savings

April 25, 2011

This month BMC announced the upgrading of 23 tools for managing DB2 10 databases running on System z9, z10, and zEnterprise/z196.  When IBM introduced DB2 10 in 2010 it implied the database would reduce costs and optimize performance. Certainly running it on the z10 or the z196 with the latest zIIP engine would do both, but BMC’s updated tools make it easier to capture and expand those benefits.

IBM estimated a 5-10% improvement in CPU performance out of the box. BMC’s solutions for DB2 10 for z/OS will help IT organizations further maximize cost savings as well as enhance the performance of their applications and databases, by as much as 20%, if you deploy using the upgraded tools.

These DB2 improvements, which IBM refers to as operational efficiencies, revolve mainly around reducing CPU usage. This is possible because, as IBM explains it, DB2 10 optimizes processor times and memory access, leveraging the latest processor improvements, increased memory, and z/OS enhancements. Improved scalability and a reduced virtual storage constraint add to the savings. Continued productivity improvements for database and systems administrators can drive even more savings.

The key to the improvements may lie in your ability to fully leverage the zIIP assist processor. The zIIP co-processors take over some of the processing from the main CPU, saving money for those organizations that pay for their systems by MIPS (million instructions per second).

When IBM introduced version 10 of DB2 for z/OS in 2010, it promised customers that upgrading to this version would boost performance due to DB2’s use of these co-processors. Even greater gains in performance would be possible if the customer was also willing to do some fine-tuning of the system. This is where the new BMC tools come in; some of the tools specifically optimize the use of the zIIP co-processors.

Some of BMC’s enhanced capabilities help offload the DB2 workload to the zIIP environment thereby reducing general purpose processor utilization. The amount of processing offloaded to zIIP engines varies. With the new release, for example, up to 80 percent of the data collection work for BMC SQL Performance for DB2 can be offloaded.

The BMC tools also help companies tune application and database performance in other ways that increase efficiency and lower cost. For example, BMC’s SQL Performance Workload Compare Advisor and Workload Index Advisor detect performance issues associated with changes in DB2 environments. Administrators can see the impact of changes before they are implemented, thereby avoiding performance problems.

An early adopter of BMC’s new DB2 10 tools is Florida Hospital, based in Orlando. The hospital, with seven campuses, considers itself the largest hospital in the US, and relies on DB2 running on a z10 to support dozens of clinical and administrative applications. The hospital currently runs a mix of DB2 8 and DB2 10, although it expects to be all DB2 10 within a year.

Of particular value to the hospital is DB2 10 support for temporal data, snapshots of data that let you see how data changes over time. This makes DB2 10 particularly valuable for answering time-oriented questions. Based on that capability, the hospital is deploying a second instance of DB2 10 for its data warehouse, for which it also will take full advantage of BMC’s SQL performance monitoring tools.
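The temporal idea is easy to see in miniature. The sketch below simulates in plain Python what DB2 10's system-period temporal support does in SQL: every change keeps the prior row version with its validity interval, so you can ask what a record looked like on a given date. The table, IDs, and ward names here are invented for illustration.

```python
from datetime import date

# Each tuple is one row version: (patient_id, ward, valid_from, valid_to).
# DB2 maintains such history automatically; here we hand-build it.
history = [
    (101, "ICU",      date(2010, 1, 1), date(2010, 6, 1)),
    (101, "Recovery", date(2010, 6, 1), date(9999, 12, 31)),
]

def as_of(rows, patient_id, when):
    """Return the value that was current on the given date, analogous
    in spirit to a temporal AS OF query against a history table."""
    for pid, ward, start, end in rows:
        if pid == patient_id and start <= when < end:
            return ward
    return None

print(as_of(history, 101, date(2010, 3, 15)))  # ICU
print(as_of(history, 101, date(2011, 1, 1)))   # Recovery
```

A data warehouse gets the same benefit at scale: time-oriented questions become simple lookups against versioned history rather than custom audit-trail logic.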

But the crowning achievement of the hospital’s data warehouse, says Robert Goodman, lead DBA at Florida Hospital, will be the deployment of IBM’s Smart Analytics Optimizer (SAO) with DB2 10 and the data warehouse. The SAO runs queries in a massively parallel in-memory infrastructure that bolts onto the z10 to deliver extremely fast performance. Watch for more details coming on this development.

DancingDinosuar doesn’t usually look at tool upgrades, but DB2 10, especially when combined with the updated BMC tools, promises to be a game changer. That certainly appears to be the case at Florida Hospital, even before it adds SAO capabilities.

System z (Mainframe) Census

April 11, 2011

The LinkedIn group, IBM Mainframe, has been cheerleading an effort to assemble a list of all organizations with an IBM System z. Here is what they have come up with so far.

This takes the form of a wiki, so readers can add mainframes they are aware of that have not been included. Joe Cotton at the LinkedIn IBM Mainframe group explains how it works if you want to register and add names to the list. The process is pretty straightforward; the instructions are linked in the left column of the wiki page. By the way, you can find me there as, what else, dancing dinosaur.

The list is far from comprehensive. A quick glance immediately reveals a number of widely recognized mainframe shops missing from the list. The hope must be that others will participate in the wiki and add names they know should be included. In that way, the list can grow and become more comprehensive. There probably are about 3000 mainframe shops today, so this list has a long way to go, but it is a good start. Appreciative thanks to everyone who contributed to this effort.

In the past, IBM generally has played coy when it came to identifying customers. The company, however, has gotten better at it in recent years as it battles to counter the mainframe-is-dead FUD. Of course, the best way to do that is to show successful companies using mainframes and growing their mainframe footprints.

For example, two years ago, mainframes were being pushed out of Canada’s Department of National Defence (DND). Nobody had any complaints about the mainframes but interest simply had shifted to the distributed world, which seemed to be where the action was heading. A small dedicated mainframe group, however, thought this was a bad idea and made a compelling case for the mainframe.

This fiscal year, the DND mainframe team received funding for unprecedented mainframe growth, including virtualized Linux, model upgrades, increased redundancy and the upcoming purchase of a fourth mainframe. The DND now has a new z196 and is expecting a zBX imminently and intends, before the end of this year, to order two more z196 machines and two more zBX devices as it upgrades its existing z10 machines to run mixed z/OS, Linux, and AIX workloads. Independent Assessment recently completed the DND case study and will be posting a link to it shortly.

The initial reason to compile the wikidot list of mainframe shops, apparently, was to create a resource for mainframe people looking for jobs. Who is more likely to hire unemployed mainframers than shops that have a mainframe? Still, you can understand IBM’s reticence in revealing customers.

The real growth for the mainframe will come from new workloads. Companies will turn to the hybrid zEnterprise (z196, zBX) for the same reason as the DND—to host and manage multi-platform workloads as a single virtual consolidated system managed by the System z. Others will be looking to run IBM’s specialized software, such as the Smart Analytics Optimizer for DB2 10. DancingDinosaur will be taking up one company’s plans for the Smart Analytics Optimizer for DB2 10 on its z10 for its data warehouse soon—yet another new kind of workload for the System z.


z10 Delivers IBM’s Project Green and ECM

February 22, 2011

Virtualization is the key to IBM’s Enterprise Computing Model (ECM), and the System z is very good at that. The z196 should be even better, but when IBM set out several years ago to eliminate thousands of servers through its Project Green initiative, which exemplifies ECM, the z10 was the best play it had.

IBM’s ECM revolves around virtualization and consolidation, IT service management, and cloud computing. The combination of these is intended to transform the business. To date, the best example of ECM in action may be IBM’s own Project Green initiative, a mainframe case study if ever there was one.

In September 2007, Robert Braddock, VP for IBM’s Global Infrastructure Services and in charge of IBM’s own IT infrastructure account, presented the company’s efforts as a major transformation initiative with the goal, at that time, of consolidating 3900 servers onto approximately 30 System z machines. The consolidated systems would reduce energy consumption by 80%, lower software licensing and support costs, and reduce the amount of real estate required.

From 1997 until 2007, IBM had reduced the number of CIOs from 128 to 1, host data centers from 155 to 7, web hosting centers from 80 to 5, and 15,000 applications to 4700. Many of those consolidated workloads were moved to Linux on z. The savings, according to Braddock, amounted to $1 billion each year.

Jump forward to Feb. 2011 when John Adams, IBM’s ECM and Cloud implementation leader, updated the Project Green effort. By then IBM reported eliminating 5000 servers through virtualization and consolidation. Of those, 3900 servers are running as virtual Linux on z. The effort has saved 20k MWh of energy, enough, says Adams, to power a small town for a year, and freed up 47,000 sq. ft. of floor space.

In 2009, IBM had internal IT workloads running at 377 sites. It has consolidated 15% of those to date. The goal is to get down to just three global strategic IT sites, 19 special purpose sites, and 27 Notes sites. Although the bulk of the workloads are going to the System z, some will run on IBM’s p and x systems based on fit-for-purpose and service requirements determinations.

One IBM platform apparently not in the mix, at least not yet, is the z196. This has been primarily a z10, Linux on z play. A z196 team member told DancingDinosaur a few months ago that IBM was running a z196 for testing and development but hadn’t yet put one into production use. Based on the performance and capacity specs already published on the z196, it should be able to virtualize even more workloads. When connected to the zBX with Power and x blades, it should do even better and at lower cost per workload.

Still, if you look at IBM’s Project Green and its ECM as a z10 case study, the results are impressive. More important, however, are the lessons an organization can learn from IBM’s results. The first is which consolidation strategy to follow: location, environment, application, or technology.

Most organizations will decide based on the environment, application, or technology involved. For example, you might consolidate all distributed Linux systems on the z or consolidate all web hosting on the z. IBM used these strategies at various times.

But the location strategy delivered the biggest payback. The other strategies produce what Adams refers to as Swiss cheese racks where some systems have been removed but others remain, leaving the entire rack in place.

However, if you target a location and consolidate, virtualize, or eliminate every system there, you can remove entire racks, freeing up large amounts of floor space. Then just return the leased space and shut off the power. The lesson: you don’t get as big a payback consolidating and virtualizing servers in onesies and twosies.

Another lesson is the way IBM deployed virtualized software to get the biggest savings. By deploying Linux on z under z/VM, it paid a license fee for each LPAR, but each guest running the software within the LPAR ran at no additional charge. The lesson here: it pays to plan which virtualized software you deploy under which LPAR.
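The planning lesson is worth working through with numbers. The sketch below assumes the simplified model described above (one license per LPAR that hosts the product, guests free); the LPAR names, guest counts, and fee are all hypothetical.

```python
# Hedged sketch of the LPAR licensing lesson. Under this simplified
# model, a product is licensed once per LPAR and every z/VM guest
# inside that LPAR runs it at no extra charge.

def license_cost(layout, fee_per_lpar):
    """layout maps LPAR name -> number of guests running the product.
    Cost depends only on how many LPARs host it, not on guest count."""
    return sum(fee_per_lpar for guests in layout.values() if guests > 0)

# Ten guests scattered across four LPARs vs. consolidated into one:
scattered    = {"LPAR1": 2,  "LPAR2": 3, "LPAR3": 1, "LPAR4": 4}
consolidated = {"LPAR1": 10, "LPAR2": 0, "LPAR3": 0, "LPAR4": 0}

print(license_cost(scattered, 25_000))     # 100000 (four licenses)
print(license_cost(consolidated, 25_000))  # 25000 (one license)
```

Same ten guests, a fourfold difference in license spend, which is exactly why placement planning pays off.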

There is more IT shops can learn from IBM’s Project Green and ECM beyond how best to go about consolidating and virtualizing on z, p, and x.

