Posts Tagged ‘Smart Analytics Optimizer’

New Workloads for the zEnterprise

July 18, 2011

Even before the introduction of the z196 a year ago, IBM had been steadily promoting new mainframe workloads. With the introduction of the zEnterprise, consisting of a z196 and an attached zBX, the hybrid mainframe became real and with it the possibility of running and managing truly new workloads through the z.

Obvious new workloads would be AIX workloads previously running on Power Systems servers, but these aren’t truly new to the organization, only new to the z. The adoption of the Smart Analytics Optimizer, as Florida Hospital plans to do for medical research analytics, is a truly new application for the hospital and for the z.

The introduction of the z114 opens up the potential for new workloads on the z, due mainly to its lower cost; entry pricing starts at $75,000. This lowers the risk of testing new workloads on the z. For example, would an organization now be more willing to try BI against production data residing on the z as a new workload if it could get a discounted price? It could, of course, run BI on a slew of Intel servers for less, but it would give up the proximity of its data and the potential for near real-time BI.

As recently as this past March, Marie Wieck, General Manager of IBM Application and Integration Middleware, made the new workloads case in a presentation titled New Workload and New Strategic Thinking for z. In that presentation she identified five categories of new workloads she deemed strategic. They are:

  1. Business Intelligence (BI) and Analytics
  2. Virtualization and Optimization
  3. Risk Management and Compliance
  4. Business Process Management (BPM)
  5. Cloud computing

None of these, with the possible exception of Cloud, are new workloads. Organizations have been running these workloads for years, just on other platforms. But yes, they certainly are not the traditional System z workloads, which typically revolve around CICS transaction processing, OLTP, and production database management.

DancingDinosaur would like to suggest some other new workload areas for the z, especially if you can grab a deeply discounted z114 through the Solution Edition program. And since they are new workloads, they should automatically qualify for the discount program.

The first would be Linux development and testing using a deeply discounted enterprise Linux Solution Edition for the z114. Developers could put up and take down servers at will, run gobs of test data through them, and the machine wouldn't break a sweat.

Another should be SOA. Enterprise Linux combined with CICS access to production data should be a ripe area for new service-oriented, web-based workloads. You could even pull in smartphones and tablets as access devices.

Finally, there should be much that organizations could do in terms of new workloads using Java, WebSphere, SAP, and even Lotus on the z114. Here too, there will likely need to be Solution Edition discount programs available to reduce costs even more.

And then there are the x blades for the zBX and the imminent arrival of Windows on those x blades. That has the potential to open maybe the largest set yet of new workloads for the z.

The big obstacle to new workloads on the z is that these workloads already are running in some form on other platforms in the organization. So, what the z gains, the other platforms and the teams that support them lose. That’s a difficult political battle to fight, and the best way to win is to offer an unbeatable z business case. Even with the z114, IBM isn’t there yet.

To get there, IBM has to add the last piece missing from the new workloads picture painted above—a System z Solution Edition discount program that also includes a deeply discounted zBX. That could prove irresistible to organizations otherwise contemplating new System z workloads.

Dynamic Data Warehousing and System z

June 20, 2011

Data warehousing should be an ideal workload for the System z. It already houses the production data that mostly populates the data warehouse. It can run Cognos on Linux on z for BI and with a zEnterprise (z196 and zBX) it can run the Smart Analytics Optimizer, either as a zBX blade or as an appliance. And do it all with scalability, reliability, and performance.

But IBM is moving beyond conventional data warehousing, which entails an enterprise data store surrounded by myriad special-purpose data marts. Data warehousing as it is mainly practiced today in the distributed environment is too complex, too difficult to deploy, requires too much tuning, and is too inefficient when it comes to bringing in analytics, which delays delivering the answers business managers need. And without fast analytics, well, what's the point? In addition, such data warehousing requires too many people to maintain and administer, which makes it too costly.

On top of these problems, the world of data has changed dramatically since organizations began building enterprise data warehouses. Now a data warehouse should accommodate new types of data and rapidly changing forms of data.

IBM's recommendation: evolve the traditional enterprise data warehouse into what it calls the enterprise data hub. This will entail consolidating the infrastructure and reducing the data mart sprawl. It also will simplify analytics, mainly by deploying analytics appliances like IBM's Netezza. Finally, organizations will need to adopt data governance and lifecycle management, probably through automated policy-based controls. The result should be better information faster, delivered in a more flexible and cost-effective way.

Ultimately, IBM wants to see organizations evolve the enterprise data warehouse into an enterprise data hub with a variety of BI and analytics engines connected to it, along with engines tuned for analyzing streamed data and the vast amounts of unstructured data Hadoop has been shown to be particularly good at. DancingDinosaur wrote about Hadoop on the z196 back in November.

The payback from all of this, according to IBM, will be increased enterprise agility and faster deployment of analytics, which should result in increased business performance. The consolidated enterprise data warehouse also should lower TCO and speed time to value for both the data warehouse and analytics. All desirable things, no doubt, but for many organizations this will require a gradual process and a significant investment in new tools and technologies, from appliances to analytics.

Case in point is Florida Hospital, Orlando, which deployed a z10 with DB2 10, which provides enhanced temporal data capabilities, with a primary goal of converting its 15 years of clinical patient data into an analytical data warehouse for use in leading edge medical and genetics research. DancingDinosaur referenced the hospital’s plans recently.

The hospital's plan calls for getting the data up and running on DB2 10 this year and attaching the Smart Analytics Optimizer as an appliance connected to the z10 in Q1 2012. Then it can begin cranking up the research analytics. Top management has bought into this plan for now, but a lot can change in the next year, the earliest the first fruits of the hospital's z-based analytical medical data exploration are likely to hit.

IBM does not envision the enterprise data hub exclusively as a System z effort. To the contrary, its Power platform is as likely to be the preferred platform as any. Still, a zEnterprise loaded with Smart Analytics Optimizer blades might make a pretty good choice too. Florida Hospital probably would have gone with the z196 if it had known the machine was coming when it was upgrading from the z9 to the z10.

The point here: existing data warehouses probably are obsolete. In a recent IBM study, half the business managers complained that they don't have the information they need to do their jobs, and 60% of CEOs admitted they need to do a better job of capturing and understanding information rapidly in order to make swift business decisions. That should be a signal to evolve your existing data warehouse into an enterprise data hub now, and the z you have sitting there is just the vehicle for doing it.

BMC Tools for DB2 10 Drive z/OS Savings

April 25, 2011

This month BMC announced upgrades to 23 tools for managing DB2 10 databases running on System z9, z10, and zEnterprise/z196. When IBM introduced DB2 10 in 2010 it implied the database would reduce costs and optimize performance. Certainly running it on the z10 or the z196 with the latest zIIP engine would do both, but BMC's updated tools make it easier to capture and expand those benefits.

IBM estimated a 5-10% improvement in CPU performance out of the box. BMC says its solutions for DB2 10 for z/OS help IT organizations maximize cost savings further and enhance the performance of their applications and databases, by as much as 20% when deployed with the upgraded tools.

These DB2 improvements, which IBM refers to as operational efficiencies, revolve mainly around reducing CPU usage. This is possible because, as IBM explains it, DB2 10 optimizes processor times and memory access, leveraging the latest processor improvements, increased memory, and z/OS enhancements. Improved scalability and a reduced virtual storage constraint add to the savings. Continued productivity improvements for database and systems administrators can drive even more savings.

The key to the improvements may lie in your ability to fully leverage the zIIP assist processor. The zIIP co-processors take over some of the processing from the main CPU, saving money for those organizations that pay for their systems by MIPS (million instructions per second).

When IBM introduced version 10 of DB2 for z/OS in 2010, it promised customers that upgrading to this version would boost performance due to DB2's use of these co-processors. Even greater gains in performance would be possible if the customer were also willing to do some fine-tuning of the system. This is where the new BMC tools come in; some of the tools specifically optimize the use of the zIIP co-processors.

Some of BMC’s enhanced capabilities help offload the DB2 workload to the zIIP environment thereby reducing general purpose processor utilization. The amount of processing offloaded to zIIP engines varies. With the new release, for example, up to 80 percent of the data collection work for BMC SQL Performance for DB2 can be offloaded.
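The savings arithmetic behind that offloading is straightforward. Here is a minimal sketch in Python; the workload figures are hypothetical, chosen only to illustrate how moving eligible work off chargeable general-purpose (GP) engines reduces MIPS-based costs, not measured numbers from BMC or IBM.

```python
def gp_usage_after_offload(gp_mips: float, eligible_share: float,
                           offload_rate: float) -> float:
    """General-purpose MIPS still consumed after zIIP offload.

    gp_mips        -- total MIPS the workload uses on GP processors
    eligible_share -- fraction of that work eligible for zIIP (0..1)
    offload_rate   -- fraction of eligible work actually offloaded (0..1)
    """
    offloaded = gp_mips * eligible_share * offload_rate
    return gp_mips - offloaded

# Hypothetical example: a 500-MIPS data-collection workload, all of it
# zIIP-eligible, with 80% actually offloaded (the figure cited above).
remaining = gp_usage_after_offload(500.0, 1.0, 0.80)
print(remaining)  # 100.0 MIPS left on chargeable GP engines
```

Since zIIP engine capacity is priced differently from GP capacity, every MIPS moved off the GP engines is a direct reduction in the usage-based software bill.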

The BMC tools also help companies tune application and database performance in other ways that increase efficiency and lower cost. For example, BMC’s SQL Performance Workload Compare Advisor and Workload Index Advisor detect performance issues associated with changes in DB2 environments. Administrators can see the impact of changes before they are implemented, thereby avoiding performance problems.

An early adopter of BMC’s new DB2 10 tools is Florida Hospital, based in Orlando. The hospital, with seven campuses, considers itself the largest hospital in the US, and relies on DB2 running on a z10 to support dozens of clinical and administrative applications. The hospital currently runs a mix of DB2 8 and DB2 10, although it expects to be all DB2 10 within a year.

Of particular value to the hospital is DB2 10 support for temporal data or snapshots of data that let you see data changes over time. This makes it particularly valuable in answering time-oriented questions. Based on that capability, the hospital is deploying a second instance of DB2 10 for its data warehouse, for which it also will take full advantage of BMC’s SQL performance monitoring tools.
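To see why temporal data matters for time-oriented questions, here is a minimal sketch of the system-time ("as of") idea behind DB2 10's temporal support, modeled in plain Python. The record structure and the diagnosis-code values are invented for illustration; in DB2 10 itself this versioning is handled natively by the database rather than by application code.

```python
from dataclasses import dataclass

@dataclass
class Version:
    value: str
    begin: int   # inclusive start of validity (e.g. an epoch day)
    end: int     # exclusive end; a large sentinel means "still current"

class TemporalRecord:
    """Keeps every historical value of a field so past states can be queried."""
    SENTINEL = 10**9

    def __init__(self):
        self.versions: list[Version] = []

    def update(self, value: str, at: int) -> None:
        if self.versions:
            self.versions[-1].end = at   # close out the previous version
        self.versions.append(Version(value, at, self.SENTINEL))

    def as_of(self, at: int):
        """Return the value that was in effect at the given time, if any."""
        for v in self.versions:
            if v.begin <= at < v.end:
                return v.value
        return None

# A hypothetical patient diagnosis code changing over time:
rec = TemporalRecord()
rec.update("J45.20", at=100)   # initial code recorded on day 100
rec.update("J45.50", at=250)   # code revised on day 250
print(rec.as_of(120))  # J45.20 -- what the record said back on day 120
print(rec.as_of(300))  # J45.50 -- the current value
```

The "as of" query is exactly the kind of time-oriented question the hospital's analytical data warehouse needs to answer across 15 years of clinical data, without the application having to maintain the history itself.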

But the crowning achievement of the hospital’s data warehouse, says Robert Goodman, lead DBA at Florida Hospital, will be the deployment of IBM’s Smart Analytics Optimizer (SAO) with DB2 10 and the data warehouse. The SAO runs queries in a massively parallel in-memory infrastructure that bolts onto the z10 to deliver extremely fast performance. Watch for more details coming on this development.

DancingDinosaur doesn't usually look at tool upgrades, but DB2 10, especially when combined with the updated BMC tools, promises to be a game changer. That certainly appears to be the case at Florida Hospital, even before it adds SAO capabilities.

System z (Mainframe) Census

April 11, 2011

The LinkedIn group, IBM Mainframe, has been cheerleading an effort to assemble a list of all organizations with an IBM System z. Here is what they have come up with so far.

This takes the form of a wiki, so readers can add mainframes they are aware of that have not been included. Joe Cotton at the LinkedIn IBM Mainframe group explains how it works if you want to register and add names to the list. The process is pretty straightforward; the instructions are linked in the left column of the wiki page. By the way, you can find me there as, what else, dancing dinosaur.

The list is far from comprehensive. A quick glance immediately reveals a number of widely recognized mainframe shops missing from the list. The hope must be that others will participate in the wiki and add names they know should be included. In that way the list can grow and become more comprehensive. There probably are about 3,000 mainframe shops today, so this list has a long way to go, but it is a good start. Appreciative thanks to everyone who contributed to this effort.

In the past, IBM generally has played coy when it came to identifying customers. The company, however, has gotten better at it in recent years as it battles to counter the mainframe-is-dead FUD. Of course, the best way to do that is to show successful companies using mainframes and growing their mainframe footprints.

For example, two years ago, mainframes were being pushed out of Canada’s Department of National Defence (DND). Nobody had any complaints about the mainframes but interest simply had shifted to the distributed world, which seemed to be where the action was heading. A small dedicated mainframe group, however, thought this was a bad idea and made a compelling case for the mainframe.

This fiscal year, the DND mainframe team received funding for unprecedented mainframe growth, including virtualized Linux, model upgrades, increased redundancy and the upcoming purchase of a fourth mainframe. The DND now has a new z196 and is expecting a zBX imminently and intends, before the end of this year, to order two more z196 machines and two more zBX devices as it upgrades its existing z10 machines to run mixed z/OS, Linux, and AIX workloads. Independent Assessment recently completed the DND case study and will be posting a link to it shortly.

The initial reason to compile the wikidot list of mainframe shops, apparently, was to create a resource for mainframe people looking for jobs. Who is more likely to hire unemployed mainframers than shops that have a mainframe? Still, you can understand IBM’s reticence in revealing customers.

The real growth for the mainframe will come from new workloads. Companies will turn to the hybrid zEnterprise (z196, zBX) for the same reason as the DND—to host and manage multi-platform workloads as a single virtual consolidated system managed by the System z. Others will be looking to run IBM’s specialized software, such as the Smart Analytics Optimizer for DB2 10. DancingDinosaur will be taking up one company’s plans for the Smart Analytics Optimizer for DB2 10 on its z10 for its data warehouse soon—yet another new kind of workload for the System z.
