Posts Tagged ‘BMC’

Latest BMC Mainframe Survey Points to Bright System z Future

September 27, 2013

BMC Software released its 8th annual mainframe survey, and the results shouldn’t surprise any readers of DancingDinosaur. Get a copy of the results here. The company surveyed over 1000 managers and executives at mainframe shops around the world, mostly BMC customers. Nor should you be surprised at how remarkably traditional the respondents’ attitudes toward the mainframe are.

For example, of the new areas identified by IBM as hot—mobile, cloud, big data, social business—cloud, big data, and mobile barely registered and social was nowhere to be seen.  Cloud was listed as one of the top four priorities by 19% of the respondents. Big data was listed as one of the top priorities for the coming year by only 18% of the respondents, the same as mobile.  The only topic that was less of a priority was outsourcing at 15%.

So what were the main priorities? The top four:

  • IT cost reduction—85% of respondents
  • Application availability—66%
  • Business/IT alignment—50%
  • Application modernization—50%

Where the researchers did drill down into one of the new areas of focus, big data, the largest group of respondents, 31%, identified the business use case as their biggest challenge. Other challenges were the cost of transforming/loading mainframe data into a centralized data warehouse (24%), followed by the effort such a transformation requires (20%). Another 11% noted the lack of ETL tools for business analytics. Ten percent cited lack of knowledge about mainframe data content—huh? That might have been the one thing DancingDinosaur found truly surprising, although without knowing the respondents’ specific job titles or descriptions it might not be so surprising after all.

When it came to big data, 28% of the respondents expected to move mainframe data off the mainframe for analytics. An almost equal number (27%) expected the mainframe to act as the big data analytic engine.  Another 12% reported federating data to an off platform analytics engine. Three percent reported Linux on z for hosting the unstructured data.

Moving data off the mainframe for big data analytics can be a slow and costly strategy. One of the benefits of doing big data on the System z or the hybrid zEnterprise/zBX is taking advantage of the proximity of the data. Moving petabytes or even terabytes of data is not a trivial undertaking. For all the hype it’s clear that big data as a business strategy is still in its infancy with much left to be learned.  It will be interesting to see what this survey turns up a few years from now.

Otherwise, the survey results are very supportive of those fighting the seemingly perpetual battle against the notion of the mainframe as an end-of-life technology. Almost all the respondents (93%) considered the mainframe a long-term business strategy, while almost half (49%) felt the mainframe will continue to grow and attract new workloads.

Some other tidbits from the survey:

  • 70% of respondents said the mainframe will have a key role in Big Data plans.
  • 76% of large shops expect MIPS capacity to grow as they modernize and add applications to address business needs. (This highlights the need for software that minimizes expensive MIPS consumption and exploits the mainframe’s cost-efficient specialty engines.)

No large shops, anyway—and only 7% of all respondents—have plans to eliminate their mainframe environment. Glad it’s not worse.

Lastly, there still is time to register for IBM’s Enterprise 2013 conference in Orlando. It will combine the System z and Power Systems technical universities with an Executive Summit. The session programs already are out for the System z and Power Systems tracks. Check out the System z overview here and the Power Systems overview here. DancingDinosaur will be there. In the coming weeks this blog will look more closely at some intriguing sessions.

BTW, please follow DancingDinosaur under its new name on Twitter, @mainframeblog

Free Stuff Lowers Mainframe Costs

September 9, 2013

Timothy Sipples’ blog, Mainframe, recently ran a piece listing free software you can get for the System z. The piece, here, is pretty exhaustive and includes direct links. In keeping with DancingDinosaur’s continuing search for ways to lower mainframe computing costs, some of Sipples’ freebies are listed below; find more in his full piece.

While freebies are always welcome, another proven way to lower mainframe costs is to pay close attention to mainframe software usage and costs. For this, BMC introduced a mainframe software cost analyzer tool that promises to reduce mainframe software operating costs by 20%. Called the BMC Cost Analyzer for zEnterprise, it aims to help IT departments plan, report, and reduce their monthly license charges (MLC) by identifying system peaks and recommending preemptive cost-reduction strategies. After deploying the BMC Cost Analyzer, a typical customer consuming 5,000 MIPS, at an annual cost of $3.6 million, could save $720,000 or more, according to the company.
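
As a sanity check on those numbers, here is a back-of-the-envelope sketch in Python. The 5,000 MIPS, $3.6 million, and 20% figures come straight from BMC's example; the per-MIPS annual cost derived from them is an assumption of this sketch, not a published rate.

```python
# Back-of-the-envelope check on BMC's example. The per-MIPS annual cost
# is derived from the company's own figures, not a published price.
ANNUAL_COST_PER_MIPS = 3_600_000 / 5_000  # $720/MIPS/year in BMC's example

def projected_savings(mips: int, reduction: float = 0.20) -> float:
    """Annual savings if MLC-driven costs drop by the claimed fraction."""
    return mips * ANNUAL_COST_PER_MIPS * reduction

print(projected_savings(5_000))   # 720000.0 -- matches BMC's $720,000 claim
print(projected_savings(12_000))  # 1728000.0 -- scales linearly if the claim holds
```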

Linux, an open source operating system licensed under the GNU General Public License (GPL), is the first place Sipples looks for mainframe freebies. The GPL means you don’t have to pay a license fee to obtain and use Linux; Linux distributors like Novell and Red Hat, however, do charge fees for their optional support services. Sipples’ Linux-on-z freebies are detailed in his full piece.

Sipples’ list of IBM Freebies for z/OS, z/TPF, z/VSE, and z/VM

  • IBM makes its Java Software Development Kit (SDK) releases for z/OS available at no additional charge. You may also be interested in the Java technologies available from Dovetailed, such as Co:Z and Tomcat for z/OS.
  • The DB2 Accessories Suite for z/OS (5697-Q02) includes many useful tools and accessories to make DB2 for z/OS more powerful and more useful. Examples include IBM Data Studio, SPSS Modeler Server Scoring Adapter, Spatial Support, International Components for Unicode, and Text Search.
  • Go graphical! You can manage your z/OS system much more easily with graphical interfaces for every major subsystem and component. Grab the z/OS Management Facility. From the Explorer for z/OS you can then use (or directly install) plug-ins for CICS, IMS, IBM’s application development tools, IBM’s problem determination tools, and other products.
  • The XML Toolkit for z/OS (5655-J51) adds to the XML System Services that’s already part of z/OS.
  • IBM offers many more z/OS-related downloads including the IBM Encryption Facility for z/OS Client, Logrec Viewer, LookAt, z/OS UNIX System Service Tools and Toys, and many others.
  • Be sure to install the Alternate Library for REXX to run compiled REXX programs on your z/OS and z/VM systems. Compiled programs won’t run as efficiently as when the regular licensed REXX library is installed, but at least they’ll run.
  • The z/OS Ported Tools (5655-M23) include OpenSSH, IBM HTTP Server, and many other useful products. (Rocket Software also offers several ported tools.)

BMC conducts an annual survey of z data centers. This year, as in previous years, cost concerns were the number one issue.  The number two concern was business availability. The full survey will be released at the end of this month when DancingDinosaur expects to cover it in a little more detail.

Next Generation zEnterprise Developers

April 19, 2013

Mainframe development keeps getting more complicated. The latest complication can be seen in Doug Balog’s reference to mobile and social business on the zEnterprise, reported by DancingDinosaur here a few weeks ago. That is what the next generation of z developers faces.

Forget talk about shortages of System z talent due to the retirement of mainframe veterans. The bigger complication comes from the need for non-traditional mainframe development skills required to take advantage of mobile and social business, as well as other recent areas of interest such as big data and analytics. These areas entail combining new skills like JSON, Atom, REST, Hadoop, Java, SOA, Linux, and hybrid computing with traditional mainframe development skills like CICS, COBOL, z/VM, SQL, VSAM, and IMS. This combination is next to impossible to find in one individual. Even assembling a coherent team encompassing all those skills presents a serious challenge.

The mainframe industry has been scrambling to address this in various ways. CA Technologies added GUIs to its various tools, and BMC has similarly modernized its various management and DB2 tools. IBM, of course, has been steadily bolstering the Rational RDz tool set. RDz is an Eclipse-based IDE for z/OS development. RDz streamlines and refactors z/OS development processes into structured analysis, editing, and testing operations with modern GUI tools, wizards, and menus that, IBM notes, are perfect for new-to-the-mainframe twenty- and thirty-something developers, the next generation of z developers.

Compuware brings its mainframe workbench, described as a modernized interactive developer environment that introduces a new graphical user interface for managing mainframe application development activities.  The interactive toolset addresses every phase of the application lifecycle.

Most recently, Micro Focus announced the release of its new Enterprise Developer for IBM zEnterprise. The product enables customers to optimize all aspects of mainframe application delivery and promises to drive down costs, increase productivity, and accelerate innovation. Specifically, it enables both on- and off-mainframe development, the latter without consuming mainframe resources, to provide a flexible approach to the delivery of new business functions. In addition, it allows full and flexible customization of the IDE to support unique development processes and provides deep integration with mainframe configuration management and tooling for a more comprehensive development environment. It also boasts improved application quality, with measurable improvement in delivery times. Together these capabilities promise faster developer adoption.

Said Greg Lotko, Vice President and Business Line Executive, IBM System z, about the new Micro Focus offering: "We are continually working with our technology partners to help our clients maximize the value in their IBM mainframes, and this latest innovation from Micro Focus is a great example of that commitment."

Behind all of this development innovation is an industry effort to cultivate the next generation of mainframe developers. Using a combination of trusted technology (COBOL and the mainframe) and new innovation (zEnterprise, hybrid computing, expert systems, and Eclipse), these new developers, having been raised on GUIs, mobile, and social, can leverage what they learned growing up to build the multi-platform, multi-device mainframe applications that organizations will need going forward.

As these people come on board as mainframe-enabled developers, organizations will have more confidence in continuing to invest in their mainframe software assets, which currently amount to an estimated 200-300 billion lines of source code and may even be growing as mainframes are added in developing markets, which IBM considers a growth opportunity. It only makes sense to leverage this proven code base rather than try to replace it.

This was confirmed in a CA Technologies survey of mainframe users a year ago, which found that 1) the mainframe is playing an increasingly strategic role in managing the evolving needs of the enterprise; 2) the machine is viewed as an enabler of innovation as big data and cloud computing transform the face of enterprise IT—now add mobile; and 3) companies are seeking candidates with cross-disciplinary skill sets to fill critical mainframe workforce needs in the new enterprise IT thinking.

Similarly, a recent study by the Standish Group showed that 70 percent of CIOs saw their organizations’ mainframes as having a central and strategic role in their overall business success. Using the new tools noted above, organizations can maximize the value of the mainframe asset and cultivate the next generation of mainframe developers.

Getting the Payback from System z Outsourcing

February 1, 2013

A survey from Compuware Corporation on attitudes of CIOs toward mainframe outsourcing showed a significant level of dissatisfaction with one or another aspect of mainframe outsourcing. Check out the survey here.

Mainframe outsourcing has been a fixture of mainframe computing since the outset. The topic is particularly interesting in light of the piece DancingDinosaur posted on winning the talent war a couple of weeks ago. Organizations intending to succeed are scrambling to find and retain the talent they need for all their IT systems, mainframe and otherwise. In short, they need skills in all the new areas, like cloud computing, mobile access, and, most urgently, big data analytics. In addition, there is the ongoing need for Java, Linux, WebSphere, and CICS skills in growing System z data centers. The rise of z-based hybrid computing and expert integrated PureSystems to some extent broadens the potential talent pool while reducing the number of skilled experts required. Still, mainframe outsourcing remains a popular option.

The new Compuware survey found that reducing costs is a major driver for outsourcing mainframe application development, maintenance, and infrastructure. Yet the associated costs are frustrating 71% of CIOs. These costs result from increases in MIPS consumption, as well as higher investments in testing and troubleshooting due mainly to poor application quality and performance. In fact, two-thirds (67%) of respondents reported overall dissatisfaction with the quality of new applications or services provided by their outsourcer. The source of the problem: a widening in-house skills gap and difficulties with knowledge transfer and staff churn within outsource vendors.

Compuware has published a related white paper, Mainframe Outsourcing: Removing the Hidden Costs, which expands on the findings from the study. The company’s recommendations for removing those costs amount to reverse engineering the problems revealed in the survey. These include:

  • Utilize MIPS better
  • Explore pricing alternatives to CPU-based pricing
  • Improve the quality of new applications
  • Boost knowledge transfer between outsourcers and staff
  • Measure and improve code efficiency at the application level
  • Take advantage of baseline measurement to objectively analyze outsourcer performance (see the sketch after this list)
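
On that last point, a baseline is easy to construct. The sketch below is purely illustrative (Compuware prescribes no particular metric here), but CPU seconds per thousand transactions, captured before the contract starts, gives an objective yardstick for every period that follows.

```python
# Hypothetical baseline yardstick: CPU seconds per 1,000 transactions.
# The metric and all figures are illustrative, not from Compuware.

def cpu_per_kilo_txn(cpu_seconds: float, transactions: int) -> float:
    return cpu_seconds / (transactions / 1_000)

# Baseline captured before outsourcing begins:
BASELINE = cpu_per_kilo_txn(cpu_seconds=4_200.0, transactions=1_500_000)

def efficiency_drift(cpu_seconds: float, transactions: int) -> float:
    """Fractional change versus baseline; positive means less efficient code."""
    return (cpu_per_kilo_txn(cpu_seconds, transactions) - BASELINE) / BASELINE

# A period in which delivered code burns 15% more CPU per transaction
# shows up immediately, and objectively:
print(f"{efficiency_drift(cpu_seconds=4_830.0, transactions=1_500_000):+.1%}")
```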

The System z offers numerous tools to monitor and manage usage and efficiency, and vendors like Compuware, CA, BMC, and others bring even more.

The MIPS consumption problem is typical. As Compuware reports: mainframes are being used more than ever, meaning consumption is naturally on the rise. This is not a bad thing.

However, where consumption is escalating due to inefficient coding, it adds unnecessary costs. For example, MIPS costs are increasing on average by 21% year over year, with 40% of survey respondents claiming that consumption is getting out of control. Meanwhile, 88% of respondents using pay structures based on CPU consumption (approximately 42% of those surveyed) think their outsourcer could manage CPU costs better, and 57% of all respondents believe outsourcers do not worry about the efficiency of the applications they write.

New workloads also are driving costs. For example, 60% of survey respondents believe that the increase in applications like mobile banking is driving higher MIPS usage and creating additional costs. Just think what they’d report when big data analytic applications start kicking in, although some of this processing should be offloaded to assist processors.

The Compuware study is interesting and informative. Yes, outsourcers should be pressed to utilize MIPS more efficiently. At a minimum, they should shift workloads to assist processors, which have a lower cost per MIPS. Similarly, developers should be pressed to boost the efficiency of their code. But this will require an investment in tools to measure and benchmark that code, as well as in QA staff.

A bigger picture view, however, suggests that focusing just on MIPS is counterproductive. You want to encourage more workloads on the z, even if they use more MIPS, because the z can run at near 100% utilization and still perform reliably. Higher utilization translates into lower costs per workload. And with the cost per MIPS decreasing with each rev of the zEnterprise, the cost per workload keeps improving. Measure, monitor, and benchmark, and do whatever else you can to drive efficient operation, but aim to leverage the zEnterprise to the max for your best overall payback.
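
The utilization point can be made concrete with some simple, admittedly invented numbers: the same fixed platform cost spread over the work a box actually delivers at different safe utilization ceilings.

```python
# Illustrative only: fixed annual platform cost divided by the work
# actually delivered at different safe utilization ceilings.

def cost_per_workload_unit(annual_cost: float, capacity_units: float,
                           utilization: float) -> float:
    return annual_cost / (capacity_units * utilization)

# Hypothetical $3.6M platform with 5,000 units of usable capacity:
print(cost_per_workload_unit(3_600_000, 5_000, 0.40))  # 1800.0  (40% ceiling)
print(cost_per_workload_unit(3_600_000, 5_000, 0.95))  # ~757.89 (95% ceiling)
```

More than doubling the safe utilization ceiling cuts the cost of each unit of work by more than half, which is the whole argument for welcoming workloads, and their MIPS, onto the z.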

BMC Mainframe Survey Bolsters z-Hybrid Computing

September 27, 2012

For the seventh year, BMC conducted a survey of mainframe shops worldwide. Find a copy of the study here and a video explaining it here. Most of the results probably won’t surprise you:

  • 90% of respondents consider the mainframe to be a long-term solution, and 50% expect it will attract new workloads.
  • Keeping IT costs down remains the top priority—not exactly shocking—as 69% report cost as a major focus, up from 60% in 2011.
  • 59% expect MIPS capacity to grow as they modernize and add applications to address expanding business needs.
  • More than 55% reported a need to integrate the mainframe into enterprise IT systems comprised of multiple mainframe and distributed platforms.

The last point especially suggests IBM is on the right track with hybrid computing. IBM also is on the right track in terms of keeping costs down, especially by enabling organizations to maximize the use of specialty engines in an effort to reduce consumption of costly GP MIPS.  The specialty engine advantage continues with the new zEC12, incorporating the same 20% price/performance boost, essentially more MIPS bang for the buck.

Two-thirds of the respondents were using at least one specialty engine. Of all respondents, 16% were using five or more engines, with a few using dozens. Not only do specialty engines deliver cheaper MIPS, but they often are not counted when calculating software licensing charges, which lowers the cost even more.

About the only noticeable year-to-year change is in the respondents’ ranking of IT priorities. This year Business/IT alignment jumped from 7th to 4th. Priorities 1, 2, and 3 (Cost Reduction, Disaster Recovery, and Application Modernization, respectively) remained the same. Priorities 5 and 6 (Efficient Use of MIPS and Reduced Impact of Outages, respectively) fell from a tie for 4th last year.

The greater emphasis on Business/IT alignment isn’t exactly new. Industry gurus have been harping on it for years.  Greater alignment between business and IT also suggests a strong need for hybrid computing, where varied business workloads can be mixed yet still be treated as a single system from the standpoint of efficiency management and operations. It also suggests IT needs to pay attention to business services management.

Despite the mainframe’s reputation for rock solid availability and reliability, the survey also noted that 39% of respondents reported unplanned outages. The primary causes for the outages were hardware failure (cited by 31% of respondents), system software failure (30%), in-house app failure (28%), and failed change process (22%). Of the respondents reporting outages, only 10% noted that the outage had significant impact. This was a new survey question this year so there is no comparison to previous years.

Respondents (59%) expect MIPS usage to continue to grow. Of those, 31% attribute the growth to increases in both legacy and new apps, 9% to new apps alone, and 19% to legacy apps alone.

In terms of modernizing apps, 46% of respondents planned to extend legacy code through SOA and web services while 43% wanted to increase the flexibility and agility of core apps.  Thirty-four percent of respondents hoped to reduce legacy app support costs through modernization.

Maybe the most interesting data point came where 60% of the respondents agreed that the mainframe needed to be a good IT citizen supporting varied workloads across the enterprise. That’s really what zEnterprise hybrid computing is about.

Gamification Comes to the zEnterprise

February 20, 2012

You can tell that gamification is coming to the zEnterprise when IBM, BMC, and CA Technologies all are exploring it at roughly the same time. It won’t be too long before gamification starts being applied to zEnterprise tools and applications, probably starting with administrative tools.

Gamification refers to the process of applying gaming software techniques to non-game applications. The objective is to make the software and/or business process more engaging and compelling. Through gamification, software should become easier and more intuitive to use.

Some of the first aspects of gaming to be applied mimic the scoring and rewards of game playing. For a management intent on measurement, gamification should be welcome, opening up a new dimension in metrics. At this point, however, gamification is talked about most frequently in reference to social networking and its associated rewards and incentives. DancingDinosaur’s sister blog, BottomlineIT, initially referenced it here.

IBM researchers looked at gamification in a recent paper here. The researchers noted that the goal of gamification is to incent repeat usage of social networks, increase contributions, and establish user reputations. They rely on incentives in the form of points, badges, and leveling that can help a player advance in status. In the workplace, game-like systems have been employed to collect information about employees and incent contribution within enterprise social software. Gamification also aims to create a sense of playfulness in non-game environments, which creates engagement or at least stickiness.
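
To make the points-badges-leveling mechanics concrete, here is a minimal Python sketch. The point values, badge, and level thresholds are invented for the illustration; they are not taken from the IBM paper.

```python
# Minimal points/badges/leveling sketch. All values are invented for
# illustration and do not come from the IBM paper.
POINTS = {"post": 10, "comment": 2, "answer_accepted": 25}
LEVELS = [(0, "Newcomer"), (100, "Contributor"), (500, "Expert")]

class Player:
    def __init__(self, name):
        self.name, self.points, self.badges = name, 0, set()

    def record(self, action):
        self.points += POINTS[action]
        if action == "answer_accepted":
            self.badges.add("Problem Solver")

    @property
    def level(self):
        # Highest level whose point threshold has been reached.
        return max(lvl for lvl in LEVELS if self.points >= lvl[0])[1]

p = Player("ops_admin")
for action in ["post"] * 8 + ["answer_accepted"]:
    p.record(action)
print(p.points, p.level, p.badges)  # 105 Contributor {'Problem Solver'}
```

Deactivating the POINTS table in a system like this is essentially the experiment the researchers describe next.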

Based on their study, the researchers concluded that the removal of the points system (key to the incentives) resulted in a significant negative impact on the user activity of the site, and the contribution of content significantly decreased after the deactivation of the points system. This suggests that such extrinsic rewards did influence a segment of the user population to participate more intensely while the point system was in place. No big surprise there.

Gamification is being driven primarily by the smartphone and social networking crowd. Over a decade ago, DancingDinosaur published a book on knowledge management. A major obstacle then was getting knowledge experts to share their knowledge. The only solutions at that time appeared to be either to bribe people (rewards, incentives) or to threaten to fire them for not sharing. A few copies of that book, apparently, are still available on Amazon. The popularity of social networking along with gamification apparently has resolved this to some extent through social rewards and incentives.

For zEnterprise shops, the real question is: where does gamification add value? A few places come to mind, such as system operations, administration, and the help desk. With vendors making z management tools increasingly accessible via devices like iPhones and iPads, gamification could have real impact. DancingDinosaur first wrote about that in 2010 here. R. Wang and Insider Associates just published a survey of the processes gamification impacts. Check out their results here.

Trevor Eddolls, Toolbox.com, noted: "Wouldn't it be great to have software on your smartphone that not only identifies what you're looking at (Web server or z/Linux LPAR, or whatever) and provides current performance information. And then makes it fun to resolve any problems that might have been identified. Perhaps the only green screens you'll ever see will mean 'game over'!"

zEnterprise shops are unlikely to build gamification into the tools and processes on their own. Software vendors starting with BMC and CA, however, just might. At that point, gamification will come into the z data center through the tools they acquire. Who knows, maybe gamification will make job scheduling fun?

zEnterprise Private Cloud ROI

December 20, 2011

Many mainframe veterans think the System z has long acted as a private cloud, at least since SOA appeared on the System z, allowing users to access data and logic residing on the System z through nothing more than their browser. And they are right.

The distributed world, unlike the mainframe world, sees private clouds as something new and radical because it is not straightforward there to virtualize, service-enable, and integrate all the piece parts that make up a private cloud. The System z learned these tricks years ago, and the zEnterprise with x86 and p-blades in an attached zBX makes it even easier.

With the z114 and the System z Solution Edition for Cloud Computing program a mainframe-based private cloud becomes that much less expensive to acquire, especially since most of the piece parts already are included and optimized from the start. The System z Solution Edition for Cloud includes the z hardware, Tivoli software, and IBM services to deliver the foundation for the private cloud.

A private cloud, whether distributed or mainframe-based, does not come cheap. The payback, however, still is there; it just comes in a different form. The private cloud restructures IT around a services delivery model. Applications and users tap IT-based data and business logic as services. Cost savings are generated from the ensuing operational efficiency enabled through the standardization, automation and virtualization of IT services. When the organization progresses to the point where users can self-provision and self-configure the needed IT services through private cloud automation and management, the real efficiencies kick in.

According to IDC, many of today’s private cloud business cases are anchored by savings from application rationalization and IT staff productivity improvements, in addition to the expected optimization of hardware assets. But unlike the public cloud, which promises to shift IT spending from CAPEX to OPEX, private clouds actually drive increases in CAPEX, since the organization is likely to invest in new hardware and software optimized for virtualized cloud services delivery and management automation.

With a mainframe private cloud, much of the investment in virtualized, optimized, and integrated hardware assets has already been made. The private cloud initially becomes more of an exercise in partitioning and reallocating those assets as a private cloud. Still, given the appeal of the IT services model, it is likely that the organization will boost its hardware assets to accommodate increasing demand and new services.

The greatest ROI of the private cloud, whether mainframe-based or distributed, comes from the business agility it enables. The virtualized pool of IT resources that makes up the private cloud can be easily reallocated as services to meet changing business needs. Instead of requiring weeks if not months to assemble and deploy the IT hardware and software resources necessary to support a new business initiative, those resources can be allocated from the pooled virtual resources in minutes or hours (provided, of course, sufficient resources are available). With a private cloud you can, in effect, change the business almost on-the-fly and with no additional investment.

As CIO, how are you going to put a value on this sudden agility? If it lets the organization effectively counter competitive challenges, seize new business opportunities, or satisfy new customer demands it could deliver astounding value. It all depends on the business leadership. If they aren’t terribly agile thinkers, however, the value might be minimal.

Other benefits from a private cloud include increased IT productivity and efficiency, the ability of business users to self-provision the desired IT resources (with appropriate policy-based automation controlling the provisioning behind the scenes), and an increased ability to monitor and measure IT consumption for purposes of chargeback or, as is more likely, show back. Such monitoring and measurement of IT consumption has long been a hallmark of the mainframe, whether a private cloud or not.

Even with a mainframe-based private cloud the organization will likely make additional investments, particularly in management automation to ensure efficient service delivery, monitoring, measurement, chargeback, self-provisioning, and orchestration. IBM Tivoli along with other mainframe ISVs like CA and BMC provide tools to do this.

In the end, the value of private cloud agility when matched with agile thinking business leadership should more than offset the additional investments required. And with a zEnterprise-based private hybrid cloud, which comes highly virtualized already, you have a head start on any distributed private cloud.

BMC Mainframe Survey with zEnterprise

September 27, 2011

This past spring BMC again conducted its annual mainframe survey. Each year the results are fairly predictable. The majority of respondents intend to continue and even expand their operations or MIPS. Controlling costs is a perennial concern.

As DancingDinosaur noted of last year’s results: only 4% said the mainframe is not viable. Fully 85% of the 1700 respondents expected to grow their mainframe MIPS or at least maintain the same amount despite the recession. Nearly 60% indicated that the mainframe would attract new workloads over the next year. About half expected to expand their use of z specialty engines. You can find the DancingDinosaur piece from a year ago here.

This year’s study turned up a few different results. For sure, the usual concerns about controlling costs and staffing issues made it in, but this year’s survey also reflected, for the first time, the introduction of the hybrid zEnterprise (z196) last year. A summary of this year’s study can be found here.

Fully 72% of survey respondents intended to deploy the zEnterprise within the next 18 months. Almost half already had a z196 in production. Judging from the latest survey (and supported by IBM’s System z sales figures for the last several quarters), the hybrid zEnterprise clearly has caught on. That 63% of respondents indicated interest in running new workloads on the z196 further suggests healthy and expanding mainframe environments.

The zBX, however, has been much slower to gain traction. By last spring, few had the zBX in production or were actively pursuing a trial or testing. The zBX has to be considered a work in progress. IBM still hasn’t introduced all the blades it promised, most notably the x86 blade running Windows. If priced right, that should jump-start zBX interest when it arrives, which could be soon. Overall, IBM has been slow to talk publicly about zBX blade performance and pricing. Until these issues are addressed, you can expect zBX adoption to continue at its current snail’s pace.

Specialty engines continued to show small but steady growth over previous years. The IFL, which runs Linux, led the way and was followed closely by the zIIP, which runs DB2 workloads. The zAAP, which handles Java, attracted significantly less interest.

IT priorities don’t appear to have changed much. The top priority, not surprisingly, was reducing the cost of IT, followed by disaster recovery and application modernization. These were the same top three priorities as last year. The first significant changes occurred at the fourth priority, where MIPS utilization and reduction of outages tied. Both fourth place priorities address, in one form or another, IT cost reduction.

Among IT strategies favored by the respondents, server virtualization and server consolidation attracted the most interest. Among the cloud-related strategies, private clouds and SaaS/PaaS/IaaS generated the most interest compared to public and hybrid clouds. In general, BMC reports, larger shops had the most interest in the various cloud options.

When it comes to social media, the respondents clearly were not enthusiastic. What interest there was revolved around using the iPad to manage the z. DancingDinosaur first addressed iPads and mainframes here last year. Beyond that, interest in social media seems negligible. However, there was solid, if not exactly overwhelming, interest in managing System z and distributed systems together using a unified toolset. BMC, along with IBM and CA, is among the primary vendors offering such capabilities.

Data center managers may pull some inspiration from the survey results, mainly around unified management and the specific priorities for application modernization: increasing the flexibility of core apps and extending them through SOA and web services. Beyond that, the strongest message may be for IBM to start being more forthcoming about the zBX.

z/OS Problem Solving for Private Clouds

May 15, 2011

First fault software problem solving (FFSPS) is an old mainframe approach that calls for solving problems as soon as they occur. It’s an approach that has gone out of favor except in classic mainframe data centers, but it may be worth reviving as the IT industry moves toward cloud computing and especially private clouds, for which the zEnterprise (z196 and zBX) is particularly well suited.

The point of Dan Skwire’s book, First Fault Software Problem Solving: Guide for Engineers, Managers, and Users, is that FFSPS is an effective approach even today. Troubleshooting after a problem has occurred is time-consuming, costly, inefficient, and often unsuccessful. Troubleshooting is typically complicated by a lack of information. As Skwire notes: if you have to start troubleshooting after the problem occurs, the odds indicate you will not solve the problem, and along the way you consume valuable time, extra hardware and software, and other measurable resources.

The FFSPS trick is to capture problem-solving data from the start. This is what mainframe data centers did routinely. Specifically, they used trace tables and included recovery routines. This continues to be the case with modern z/OS today.
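
For readers who haven’t worked with one, a trace table is conceptually just a bounded, always-on event log. Here is a minimal Python sketch of the idea, with invented events; z/OS system trace is vastly more efficient, but the principle is the same: the data leading up to a failure is already captured when the failure occurs.

```python
# Conceptual trace-table sketch: a bounded ring buffer that records
# continuously, so first-fault data exists the moment something breaks.
from collections import deque
from datetime import datetime, timezone

class TraceTable:
    def __init__(self, capacity=1024):
        self._entries = deque(maxlen=capacity)  # oldest entries age out

    def trace(self, component, event, **data):
        self._entries.append((datetime.now(timezone.utc), component, event, data))

    def dump(self, last=20):
        """What a recovery routine would write out at the point of failure."""
        return list(self._entries)[-last:]

tt = TraceTable()
tt.trace("alloc", "request", dsname="PROD.PAYROLL.DATA")  # invented events
tt.trace("alloc", "enqueue_wait", holder="JOB123")
try:
    raise TimeoutError("dataset enqueue timed out")
except TimeoutError:
    for entry in tt.dump():  # captured before the failure, no re-creation needed
        print(entry)
```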

So why should IT managers today care about mainframe disciplines like FFSPS? Skwire’s answer: there surely will be greater customer satisfaction if you solve and repair the customer’s problem, or if he is empowered to solve and repair his own problem rapidly.

Another reason is risk minimization. As classic mainframe shops have become increasingly heterogeneous, the mainframe disciplines that kept the mainframe rock solid have not been enforced across the new platforms.

Skwire also likes to talk about System Yuk. You probably have a few System Yuks in your shop. What’s System Yuk? As Skwire explains, System Yuk is very complex. It makes many decisions, and analyzes much data. However, the only means it has of conveying an error is the single message to the operator console: SYSTEM HAS DETECTED AN ERROR, which is not particularly helpful.

System Yuk has no trace table or FFSPS tools. To diagnose problems in Yuk you must re-create the environment in your Yuk test bed and add instrumentation (write statements, traces, etc.) and various tools to get a decent explanation of problems with Yuk, or set up some second-fault tool to capture more and better data on the production System Yuk, which is high risk.

Toward the end of the book Skwire gets into what you can do about System Yuk. It amounts to a call for defensive programming. He then introduces a variety of tools to troubleshoot and fix software problems. These include ServiceLink by Axeda, AlarmPoint Systems, and LogLogic. Of course, mainframe shops have long relied on management tools from IBM, CA, BMC, and others to enable FFSPS.

With the industry gravitating toward private clouds as a way to efficiently deliver IT as a flexible service, the disciplined methodologies that continue to keep the mainframe a still critical platform in large enterprises will be worth adopting.  FFSPS should be one in particular to keep in mind.

BMC Tools for DB2 10 Drive z/OS Savings

April 25, 2011

This month BMC announced upgrades to 23 tools for managing DB2 10 databases running on the System z9, z10, and zEnterprise/z196. When IBM introduced DB2 10 in 2010, it implied the database would reduce costs and optimize performance. Certainly running it on the z10 or the z196 with the latest zIIP engine would do both, but BMC’s updated tools make it easier to capture and expand those benefits.

IBM estimated a 5-10% improvement in CPU performance out of the box. BMC says its solutions for DB2 10 for z/OS will help IT organizations further maximize cost savings and enhance the performance of their applications and databases, with as much as a 20% improvement if you deploy using its upgraded tools.

These DB2 improvements, which IBM refers to as operational efficiencies, revolve mainly around reducing CPU usage. This is possible because, as IBM explains it, DB2 10 optimizes processor times and memory access, leveraging the latest processor improvements, increased memory, and z/OS enhancements. Improved scalability and a reduced virtual storage constraint add to the savings. Continued productivity improvements for database and systems administrators can drive even more savings.

The key to the improvements may lie in your ability to fully leverage the zIIP assist processor. The zIIP co-processors take over some of the processing from the main CPU, saving money for those organizations that pay for their systems by MIPS (million instructions per second).

When IBM introduced version 10 of DB2 for z/OS in 2010, it promised customers that upgrading would boost performance due to DB2’s use of these co-processors. Even greater performance gains would be possible if the customer were also willing to do some fine-tuning of the system. This is where the new BMC tools come in; some of the tools specifically optimize the use of the zIIP co-processors.

Some of BMC’s enhanced capabilities help offload the DB2 workload to the zIIP environment thereby reducing general purpose processor utilization. The amount of processing offloaded to zIIP engines varies. With the new release, for example, up to 80 percent of the data collection work for BMC SQL Performance for DB2 can be offloaded.
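
The cost arithmetic behind offload is straightforward to sketch. In the Python below, only the 80 percent offload figure comes from BMC's announcement; the workload size and the per-MIPS prices are invented, hypothetical numbers chosen to show the shape of the savings.

```python
# Illustrative zIIP offload arithmetic. Only the 0.8 offload fraction is
# from BMC's announcement; MIPS sizes and prices are invented.
GP_COST_PER_MIPS = 720.0    # hypothetical annual $ per general-purpose MIPS
ZIIP_COST_PER_MIPS = 90.0   # hypothetical; specialty-engine MIPS cost far less

def annual_cost(workload_mips, offload_fraction):
    gp = workload_mips * (1 - offload_fraction) * GP_COST_PER_MIPS
    ziip = workload_mips * offload_fraction * ZIIP_COST_PER_MIPS
    return gp + ziip

monitoring_mips = 300  # hypothetical data-collection workload
print(annual_cost(monitoring_mips, 0.0))  # 216000.0 -- all on GP engines
print(annual_cost(monitoring_mips, 0.8))  # 64800.0  -- with 80% offloaded
```

And because zIIP capacity typically doesn't count toward MIPS-based software charges, the real-world gap can be wider than raw hardware pricing alone suggests.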

The BMC tools also help companies tune application and database performance in other ways that increase efficiency and lower cost. For example, BMC’s SQL Performance Workload Compare Advisor and Workload Index Advisor detect performance issues associated with changes in DB2 environments. Administrators can see the impact of changes before they are implemented, thereby avoiding performance problems.

An early adopter of BMC’s new DB2 10 tools is Florida Hospital, based in Orlando. The hospital, with seven campuses, considers itself the largest hospital in the US, and relies on DB2 running on a z10 to support dozens of clinical and administrative applications. The hospital currently runs a mix of DB2 8 and DB2 10, although it expects to be all DB2 10 within a year.

Of particular value to the hospital is DB2 10 support for temporal data, or snapshots of data that let you see how data changes over time. This makes it particularly valuable for answering time-oriented questions. Based on that capability, the hospital is deploying a second instance of DB2 10 for its data warehouse, for which it also will take full advantage of BMC’s SQL performance monitoring tools.
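
To see why temporal data answers time-oriented questions so naturally: a system-period temporal table keeps every row version along with the period during which it was current, so a query can ask what the data looked like as of any moment. The Python below is only a rough analogue with invented data; in DB2 10 this is done in SQL against temporal tables.

```python
# Rough analogue of a DB2 10 system-period temporal table. Each row version
# carries the period during which it was current. Data is invented.
from datetime import datetime

# (patient_id, location, sys_start, sys_end); sys_end of None = still current
history = [
    ("P001", "ward 3", datetime(2011, 1, 10), datetime(2011, 2, 1)),
    ("P001", "ICU",    datetime(2011, 2, 1),  datetime(2011, 2, 9)),
    ("P001", "ward 5", datetime(2011, 2, 9),  None),
]

def as_of(rows, ts):
    """Mimics SELECT ... FOR SYSTEM_TIME AS OF ts."""
    return [r for r in rows if r[2] <= ts and (r[3] is None or ts < r[3])]

print(as_of(history, datetime(2011, 2, 5)))  # the ICU row was current on Feb 5
```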

But the crowning achievement of the hospital’s data warehouse, says Robert Goodman, lead DBA at Florida Hospital, will be the deployment of IBM’s Smart Analytics Optimizer (SAO) with DB2 10 and the data warehouse. The SAO runs queries in a massively parallel in-memory infrastructure that bolts onto the z10 to deliver extremely fast performance. Watch for more details coming on this development.

DancingDinosaur doesn’t usually look at tool upgrades, but DB2 10, especially when combined with the updated BMC tools, promises to be a game changer. That certainly appears to be the case at Florida Hospital, even before it adds SAO capabilities.

