Posts Tagged ‘MIPS’

Mainframe Appeal Continues in 9th BMC Survey

October 30, 2014

With most of the more than 1,100 respondents (91%) reporting that the mainframe remains a viable long-term platform for them, and a clear majority (60%) expecting to increase MIPS due to the normal growth of legacy applications and new application workloads, the z remains well entrenched. Check out the results for yourself here.

Maybe even more reassuring, almost half the respondents reported that they expect the mainframe to attract and grow new workloads. Most likely these will be Java and Linux workloads, but one-third of the respondents listed cloud as a priority, jumping it up to sixth on the list of mainframe priorities. Mobile was cited as a priority by 27% of the respondents, followed by big data at 26%.

[Image: IBM zEC12]

Apparently IBM’s steady promotion of cloud, mobile, and big data for the z over the past year is working. At Enterprise2014 IBM even made big news with real-time analytics and Hadoop on the z, along with a slew of related announcements.

That new workloads like cloud, mobile, and big data made it into the respondents’ top 10 IT priorities for the year didn’t surprise Jonathan Adams, BMC vice president/general manager for z solutions.  The ease of developing in Java and its portability make it a natural for new workloads today, he noted.

In the survey, IT cost reduction/optimization topped the list of IT priorities for 2014 by a large margin (70% of respondents), followed by application availability (52%). Rounding out the top five were application modernization (48%), data privacy (47%), and business/IT alignment (44%). Outsourcing finished out the top 10 priorities at 16%.

When asked to look ahead in terms of MIPS growth, the large majority of respondents expected growth to continue or at least remain steady. Only 9% expected MIPS to decline, and 6% expected to eliminate the mainframe. These numbers have remained consistent for years, noted Adams. DancingDinosaur periodically checks in with shops that announce plans to eliminate their mainframe and finds that a year later many have barely made any progress.

The top mainframe advantages shouldn’t surprise you: availability (53%), security (51%), centralized data serving (47%), and transaction throughput (42%). More interesting results emerged when the respondents addressed new workloads. The mainframe’s cloud role includes data access (33%), cloud management from Linux on z (22%), and dynamic test environments via self-service (15%). Surprisingly, when it comes to big data analytics, 34% report that the mainframe acts as their analytics engine. This wasn’t supposed to be the case, at least not until BigInsights and Hadoop on z gained more traction.

Meanwhile, 28% say they move data off platform for analytics, and 14% report they federate mainframe data to an off-platform analytics engine. Yet more than 81% now incorporate the mainframe into their big data strategy, up from 70% previously. Non-finance industries are somewhat more likely to use the mainframe as the big data engine, BMC noted. Those concerned with cost should seriously consider doing their analytics on the z, where the data is; it is costly to keep moving data around.

In terms of mobility, making existing applications accessible for mobile ranked as the top issue, followed by developing new mobile applications and securing corporate data on mobile devices. Increased transaction volume from mobile processing came in at the bottom of the mobility issues, but that will likely change when mobile transactions start impacting peak workload volumes and triggering increased costs. Again, those concerned about costs should consider IBM’s mobile transaction discount, which DancingDinosaur covered here in the spring.

Since cost reduction is such a big topic again, the survey respondents offered their cost reduction priorities. Reducing resource usage during peak periods led the list. Other cost reduction priorities included consolidating mainframe software vendors, exploiting the zIIP and other specialty engines (which have a distinctly lower cost/MIPS), and moving workloads to Linux on z.

So, judging from the latest BMC survey, the mainframe is far from dead, though at least one IT consultant and commentator, John Appleby, apparently still expects its demise. That prediction has proven wrong so often that DancingDinosaur has stopped bothering to refute it.

BTW, change came to BMC last year in the form of an acquisition by a private investment group. Adams reports that the new owners have already demonstrated a commitment to continued investment in mainframe technology products, and plans already are underway for next year’s survey.

DancingDinosaur is Alan Radding. You can follow him on Twitter, @mainframeblog. Or see more of his writing at Technologywriter.com or in wide-ranging blogs here.

How the 50-Year-Old Mainframe Remains Relevant

February 25, 2014

The mainframe turns 50 years old this year, and the many pundits and experts who predicted it would be long gone by now must be scratching their heads. Yes, it is still around and has acquired over 260 new accounts just since the zEnterprise launched. It also has shipped over 320 hybrid computing units (not just zBX chassis) since the zBX was introduced and kicked off hybrid mainframe computing.

As for MIPS, IBM experienced a MIPS decline last quarter, but that followed the largest MIPS shipment in mainframe history a year earlier, resulting in a two-year compound growth rate of +11%. (Mainframe sales follow the new product release cycle in a predictable pattern.) IBM brought out the latest System z release, the zEC12, faster than the mainframe’s historic release cycle. Let’s hope IBM repeats the quick turnaround with the next release.

Here’s what IBM is doing to keep the mainframe relevant:

  • Delivered steady price/performance improvements with each release. With entry-level BC-class pricing and the System z Solution Edition programs, you can end up with a mainframe system that is as competitive as or better than x86-based systems while being more secure and more reliable out of the box.
  • Adopted Linux early, before it had gained the widespread acceptance it has today. Last year over three-quarters of the top 100 enterprises had IFLs installed. This year IBM reports a 31% increase in IFL MIPS. In at least two cases where DancingDinosaur recently interviewed IT managers, Linux on z was instrumental in bringing their shops to the mainframe.
  • Supported SOA, Java, Web services, and cloud, mobile, and social computing, which continues to put the System z at the front of the hot trends. It also prominently plays with big data and analytics. Who ever thought the mainframe would be interacting with RESTful APIs (see the sketch after this list)? Certainly not DancingDinosaur’s computer teacher back in the dark ages.
  • Continued delivery of unprecedented scalability, reliability, and security at a time when the volumes of transactions, data, workloads, and users are skyrocketing.  (IDC predicts millions of apps, billions of users, and trillions of things connected by 2020.)
  • Built a global System z ecosystem of tools and technologies to support cloud, mobile, big data/analytics, social, and non-traditional mainframe workloads. This includes acquisitions like SoftLayer and CSL Wave, the latter delivering IBM Wave for z/VM, a simplified and cost-effective way to harness the consolidation capabilities of the IBM System z platform along with its ability to host the workloads of tens of thousands of commodity servers. The mainframe today can truly be a full-fledged cloud player.
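To make the RESTful point concrete, here is a minimal sketch of the kind of call a distributed or mobile application might make against a REST service fronting a mainframe transaction. The endpoint host, path, and JSON shape are hypothetical illustrations, not an actual IBM API:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class MainframeRestClient {
    public static void main(String[] args) throws Exception {
        // Hypothetical REST endpoint fronting a CICS or IMS transaction;
        // the host and path are illustrative only.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://zhost.example.com/accounts/12345/balance"))
                .header("Accept", "application/json")
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // The z-side service would map this call to a transaction and
        // return JSON, e.g. {"balance": 1042.17}.
        System.out.println(response.statusCode() + ": " + response.body());
    }
}
```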

And that just touches on the mainframe platform advantages. While others boast of virtualization capabilities, the mainframe comes 100% virtualized out of the box with virtualization at every level.  It also comes with a no-fail redundant architecture and built-in networking. 

Hybrid computing is another aspect of the mainframe that organizations are just beginning to tap.  Today’s multi-platform compound workloads are inherently hybrid, and the System z can manage the entire multi-platform workload from a single console.

The mainframe anniversary celebration, called Mainframe50, officially kicks off in April, but a report from this week’s Pulse 2014 conference suggests that Mainframe50 interest already is ramping up, with IBM jumping the gun by emphasizing how the z provides new ways, never before thought possible, to innovate while tackling challenges previously out of reach.

Pulse 2014, it turns out, offered 38 sessions on System z topics, of which 27 featured analysts or IBM clients. These sessions promised to address key opportunities and challenges for today’s mainframe environments and the latest technology solutions for meeting them, including OMEGAMON, System Automation, NetView, GDPS, Workload Automation, Tivoli Asset Discovery for z/OS, and cloud.

One session featured Phil Murphy, vice president and principal analyst at Forrester Research, discussing the critical importance of a robust infrastructure in a mixed mainframe/distributed cloud environment (probably the future most DancingDinosaur readers face) and how it can help fulfill the promise of cloud value in real time.

Another session featured mainframe analyst Dot Alexander from Wintergreen Research, who looked at how mainframe shops view executing cloud workloads on the System z. The session focused on the opportunities and challenges, private and hybrid cloud workload environments, and the impact of scalability, standards, and security.

But the big celebration is planned for April 8 in NYC. There IBM promises to make new announcements, launch new research projects, and generally focus on the mainframe’s future. A highlight should be Showcase 20, which will focus on 20 breakthrough areas referred to by IBM as engines of progress. The event is expected to sell out; you should probably talk to your System z rep if you want to attend. And it won’t stop on April 8. IBM expects to continue the Mainframe50 drumbeat all year with new announcements, deliverables, and initiatives. In February alone IBM already has made a slew of acquisitions and cloud announcements that will touch every mainframe shop with any cloud interests (which should be every mainframe shop at one point or another).

In coming weeks stay tuned to DancingDinosaur for more on Mainframe50. Also watch this space for details of the upcoming Edge 2014 conference, with an emphasis on infrastructure innovation coming to Las Vegas in May.

Please follow DancingDinosaur on Twitter, @mainframeblog

Rocket z/SQL Accesses Non-SQL Mainframe Data

August 2, 2013

Rocket Software’s z/SQL enables access to non-SQL mainframe data using standard SQL commands and queries. The company is offering a z/SQL free trial; you can install it at no charge and get full access for as many users as you want. The only caveat: the free version is limited to three files. You can download the free trial here.

z/SQL will run ANSI SQL-92 queries against data sources that were never designed for SQL. “The tool won’t even know it isn’t running against relational data,” explained Gregg Willhoit, managing director of the Rocket Data Lab. That means you can run it against VSAM, IMS, Adabas, DB2 for z/OS, and physical sequential files. In addition, you can use z/SQL to make real-time SQL queries directly to mainframe programs, including CICS TS, IMS TM, CA IDMS, and Natural.
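Because z/SQL surfaces these sources through standard client drivers, application access can look like ordinary JDBC. Here is a minimal sketch assuming a hypothetical z/SQL JDBC URL and a VSAM file already mapped as a table; the actual driver class and URL format come from Rocket’s documentation:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class VsamQueryDemo {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection URL and credentials; substitute the
        // real values from Rocket's z/SQL documentation.
        String url = "jdbc:rs:zsql://zhost.example.com:1200";
        try (Connection conn = DriverManager.getConnection(url, "user", "pass");
             Statement stmt = conn.createStatement();
             // A mapped VSAM file queried as if it were a relational table.
             ResultSet rs = stmt.executeQuery(
                     "SELECT CUST_ID, BALANCE FROM VSAM_CUSTOMER WHERE BALANCE > 1000")) {
            while (rs.next()) {
                System.out.println(rs.getString("CUST_ID") + " "
                        + rs.getBigDecimal("BALANCE"));
            }
        }
    }
}
```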

By diverting up to 99% of processing-intensive data mapping and transformation from the mainframe’s CPU to the zIIP, z/SQL lowers MIPS capacity usage and its associated costs, effectively reducing TCO. It also opens up the zIIP to extend programs and systems-of-record data to the full range of environments noted above.

z/SQL automatically detects the presence of the z’s zIIP assist processor, allowing it to apply its patent-pending technology to further boost the zIIP’s performance advantages. The key attributes of the zIIP processor (low cost, speeds often greater than those of the general mainframe engines on sub-capacity licenses, and typically low utilization) are fully exploited by z/SQL, lowering a mainframe shop’s TCO while providing an accelerated ROI.

Rocket z/SQL is built on Metal C, a z/OS compiler option that provides C-language extensions allowing you to specify assembly statements that call system services directly. The DRDA support and the ANSI SQL-92 engine were developed using what amounts to a new language that allows even more of z/SQL’s work to run on the zIIP. A key feature of Metal C is that it lets z/SQL optimize its code paths for the hardware it’s running on. So whether you’re running an older z9 or z10 or the latest zEC12 and zBC12 processors, z/SQL chooses the code path most optimized for your hardware.

With z/SQL you can expand your System z analytics effort and push a wider range of mainframe data analytics to near real time. Plus, the usual ETL and all of its associated disadvantages are no longer a factor. As such, z/SQL promises to be a disruptive technology: it eliminates the need for ETL by pushing the analytics to where the data resides rather than bringing the data to the analytics. The latter, noted Willhoit, is fraught with performance and data currency issues.

It’s not that you couldn’t access non-SQL data before z/SQL, but it was more cumbersome and slower.  You would have to replicate data, often via FTP to something like Excel. Rocket, instead, relies on assembler to generate an optimized SQL engine for the z9, z10, z196, zEC12, and now the zBC12.  With z/SQL the process is remarkably simple: no replication, no rewriting of code, just recompile. It generates the optimized assembler (so no assembler work required on your part).

Query performance, reportedly, is quite good. This is due in part to being written in assembler, but also to taking advantage of the z’s multi-threading: z/SQL reads the non-relational data source with one thread and uses a second thread to process the network I/O. This parallel I/O architecture promises game-changing performance, especially for big data, through significant parallelism of network and database I/O. It also takes full advantage of the System z hardware by using buffer pools and large frames, essentially eliminating dynamic address translation.
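The two-thread design is a classic producer/consumer split. Here is a generic Java sketch of the pattern, purely an illustration (Rocket's implementation is in optimized assembler): one thread reads records while a second overlaps the network I/O.

```java
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ParallelIoDemo {
    private static final String EOF = "<eof>";   // end-of-data sentinel

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(1024);

        // Thread 1: reads the (here simulated) non-relational data source.
        Thread reader = new Thread(() -> {
            try {
                for (String record : List.of("rec1", "rec2", "rec3")) {
                    queue.put(record);   // hand each record to the I/O thread
                }
                queue.put(EOF);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        // Thread 2: drains the queue and performs the network I/O
        // (simulated here by printing), overlapping with the reads.
        Thread sender = new Thread(() -> {
            try {
                for (String rec = queue.take(); !rec.equals(EOF); rec = queue.take()) {
                    System.out.println("send: " + rec);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        reader.start();
        sender.start();
        reader.join();
        sender.join();
    }
}
```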

z/SQL brings its own diagnostics, providing a real-time view into transaction threads with comprehensive trace/browse capabilities. It enables a single, integrated approach to identifying, diagnosing, and correcting data connectivity issues between distributed ODBC, ADO.NET, and JDBC client drivers and mainframes. Similarly, z/SQL provides dynamic load balancing and a virtual connection facility that reduces the possibility of application failures, improves application availability and performance, and supports virtually unlimited concurrent users and transaction rates, according to the company. Finally, it integrates with mainframe RACF, CA-TopSecret, and CA-ACF2, as well as with SSL and client-side, certificate-based authentication on distributed platforms. z/SQL fully participates in the choreography of SSL between the application platform and the mainframe.

By accessing mainframe programs and data stored in an array of relational and non-relational formats, z/SQL lets you leave mainframe data in place, on the z where it belongs, avoiding the cost and risk of replication or migration. z/SQL becomes another way to turn the z into an enterprise analytics server for both SQL and non-SQL data.

Rocket calls z/SQL the world’s most advanced mainframe access and integration software, a pretty bold claim that begs to be proven through data center experience. Test it in your data center for free. As noted above, you can download the free trial here. If you do, please let me know how it works out. (I promise it won’t be publicized here.)

Lessons from IBM Eagle zEnterprise TCO Analyses

March 18, 2013

A company running an obsolete z890 2-way machine with what amounted to 0.88 processors (332 MIPS) planned a migration to a distributed system consisting of 36 UNIX servers. The production workload consisted of applications, database, testing, development, security, and more. Five years later, the company was running the same workload in the 36-server, multi-core (41x more cores than the z890) distributed environment, only its 4-year TCO had gone from $4.9 million to $17.9 million, based on an IBM Eagle study. The lesson, the Eagle team notes: cores drive platform costs in distributed systems.

Then there is the case of a 3,500 MIPS shop that budgeted $10 million for a one-year migration to a distributed environment. Eighteen months into the project, already six months late, the company had spent $25 million and managed to offload only 350 MIPS; that works out to more than $70,000 per offloaded MIPS against a budget that assumed under $3,000 per MIPS. In addition, the company had to increase staff to cover the overrun, implement steps to replace mainframe automation, acquire more distributed capacity than initially predicted (just to support the 10% of total MIPS actually offloaded), and extend the dual-running period at even more cost due to the schedule overrun. Not surprisingly, the executive sponsor is gone.

After three years of doing such analyses, the IBM Eagle team has concluded that if the goal of a migration to a distributed environment is cost savings, most migrations are a failure. Read the Eagle FAQ here.

The Eagle TCO team was formed in 2007 and since then reports completing over 300 user studies. Often its studies are used to determine the best platform among IBM’s various choices for a given set of workloads, usually as part of a Fit for Purpose study. In other cases, the Eagle analysis is aimed at enabling a System z shop to avoid a migration to a distributed platform, or at securing a new opportunity for the z. Since 2007, the team reports that its TCO studies secured wins amounting to over $1.6 billion in revenue.

Along the way, the Eagle team has learned a few lessons. For example, re-hosting projects tend to be larger than anticipated: the typical one-year projection will likely turn into a two- or three-year project.

The Eagle team also offers the following tips, which can help existing z shops that aren’t necessarily looking to migrate but just want to minimize costs:

  • Update hardware and software; for example, one bank upgraded from z/OS 1.6 to 1.8 and reduced each LPAR’s MIPS by 5% (monthly software cost savings paid for the upgrade almost immediately)
  • Take advantage of sub-capacity, which may produce free workloads
  • Consolidate Linux workloads on System z, which invariably saves money; many IT people don’t realize how many Linux virtual servers can run on a single z core. (A debate raging on LinkedIn focused on how many virtual instances can run on an IFL, with quite a few suggesting a maximum of 20. The official IBM figure: consolidate up to 60 distributed cores or more on a single System z core, an IFL, and thousands on a single footprint; see the back-of-the-envelope sketch after this list.)
  • Changing the database can impact capacity requirements and therefore costs
  • Workloads amenable to specialty processors, like the IFL, zIIP, and zAAP, reduce mainframe costs through lower cost/MIPS and fewer general processor cycles
  • Consider the System z Solution Edition (DancingDinosaur has long viewed the Solution Edition program as the best System z deal going, although you absolutely must be able to operate within the strict usage constraints the deal imposes.)
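To see what that official consolidation figure implies, here is a quick back-of-the-envelope sketch. The server count and cores per server are illustrative assumptions, not a sizing for any particular shop:

```java
public class ConsolidationEstimate {
    public static void main(String[] args) {
        // Illustrative assumptions: a farm of 150 distributed servers with
        // 8 cores each, and IBM's stated 60:1 core consolidation ratio.
        int servers = 150;
        int coresPerServer = 8;
        int consolidationRatio = 60;          // distributed cores per IFL

        int distributedCores = servers * coresPerServer;   // 1,200 cores
        int iflsNeeded = (int) Math.ceil(
                (double) distributedCores / consolidationRatio);   // 20 IFLs

        System.out.printf("%d distributed cores -> about %d IFLs%n",
                distributedCores, iflsNeeded);
    }
}
```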

The Eagle team also suggests other things to consider, especially when the initial cost of a distributed platform looks attractive to management. To begin with, the System z responds flexibly to unforeseen business events; a distributed system may have to be augmented or the deployment re-architected, both of which drive up cost and slow responsiveness. Also, the cost of adding incremental workloads to the System z is less than linear. Similarly, the cost of administrative labor is lower on the System z, and the System z cost per unit of work is much lower than with distributed systems.

DancingDinosaur generally is skeptical of TCO analyses from vendors. To be useful, the analysis needs context, technical details (components, release levels, and prices), and specific verifiable quantitative results. In addition, there are soft costs that must be considered. In the end, the lowest acquisition cost or even the lowest TCO isn’t necessarily the best platform choice for a given situation or workload. Determining the right platform requires both quantifiable analysis and judgment.

Getting the Payback from System z Outsourcing

February 1, 2013

A survey from Compuware Corporation on CIO attitudes toward mainframe outsourcing showed a significant level of dissatisfaction with one aspect of mainframe outsourcing or another. Check out the survey here.

Mainframe outsourcing has been a fixture of mainframe computing since the outset. The topic is particularly interesting in light of the piece DancingDinosaur posted on winning the talent war a couple of weeks ago. Organizations intending to succeed are scrambling to find and retain the talent they need for all their IT systems, mainframe and otherwise. In short, they need skills in all the new areas, like cloud computing, mobile access, and, most urgently, big data analytics. In addition, there is the ongoing need for Java, Linux, WebSphere, and CICS in growing System z data centers. The rise of z-based hybrid computing and expert integrated hybrid PureSystems to some extent broadens the potential talent pool while reducing the number of skilled experts required. Still, mainframe outsourcing remains a popular option.

The new Compuware survey found that reducing costs is a major driver for outsourcing mainframe application development, maintenance, and infrastructure. Yet multiple associated costs are frustrating 71% of CIOs. These costs result from increases in MIPS consumption, as well as higher investments in testing and troubleshooting due mainly to poor application quality and performance. In fact, two-thirds (67%) of respondents reported overall dissatisfaction with the quality of new applications or services provided by their outsourcer. The source of the problem: a widening in-house skills gap and difficulties with knowledge transfer and staff churn within outsourcing vendors.

Compuware has published a related white paper, Mainframe Outsourcing: Removing the Hidden Costs, which expands on the findings from the study. The company’s recommendations for removing those costs amount to reverse-engineering the problems revealed in the survey. These include:

  • Utilize MIPS better
  • Explore pricing alternatives to CPU-based pricing
  • Improve the quality of new applications
  • Boost knowledge transfer between outsourcers and staff
  • Measure and improve code efficiency at the application level
  • Take advantage of baseline measurement to objectively analyze outsourcer performance

The System z offers numerous tools to monitor and manage usage and efficiency, and vendors like Compuware, CA, BMC, and others bring even more.

The MIPS consumption problem is typical. As Compuware reports: mainframes are being used more than ever, meaning consumption is naturally on the rise. This is not a bad thing.

However, where consumption is escalating due to inefficient coding, it adds unnecessary costs. For example, MIPS costs are increasing on average by 21% year over year, with 40% of survey respondents claiming that consumption is getting out of control. Meanwhile, 88% of respondents using pay structures based on CPU consumption (approximately 42% of those surveyed) think their outsourcer could manage CPU costs better, and 57% of all respondents believe outsourcers do not worry about the efficiency of the applications they write.

New workloads also are driving costs. For example, 60% of survey respondents believe that the increase in applications like mobile banking is driving higher MIPS usage and creating additional costs. Just think what they’d report when big data analytics applications start kicking in, although some of that processing should be offloaded to assist processors.

The Compuware study is interesting and informative. Yes, outsourcers should be pressed to use MIPS more efficiently. At a minimum, they should shift workloads to assist processors, which have a lower cost per MIPS. Similarly, developers should be pressed to boost the efficiency of their code, but that will require an investment in tools to measure and benchmark the code, and in QA staff.

A bigger-picture view, however, suggests that focusing just on MIPS is counterproductive. You want to encourage more workloads on the z even if they use more MIPS, because the z can run at near 100% utilization and still perform reliably. Higher utilization translates into lower cost per workload. And with the cost per MIPS decreasing with each rev of the zEnterprise, the cost per workload keeps improving. Measure, monitor, and benchmark, and do whatever else you can to drive efficient operation, but aim to leverage the zEnterprise to the max for your best overall payback.

New CA and BMC System z tools deliver payback

June 1, 2010

CA Technologies (formerly known as CA, and before that Computer Associates) and BMC have each introduced new System z tools. The timing might seem odd, just months before IBM’s widely expected rev of the System z itself, but May is a nice month for big user conferences, from which these kinds of announcements typically emerge.

The CA announcements focus on mainframe software management and database management for DB2 for z/OS. BMC announced zIIP support for some of its DB2 management tools. Both announcements will lead to cost reductions.

The most interesting of the new CA announcements is the release of Mainframe Software Manager v3. It adds a GUI, which, as has been written about here previously, improves mainframe management productivity. It also provides wizards (guided workflows in CA-speak) amounting to the equivalent of InstallShield for mainframes. DancingDinosaur has been cheerleading the adoption of GUIs for the System z as essential for cultivating the next generation of System z staff and lowering costs, so this is welcome news.

To bolster database management, CA also introduced CA Mainframe Chorus, described as a graphical workspace. CA refers to it as role-based, interactive visualization that integrates features across multiple products and disciplines to facilitate collaboration and knowledge sharing between expert and novice mainframe staff.

You can build an attractive business case for GUI-based mainframe tools based on increased admin and operator productivity and the opportunity to use less skilled (meaning lower paid) people. Better yet, the ability to take advantage of System z assist processors (zIIP, zAAP, and IFL) provides an immediate payback by shifting workloads off the z’s general processor. This brings immediate licensing cost advantages.

BMC’s new DB2 for z/OS products take advantage of IBM’s zIIP, enabling mainframe shops to move more of their DB2 workloads to the lower-cost processors, effectively reducing mainframe operational costs. As BMC describes it, “this new zIIP offloading capability, along with previously-introduced BMC MainView zIIP exploitation efforts represent a significant step to reduce the costs of MIPS” (Million Instructions per Second). MIPS are considered a primary cost driver in mainframe environments.

BMC takes particular pains to note that this approach has been blessed by IBM. “We use the IBM-approved API,” noted Jay Lipovich, BMC director of mainframe product development. The company clearly does not want its use of the assist processor to be confused with NEON’s zPrime, which allows workloads not blessed by IBM to run on assist processors. NEON currently is entangled in lawsuits and counter-lawsuits with IBM.

One BMC customer reports offloading 30% of its MainView workload to the zIIP environment. As BMC reports, hardware plus software costs for a zIIP processor run $150 to $200 per MIPS, compared with $2,200 to $3,400 for a general-purpose processor. In addition, BMC’s recent global mainframe survey found mainframe capacity has continued to grow, which puts more pressure on the budgets of organizations that rely on the mainframe. In large shops with more than 10,000 MIPS, more than 47 percent of survey respondents said MIPS utilization is a top priority.
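Plugging BMC's published figures into a quick calculation shows why the offload matters. The 1,000 MIPS workload size and the use of midpoint prices are illustrative assumptions:

```java
public class ZiipSavingsEstimate {
    public static void main(String[] args) {
        // BMC's reported hardware-plus-software cost ranges, at midpoints.
        double gpCostPerMips   = (2200 + 3400) / 2.0;   // general-purpose: $2,800
        double ziipCostPerMips = (150 + 200) / 2.0;     // zIIP: $175

        // Illustrative workload: 1,000 MIPS with 30% zIIP-eligible,
        // matching the customer example above.
        double workloadMips = 1000;
        double offloadedMips = workloadMips * 0.30;

        double savings = offloadedMips * (gpCostPerMips - ziipCostPerMips);
        System.out.printf("Offloading %.0f MIPS saves roughly $%,.0f%n",
                offloadedMips, savings);   // about $787,500
    }
}
```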

Mainframe computing often is more expensive than it need be. Yet there are ways to substantially cut mainframe computing costs. The use of assist processors and GUI tools represents an easy way to start doing just that.

