Posts Tagged ‘zEC12’

z Systems and Cloud Lead IBM 2Q Results

July 24, 2015

DancingDinosaur generally steers clear of writing about reported quarterly revenue. Given the general focus of this blog on enterprise and cloud computing, however, IBM’s recent 2Q15 report can’t be ignored. Although it continued IBM’s recent string of negative quarterly results, the z and cloud proved to be bright spots.


Strong IBM cloud performance, Q2 2015 (click to enlarge)

As IBM reported on Monday: Revenues from z Systems mainframe server products increased 9 percent compared with the year-ago period (up 15 percent adjusting for currency).  Total delivery of z Systems computing power, as measured in MIPS, increased 24 percent.  Revenues from Power Systems were down 1 percent compared with the 2014 period (up 5 percent adjusting for currency).

It’s not clear when and how Power Systems will come back. IBM has opened up the Power platform through the OpenPOWER Foundation, a good move in theory and one DancingDinosaur applauds. Still, much depends on the Foundation gaining momentum and on individual members rolling out successful Power-based products. The roadmap for POWER8, POWER9, and beyond looks promising, but how fast products will arrive is unclear. There also is the potential for commoditization of the Power platform, a welcome development in many quarters, but commoditization’s impact on future revenue is equally unclear.

Cloud revenue was up more than 70 percent adjusting for currency and divested businesses, and up more than 50 percent as reported, according to IBM. Given that cloud, along with mobile and analytics, has been designated strategic by IBM, this is an encouraging development. The company’s cloud strategy is starting to bear fruit.

The big question hanging over every vendor’s cloud strategy is how to make money at it. One of the appealing aspects of the cloud in terms of cost and pricing for IT-using organizations is what amounts to a race to the bottom. With pricing immediately apparent and lower pricing just a click away, the cloud has become a feast for the bottom grazers to whom the lowest price is all that matters. For companies like IBM and Oracle, which also has declared cloud a strategic initiative, and other large legacy enterprise platform providers, the challenge is to be competitive on price while differentiating their offerings in other ways. Clearly IBM has some unique cloud offerings in Watson, Bluemix, and others, but can they deliver enough revenue fast enough to offset the reduction in legacy platform revenue? Remember, x86 is off IBM’s menu.

Timothy Prickett Morgan, who writes frequently about IBM technology, also had plenty to say about IBM’s 2Q15 announcement, as did a zillion other financial and industry analysts. To begin, he noted the irony of IBM promoting cloud computing, primarily an x86 phenomenon, while trying to convince people that Power-based systems are cost competitive—which they can be—and will do a better job for many of those workloads, correct again.

Morgan also makes an interesting point in regard to the z: “IBM doesn’t have to push the System z mainframe so much as keep it on a Moore’s Law curve of its own and keep the price/performance improving to keep those customers in the mainframe fold.” That’s harder than it may seem; DancingDinosaur addressed the Moore’s Law issue last week here. As Morgan notes, with well over $1 trillion in software assets running on the mainframe, the 6,000 or so enterprises that use mainframes are unlikely to move off the platform because of the cost, disruption, and risk such a move would entail. Just ask Union Pacific Railroad, which seems to be engaged in a slow-motion platform migration off the mainframe that may never actually end. Morgan concludes: “IBM can count on a certain level of money from the System z line that it just cannot with the Power Systems line.”

As noted above, how much revenue Power can generate for IBM depends on how fast the OpenPOWER Foundation members introduce products that expand the market and how many Power processors SoftLayer can absorb as the business unit expands its global footprint. There also is the question of how many POWER8 servers Rackspace, a much larger cloud provider than SoftLayer, will take and whether the Rackspace initiative will catch on elsewhere.

In any event, IBM’s 2Q15 report showed enough positive momentum to encourage IT platform enthusiasts. For its part, DancingDinosaur is expecting a business class z13 in the coming months and more.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

IBM Extends Moore’s Law with First 7nm Test Chip

July 17, 2015

In an announcement last week, IBM effectively extended Moore’s Law for at least another generation of chips, maybe two.  This contradicts what leading vendors, including IBM, have been saying for years about the imminent diminishing returns of Moore’s Law, which postulated that chips would double in capacity every 18-24 months. Moore’s Law drove the price/performance curve the industry has been experiencing for the past several decades.


Click to enlarge, courtesy of IBM

The announcement, ironically, coincides with IBM’s completion of the sale of its semiconductor fabrication business to GLOBALFOUNDRIES, which IBM paid to take the costly facilities off its hands. To pull off the 7nm achievement, IBM partnered with a handful of players, including a public-private partnership with New York State and a joint development alliance with GLOBALFOUNDRIES, Samsung, and equipment suppliers. The team is based at SUNY Poly’s NanoTech Complex in Albany.

To achieve the higher performance, lower power, and scaling benefits promised by 7nm technology, the IBM researchers turned to two main innovations: the use of silicon germanium (SiGe) channel transistors and Extreme Ultraviolet (EUV) lithography integration at multiple levels, in effect bypassing conventional semiconductor manufacturing approaches.

Don’t expect to see new systems featuring these 7nm chips anytime soon. The announcement made no mention of a timetable for producing commercial products based on this technology. As Timothy Prickett Morgan, who writes extensively on IBM POWER Systems technology, observed: the use of silicon germanium for portions of the transistors cuts back on power consumption for the very fast switching necessary for improving circuit performance, and the circuits are etched using extreme ultraviolet (EUV) lithography. Both technologies may be difficult and expensive to put into production.

In the meantime, IBM notes that microprocessors utilizing 22nm and 14nm technology run today’s servers, cloud data centers, and mobile devices, and 10nm technology is already well on the way to maturity. The 7nm chips promise even more: at least a 50% power/performance improvement for the next generation of mainframe and POWER systems, which will fuel the big data, cloud, and mobile era, and soon the Internet of Things too.

The z13 delivers unbeatable performance today. With the zEC12 IBM boasted of the fastest commercial chip in the industry, 5.5 GHz on a 32nm process. It did not make that boast with the z13. Instead, the z13 runs 22nm cores at 5 GHz yet still delivers a 40% total capacity improvement over the zEC12.

It does this by optimizing the stack from top to bottom, with 600 processors and 320 separate channels dedicated just to driving I/O throughput. The reason for not cranking up the clock speed on the z13, according to IBM, was the plateauing of Moore’s Law: the company couldn’t get enough boost for the tradeoffs it would have had to make. Nobody seems to be complaining about giving up that half GHz. Today the machine can process 2.5 billion transactions a day.

The ride up the Moore’s Law curve has been very enjoyable for all. Companies used the additional transistor budget to build more capabilities onto the chip, capabilities that otherwise would have required additional processors. The result: more performance and more capabilities at lower cost. But all good things come to an end.

This 7nm breakthrough doesn’t necessarily restore Moore’s Law. At this point, the best we can guess is that it temporarily moves the price/performance curve to a new plane. Until we know the economics of mass fabrication in the 7nm silicon germanium world, we can’t tell whether we’ll see a doubling as before, maybe just a half or a quarter of that, or maybe even a tripling. We just don’t know.
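For context, here is a quick back-of-the-envelope look at what the historical doubling cadence implies. The 18- and 24-month doubling periods come straight from the formulation above; the rest is arithmetic:

```python
# Back-of-the-envelope Moore's Law compounding: how many capacity
# doublings fit in a decade under the 18- and 24-month scenarios.
for months_per_doubling in (18, 24):
    doublings = 120 / months_per_doubling   # doublings in 10 years
    growth = 2 ** doublings                 # capacity multiplier
    print(f"{months_per_doubling}-month doubling: "
          f"~{growth:,.0f}x capacity in 10 years")
# 18-month doubling: ~102x; 24-month doubling: ~32x
```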

For the past decade, Morgan reports, depending on the architecture, the thermal limits of systems imposed a clock speed limit on processors, and aside from some nominal instructions per clock (IPC) improvements with each recent microarchitecture change, clock speeds and performance for a processor stayed more or less flat. This is why vendors went parallel with their CPU architectures, in effect adding cores to expand throughput rather than increasing clock speed to boost performance on a smaller number of cores. Some, like IBM, also learned to optimize at every level of the stack. As the z13 demonstrates, lots of little improvements do add up.
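The z13 itself illustrates the clock-versus-IPC trade. A rough sketch using figures cited in these posts (5.0 GHz for the z13 versus 5.5 GHz for the zEC12, and the roughly 10% single-thread gain IBM claims for the z13) shows the per-cycle improvement the design must deliver; treat it as illustrative arithmetic, not an IBM engineering figure:

```python
# Single-thread capacity ~ clock speed x work per cycle (IPC).
# If the z13 gains 10% single-thread at 5.0 GHz vs. the zEC12's
# 5.5 GHz, its per-cycle work must have improved by roughly 21%.
zec12_clock, z13_clock = 5.5, 5.0       # GHz, from IBM's specs
single_thread_gain = 1.10               # z13 claim cited in these posts
ipc_gain = single_thread_gain * zec12_clock / z13_clock
print(f"Implied per-cycle (IPC) improvement: {ipc_gain:.2f}x")  # ~1.21x
```

The rest of the z13’s 40% total capacity gain comes from more cores, simultaneous multi-threading, and bigger caches.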

Things won’t stop here. As Morgan observes, IBM Research and the Microelectronics Division were working with GLOBALFOUNDRIES, Samsung, and chip-making equipment suppliers, collaborating through the SUNY Polytechnic Institute’s Colleges of Nanoscale Science and Engineering in nearby Albany, to clear a path to 10nm and then 7nm processes even as the sale of IBM’s fabrication business to GLOBALFOUNDRIES was being finalized.

The next step, he suggests, could possibly be at 4 nm but no one is sure if this can be done in a way that is economically feasible. If it can’t, IBM already has previewed the possibility of other materials that show promise.

Moore’s Law has been a wonderful ride for the entire industry. Let’s wish them the best as they aim for ever more powerful processors.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

Real Time Analytics on the IBM z13

June 4, 2015

For years organizations have been putting their analytics on distributed platforms, thinking that was the only way to get fast, real-time, predictive analytics. Maybe once, but not anymore. It turns out the IBM z System, especially the z13, is not only capable of real-time, predictive analytics but preferable for it.

IBM today is so bullish on analytics, especially predictive analytics, that last month it introduced 20 pre-built industry-specific predictive analytics solutions. To build these solutions IBM tapped its own experience from some 50,000 engagements as well as an array of outside organizations with success in predictive analytics, including Urban Outfitters, National Grid, Deloitte, Bolsa de Santiago, Interactive Data Managed Solutions, and Bendigo and Adelaide Bank, among others.


Courtesy of IBM (click to enlarge)

The truth of the matter is that without efficient real-time, predictive analytics, managers get it wrong most of the time when it comes to making operational decisions, said Paul DiMarzio, IBM z Systems Big Data and Analytics Worldwide Portfolio Marketing Manager. He spoke at IBM Edge2015 in a session titled When Milliseconds Matter: Architecting Real-Time Analytics into Operational Systems. His key point: you can do this completely within the IBM z System.

The old notion of sending data to distributed systems someplace else for analytics now appears ridiculous, especially with the introduction of systems like the z13 that can handle operations and perform real time analytics concurrently. It performs analytics fast enough that you can make decisions when the action is still going on. Now the only question is whether we have the right business rules and scoring models. The data already are there and the tools are ready and waiting on the z13.

You start with IBM SPSS Modeler with Scoring Adapter for zEnterprise. Its real-time predictive analytics capability delivers better, more profitable decisions at the point of customer impact. For business rules, turn to IBM Operational Decision Manager for z/OS, which codifies business policies, practices, and regulations.

IBM SPSS improves accuracy by scoring directly within the transactional application against the latest committed data. As such it delivers the performance needed to meet operational SLAs and avoids data governance and security issues, saving network bandwidth, data-copying latency, and disk storage in the process.
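To make the point concrete, here is a schematic sketch in Python, emphatically not the SPSS Scoring Adapter API, of why in-transaction scoring wins: the model scores against data already in hand, while a remote scorer pays a network round-trip on every transaction. The 2 ms round-trip is an assumption for illustration:

```python
# Schematic only (not the SPSS Scoring Adapter API): compare scoring
# in-process, against data already in hand, with shipping each
# transaction to a remote scoring tier.
import time

def score_locally(txn, model):
    return model(txn)                    # runs in-process, no data movement

def score_remotely(txn, model, round_trip_ms=2.0):
    time.sleep(round_trip_ms / 1000)     # simulated network round-trip
    return model(txn)

model = lambda txn: 0.92 if txn["amount"] > 10_000 else 0.08  # toy fraud score
txn = {"amount": 12_500, "account": "A123"}

t0 = time.perf_counter(); score_locally(txn, model)
local_us = (time.perf_counter() - t0) * 1e6
t0 = time.perf_counter(); score_remotely(txn, model)
remote_us = (time.perf_counter() - t0) * 1e6
print(f"local: ~{local_us:.0f} microseconds, remote: ~{remote_us:.0f} microseconds")
```

Multiply that round-trip by millions of transactions a day and the case for keeping the scoring next to the data makes itself.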

In addition to SPSS and Operational Decision Manager, the z13 brings many capabilities, some of them new with the z13. For starters, the z13 excels as a custodian of the data model, providing an accurate, secure, single copy of information that, according to IBM, ensures the veracity of the data necessary for reliable analytics and provides centralized control over decision information.

Specifically, the machine brings SIMD (single instruction, multiple data) processing and the MASS (mathematical acceleration subsystem) and ATLAS (automatically tuned linear algebra software) libraries for z/OS and Linux on z. SIMD enables the same operation to be performed on several data elements at the same time rather than sequentially. MASS and ATLAS help programmers create better and more complex analytic models.
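For a rough illustration of the SIMD idea, the sketch below uses NumPy as a stand-in for vector hardware; it is not z13 SIMD code, just the same data-parallel pattern:

```python
# SIMD in miniature: apply one operation across a whole vector of
# elements instead of looping over them one at a time.
import numpy as np

scores = np.random.rand(100_000)
weights = np.random.rand(100_000)

# Scalar style: one element at a time
total_scalar = sum(s * w for s, w in zip(scores, weights))

# Vector (SIMD) style: the multiply-accumulate spans many elements at once
total_vector = float(np.dot(scores, weights))

assert np.isclose(total_scalar, total_vector)
```

The same shift in style, applied to scoring models and analytic kernels, is what MASS and ATLAS package up for z programmers.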

Increases in memory to as much as 10 TB, faster I/O, and simultaneous multi-threading (SMT) generally boost the overall throughput of the z13, which will surely benefit any analytics being run on the machine, especially real-time, predictive analytics. Analytics on the z13 also gains from deep integration with core systems, the integrated architecture, and a single-pane management view.

The latest IBM Redbook on analytics on the z13 sums it up this way: z Systems analytics enables organizations to improve performance and lower cost by bringing the analytic processing to where the data resides. Organizations can therefore maximize their current IT investments while adding functionality and improved price/performance with the z13. And with the new z13 features, applications can gain increased throughput for operational business intelligence (operational BI) and DB2 query workloads, which saves money (hardware, software, labor).

The Redbook suggests the following example: a user with a mobile application signs on and initiates a transaction flow through an IBM MobileFirst Platform Server running on Linux on z. The event goes to an LDAP server on z/OS to validate the user’s sign-on credentials. After successful validation, the transaction proceeds through the z/OS transaction environment, where all of the data resides in DB2 z/OS. IBM CICS transactions also are processed in the same z environment, and all of the analysis is performed without moving any data, resulting in extremely fast performance. Sweet.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing on Technologywriter.com and here.

Storage Looms Large at IBM Edge2015

April 17, 2015

Been a busy year in storage with software defined storage (SDS), real-time compression, flash, storage virtualization, OpenStack, and more all gaining traction. Similarly, big data, analytics, cloud, and mobile are impacting storage. You can expect to find them and more at IBM Edge2015, coming May 10-15 in Las Vegas.

But storage continues to make news every week. Recently IBM scientists demonstrated an areal recording density triumph, hitting 123 billion bits of uncompressed data per square inch on low-cost particulate magnetic tape. That translates into the equivalent of a 220 terabyte tape cartridge that could fit in the palm of your hand, comparable to 1.37 trillion mobile text messages or the text of 220 million books, which would require a 2,200 km bookshelf spanning from Las Vegas to Houston, Texas. (See graphic below.)

Tape areal density breakthrough

Courtesy of IBM (click to enlarge)
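The arithmetic behind the 220 TB claim checks out. A few lines of Python derive the implied amount of tape, assuming standard half-inch media; the tape length is a derived figure, not an IBM specification:

```python
# Quick arithmetic behind the 220 TB cartridge claim: at 123 billion
# bits per square inch, how much half-inch tape does 220 TB imply?
density_bits_per_in2 = 123e9
capacity_bits = 220e12 * 8            # 220 TB (decimal terabytes)

area_in2 = capacity_bits / density_bits_per_in2
tape_width_in = 0.5                   # standard half-inch tape (assumption)
length_m = (area_in2 / tape_width_in) * 0.0254

print(f"~{area_in2:,.0f} sq in of media, ~{length_m:,.0f} m of tape")
# ~14,309 sq in and ~727 m, in line with the roughly 1 km of tape
# wound into today's cartridges
```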

Let’s take a look at some sessions delving into the current hot storage topics at Edge2015, starting with tape, since we’ve been talking about it.

(sSS1335) The Future of Tape; presenter Mark Lantz. He discusses current and future scaling trends of magnetic tape technology—see the announcement above—from the perspective of IBM Research. He begins by comparing recent scaling trends of both tape and hard disk drive technology. He then looks at the future capacity scaling potential of tape and hard disks. In that context he offers an in-depth look at a new world-record tape areal density demonstration of more than 100 Gb/in2, performed by IBM Research in collaboration with Fujifilm using low-cost particulate tape media. He also discusses the new hardware and tape media technologies developed for this demonstration as well as key challenges for the continued scaling of tape.

If you are thinking future, check out this session too. (sBA2523) Part III: A Peek into the Future; presenter Bruce Hillsberg. This session looks at novel and innovative technologies that address clients’ most challenging technical and business problems across a wide range of technologies and disciplines. The presentation looks at everything from the most fundamental materials level all the way to working on the world’s largest big data problems. Many of the technologies developed by the Storage Systems research team lead to new IBM products or become new features in existing products. Topics covered in this lecture include atomic-scale storage, research into new materials, advances in current storage media, advanced object stores, cloud storage, and more.

Big data, flash, and the z13 all combine here. (sBA1952) How System z13 and IBM DS8870 Flash Technology Enables Your Hadoop Environments; presenter Renan Ugalde. Analyzing large amounts of data introduces challenges that can impact the goals of any organization. Companies require a reliable and high-performing infrastructure to extract value from their structured and unstructured data. The unique features offered by the integration of IBM System z13 and DS8870 Flash technology enable a platform to support real-time decisions such as fraud detection. This session explains how integration among System z13, DS8870, and Hadoop maximizes performance by enabling the infrastructure’s unique big data capabilities.

Jon Toigo is an outstanding non-IBM presenter and somewhat of an iconoclast when it comes to storage. This year he is offering a 3-part session on Disaster Recovery Planning in an Era of Mobile Computing and Big Data:

  • (aBA2511) Part I: For all the hype around hypervisor-based computing and new software-defined infrastructure models, the ongoing need for disaster preparedness is often being buried in the discussion. High availability server clustering is increasingly believed to trump disaster recovery preparations, despite the fact that the transition to an agile data center is fraught with disaster potentials. In the first of three sessions, Toigo looks at the trends that are occurring in IT and the potential they present for disruption.
  • (sBA2512) Part II: builds on the previous session by examining the technologies available for data protection and the trend away from backups in favor of real-time mirroring and replication. He notes promising approaches, including storage virtualization and object storage, that can make a meaningful contribution.
  • (sBA2513) Part III: completes his disaster recovery planning series with the use of mobile computing technologies and public clouds as an adjunct to successful business recovery following an unplanned interruption event. Here he discusses techniques and technologies that either show promise as recovery expediters or may place businesses at risk of an epic fail.

Several SDS sessions follow: (sSS0884) Software Defined Storage — Why? What? How? Presenter: Tony Pearson. Here Pearson explains why companies are excited about SDS, what storage products and solutions IBM has to offer, and how they are deployed. This session provides an overview of the new IBM Spectrum Storage family of offerings.

A second session by Pearson, (sCV3179) IBM Spectrum Storage Integration in IBM Cloud Manager with OpenStack: IBM’s Cloud Storage Options, looks at the value of IBM storage products in the cloud with a focus on OpenStack. Specifically, it will look at how Spectrum Virtualize can be integrated and used in a complete 3-tier app with OpenStack.

Finally, (sSS2453) Myth Busting Software Defined Storage – Top 7 Misconceptions; presenter Jeffrey Barnett. This session looks at the top misconceptions to cut through the hype and understand the real value potential. DancingDinosaur could only come up with six misconceptions. Will have to check out this session for sure.

Of course, save time for the free evening entertainment. In addition to Penn & Teller, a pair of magicians, and rocker Grace Potter, there also will be a weird but terrific group, 2Cellos. Stick with their performance to the end (about 3 min.) for the kicker.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing on Technologywriter.com and here. And join DancingDinosaur at IBM Edge2015. You will find me hanging out wherever people gather around available power outlets to recharge mobile devices.

IBM Edge 2015 as All Platforms Infrastructure Innovation Conference

April 2, 2015

Please join DancingDinosaur at IBM Edge2015 at the Venetian in Las Vegas, May 10-15. It will consist of an Executive Edge track and a Technical track. The program is crammed with hundreds of sessions.  You can find the Technical track session list here. Dare you to find even 10 sessions that don’t interest you.


Courtesy of IBM: Marie Wieck, General Manager, Middleware (click to enlarge)

This year Edge2015 merges last year’s two events, IBMEdge and Enterprise 2014, into what IBM calls the Infrastructure Innovation Conference. It is IBM’s only US event covering all IBM platforms—System Storage, IBM z Systems, IBM Power Systems, and IBM Middleware at a single venue.  It includes three Technical Universities: System Storage, z Systems, and Power Systems for those working toward certification.

Executive Edge, which this post will look at a little more closely below, offers an exclusive two-day summit for IT executives and leaders featuring, as IBM explains, the latest innovations and technology announcements, client success stories, and insightful presentations from IBM executives and industry thought leaders. Plus, IBM promises top-tier, one-on-one executive meetings and exclusive networking opportunities.

The IBM Systems and Technology Group (STG) top brass will be there. The IBM Systems lineup includes: Tom Rosamilia, Senior Vice President; Stephen Leonard, General Manager, Sales; Jamie M. Thomas, General Manager, IBM Storage & Software Defined Systems; Ross Mauri, General Manager, z Systems; Doug Balog, General Manager, Power Systems; and Marie Wieck, General Manager, Middleware.

And then there is the free entertainment IBM provides. The headliner act is Penn & Teller, a pair of magicians. DancingDinosaur’s favorite, however, is Grace Potter, who delivers terrific hard rock and roll. If you skip the casinos you can catch both and avoid losing money in the process.

With the Executive track IBM promises to present its most innovative approaches to using IBM Systems and Middleware as a foundation for challenging new areas of information management including:

  • Cloud Infrastructure, especially hybrid clouds
  • Big Data and Analytics
  • Systems of Record
  • Systems of Engagement
  • Mobile and Security Services
  • Flash and Storage Virtualization
  • Software Defined Infrastructure

Cloud and big data/analytics have become accepted pillars of IT business value. Mobile, flash, and software-defined infrastructure are being widely embraced as the next wave of IT value delivery. And security must be a priority for everything. Also included will be dozens of client case studies.

Throughout both the Executive and Technology tracks there will be numerous sessions citing client cases and use cases. Although not the same thing, both show how to actually deploy technology for business value.

For example, the session (cCV0821) titled Be Hybrid or Die revolves around hybrid clouds. The session promises a clear understanding of the term hybrid and why hybrid has become the next step in IT value creation, extraction, and efficiency gains. Citing use cases, the session will show how to map your business needs to the functional building blocks of hybrid clouds and to the IBM product portfolio that addresses those needs. It concludes by examining where IBM is investing, its long-term view, and how this will increase your IT flexibility.

Speaking of cases, a session (cIT0514) titled How to Create Rock Solid Business Cases to Get IT Projects Approved looks at the subject from the standpoint of the IT person charged with answering the unavoidable ROI question. BTW, DancingDinosaur develops business cases based on various looks at cost of ownership, should you need help. They are time-consuming but necessary. Management requires an understanding of current IT costs and strengths and the expected payback to better assess new ideas and technologies. This session identifies the key elements of an effective IT cost analysis and explores how to build compelling business cases around those costs and, you hope, quantifiable benefits. Concepts discussed include complexity analysis, platform assessment, Fit for Purpose analysis, and financial case structures. Hmmm, definitely one DancingDinosaur will attend.

Another session looks at the first customer experiences using SAP HANA on Power. SAP HANA is the company’s in-memory, column-oriented RDBMS, which handles both high-volume transactions and complex analytical query processing on the same platform, and does so very fast since everything is in memory. The session, (lBA0464) titled SAP HANA on Power Systems: First Experiences from Customers in the Test and Evaluation Program, reports on the first experiences gathered from the pilot clients. This presentation begins with a short overview of SAP HANA in general, and then covers specific aspects of deploying SAP HANA on IBM Power Systems and IBM storage. You’ll hear about the advantages of SAP HANA on Power Systems (vs. x86) and discover how fast and easy it is to implement in a private cloud with full use of PowerVM capabilities.

In about six weeks DancingDinosaur will be heading to IBM Edge2015. Please join me there. You can find me hanging out wherever people gather around available power outlets to recharge mobile devices. DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing on Technologywriter.com and here. And join DancingDinosaur at IBM Edge2015.

BMC and Compuware to Drive Down Mainframe Costs

February 20, 2015

This year jumped off to an active start for the mainframe community. The introduction of the z13 in January got things going. Now Compuware and BMC are partnering to integrate some of their mainframe tools to deliver cost-aware workload and performance management. The combined tools promise to reduce mainframe OPEX even as z Systems shops try to leverage their high-value mainframe applications, data, and processing capacity to meet rapidly evolving business challenges.


Not that things had been quiet before, especially if you consider IBM scrambling to reverse successive quarters of poor financial performance with a slew of initiatives. During that time Compuware went private last fall; about a year earlier BMC went private. Now you have two companies collaborating to deliver tools that will help mainframe shops reduce their software costs. DancingDinosaur has covered previous cost-saving and efficiency initiatives from each of these companies here and here.

Driving this collaboration is the incessant growth of new mainframe workloads, which will likely accelerate with the new z13. Such workload growth continually drives up the Monthly License Charge (MLC) for IBM mainframe software; in sub-capacity environments those charges generally are set by the highest rolling four-hour average (R4HA) of mainframe utilization across all applications on each LPAR, as measured in MSUs. IBM is helping with discounts for mobile workloads and its new ICAP and country multiplex pricing, which DancingDinosaur covered here, but more is needed.

The trick requires continually managing those workloads. In effect, IT can most effectively reduce its sizable IBM z Systems software costs by both 1) tuning each application to minimize its individual consumption of mainframe resources and 2) orchestrating application workloads to minimize the LPAR utilization peaks they generate collectively at any given time. A good idea, but not easy to implement in practice; you need automated tools.
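To see why the peaks matter so much, here is a minimal sketch of the R4HA calculation with illustrative numbers: utilization is sampled every five minutes, the rolling average spans the last 48 samples, and a single sustained spike can set the bill for the month:

```python
# Minimal R4HA sketch: with 5-minute MSU samples, the rolling
# four-hour average covers 48 samples; sub-capacity MLC keys off
# the monthly peak of that rolling average.
def peak_r4ha(msu_samples, window=48):
    peak = 0.0
    for i in range(window, len(msu_samples) + 1):
        peak = max(peak, sum(msu_samples[i - window:i]) / window)
    return peak

samples = [200] * 8640                 # a 30-day month of flat 200 MSU load
samples[1000:1048] = [500] * 48        # one four-hour batch spike to 500

print(f"Peak R4HA: {peak_r4ha(samples):.0f} MSUs")   # the spike sets the bill
```

Flatten that one spike, by rescheduling or tuning the offending job, and the billed peak drops from 500 MSUs back toward 200. That, in miniature, is what the integrated BMC-Compuware tooling automates.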

According to Frank DeSalvo, former research director at Gartner: “The partnership between BMC and Compuware launches an integrated opportunity for mainframe customers to manage workload inefficiencies in a manner that has not been achievable to-date.” The partnership “helps organizations leverage their IT budgets by enabling them to continuously optimize their mainframe workloads, resulting in cost effective decisions for both current and future spending,” as DeSalvo was quoted in the initial announcement.

Specifically, the Compuware-BMC collaboration brings together three products: BMC Cost Analyzer, BMC MainView, and Compuware Strobe.

  • BMC Cost Analyzer for zEnterprise brings a financially intelligent workload management tool that enables z data centers to identify MLC cost drivers and take appropriate measures to reduce those costs.
  • BMC MainView provides real-time identification of application performance issues, enabling customers to quickly eliminate wasteful MSU consumption.
  • Compuware Strobe delivers deep, granular and highly actionable insight into the behavior of application code in the z systems environment.

The partners integrated the products so they actually work together. One integration, for instance, allows BMC Cost Analyzer to call Compuware Strobe for a detailed analysis of the specific application components driving peak MLC periods, enabling customers to proactively tune the applications that have the greatest impact on their monthly software licensing costs. A second integration, with BMC MainView, allows customers to either automatically or manually invoke Strobe performance analysis, empowering mainframe staffs to perform cost-saving tuning tasks more quickly, efficiently, and consistently.

Courtesy of Compuware (click to enlarge)

BTW, at the same time Compuware introduced the latest version of Strobe, v5.2. It promises deep insight into how application code—including DB2, COBOL 5.1, IMS, and MQ processes—consumes resources in z environments. By providing these insights, and by making it easy for multi-discipline mainframe ops teams to collaborate around them, Strobe 5.2 enables IT to further drive down mainframe costs. At the same time it improves application responsiveness.

Besides the software licensing savings, the organization also benefits from performance gains in these applications. These too can be valuable since they positively impact end-user productivity and, more importantly, customer experience.

DancingDinosaur feels that any technology you can use to automate and streamline your systems operations will benefit you because people are always more expensive and less efficient than technology.

Alan Radding is DancingDinosaur. Follow this blog on Twitter, @mainframeblog. View my other IT writing at Technologywriter.com and here.

IBM z Systems as a Cloud Platform

February 13, 2015

DancingDinosaur wrote a long paper for an audience of x86 users. The premise of the paper: the z Systems in many cases could be a better and even lower cost alternative to x86 for a private or hybrid cloud. The following is an excerpt from that paper.


BTW, IBM earlier this month announced it signed a 10-year, large-scale services agreement with Shop Direct to move the multi-brand digital retailer to a hybrid cloud model to increase flexibility and quickly respond to changes in demand as it grows, one of many such IBM wins recently. The announcement never mentioned Shop Direct’s previous platform. But it or any company in a similar position could have opted to build its own hybrid (private/public) cloud platform.

A hybrid cloud a company builds today probably runs on the x86 platform and the Windows OS. Other x86-based clouds run Linux. As demand for the organization’s hybrid cloud grows and new capabilities are added, traffic increases. The conventional response is to scale out or scale up, adding more or faster x86 processors to handle more workloads for more users.

So, why not opt for a hybrid cloud running on the z? As a platform, x86 is far from perfect; too unstable and insecure, for starters. By adopting a zEC12 or a z13 to host your hybrid cloud you get one of the fastest general commercial processors in the market and the highest security rating for commercial servers (EAL 5+). But most x86-oriented data centers would balk. Way too expensive would be their initial reaction. Even if they took a moment to look at the numbers, their IT staff would be in open revolt and give you every reason it couldn’t work.

The x86 platform, however, is not nearly as inexpensive as is commonly believed, and there are many ways to make the z cost competitive. Due to the eccentricities of Oracle licensing on z Systems, for instance, organizations often can justify the entire cost of the mainframe just from the annual Oracle software license savings. This can amount to hundreds of thousands of dollars or more each year. And the entry-level mainframe has a list price of $75,000, not much more than an x86 system of comparable capacity in MIPS terms. And that’s before you start calculating the cost of the x86 redundancy, failover, and zero downtime that come built into the mainframe, or consider security. Plus, with the z Systems Solution Edition program, IBM is almost giving the mainframe away for free.
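The Oracle point deserves a back-of-the-envelope sketch. Every figure below is hypothetical, including the core factors and the per-core prices, so check Oracle’s current core factor table and your own contract before drawing conclusions; the shape of the math, many lightly used x86 cores versus a few busy IFLs, is what matters:

```python
# Hypothetical Oracle consolidation math (all numbers illustrative):
# licenses are bought per core, scaled by a per-platform core factor,
# and annual support is a percentage of the license price.
x86_cores, z_ifls = 120, 10              # many underused x86 cores vs. a few IFLs
factor_x86, factor_z = 0.5, 1.0          # assumed core factors -- verify yours
license_per_core = 47_500                # assumed list price per processor
support_rate = 0.22                      # assumed annual support percentage

def annual_support(cores, factor):
    return cores * factor * license_per_core * support_rate

savings = annual_support(x86_cores, factor_x86) - annual_support(z_ifls, factor_z)
print(f"Hypothetical annual support savings: ${savings:,.0f}")  # ~$522,500
```

Even with the mainframe’s higher assumed core factor, needing an order of magnitude fewer cores is what produces the six-figure annual savings the consolidation studies keep reporting.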

Some x86 shops could think of the mainframe as a potent Linux machine that can handle thousands of Linux instances without breaking a sweat. The staff wouldn’t even have to touch z/OS. It also runs Java and Hadoop. And it delivers an astonishingly fast and efficient Linux environment, providing a level of performance that would require a much greater number of x86 cores to match. And if you want to host an on-premises or hybrid cloud at enterprise scale, it takes a lot of cores. The cost of acquiring all those x86 cores, deploying them, and managing them will break almost any budget.

Just ask Jim Tussing, Chief Technology Officer for infrastructure and operations at Nationwide Insurance (DancingDinosaur has covered Tussing before): “We had literally 3000 x86 servers deployed that were underutilized,” which is common in the x86 environment even with VMware or Hyper-V virtualization. At a time when Nationwide was seeking to increase the pace of innovation across its products and channels, new environments were taking weeks or months to provision and deploy, again not unheard of in the x86 world. The x86 environment at Nationwide was choking the company.

So, Nationwide consolidated and virtualized as many x86 servers on a mainframe as possible, creating what amounted to an on-premises and hybrid cloud. The payoff: Nationwide reduced power, cooling, and floor space requirements by 80 percent. And it finally reversed the spiraling expenditure on its distributed server landscape, saving an estimated $15 million over the first three years, money it could redirect into innovation and new products. It also could provision new virtual server instances fast and tap the hybrid cloud for new capabilities.

None of this should be news to readers of DancingDinosaur. However, some mainframe shops still face organizational resistance to mainframe computing. Hope this might help reinforce the z case.

DancingDinsosaur is Alan Radding, a long-time IT analyst/writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of my IT writing at Technologywriter.com and here.

New Software Pricing for IBM z13

February 6, 2015

Every new mainframe causes IBM to rethink its pricing. This makes sense because mainframe software licensing is complex. The z13 enables different workloads and combinations of uses that merit reexamining the software licensing. But overall, IBM is continuing its strategy to enhance software price/performance with each generation of hardware. This has been the case for as long as DancingDinosaur has been covering the mainframe. (click graphic below to enlarge)

 IBM z13 technology update pricing

DancingDinosaur, along with other mainframe analysts, recently listened to Ray Jones, IBM Vice President, z Systems Sales, go through the new z13 software pricing. In short, expect major new structural enhancements in the first half of 2015. Of particular interest will be two changes IBM is instituting:

  1. IBM Collocated Application Pricing (ICAP), which lets you run your systems the way that makes sense in your organization
  2. Country Multiplex Pricing, an evolution of Sysplex pricing that allows for greater flexibility and simplicity by treating all your mainframes in one country as a single sysplex.

Overall, organizations running the z under AWLC should see a 5% discount on average.

But first let’s take a moment to review AWLC (Advanced Workload License Charges). From the start, this monthly license program has been intended to allow you to grow hardware capacity without necessarily increasing software charges. In general you’ll experience a low cost of incremental growth, and you can manage software cost by managing workload utilization and deployment across LPARs and peak hours.

A brief word about MSUs. DancingDinosaur thinks of the MSU as a mainframe service unit; officially it stands for millions of service units and measures the amount of processing capacity of your mainframe. IBM determines the MSU rating of a particular mainframe configuration by some arcane process invisible to most of us. The table above starts with MSUs; just use the number IBM has assigned your z configuration.

OK, now we’re ready to look at ICAP pricing. IBM describes ICAP as the next evolution of z Systems sub-capacity software pricing. ICAP allows workloads to be priced as if they ran in a dedicated environment even though you technically have integrated them with other workloads. In short, you can run your systems and deploy your ICAP workloads the way you want to run them. For example, you might want to run a new anti-fraud app or a new instance of MQ on the same LPAR where you’re running some other workload.

ICAP is for new workloads you’re bringing onto the z. You have to define the workloads and are responsible for collecting and reporting the CPU time. It can be as simple as creating a text file to report it. However, don’t rush to do this; IBM suggested an ICAP enhancement to the MWRT sub-capacity reporting tool will be coming.

In terms of ICAP’s impact on pricing, IBM reports no effect on the reported MSUs for other sub-capacity middleware programs (ICAP adjusts MSUs like an offload engine, similar to Mobile Workload Pricing for z/OS). z/OS shops will see 50% of the ICAP-defining program MSUs removed, which can result in real savings. IBM reports that ICAP provides a price benefit similar to zNALC for z/OS, but without the requirement for a separate LPAR. Remember, with ICAP you can deploy your workloads where you see fit.
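Here is what that adjustment looks like in a toy calculation; the numbers are illustrative, not IBM’s:

```python
# Toy ICAP adjustment, as described above: half the MSUs of the
# ICAP-defined workload come off the z/OS MSUs used for billing.
lpar_peak_msus = 1_000        # R4HA peak for the LPAR
icap_workload_msus = 200      # CPU consumed by the new ICAP-defined workload

billed_msus = lpar_peak_msus - 0.5 * icap_workload_msus
print(f"z/OS billed at {billed_msus:.0f} MSUs instead of {lpar_peak_msus}")
```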

Under Country Multiplex Pricing, a country multiplex is the collection of all zEnterprise and later machines in a country, measured like one machine for sub-capacity reporting (applicable to all z196, z114, zEC12, zBC12, and z13 machines). It amounts to a new way of measuring and pricing MSUs, as opposed to aggregating under current rules. The result should be the flexibility to move and run work anywhere, the elimination of Sysplex pricing rules, and the elimination of duplicate peaks when workloads move between machines.
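The duplicate-peak point is easiest to see in a toy example: when work moves between two machines mid-month, per-machine measurement can capture the same workload’s peak twice, while a country-wide view peaks only once (numbers are illustrative):

```python
# Duplicate peaks in miniature: a workload moves from machine A to
# machine B mid-month. Per-machine peaks double-count it; the
# country-wide (multiplex) total peaks only once.
machine_a = [400, 400, 100, 100]   # MSU peaks per week; work moves off A...
machine_b = [100, 100, 400, 400]   # ...and onto B

sum_of_peaks = max(machine_a) + max(machine_b)                     # old view: 800
multiplex_peak = max(a + b for a, b in zip(machine_a, machine_b))  # new view: 500
print(f"Per-machine: {sum_of_peaks} MSUs vs. multiplex: {multiplex_peak} MSUs")
```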

In the end, the cost of growth is reduced with one price per product based on growth anywhere in the country. Hardware and software migrations also become greatly simplified because Single Version Charging (SVC) and Cross Systems Waivers (CSW) will no longer be relevant.  And as with ICAP, a new Multiplex sub-capacity reporting tool is coming.

Other savings also remain in play, especially the z/OS mobile pricing discounts, which significantly reduces the level at which mobile activity is calculated for peak load pricing. With the expectation that mobile activity will only grow substantially going forward, these savings could become quite large.

DancingDinosaur is Alan Radding, a veteran mainframe and IT writer and analyst. Follow DancingDinosaur on Twitter, @mainframeblog. See more of my writing at Technologywriter.com and here.

IBM’s z13 Redefines Mainframe Performance, Economics, and Versatility

January 14, 2015

With the introduction today of the new IBM z13, the latest rev of the 50-year-old mainframe product line, it will be hard for IT people to persist in the mistaken belief that the mainframe can’t handle today’s workloads or that it is too expensive. Built around an 8-core, 22nm processor, the IBM z13’s 141 configurable cores (any mix of CP, IFL, zIIP, ICF, and SAP) deliver a 40% total capacity improvement over the zEC12.


The z13 looks like the zEC12 but under the hood it’s far more powerful

The IBM z13 will handle up to 8,000 virtual enterprise-grade Linux servers per system, more than 50 per core. Remember when Nationwide Insurance consolidated 3000 x86 servers, mainly running Linux, onto a System z and saved $15 million over three years, a figure later revised considerably higher? They got a lot of press out of that, including from DancingDinosaur as recently as last May. With the IBM z13, Nationwide could consolidate more than twice the number of Linux servers at a lower cost, and the resulting savings would be higher still.

If you consider Linux VMs synonymous with cloud services, the new machine will enable superior cloud services at up to 32% lower cost than an x86-based cloud. It also will cost up to 60% less than a public cloud over three years. In almost every metric, the IBM z13 delivers more capacity or performance at lower cost.

IBM delivered an almost constant stream of innovations that work to optimize performance and reduce cost. For example, it boosted single-thread capacity by 10% over the zEC12. It also delivers 3x more memory to help both z/OS and Linux workloads. The added memory, combined with a new cache design, improved I/O bandwidth, and compression, will boost analytics on the machine. In fact, with the z13 you can do in-memory analytics if you want it.

The one thing it doesn’t do is boast the fastest commercial processor in terms of sheer clock speed. The zEC12 processor still is faster, but with all the optimizations and enhancements IBM has built in, the z13 should beat the zEC12 in handling the workloads organizations most want to run. For instance, the z13 performs 2x faster than the most common server processors, offers 300 percent more memory and 100 percent more bandwidth, and delivers vector processing analytics to speed mobile transactions. As a result, the z13 transaction engine is capable of analyzing transactions in real time.

Similarly, simultaneous multi-threading delivers more throughput for Linux and zIIP-eligible workloads while larger caches optimize data serving. It also improved on-chip hardware compression, which saves disk space and cuts data transfer time.  Also, there is new workload container pricing and new multiplex pricing, both of which again will save money.

In addition, IBM optimized this machine for both mobile and analytics, as well as for cloud. This is the new versatility of this redefined mainframe. Last year, IBM discounted the cost of mobile transactions on the z. The new machine continues to optimize for mobile with consolidated REST APIs for all z/OS transactions through z/OS Connect while seamlessly channeling z/OS transactions to mobile devices with the MobileFirst Platform. It also ensures end-to-end security from mobile device to mainframe with z/OS, RACF, and MobileFirst products.

For analytics, IBM continues to optimize Hadoop and expand the analytics portfolio on the z13. Specifically, the massive memory capability, up to 10TB, opens new opportunities for in-memory computing. The ability to perform analytics by combining data from different data sources and do it in-memory and in real-time within the platform drives more efficiencies, such as eliminating the need for ETL and the need to move data between platforms, as had previously often been the case. Now, just use Hadoop on z to explore data there within the secure zone of the mainframe. This opens a wide variety of analytics workloads, anything from fraud prevention to customer retention.

In addition to improved price/performance overall, IBM announced Technology Update Pricing for z13, including AWLC price reductions for z13 that deliver 5% price/performance on average in addition to performance gains in software exploitation of z13. DancingDinosaur will dig deeper into the new z13 software pricing in a subsequent post.

And the list of new and improved capabilities with the z13 just keeps going on and on. For security, IBM has accelerated the speed of encryption up to 2x over the zEC12 to help protect the privacy of data throughout its life cycle. It also extended enhanced public key support for constrained digital environments using Elliptic Curve Cryptography (ECC), which helps applications like Chrome, Firefox, and Apple’s iMessage. In addition, the z13 sports a few I/O enhancements, like being the first system to use a standards-based approach for enabling Forward Error Correction for a complete end-to-end solution.

Finally, IBM has not abandoned hybrid computing, where you can mix a variety of blades, including x86 Windows blades and others in the zBX extension cabinet. With the z13 IBM introduced the new Mod 004 zBX cabinet, an upgrade from the previous Mod 002 and 003.

DancingDinosaur expects the introduction of the z13, along with structural organization changes, to drive System z quarterly financial performance back into the black as soon as deliveries roll. And if IBM stays consistent with past behavior, within a year or so you can expect a scaled-down, lower-cost business class version of the z13, although it may not be called business class. Stay tuned; it should be an exciting year.

DancingDinosaur is Alan Radding, a long-time IT analyst and writer. You can follow him on Twitter, @mainframeblog, or check out more of his writing and analysis at Technologywriter.com or here.

The Mainframe at the Heart of the Security Storm

December 18, 2014

A survey of Chief Information Security Officers (CISOs) released by IBM in early December found more than 80% of security leaders believe the challenge posed by external threats is on the rise, while 60% also agree their organizations are outgunned in the cyber war. Even mainframe shops—the zEC12 has received the highest security rating for commercial servers, EAL 5+—should not get complacent. There are a lot of bad guys gunning for the data center. Just ask Sony.


At least top management is putting resources into security. Three quarters of the CISO respondents expect their security budgets to increase dramatically over the next 3-5 years. IBM is jumping in with a security paper geared specifically for mainframe shops titled Security Intelligence for Mainframe Environments.

So what are the threats keeping CISOs awake at night? Based on the study, sophisticated external threats were identified by 40% of security leaders as their top concern. Expect the extra budget to be thrown at these threats, which will require the most organizational effort over the next three to five years, as much as regulations, new technologies, and internal threats combined, according to the IBM analysts.

Although a majority of the CISOs surveyed appear confident their mature, traditional technologies that focus on network intrusion prevention, advanced malware detection, and network vulnerability scanning will fend off outside threats, nearly half reported that deploying new security technology is the top focus area for their organization. Their top worries: data leakage, cloud security, and mobile/device security.

Some other interesting findings from the survey:

  • While concern over cloud security remains strong, still close to 90% of respondents have adopted cloud or are currently planning cloud initiatives. Of this group, most expect their cloud security budget to increase dramatically over the next three to five years.
  • Over 70% of security leaders said real-time security intelligence is increasingly important to their organization. Yet about half found areas such as data classification and discovery and security intelligence analytics have relatively low maturity and require improvement or transformation.
  • Not surprisingly, despite the growing mobile workforce, only 45% believe they have an effective mobile device management approach. According to the study, mobile and device security ranked at the bottom of the maturity list.

Although your data center provides a tempting target to attackers, it also can protect you with an effective counter-punch. That counter-punch is delivered through increasingly powerful and fast analytics, especially real-time analytics. The objective is to identify attacks as they are underway. Otherwise, you are left scrambling to close the proverbial barn door after the horses (data) have left.

This will entail systems that identify who did what and when, recognizing what’s normal behavior versus abnormal, and obtaining visibility into subtle connections between millions of data points. This requires a great deal of contextual data and the analytical means to make sense of it. And here is where you come in: your team needs to integrate mainframe data with distributed events to gain insights that apply to the entire enterprise.
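As a toy illustration of the normal-versus-abnormal piece (no specific product or z API implied), a simple statistical baseline over merged event counts can flag an outlier; real security intelligence layers far more context on top of this:

```python
# Toy anomaly check: baseline a user's hourly event rate from merged
# mainframe and distributed logs, then flag hours far outside it.
from statistics import mean, stdev

hourly_events = [12, 15, 11, 14, 13, 12, 16, 14, 13, 97]  # toy counts; last hour spikes
baseline = hourly_events[:-1]
mu, sigma = mean(baseline), stdev(baseline)

latest = hourly_events[-1]
z_score = (latest - mu) / sigma
if z_score > 3:
    print(f"Anomaly: {latest} events/hour (z-score {z_score:.1f})")
```

The hard part, as the list below suggests, is not the statistics; it is assembling trustworthy, cross-platform context in the first place.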

In fact, IBM identifies a series of issues that put the mainframe squarely at the heart of the challenge and the solution:

  • Complexity: The mainframe is an integral component of multiple, often large and complex business services, making it difficult to identify and analyze threats.
  • Visibility: Mainframe processes, procedures and reports are often siloed, impeding cross-enterprise information sharing to combat threats. (But silos also help protect mainframe data—be selective in breaking down the silos.)
  • Compliance: Verification of compliance is frequently a manual task—with problem alerts all too often received only after a problem has occurred.
  • Cost: Mainframe management requires highly skilled administrators, who often are costly and in short supply.

You already have many of the solutions IBM recommends, like RACF, CA-Top Secret, and CA-ACF2. The mainframe security paper cited above covers the rest. Given what happened to Sony, it’s worth reading the paper closely.

Best wishes for the holidays. DancingDinosaur is Alan Radding. You can follow DancingDinosaur on Twitter, @mainframeblog. Check out more of my IT writing and analysis at Technologywriter.com and here.

