Posts Tagged ‘System z’

Storage Looms Large at IBM Edge2015

April 17, 2015

Been a busy year in storage with software defined storage (SDS), real-time compression, flash, storage virtualization, OpenStack, and more all gaining traction. Similarly, big data, analytics, cloud, and mobile are impacting storage. You can expect to find them and more at IBM Edge2015, coming May 10-15 in Las Vegas.

But storage continues to make news every week. Recently IBM scientists demonstrated an areal recording density triumph, hitting 123 billion bits of uncompressed data per square inch on low-cost particulate magnetic tape. That translates into the equivalent of a 220 terabyte tape cartridge that could fit in the palm of your hand, or comparable to 1.37 trillion mobile text messages or the text of 220 million books, which would require a 2,200 km bookshelf spanning from Las Vegas to Houston, Texas (see graphic below).

Tape areal density breakthrough

Courtesy of IBM (click to enlarge)
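Curious how 123 billion bits per square inch translates into a cartridge capacity? Here is a rough back-of-the-envelope check in Java. The tape dimensions are assumptions (LTO-class media is 12.65 mm wide and roughly a kilometer long); IBM's announcement gives only the density and the 220 TB result.

```java
// Rough sanity check of the 220 TB claim. The tape dimensions below are
// assumptions typical of LTO-class cartridges, not figures from IBM.
public class TapeCapacity {
    public static void main(String[] args) {
        double bitsPerSqIn  = 123e9;            // demonstrated areal density
        double widthInches  = 12.65 / 25.4;     // assumed 12.65 mm tape width
        double lengthInches = 1_000 * 39.37;    // assumed ~1,000 m tape length
        double totalBits    = bitsPerSqIn * widthInches * lengthInches;
        double terabytes    = totalBits / 8 / 1e12;
        System.out.printf("Approximate raw capacity: %.0f TB%n", terabytes);
        // Prints roughly 300 TB; usable tape length and formatting overhead
        // presumably account for the gap down to IBM's 220 TB figure.
    }
}
```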

Let’s take a look at some sessions delving into the current hot storage topics at Edge2015, starting with tape, since we’ve been talking about it.

(sSS1335) The Future of Tape; presenter Mark Lantz. He discusses current and future scaling trends of magnetic tape technology—see announcement above—from the perspective of IBM Research. He begins by comparing recent scaling trends of both tape and hard disk drive technology. He then looks at the future capacity scaling potential of tape and hard disks. In that context he offers an in-depth look at a new world record tape areal density demonstration of more than 100 Gb/in², performed by IBM Research in collaboration with Fujifilm, using low-cost particulate tape media. He also discusses the new hardware and tape media technologies developed for this demonstration as well as key challenges for the continued scaling of tape.

If you are thinking future, check out this session too. (sBA2523) Part III: A Peek into the Future; presenter Bruce Hillsberg. This session looks at novel and innovative technologies to address clients’ most challenging technical and business problems across a wide range of technologies and disciplines. The presentation looks at everything from the most fundamental materials level all the way to working on the world’s largest big data problems. Many of the technologies developed by the Storage Systems research team lead to new IBM products or become new features in existing products. Topics covered in this lecture include atomic-scale storage, research into new materials, advances in current storage media, advanced object stores, cloud storage, and more.

Combine big data, flash, and the z13 all here. (sBA1952) How System z13 and IBM DS8870 Flash Technology Enables Your Hadoop Environments; presenter Renan Ugalde. Analyzing large amounts of data introduces challenges that can impact the goals of any organization. Companies require a reliable and high performing infrastructure to extract value from their structured and unstructured data. The unique features offered by the integration of IBM System z13 and DS8870 Flash technology enable a platform to support real-time decisions such as fraud detection. This session explains how integration among System z13, DS8870, and Hadoop maximizes performance by enabling the infrastructure’s unique big data capabilities.

Jon Toigo is an outstanding non-IBM presenter and somewhat of an iconoclast when it comes to storage. This year he is offering a 3-part session on Disaster Recovery Planning in an Era of Mobile Computing and Big Data:

  • (aBA2511) Part I: For all the hype around hypervisor-based computing and new software-defined infrastructure models, the ongoing need for disaster preparedness is often being buried in the discussion. High availability server clustering is increasingly believed to trump disaster recovery preparations, despite the fact that the transition to an agile data center is fraught with disaster potentials. In the first of three sessions, Toigo looks at the trends that are occurring in IT and the potential they present for disruption.
  • (sBA2512) Part II: builds on the previous session by examining the technologies available for data protection and the trend away from backups in favor of real-time mirroring and replication. He notes promising approaches, including storage virtualization and object storage, that can make a meaningful contribution.
  • (sBA2513) Part III: completes his disaster recovery planning series with the use of mobile computing technologies and public clouds as an adjunct to successful business recovery following an unplanned interruption event. Here he discusses techniques and technologies that either show promise as recovery expediters or may place businesses at risk of an epic fail.

Several SDS sessions follow: (sSS0884) Software Defined Storage — Why? What? How? Presenter: Tony Pearson. Here Pearson explains why companies are excited about SDS, what storage products and solutions IBM has to offer, and how they are deployed. This session provides an overview of the new IBM Spectrum Storage family of offerings.

A second session by Pearson, (sCV3179) IBM Spectrum Storage Integration in IBM Cloud Manager with OpenStack: IBM’s Cloud Storage Options, looks at the value of IBM storage products in the cloud with a focus on OpenStack. Specifically, it will look at how Spectrum Virtualize can be integrated and used in a complete 3-tier app with OpenStack.

Finally, (sSS2453) Myth Busting Software Defined Storage – Top 7 Misconceptions; presenter Jeffrey Barnett. This session looks at the top misconceptions to cut through the hype and understand the real value potential. DancingDinosaur could only come up with six misconceptions. Will have to check out this session for sure.

Of course, save time for the free evening entertainment. In addition to Penn & Teller, a pair of magicians, and rocker Grace Potter, there also will be a weird but terrific group, 2Cellos. Stick with it to the end (about 3 min.) for the kicker.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing on Technologywriter.com and here. And join DancingDinosaur at IBM Edge2015. You will find me hanging out wherever people gather around available power outlets to recharge mobile devices.

IBM z Systems at Edge2015

April 9, 2015

There are so many interesting z Systems sessions at IBM Edge2015 that DancingDinosaur can’t come close to attending them all or even writing about them.  Edge2015 will be in Las Vegas, May 10-15, at the Venetian, a huge hotel that just happens to have a faux Venice canal running within it (and Vegas is in the desert, remember).

The following offers a brief summation of a few z Systems sessions that jumped out at me. In the coming weeks DancingDinosaur will look at sessions on storage, Power Systems, middleware, and cross-platform topics. IBM bills Edge2015 as the Infrastructure Innovation Conference so this blog will try at least to touch on bits of all of it. Am including the session numbers and presenters, but please note that sessions and presenters may change.

radcliffe mobile as the next evolution

Courtesy of IBM (click to enlarge)

Session zBA1909; Mobile and Analytics Collide – A New Tipping Point; presenter Mark Simmonds

DancingDinosaur started following mobile on z in 2012 and was reporting IBM mobile successes as recently as last month, click here. In this session Simmonds observes organizations being driven to deliver more insight and smarter outcomes in pursuit of increasing revenue and profit while lowering business costs and risks. The ubiquity of mobile devices adds two important dimensions to business analytics: the time and location of customers. Now you have an opportunity to leverage both via the mobile channel, but only if your analytics strategy can respond to the demands of the mobile moment. At this session you’ll see how customers are using IBM solutions and the z to deliver business-critical insight across the mobile community and hear how organizations are setting themselves apart by delivering near real-time analytics.

Session zBA1822; Hadoop and z Systems; presenter Alan Fellwock

DancingDinosaur looked at Hadoop on z as early as 2011. At that point it was mainly an evolving promise. By this past fall it had gotten real, click here. In this session, Fellwock notes that various use cases are emerging that require Hadoop processing in conjunction with z Systems. In one category, the data originates on the z Systems platform itself—this could be semi-structured or unstructured data held in DB2 z/OS, VSAM, or log files in z/OS. In another category, the data originates outside z Systems—this could be social media data, email, machine data, etc.—but needs to be integrated with core data on z Systems. Security and z Systems governance become critical for use cases where data originates on z Systems. There are several z Hadoop approaches available, ranging from Hadoop on Linux to an outboard Hadoop cluster under z governance to a cloud model that integrates with SoftLayer.

Session zAD1876; Bluemix to Mainframe – Making Development Accessible in the Cloud; presenter Rosalind Radcliffe

Cloud capability and technology is changing the way enterprises go to market. DancingDinosaur interviewed Radcliffe for a posting on DevOps for the mainframe in March. DevOps is about bringing the entire organization together, including development and operations, to more efficiently deliver business value be it on premise, off premise, or in a hybrid cloud environment. This session promises to explore how IBM DevOps solutions can transform the enterprise into a high quality application factory by leveraging technology across platforms and exploiting both systems of record and systems of engagement applications. It will show how to easily expose your important data and customer applications to drive innovation in a nimble, responsive way, maintaining the logic and integrity of your time-tested systems.

Session zAD1620; APIs to the Enterprise: Unlocking Mainframe Assets for Mobile and Cloud Applications; presenter Asit Dan

The emergence of APIs has changed how organizations build innovative mobile and web applications, enter new markets, and integrate with cloud and third party applications. DancingDinosaur generally refers to this as the API economy and it will become only more important going forward. IBM z Systems data centers have valuable assets that support core business functions. Now they can leverage these assets by exposing them as APIs for both internal and external consumption. With the help of IBM API Management, these organizations can govern the way APIs are consumed and get detailed analytics on the success of the APIs and applications that are consuming them. This session shows how companies can expose z Systems based functions as APIs creating new business opportunities.
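To make the pattern concrete, here is a minimal sketch of what exposing a system-of-record function as a REST API can look like, using standard JAX-RS annotations. It is illustrative only: the class, path, and backend call are hypothetical, and IBM API Management would layer governance and analytics on top of an endpoint like this rather than replace it.

```java
// Hypothetical sketch: wrapping an existing backend lookup as a REST API.
// Not IBM API Management itself; just the kind of endpoint it would govern.
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/accounts")
public class AccountApi {

    @GET
    @Path("/{id}/balance")
    @Produces(MediaType.APPLICATION_JSON)
    public String balance(@PathParam("id") String id) {
        // In a real deployment this would invoke a CICS or IMS transaction
        // through a connector; here the system-of-record call is stubbed.
        double balance = lookupBalance(id);
        return String.format("{\"account\":\"%s\",\"balance\":%.2f}", id, balance);
    }

    private double lookupBalance(String id) {
        return 1234.56; // placeholder for the core mainframe function
    }
}
```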

Session zAD1469; Java 8 on IBM z13 – An Unstoppable Force Meets an Immovable Object; presenter Elton De Souza

What happens when you combine the most powerful commercially available machine on the planet with the latest iteration of the most popular programming language on the planet? An up to 50% throughput improvement for your generic applications and up to 2x throughput improvement for your security-enabled applications—that’s what! This session covers innovation and performance of Java 8 and IBM z13. With features such as SMT, SIMD, and cryptographic extensions (CPACF) exploitation, IBM z Systems is once again pushing the envelope on Java performance. Java 8 is packed with features such as lambdas and streams, along with improved performance, RAS, and monitoring, that continue a long roadmap of innovation and integration with z Systems. Expect to hear a lot about z13 at Edge2015.
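For readers who haven't met them yet, here is a small taste of the lambdas and streams the session refers to. The transaction amounts are invented; the point is the style, and that a pipeline like this can be parallelized to exploit the extra hardware threads SMT provides.

```java
import java.util.Arrays;
import java.util.List;

public class StreamsDemo {
    public static void main(String[] args) {
        List<Double> amounts = Arrays.asList(120.0, 87.5, 310.25, 42.0);
        // Filter with a lambda, then reduce: total of transactions over 100.
        double total = amounts.stream()
                              .filter(a -> a > 100)          // lambda predicate
                              .mapToDouble(Double::doubleValue)
                              .sum();
        System.out.printf("Large-transaction total: %.2f%n", total);
        // Swapping stream() for parallelStream() parallelizes the pipeline
        // without restructuring the code.
    }
}
```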

Of course, there is more at Edge2015 than just z Systems sessions. There also is free evening entertainment. This year the headliner act is Penn & Teller, a pair of magicians. DancingDinosaur’s favorite, however, is Grace Potter, who delivers terrific hard rock and roll. Check her out here.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing on Technologywriter.com and here. And join DancingDinosaur at IBM Edge2015. You will find me hanging out wherever people gather around available power outlets to recharge mobile devices.

IBM Edge 2015 as All Platforms Infrastructure Innovation Conference

April 2, 2015

Please join DancingDinosaur at IBM Edge2015 at the Venetian in Las Vegas, May 10-15. It will consist of an Executive Edge track and a Technical track. The program is crammed with hundreds of sessions.  You can find the Technical track session list here. Dare you to find even 10 sessions that don’t interest you.

 marie wieck with IBM poster

Courtesy of IBM: Marie Wieck, General Manager, Middleware (click to enlarge)

This year Edge2015 merges last year’s two events, IBM Edge and Enterprise 2014, into what IBM calls the Infrastructure Innovation Conference. It is IBM’s only US event covering all IBM platforms—System Storage, IBM z Systems, IBM Power Systems, and IBM Middleware—at a single venue. It includes three Technical Universities: System Storage, z Systems, and Power Systems for those working toward certification.

Executive Edge, which this post will look at a little more closely below, offers an exclusive two-day summit for IT executives and leaders, as IBM explains, featuring the latest innovations and technology announcements, client success stories, and insightful presentations from IBM executives and industry thought leaders. Plus, IBM promises top-tier, one-on-one executive meetings and exclusive networking opportunities.

The IBM Systems and Technology Group (STG) top brass will be there. This IBM Systems lineup includes: Tom Rosamilia, Senior Vice President; Stephen Leonard, General Manager, Sales; Jamie M. Thomas, General Manager, IBM Storage & Software Defined Systems; Ross Mauri, General Manager, z Systems; Doug Balog, General Manager, Power Systems; and Marie Wieck, General Manager, Middleware.

And then there is the free entertainment IBM provides. The headliner act is Penn & Teller, a pair of magicians. DancingDinosaur’s favorite, however, is Grace Potter, who delivers terrific hard rock and roll. If you skip the casinos you can catch both and avoid losing money in the process.

With the Executive track IBM promises to present its most innovative approaches to using IBM Systems and Middleware as a foundation for challenging new areas of information management including:

  • Cloud Infrastructure, especially hybrid clouds
  • Big Data and Analytics
  • Systems of Record
  • Systems of Engagement
  • Mobile and Security Services
  • Flash and Storage Virtualization
  • Software Defined Infrastructure

Cloud and big data/analytics have become accepted pillars of IT business value. Mobile, flash, and software-defined infrastructure are being widely embraced as the next wave of IT value delivery. And security must be a priority for everything. Also included will be dozens of client case studies.

Throughout both the Executive and Technology tracks there will be numerous sessions citing client cases and use cases. Although not the same, both show how to actually deploy technology for business value.

For example, the session (cCV0821) titled Be Hybrid or Die revolves around hybrid clouds. The session promises a clear understanding of the term hybrid and why hybrid has become the next step in IT value creation, extraction, and efficiency gains. Citing use cases, the session will show how to map your business needs to the functional business blocks of hybrid clouds and to the IBM products portfolio that addresses those needs. It concludes by examining where IBM is investing, its long-term view, and how this will increase your IT flexibility.

Speaking of cases, a session (cIT0514) titled How to Create Rock Solid Business Cases to Get IT Projects Approved looks at the subject of case studies from the standpoint of the IT person charged with answering the unavoidable ROI question. BTW, DancingDinosaur develops business cases based on various looks at cost of ownership should you need help. They are time-consuming but necessary. Management requires an understanding of current IT costs and strengths and the expected payback to better assess new ideas and technologies. This session identifies the key elements of an effective IT cost analysis and explores how to build compelling business cases around those costs and, you hope, quantifiable benefits. Concepts discussed include complexity analysis, platform assessment, Fit for Purpose analysis, and financial case structures. Hmmm, definitely one DancingDinosaur will attend.

Another session looks at the first customer experiences using SAP HANA on Power. SAP HANA is the company’s in-memory, column-oriented RDBMS that handles both high volume transactions and complex analytical query processing on the same platform, and does so very fast since all is in-memory. The session, (lBA0464) titled SAP HANA on Power Systems: First Experiences from Customers in the Test and Evaluation Program, reports on the first experiences gathered from the pilot clients. This presentation begins with a short overview of SAP HANA in general, and then covers specific aspects in the deployment of SAP HANA on IBM Power Systems and IBM storage. You’ll hear about the advantages of SAP HANA on Power Systems (vs. x86) and discover how fast and easy it is to implement in a private cloud with full use of PowerVM capabilities.

In about six weeks DancingDinosaur will be heading to IBM Edge2015. Please join me there. You can find me hanging out wherever people gather around available power outlets to recharge mobile devices. DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing on Technologywriter.com and here. And join DancingDinosaur at IBM Edge2015.

IBM DevOps for the Mainframe

March 27, 2015

DevOps is not just for distributed platforms. IBM has a DevOps strategy for large enterprises (usually mainframe shops) too. Nationwide, a longtime mainframe shop, is an early adopter of DevOps and already is reporting significant gains: reduction in critical software defects by 80% and a 20% efficiency gain in its maintenance and support operations in just 18 months.

DevOps, an agile methodology, establishes a continuous feedback loop between software development and deployment/operations that speeds development and deployment while ensuring quality. This is a far cry from the waterfall development methodologies of the mainframe past.

 desz DevOps adoption model

Courtesy of IBM (click to enlarge)

The IBM DevOps initiative, announced last November (link above), taps into the collaborative capabilities of IBM’s Cloud portfolio to speed the delivery of software that drives new models of engagement and business. Software has become the rock star of IT with software-driven innovation becoming a primary strategy for creating and delivering new value to customers. A survey of 400 business and IT executives by the IBM Institute for Business Value showed businesses that are more effective at software delivery are also more profitable than their peers nearly 70% of the time. As such, IBM notes, DevOps provides a way for businesses to remain competitive, applying lean and agile principles to software development to speed the delivery of software that meets new market requirements.

Some mainframe shops, however, continue to operate from a software standpoint as if client/server computing and PCs were still the new game in town. Meanwhile the business units keep complaining about how long it takes to make software changes while long backlogs drag on the IT budget.

DevOps is about continuous software development and deployment. That means continuous business planning, continuous collaborative dev, continuous testing, continuous release and deployment, continuous monitoring, and continuous feedback and optimization in a never ending cycle. Basically, continuous everything.  And it really works, as Nationwide can attest.

But DevOps makes traditional mainframe shops nervous. Mainframe applications are rock solid, and crashes and failures are almost unheard of. How can they switch to DevOps without risking everything the mainframe stands for, zero failure?

The answer: mainframe DevOps that leads straight into continuous testing, not deployment. The testing can and should be as rigorous and extensive as is necessary to reassure that everything works as it should and anything that will fail has failed. Only then does it go into production.

It would be comforting to the data centers to say that DevOps only addresses systems of engagement; those pesky mobile, collaborative, and social systems that suddenly are making demands on the core mainframe production applications. But that is not correct. DevOps is about integrating systems of engagement with systems of record, the enterprise’s mainframe crown jewels. The trick is to bring together the culture, processes, and tools across the entire software delivery lifecycle, as IBM says, to span it all—mobile to mainframe—slowing down only to conduct testing as exhaustive as the enterprise requires.

Mainframe tools from the era of waterfall methodologies won’t cut it. Rational offers a set of tools starting with Blue Agility. IBM also offers an expanded set of tools acquired through acquisitions such as UrbanCode (release automation) and GreenHat (software quality and testing solutions for the cloud and more) that offer an integrated developer experience on open cloud platforms such as Bluemix to expedite DevOps collaboration, according to IBM.

Expect push back from any attempt to introduce DevOps into a traditional mainframe development culture. Some shops have been developing systems the same way for 30 years or more. Resistance to change is normal. Plan to start gradually, implementing DevOps incrementally.

Some shops, however, may surprise you. Here the mainframe team senses they are falling behind. IBM, of course, has tools to help (see above). Some experts recommend focusing on automated testing early on; when testing is automated DevOps adoption gets easier, they say, and old school developers feel more reassured.

At IBM Edge2015, there are at least two sessions on DevOps: Light Up Performance of Your LAMP Apps and DevOps with a Power Optimized Stack; and CICS Cloud DevOps = Agility2. BTW, it is a good time to register for IBM Edge2015 right away, while you can still get a discount. IBM Edge2015, being billed as the Infrastructure Innovation Conference, takes place May 11-15 at The Venetian in Las Vegas. DancingDinosaur will be there. Have just started poring over the list of sessions on hundreds of topics for every IBM platform and infrastructure subject. IBM Edge2015 combines what previously had been multiple conferences into one.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing on Technologywriter.com and here. And join DancingDinosaur at IBM Edge2015.

Vodafone Spain Picks IBM zSystem for Smarter Cities Initiative

March 19, 2015

The Vodafone initiative, as reported here previously, leverages the most advanced mobile communications technology including citywide sensors and a Global M2M Platform that will enable the connection of thousands of sensors to the intelligent Vodafone Connected City system. The new cloud-based system will run on IBM Linux z Systems. The Linux z Systems were selected for their high security, which protects cloud services while also delivering the speed, availability, and efficiency required to drive mobile services at scale.  To do something at scale you really do want the z System.

 vodafone zsystem running linux

Courtesy of IBM: zSystem and Linux

For Vodafone this represents the beginning of what they refer to as a Smarter Cities services initiative. The effort targets local governments and city councils with populations ranging between 20,000 and 200,000 citizens. The services provided will address customers’ needs in the following key areas: urban vitality, public lighting, energy efficiency, waste management, and citizen communications.

In effect, Vodafone is becoming a SaaS provider by leveraging their new zSystem. Vodafone’s customers for this are the government groups that opt to participate. The company announced the effort at the World Mobile Congress in Barcelona at the beginning of the month.

One of the initial participants will be Seville, the capital of the province of Andalucía, where a control and development center will be established by Vodafone. The telco will invest more than 243 million euros over two years on telecommunications infrastructure, encouraging the development of the technology sector and developing projects to create strategic growth in the region.

Initially, the center will focus on creating smart city solutions that can easily and efficiently be used by cities ranging from 20,000 to 150,000 residents; cities that otherwise may not have the funds to invest in smart city infrastructure projects on their own. This center is also expected to help make the Andalucía territory of Spain a leader in the development of Big Data and smart solutions.

IBM is delivering the full stack to Vodafone: a set of cloud services that include an enterprise zSystem Linux server (IBM zBC12), v7000 storage, IBM intelligent operations, an information services solution, and more.  Vodafone opted for the z and Linux to enable cost-efficient, highly secure cloud services while also delivering the speed, availability and efficiency required to drive mobile services at scale. IBM Intelligent Operations software will provide monitoring and management of city services. IBM’s MobileFirst platform will be used to create citizen-facing mobile applications while IBM Information Server and Maximo asset management software will round out the IBM stack.

Overall, IBM, the zSystem, and Linux brought a number of benefits to this initiative. Specifically, the zSystem proved the least expensive when running more than seven vertical services as Vodafone is planning. An example of such a vertical service is the public lighting of a city. This also is where scalability brings a big advantage. Here again, the zSystem running Linux delivers scalability along with greater security and regulatory compliance. Finally, another critical capability for Vodafone was the zSystem’s ability to isolate workloads.

In short, the zSystem’s security and regulation compliance; reliability, resilience, and robustness; strong encryption and workload isolation; workload management and ability to meet SLAs; scalability; and high efficiency clinched the Vodafone deal.

This could prove a big win for IBM and the zSystem. Vodafone has mobile operations in 26 countries, partners with mobile networks in 54 more, and runs fixed broadband operations in 17 markets. As of the end of 2014, Vodafone had 444 million mobile customers and 11.8 million fixed broadband customers. Vodafone Spain’s 14,811,000 mobile customers and 2,776,000 broadband customers will certainly take maximum advantage of the zSystem’s scalability and reliability.

As a follow-up to last week’s report on recent success coming from the OpenPOWER Foundation, that string continued this week at the OpenPOWER Inaugural Summit, with the OpenPOWER Foundation announcing more than ten hardware solutions spanning systems, boards, cards, and a new microprocessor customized for the Chinese market. Built collaboratively by OpenPOWER members, the new solutions exploit the POWER architecture to provide more choice, customization, and performance to customers, including hyperscale data centers.

Among the products and prototypes OpenPOWER members revealed are:

  • Firestone, a prototype of a new high-performance server targeting exascale computing and projected to be 5-10x faster than today’s supercomputers. It incorporates technology from NVIDIA and Mellanox.
  • The first GPU-accelerated OpenPOWER developer platform, the Cirrascale RM4950, resulting from collaboration between NVIDIA, Tyan, and Cirrascale.
  • An open server specification and motherboard mock-up combining OpenPOWER, Open Compute and OpenStack by Rackspace and designed to run OpenStack services.

Other member-developed new products leverage the Coherent Accelerator Processor Interface (CAPI), a hallmark feature built into the POWER architecture. DancingDinosaur initially covered CAPI here.

Reminder: it is time to register for IBM Edge2015 in Las Vegas May 10-15. Edge2015 combines all of IBM’s infrastructure products with both a technical track and an executive track.  You can be sure DancingDinosaur will be there. Watch for upcoming posts here that will highlight some of the more interesting sessions.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing on Technologywriter.com and here.

BMC and Compuware to Drive Down Mainframe Costs

February 20, 2015

This year jumped off to an active start for the mainframe community. The introduction of the z13 in January got things going. Now Compuware and BMC are partnering to integrate offerings of some of their mainframe tools to deliver cost-aware workload and performance management. The combined tools promise to reduce mainframe OPEX even as z Systems shops try to leverage their high-value mainframe applications, data, and processing capacity to meet rapidly evolving business challenges.

 compuware bmc logos hi res

Not that things had been quiet before, especially if you consider IBM scrambling to reverse successive quarters of poor financial performance with a slew of initiatives. During that time Compuware went private last fall; about a year earlier BMC went private. Now you have two companies collaborating to deliver tools that will help mainframe shops reduce their software costs. DancingDinosaur has covered previous cost-saving and efficiency initiatives from each of these companies here and here.

Driving this collaboration is the incessant growth of new mainframe workloads, which will likely accelerate with the new z13. Such workload growth is continually driving up the Monthly License Charge (MLC) for IBM mainframe software, which for sub-capacity environments is generally driven by the highest rolling four-hour average (R4HA) of mainframe utilization for all applications on each LPAR, as measured in MSUs. IBM is helping with discounts for mobile workloads and its new ICAP and country multiplex pricing, which DancingDinosaur covered here, but more is needed.

The trick requires continually managing those workloads. In effect, IT can most effectively reduce its sizable IBM z Systems software costs by both 1) tuning each application to minimize its individual consumption of mainframe resources and 2) orchestrating application workloads to minimize the LPAR utilization peaks they generate collectively at any given time.  Good idea but not easy to implement in practice. You need automated tools.
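One of those automated tools' core jobs is tracking the R4HA itself. Here is a minimal sketch of the calculation, with hypothetical hourly MSU samples (real sub-capacity reporting works from 5-minute SMF intervals):

```java
// Minimal sketch of the rolling four-hour average (R4HA) behind
// sub-capacity MLC. Hourly samples keep the example short; actual
// reporting averages 5-minute intervals. All numbers are hypothetical.
public class R4haSketch {
    public static void main(String[] args) {
        int[] msuPerHour = {400, 420, 900, 880, 860, 830, 450, 430};
        int window = 4;
        double peak = 0;
        for (int start = 0; start + window <= msuPerHour.length; start++) {
            double sum = 0;
            for (int i = start; i < start + window; i++) sum += msuPerHour[i];
            peak = Math.max(peak, sum / window);
        }
        System.out.printf("Peak R4HA: %.1f MSUs%n", peak); // 867.5 here
        // The bill tracks this peak average, so tuning the applications
        // running in the busy window, or shifting work out of it, is what
        // actually lowers the monthly charge.
    }
}
```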

According to Frank DeSalvo, former research director at Gartner: “The partnership between BMC and Compuware launches an integrated opportunity for mainframe customers to manage workload inefficiencies in a manner that has not been achievable to-date.” This partnership, however, “helps organizations leverage their IT budgets by enabling them to continuously optimize their mainframe workloads, resulting in cost effective decisions for both current and future spending,” as DeSalvo was quoted in the initial announcement.

Specifically, the Compuware-BMC collaboration brings together three products: BMC Cost Analyzer, BMC MainView, and Compuware Strobe.

  • BMC Cost Analyzer for zEnterprise brings a financially intelligent workload management tool that enables z data centers to identify MLC cost drivers and take appropriate measures to reduce those costs.
  • BMC MainView provides real-time identification of application performance issues, enabling customers to quickly eliminate wasteful MSU consumption.
  • Compuware Strobe delivers deep, granular and highly actionable insight into the behavior of application code in the z systems environment.

The partners integrated the products so they actually work together. One integration, for instance, allows BMC Cost Analyzer to call Compuware Strobe for a detailed analysis of the specific application component driving peak MLC periods, enabling customers to proactively tune applications that have the greatest impact on their monthly software licensing costs. A second integration, with BMC MainView, allows customers to either automatically or manually invoke Strobe performance analysis, empowering mainframe staffs to perform cost-saving tuning tasks more quickly, efficiently, and consistently.

compuware bmc screen shot

Courtesy of Compuware (click to enlarge)

BTW, at the same time Compuware introduced the latest version of Strobe, v 5.2. It promises deep insight into how application code—including DB2, COBOL 5.1, IMS, and MQ processes—consumes resources in z environments. By providing these insights and making it easy for multi-discipline mainframe ops teams to collaborate around them, Strobe 5.2 enables IT to further drive down mainframe costs. At the same time it improves application responsiveness.

Besides the software licensing savings that can result, the organization also benefits from performance gains for these applications. These too can be valuable since they positively impact end-user productivity and, more importantly, customer experience.

DancingDinosaur feels that any technology you can use to automate and streamline your systems operations will benefit you because people are always more expensive and less efficient than technology.

Alan Radding is DancingDinosaur. Follow this blog on Twitter, @mainframeblog. View my other IT writing at Technologywriter.com and here.

IBM z Systems as a Cloud Platform

February 13, 2015

DancingDinosaur wrote a long paper for an audience of x86 users. The premise of the paper: the z Systems in many cases could be a better and even lower cost alternative to x86 for a private or hybrid cloud. The following is an excerpt from that paper.

 cloud_computing_providers

BTW, IBM earlier this month announced it signed a 10-year, large-scale services agreement with Shop Direct to move the multi-brand digital retailer to a hybrid cloud model to increase flexibility and quickly respond to changes in demand as it grows, one of many such IBM wins recently. The announcement never mentioned Shop Direct’s previous platform. But it or any company in a similar position could have opted to build its own hybrid (private/public) cloud platform.

A hybrid cloud a company builds today probably runs on the x86 platform and the Windows OS. Other x86-based clouds run Linux. As demand for the organization’s hybrid cloud grows and new capabilities are added traffic increases.  The conventional response is to scale out or scale up, adding more or faster x86 processors to handle more workloads for more users.

So, why not opt for a hybrid cloud running on the z? As a platform, x86 is far from perfect; too unstable and insecure for starters. By adopting a zEC12 or a z13 to host your hybrid cloud you get one of the fastest general commercial processors in the market and the highest security rating for commercial servers, (EAL 5+). But most x86-oriented data centers would balk. Way too expensive would be their initial reaction. Even if they took a moment to look at the numbers their IT staff would be in open revolt and give you every reason it couldn’t work.

The x86 platform, however, is not nearly as inexpensive as it is believed to be, and there are many ways to make the z cost competitive. Due to the eccentricities of Oracle licensing on the z Systems, for instance, organizations often can justify the entire cost of the mainframe just from the annual Oracle software license savings. This can amount to hundreds of thousands of dollars or more each year. And the entry-level mainframe has a list price of $75,000, not much more than an x86 system of comparable MIPS. And that’s before you start calculating the cost of x86 redundancy, failover, and zero downtime that comes built into the mainframe, or consider security. Plus with the z Systems Solution Edition program, IBM is almost giving the mainframe away for free.
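The arithmetic behind that Oracle claim is easy to sketch. Every number below is invented for illustration (actual Oracle prices, core factors, and discounts vary widely); the point is simply that per-core charges shrink dramatically when the same work runs on far fewer cores.

```java
// Hypothetical figures throughout; illustrates the shape of the savings,
// not actual Oracle pricing or core-factor rules.
public class LicenseMath {
    public static void main(String[] args) {
        double perCoreList = 47_500;  // assumed license list price per core
        double supportRate = 0.22;    // assumed annual support fraction
        int    x86Cores    = 64;      // cores licensed today (assumed)
        int    iflCores    = 8;       // IFLs handling the same work (assumed)

        double x86Annual = x86Cores * perCoreList * supportRate;
        double zAnnual   = iflCores * perCoreList * supportRate;
        System.out.printf("Assumed annual support: x86 $%,.0f vs z $%,.0f%n",
                x86Annual, zAnnual);
        // With these invented inputs the gap is about $585,000 a year,
        // the order of magnitude behind the claim above.
    }
}
```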

Some x86 shops could think of the mainframe as a potent Linux machine that can handle thousands of Linux instances without breaking a sweat. The staff wouldn’t even have to touch z/OS. It also runs Java and Hadoop. And it delivers an astonishingly fast and efficient Linux environment that provides a level of performance that would require a much greater number of x86 cores to match. And if you want to host an on-premises or hybrid cloud at enterprise scale it takes a lot of cores. The cost of acquiring all those x86 cores, deploying them, and managing them will break almost any budget.

Just ask Jim Tussing, Chief Technology Officer for infrastructure and operations at Nationwide Insurance (DancingDinosaur has covered Tussing before): “We had literally 3000 x86 servers deployed that were underutilized,” which is common in the x86 environment even with VMware or Hyper-V virtualization. And at a time when Nationwide was seeking to increase the pace of innovation across its products and channels, rolling out new environments was taking weeks or months to provision and deploy, again not unheard of in the x86 world. The x86 environment at Nationwide was choking the company.

So, Nationwide consolidated and virtualized as many x86 servers on a mainframe as possible, creating what amounted to an on-premises and hybrid cloud. The payoff: Nationwide reduced power, cooling, and floor space requirements by 80 percent. And it finally reversed the spiraling expenditure on its distributed server landscape, saving an estimated $15 million over the first three years, money it could redirect into innovation and new products. It also could provision new virtual server instances fast and tap the hybrid cloud for new capabilities.

None of this should be news to readers of DancingDinosaur. However some mainframe shops still face organizational resistance to mainframe computing. Hope this might help reinforce the z case.

DancingDinosaur is Alan Radding, a long-time IT analyst/writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of my IT writing at Technologywriter.com and here.

New Software Pricing for IBM z13

February 6, 2015

Every new mainframe causes IBM to rethink its pricing. This makes sense because mainframe software licensing is complex. The z13 enables different workloads and combinations of uses that merit reexamining the software licensing. But overall, IBM is continuing its strategy to enhance software price/performance with each generation of hardware. This has been the case for as long as DancingDinosaur has been covering the mainframe. (click graphic below to enlarge)

 IBM z13 technology update pricing

DancingDinosaur, along with other mainframe analysts, recently listened to Ray Jones, IBM Vice President, z Systems Sales, go through the new z13 software pricing. In short, expect major new structural enhancements coming in the first half of 2015. Of particular interest will be two changes IBM is instituting:

  1. IBM Collocated Application Pricing (ICAP), which lets you run your systems the way that make sense in your organization
  2. Country Multiplex Pricing, an evolution of Sysplex pricing that allows for greater flexibility and simplicity by treating all your mainframes in one country as a single sysplex.

Overall, organizations running the z under AWLC should see a 5% discount on average.

But first let’s take a moment to review AWLC (Advanced Workload License Charge). This monthly program from the start has been intended to allow you to grow hardware capacity without necessarily increasing software charges. In general you’ll experience a low cost of incremental growth, and you can manage software cost by managing workload utilization and deployment across LPARs and peak hours.

A brief word about MSU. DancingDinosaur thinks of MSU (officially, millions of service units) as the measurement of the amount of processing or capacity of your mainframe. IBM determines the MSU rating of a particular mainframe configuration by some arcane process invisible to most of us. The table above starts with MSU; just use the number IBM has assigned your z configuration.

OK, now we’re ready to look at ICAP pricing. IBM describes ICAP as the next evolution of z Systems sub-capacity software pricing. ICAP allows workloads to be priced as if in a dedicated environment although technically you have integrated them with other workloads. In short, you can run your systems and deploy your ICAP workloads the way you want to run them. For example, you might want to run a new anti-fraud app or a new instance of MQ and do it on the same LPAR you’re running some other workload.

ICAP is for new workloads you’re bringing onto the z. You have to define the workloads and are responsible for collecting and reporting the CPU time. It can be as simple as creating a text file to report it. However, don’t rush to do this; IBM suggested an ICAP enhancement to the MWRT sub-capacity reporting tool will be coming.

In terms of ICAP impact on pricing, IBM reports no effect on the reported MSUs for other sub-capacity middleware programs (it adjusts MSUs like an offload engine, similar to Mobile Workload Pricing for z/OS). z/OS shops could see 50% of the ICAP-defining program MSUs removed, which can result in real savings. IBM reports that ICAP provides a price benefit similar to zNALC for z/OS, but without the requirement for a separate LPAR. Remember, with ICAP you can deploy your workloads where you see fit.
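The stated adjustment is simple enough to sketch with hypothetical figures: half the MSUs consumed by the ICAP-defined workload come off the reported peak.

```java
// Hypothetical figures illustrating the stated 50% ICAP adjustment.
public class IcapAdjustment {
    public static void main(String[] args) {
        double reportedPeakMsus = 1_200; // assumed R4HA peak for the LPAR
        double icapWorkloadMsus = 300;   // assumed MSUs of ICAP-defined work
        double adjustedPeak = reportedPeakMsus - 0.5 * icapWorkloadMsus;
        System.out.println("Adjusted billable peak: " + adjustedPeak + " MSUs");
        // Prints 1050.0 MSUs: the new workload adds capacity use but only
        // half of it shows up in the billable peak.
    }
}
```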

For Country Multiplex Pricing, a country multiplex to IBM is the collection of all zEnterprise and later machines in a country, and they are measured like one machine for sub-capacity reporting (applicable to all z196, z114, zEC12, zBC12, and z13 machines). It amounts to a new way of measuring and pricing MSUs, as opposed to aggregating under current rules. The result should be flexibility to move and run work anywhere, the elimination of Sysplex pricing rules, and the elimination of duplicate peaks when workloads move between machines.
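Why duplicate peaks disappear is easiest to see with toy numbers. When a workload moves from machine A to machine B mid-month, each machine's individual peak still reflects it, so summing per-machine peaks counts it twice; measuring the combined curve counts it once:

```java
// Toy illustration of duplicate peaks. A 500-MSU workload runs on machine
// A early in the month, then moves to machine B. Numbers are invented.
public class MultiplexPeaks {
    public static void main(String[] args) {
        int[] machineA = {500, 500, 100, 100}; // MSU peaks per interval
        int[] machineB = {100, 100, 500, 500}; // same work after the move

        int peakA = 0, peakB = 0, peakCombined = 0;
        for (int i = 0; i < machineA.length; i++) {
            peakA = Math.max(peakA, machineA[i]);
            peakB = Math.max(peakB, machineB[i]);
            peakCombined = Math.max(peakCombined, machineA[i] + machineB[i]);
        }
        System.out.println("Sum of separate peaks: " + (peakA + peakB)); // 1000
        System.out.println("Single multiplex peak: " + peakCombined);    // 600
    }
}
```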

In the end, the cost of growth is reduced with one price per product based on growth anywhere in the country. Hardware and software migrations also become greatly simplified because Single Version Charging (SVC) and Cross Systems Waivers (CSW) will no longer be relevant.  And as with ICAP, a new Multiplex sub-capacity reporting tool is coming.

Other savings also remain in play, especially the z/OS mobile pricing discounts, which significantly reduces the level at which mobile activity is calculated for peak load pricing. With the expectation that mobile activity will only grow substantially going forward, these savings could become quite large.

DancingDinosaur is Alan Radding, a veteran mainframe and IT writer and analyst. Follow DancingDinosaur on Twitter, @mainframeblog. See more of my writing at Technologywriter.com and here.

Compuware Topaz Brings Distributed Style to the Mainframe

January 30, 2015

Early in January Compuware launched the first of what it promised would be a wave of tools for the mainframe that leverage the distributed graphical style of working with systems.  The company hopes the tool, Topaz, will become a platform that hooks people experienced with distributed computing, especially Millennials, on working with the mainframe. The company is aiming not just for IT newbies but experienced distributed IT people who find the mainframe alien.

Compuware is pitching Topaz as a solution for addressing the problem of the wave of retirements of experienced mainframe veterans. The product promises to help developers, data architects, and other IT professionals discover, visualize, and work with both mainframe and non-mainframe data in a familiar, intuitive manner.  They can work with it without actually having to directly encounter mainframe applications and databases in their native formats.

compuware topaz screen

Topaz Relationship Visualizer (click to enlarge)

DancingDinosaur has received the full variety of opinions on the retiring mainframe veteran issue, ranging from a serious concern to a bogus issue. Apparently the issue differs with each mainframe shop. In this case, demographics ultimately rule, and people knowledgeable about the mainframe (including DancingDinosaur, sadly) are getting older.  Distributed IT folks, however, know how to operate data centers, manage applications, handle data, and run BI and analytics—all the things we want any competent IT shop to do. So, to speed their introduction to the mainframe it makes sense to give them familiar tools that let them work in accustomed ways.

And Topaz definitely has a familiar distributed look-and-feel. Check out a demonstration of it here. What you will see are elements of systems, applications, and data represented graphically. Click an item and the relevant relationships are exposed. Click again to drill down to detail. To move data between hosts just drag and drop the desired files between distributed hosts and the mainframe.  You also can use a single distributed-like editor to work with data on Oracle, SQL Server, IMS, DB2 and others across the enterprise. The actions are simple, intuitive, and feel like any GUI tool.

The new tool should seem familiar. Compuware built Topaz using open source Eclipse. It also made use of ISPF, the mainframe toolset. Read about Eclipse here.

With Topaz Compuware is trying to address a problem IBM has been tackling through its System z Academic Initiative—to answer where the next generation of mainframers will come from. With its contests and university curriculum IBM is trying to captivate young people early with job possibilities and slick technologies, and catch them as young as high school.

Compuware is aiming for working IT professionals in the distributed environment. They may not be much younger than their mainframe counterparts, but Compuware is giving them a tool that will allow them to immediately start doing meaningful work with both distributed and mainframe systems and do it in a way they immediately grasp.

Topaz treats mainframe and non-mainframe assets in a common manner. As Compuware noted: In an increasingly dynamic big data world it makes less and less sense to treat any platform as an island of information. Topaz takes a huge step in the right direction.

Finally, expect to see Topaz updates and enhancements quarterly. Compuware describes Topaz as an agile development effort, drawing a pointed contrast to the rather languid pace of some mainframe ISVs in getting out updates.  If the company is able to achieve its aggressive release cycle goals that alone may help change perceptions of the mainframe as a staid, somewhat dull platform.

With Topaz Compuware is off to a good start, but you can see where and how the toolset can be expanded upon.  And Compuware even hinted at opening the Topaz platform to other ISVs. Don’t hold your breath, but at the least it may get other mainframe ISVs to speed their own efforts, making the mainframe overall a more dynamic platform. With the z13 IBM raised the innovation bar (see DancingDinosaur here and here). Now other mainframe ISVs must up their game.

DancingDinosaur is Alan Radding, a long-time IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. You also can read more of my writing at Technologywriter.com and here.

IBM z13 Chip Optimized for Top Enterprise Performance

January 23, 2015

With the zEC12 IBM boasted of the fastest commercial chip in the industry. It is not making that boast with the z13. Instead, it claims a 40% total capacity improvement over the zEC12. The reason: IBM wants the z13 to excel at mobile, cloud, and analytics as well as fast extreme scale transaction processing. This kind of performance requires optimization up and down the stack; not just chip speed but I/O processing, memory access, instruction tweaks, and more.

 z13 mobile

Testing mobile transactions on the z13

This is not to suggest that the machine is not fast.  It is.  Timothy Prickett Morgan writing in his 400 blog notes that the z13 chip runs a 22 nm core at 5 GHz, half a GHz slower than the zEC12. The zEC12 processor, the one touted as the fastest commercial processor, was a 32nm core that clocked at 5.5 GHz.  Still, the z13 delivers about a 10 percent performance bump per core thanks, he writes, to other tweaks in the core design, such as better branch prediction and better pipelining in the core. The slightly slower clock speed reduces heat.

Up and down the stack IBM has been optimizing the z13 for maximum performance.

  • 2X performance boost for cryptographic coprocessors
  • 2X increase in channel speed
  • 2X increase in I/O bandwidth
  • 3X increase in memory capacity
  • 2X increase in cache and a new level of cache

At 5 GHz, the z13, given all the enhancements IBM has made, remains the fastest. According to IBM, it is the first system able to process 2.5 billion transactions a day, the equivalent of 100 Cyber Mondays every day of the year. Maybe even more importantly, z13 transactions are persistent, protected, and auditable from end to end, adding assurance as mobile transactions grow to an estimated 40 trillion (that’s trillion with a T) mobile transactions per day by 2025.

Given that mobile is shaping up to be the device of the future, the z13 is the first system to make practical real-time encryption of all mobile transactions at any scale, notes IBM. Specifically, the z13 speeds real-time encryption of mobile transactions to help protect the transaction data and ensure response times consistent with a positive customer experience. With mobile overall, the machine delivers up to 36% better response time, up to 61% better throughput, and up to 17% lower cost per mobile transaction. And IBM discounts mobile transactions running on z/OS.

To boost security performance the machine benefits from 500 new patents including cryptographic encryption technologies that enable more security features for mobile initiated transactions. In general IBM has boosted the speed of encryption up to 2x over the zEC12 to help protect the privacy of data throughout its life cycle.

Combined with the machine’s embedded analytics it can provide real-time insights on all transactions. This capability helps enable an organization to run real-time fraud detection on 100 percent of its business transactions. In terms of analytics, the machine delivers insights up to 17x faster at 13x better price performance than its competitors.

Further boosting performance is the increase of memory in the machine. For starters, the machine can handle up to 10 TB of memory onboard to help with z/OS and Linux workloads. To encourage organizations to take advantage of the extra memory IBM is discounting the cost of memory. Today memory runs $1500/GB but organizations can populate the z13 with new memory starting at $1000/GB. With various discounts you can get memory for as little as $200/GB.

So what will you do with a large amount of discounted memory? Start by running more applications in-memory to boost performance.  Do faster table scans in memory to speed response or avoid the need for I/O calls. Speed sorting and analytics by doing it in memory to enable faster, almost real-time decision making. Or you can run more Java without increasing paging and simplify the tuning of DB2, IMS and CICS. Experience 10x faster response time with Flash Express and a 37% increase in throughput compared to disk.

As noted above IBM optimized just about everything that can be optimized. It provides 320 separate channels dedicated just to drive I/O throughput as well as performance goodies only your geeks will appreciate like simultaneous multithreading (SMT), symmetric multiprocessing (SMP), and single instruction, multiple data (SIMD). Overall about 600 processors (in addition to your configurable cores) speed and streamline processes throughout the machine.

Mainframes have the fastest processors in the industry—none come close—and with the addition of more memory, faster I/O, and capabilities like SMT and SIMD noted above, the z13 clearly is the fastest. For workloads that benefit from this kind of performance, the z13 is where they should run.

DancingDinosaur is Alan Radding, a long-time IT analyst and writer. You can follow DancingDinosaur on Twitter, @mainframeblog. Check out his other IT writing at Technologywriter.com and here.

