
IBM DevOps for the Mainframe

March 27, 2015

DevOps is not just for distributed platforms. IBM has a DevOps strategy for large enterprises (usually mainframe shops) too. Nationwide, a longtime mainframe shop, is an early adopter of DevOps and is already reporting significant gains: an 80% reduction in critical software defects and a 20% efficiency gain in its maintenance and support operations, in just 18 months.

DevOps, an agile methodology, establishes a continuous feedback loop between software development and deployment/operations that speeds development and deployment while ensuring quality. This is a far cry from the waterfall development methodologies of the mainframe past.

DevOps adoption model

Courtesy of IBM (click to enlarge)

The IBM DevOps initiative, announced last November (link above), taps into the collaborative capabilities of IBM’s Cloud portfolio to speed the delivery of software that drives new models of engagement and business. Software has become the rock star of IT with software-driven innovation becoming a primary strategy for creating and delivering new value to customers. A survey of 400 business and IT executives by the IBM Institute for Business Value showed businesses that are more effective at software delivery are also more profitable than their peers nearly 70% of the time. As such, IBM notes, DevOps provides a way for businesses to remain competitive, applying lean and agile principles to software development to speed the delivery of software that meets new market requirements.

Some mainframe shops, however, continue to operate from a software standpoint as if client/server computing and PCs were still the new game in town. Meanwhile the business units keep complaining about how long it takes to make software changes while long backlogs drag on the IT budget.

DevOps is about continuous software development and deployment. That means continuous business planning, continuous collaborative development, continuous testing, continuous release and deployment, continuous monitoring, and continuous feedback and optimization in a never-ending cycle. Basically, continuous everything. And it really works, as Nationwide can attest.

But DevOps makes traditional mainframe shops nervous. Mainframe applications are rock solid; crashes and failures are almost unheard of. How can these shops switch to DevOps without risking everything the mainframe stands for: zero failure?

The answer: mainframe DevOps that leads into continuous testing, not continuous deployment. The testing can and should be as rigorous and extensive as necessary to confirm that everything works as it should and that anything that will fail has failed. Only then does code go into production.
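That test-before-promote gate can be sketched in a few lines. This is a toy illustration, not an IBM tool; the stage names are invented, and a real mainframe pipeline would map them to its own test suites.

```python
# Hypothetical release gate: a build is promoted to production only
# after every required test stage has passed. Stage names are invented
# for illustration.

REQUIRED_STAGES = ["unit", "integration", "regression", "performance", "failover"]

def ready_for_production(results: dict) -> bool:
    """results maps stage name -> True (passed) / False (failed).
    Any missing or failed stage blocks promotion."""
    return all(results.get(stage) is True for stage in REQUIRED_STAGES)

build = {"unit": True, "integration": True, "regression": True,
         "performance": True, "failover": True}
print(ready_for_production(build))                         # True
print(ready_for_production({**build, "failover": False}))  # False
```

The point of the sketch is the asymmetry: deployment is automatic only when every gate is green, which preserves the zero-failure expectation while keeping the rest of the pipeline continuous.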

It would be comforting to data centers to say that DevOps only addresses systems of engagement: those pesky mobile, collaborative, and social systems that suddenly are making demands on the core mainframe production applications. But that is not correct. DevOps is about integrating systems of engagement with systems of record, the enterprise's mainframe crown jewels. The trick, as IBM says, is to bring together the culture, processes, and tools across the entire software delivery lifecycle to span it all, mobile to mainframe, slowing down only to conduct testing as exhaustive as the enterprise requires.

Mainframe tools from the era of waterfall methodologies won’t cut it. Rational offers a set of tools starting with Blue Agility. IBM also offers an expanded set of tools acquired through acquisitions such as UrbanCode (release automation) and GreenHat (software quality and testing solutions for the cloud and more) that offer an integrated developer experience on open cloud platforms such as Bluemix to expedite DevOps collaboration, according to IBM.

Expect push back from any attempt to introduce DevOps into a traditional mainframe development culture. Some shops have been developing systems the same way for 30 years or more. Resistance to change is normal. Plan to start gradually, implementing DevOps incrementally.

Some shops, however, may surprise you. Here the mainframe team senses they are falling behind. IBM, of course, has tools to help (see above). Some experts recommend focusing on automated testing early on; when testing is automated DevOps adoption gets easier, they say, and old school developers feel more reassured.

At IBM Edge2015, there are at least two sessions on DevOps: Light Up Performance of Your LAMP Apps and DevOps with a Power Optimized Stack; and CICS Cloud DevOps = Agility2. BTW, it is a good time to register for IBM Edge2015 right away, while you can still get a discount. IBM Edge2015, billed as the Infrastructure Innovation Conference, takes place May 11-15 at The Venetian in Las Vegas. DancingDinosaur will be there and has just started poring over the list of sessions on hundreds of topics for every IBM platform and infrastructure subject. IBM Edge2015 combines what previously had been multiple conferences into one.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing on Technologywriter.com and here. And join DancingDinosaur at IBM Edge2015.

Vodafone Spain Picks IBM zSystem for Smarter Cities Initiative

March 19, 2015

The Vodafone initiative, as reported here previously, leverages the most advanced mobile communications technology including citywide sensors and a Global M2M Platform that will enable the connection of thousands of sensors to the intelligent Vodafone Connected City system. The new cloud-based system will run on IBM Linux z Systems. The Linux z Systems were selected for their high security, which protects cloud services while also delivering the speed, availability, and efficiency required to drive mobile services at scale.  To do something at scale you really do want the z System.


Courtesy of IBM: zSystem and Linux

For Vodafone this represents the beginning of what it refers to as a Smarter Cities services initiative. The effort targets local governments and city councils with populations ranging between 20,000 and 200,000 citizens. The services provided will address customers' needs in the following key areas: urban vitality, public lighting, energy efficiency, waste management, and citizen communications.

In effect, Vodafone is becoming a SaaS provider by leveraging its new zSystem. Vodafone's customers for this are the government groups that opt to participate. The company announced the effort at Mobile World Congress in Barcelona at the beginning of the month.

One of the initial participants will be Seville, the capital of the province of Andalucía, where a control and development center will be established by Vodafone. The telco will invest more than 243 million euros over two years on telecommunications infrastructure, encouraging the development of the technology sector and developing projects to create strategic growth in the region.

Initially, the center will focus on creating smart city solutions that can easily and efficiently be used by cities ranging from 20,000 to 150,000 residents; cities that otherwise may not have the funds to invest in smart city infrastructure projects on their own. This center is also expected to help make the Andalucía territory of Spain a leader in the development of Big Data and smart solutions.

IBM is delivering the full stack to Vodafone: a set of cloud services that include an enterprise zSystem Linux server (IBM zBC12), V7000 storage, IBM intelligent operations, an information services solution, and more. Vodafone opted for the z and Linux to enable cost-efficient, highly secure cloud services while also delivering the speed, availability, and efficiency required to drive mobile services at scale. IBM Intelligent Operations software will provide monitoring and management of city services. IBM's MobileFirst platform will be used to create citizen-facing mobile applications, while IBM Information Server and Maximo asset management software will round out the IBM stack.

Overall, IBM, the zSystem, and Linux brought a number of benefits to this initiative. Specifically, the zSystem proved the least expensive when running more than seven vertical services as Vodafone is planning. An example of such a vertical service is the public lighting of a city. This also is where scalability brings a big advantage. Here again, the zSystem running Linux delivers scalability along with greater security and regulatory compliance. Finally, another critical capability for Vodafone was the zSystem’s ability to isolate workloads.

In short, the zSystem's security and regulatory compliance; reliability, resilience, and robustness; strong encryption and workload isolation; workload management and ability to meet SLAs; scalability; and high efficiency clinched the Vodafone deal.

This could prove a big win for IBM and the zSystem. Vodafone has mobile operations in 26 countries, partners with mobile networks in 54 more, and runs fixed broadband operations in 17 markets. As of the end of 2014, Vodafone had 444 million mobile customers and 11.8 million fixed broadband customers. Vodafone Spain's 14,811,000 mobile customers and 2,776,000 broadband ones will certainly take maximum advantage of the zSystem's scalability and reliability.

…as a follow-up to last week's report on recent successes coming from the OpenPOWER Foundation, that string continued this week at the OpenPOWER Inaugural Summit, with the OpenPOWER Foundation announcing more than ten hardware solutions spanning systems, boards, cards, and a new microprocessor customized for the Chinese market. Built collaboratively by OpenPOWER members, the new solutions exploit the POWER architecture to provide more choice, customization, and performance to customers, including hyperscale data centers.

Among the products and prototypes OpenPOWER members revealed are:

  • Firestone, a prototype of a new high-performance server targeting exascale computing and projected to be 5-10x faster than today's supercomputers. It incorporates technology from NVIDIA and Mellanox.
  • The first GPU-accelerated OpenPOWER developer platform, the Cirrascale RM4950, resulting from collaboration between NVIDIA, Tyan, and Cirrascale.
  • An open server specification and motherboard mock-up from Rackspace, combining OpenPOWER, Open Compute, and OpenStack, and designed to run OpenStack services.

Other member-developed new products leverage the Coherent Accelerator Processor Interface (CAPI), a hallmark feature built into the POWER architecture. DancingDinosaur initially covered CAPI here.

Reminder: it is time to register for IBM Edge2015 in Las Vegas May 10-15. Edge2015 combines all of IBM’s infrastructure products with both a technical track and an executive track.  You can be sure DancingDinosaur will be there. Watch for upcoming posts here that will highlight some of the more interesting sessions.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing on Technologywriter.com and here.

OpenPOWER Starts Delivering the Goods

March 13, 2015

Are you leery of multi-vendor consortiums? DancingDinosaur as a rule is skeptical of the grand promises they make until they actually start delivering results. That was the case with OpenPOWER last spring, when you read here that the OpenPOWER Foundation had been introduced, and it was almost immediately forgotten.


IBM POWER8 processor, courtesy of IBM (click to enlarge)

But then last fall DancingDinosaur reported on NVIDIA and its new GPU accelerator integrated directly into the server here. This too was an OpenPOWER Foundation-based initiative. Suddenly, DancingDinosaur is thinking the OpenPOWER Foundation might actually produce results.

For example, IBM introduced a new range of systems capable of handling massive amounts of computational data faster at nearly 20 percent better price/performance than comparable Intel Xeon v3 Processor-based systems. The result:  a superior alternative to closed, commodity-based data center servers. Better performance and at a lower price. What’s not to like?

The first place you probably want to apply this improved price/performance is big data, which generates 2.5 quintillion bytes across the planet every day. Even the minuscule portion of this amount that you actually generate will very quickly challenge your organization to build a sufficiently powerful technology infrastructure to gain actionable insights from this data fast enough and at a price you can afford.

The commodity x86 servers used today by most organizations are built on proprietary Intel processor technology and are increasingly stretched to their limits by workloads related to big data, cloud and mobile. By contrast, IBM is designing a new data centric approach to systems that leverages the building blocks of the OpenPOWER Foundation.

This is plausible given the success of NVIDIA with its GPU accelerator. And just this past week Altera demonstrated its OpenPOWER-based FPGA, now being used by several other Foundation members who are collaborating to develop high-performance compute solutions that integrate IBM POWER chips with Altera’s FPGA-based acceleration technologies.

Formed in late 2013, the OpenPOWER Foundation has grown quickly from 5 founders to over 100 today. All are collaborating in various ways to leverage the IBM POWER processor’s open architecture for broad industry innovation.

IBM is looking to offer the POWER8 core and other future cores under the OpenPOWER initiative, but it is also making previous designs available for licensing. Partners are required to contribute intellectual property to the OpenPOWER Foundation to gain high-level status. The earliest successes have been around accelerators, some based on POWER8's CAPI (Coherent Accelerator Processor Interface) expansion bus, built specifically to integrate easily with external coprocessors like GPUs, ASICs, and FPGAs. DancingDinosaur will know the OpenPOWER Foundation is truly on the path to acceptance when a member introduces a non-IBM POWER8 server, and has been told that may happen in 2015.

In the meantime, IBM itself is capitalizing on the OpenPower Foundation. Its new IBM Power S824L servers are built on IBM’s POWER8 processor and tightly integrate other OpenPOWER technologies, including NVIDIA’s GPU accelerator. Built on the OpenPOWER stack, the Power S824L provides organizations the ability to run data-intensive tasks on the POWER8 processor while offloading other compute-intensive workloads to GPU accelerators, which are capable of running millions of data computations in parallel and are designed to significantly speed up compute-intensive applications.

Further leveraging the OpenPOWER Foundation, at the start of March IBM announced that SoftLayer will offer OpenPOWER servers as part of its portfolio of cloud services. Organizations will then be able to select OpenPOWER bare metal servers when configuring their cloud-based IT infrastructure from SoftLayer, an IBM company. The servers were developed to help organizations better manage data-intensive workloads on public and private clouds, effectively extending their existing infrastructure inexpensively and quickly. This is possible because OpenPOWER servers leverage IBM's licensable POWER processor technology and feature innovations resulting from open collaboration among OpenPOWER Foundation members.

Due in the second quarter, the SoftLayer bare metal servers run Linux applications and are based on the IBM POWER8 architecture. The offering, according to IBM, also will leverage the rapidly expanding community of developers contributing to the POWER ecosystem as well as independent software vendors that support Linux on Power and are migrating applications from x86 to the POWER architecture. Built on open technology standards that begin at the chip level, the new bare metal servers are built to assist a wide range of businesses interested in building custom hybrid, private, and public cloud solutions based on open technology.

BTW, it is time to register for IBM Edge2015 in Las Vegas May 10-15. Edge2015 combines all of IBM's infrastructure products with both a technical track and an executive track. You can be sure DancingDinosaur will be there. Watch for upcoming posts here that will highlight some of the more interesting sessions.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer.

Follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing on Technologywriter.com and here.

Recent IBM Successes Target Mobile

March 6, 2015

IBM introduced its latest set of MobileFirst for iOS apps at Mobile World Congress, held this week in Barcelona. Not coincidentally, Vodafone Spain, along with IBM, announced its Connected City Initiative to help Spanish cities drive new efficiency. BTW, IBM launched its first set of iOS business apps this past December.

Courtesy of IBM: retailers gain real-time perspective and data-driven recommendations (click to enlarge)

The Vodafone initiative, according to the company, leverages the most advanced mobile communications technology including citywide sensors and a Global M2M Platform that will enable the connection of thousands of sensors with the intelligent Vodafone Connected City system. The new cloud-based system will run on IBM Linux z Systems. The Linux z Systems were selected for their high security, which enables cloud services while also delivering the speed, availability, and efficiency required to drive mobile services at scale.  To do something at scale you want the z System.

But more interesting to DancingDinosaur readers may be the latest set of MobileFirst for iOS apps that target banking and financial services, airlines, and retail. These are strong areas of interest at IBM z Systems shops. At the announcement, more than 50 foundational clients including Air Canada, American Eagle Outfitters, Banorte, Boots UK, Citi, and Sprint have signed on for apps that deliver the complete expertise and capabilities of those companies to their employees wherever they interact with clients and do it faster, more easily, and more securely than ever before via the Apple iPhone and iPad.

These apps promise to transform the job roles and effectiveness of hundreds of millions of professionals globally. For example, Boots UK staff will gain access to real-time data and insight from across the company through their iOS device, allowing them to offer shoppers even greater levels of service, such as real-time stock availability and easy in store ordering. The result: new levels of convenience and accessibility for Boots customers.

Specifically, the latest IBM MobileFirst for iOS applications address:

  • Passenger Care (Travel) empowers customer service agents to address traveler needs from anywhere, enabling a smoother, more personalized experience while speeding check-in and easing airport congestion.
  • Dynamic Buy (Retail) gives retailers a real-time perspective on how their products are performing, with data-driven recommendations that help realize better return on investment.
  • Advisor Alerts (Banking and Financial Services) uses analytics to help financial professionals prioritize client-related tasks on the go, backed by customized analytics that tell the advisor what's most important through a personalized dashboard displaying recommended next steps.


Courtesy of IBM:  analytics for financial services staff (click to enlarge)

The new apps are designed to fundamentally redefine the ways enterprises empower their professionals to interact, learn, connect, and perform. Built exclusively for iPhone and iPad, IBM MobileFirst for iOS apps are delivered in a secure environment, embedded with analytics, and linked to core enterprise processes. The apps can be customized for any organization and easily deployed, managed and upgraded via cloud services from IBM specifically for iOS devices.

Making this happen behind the scenes at many organizations will be IBM z System-based data and logic, including not only CICS but also Hadoop and other analytics running on the z, increasingly in real time. Of course, it all revolves around IBM’s MobileFirst, which allows enterprises to streamline and accelerate mobile adoption. In addition to the IBM-Apple iOS applications organizations can develop their own mobile apps using streamlined tool sets like IBM Bluemix.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing on Technologywriter.com and here.

BTW, it is time to register for IBM Edge2015 in Las Vegas May 10-15. Edge2015 combines all of IBM’s infrastructure products with both a technical track and an executive track.  You can be sure DancingDinosaur will be there. Watch for upcoming posts here that will highlight some of the more interesting sessions.

IBM Redefines Software Defined Storage

February 25, 2015

On Feb. 17 IBM unveiled IBM Spectrum Storage, a new storage software portfolio designed to address data storage inefficiencies by changing the economics of storage with a layer of intelligent software; in short, a software defined storage (SDS) initiative.  IBM’s new software creates an efficient data footprint that dynamically stores every bit of data at the optimal cost, helping maximize performance and ensuring security, according to the IBM announcement here.

Jared Lazarus/Feature Photo Service for IBM

Courtesy of IBM: IBM Storage GM demonstrates new Spectrum storage management dashboard

To accelerate the development of next-generation storage software, IBM included plans to invest more than $1 billion in its storage software portfolio over the next five years. The objective: extend its storage technology leadership, having recently been ranked #1 in SDS platforms for the first three quarters of 2014 by leading industry analyst firm IDC. The investment will focus on R&D of new cloud storage software, object storage, and open standard technologies including OpenStack.

“Traditional storage is inefficient in today’s world where the value of each piece of data is changing all the time,” according to Tom Rosamilia, Senior Vice President, IBM Systems, in the announcement. He went on: “IBM is revolutionizing storage with our Spectrum Storage software that helps clients to more efficiently leverage their hardware investments to extract the full business value of data.”

Two days later IBM announced another storage initiative: flash products aimed directly at EMC. The announcement focused on two new all-flash enterprise storage solutions, FlashSystem V9000 and FlashSystem 900. Each promises industry-leading performance and efficiency, along with outstanding reliability to help lower costs and accelerate data-intensive applications. The new solutions can provide real-time analytical insights with up to 50x better performance than traditional enterprise storage, and up to 4x better capacity in less rack space than EMC XtremIO flash technology.

Driving interest in IBM Spectrum storage is research suggesting that less than 50% of storage is effectively utilized. Storage silos continue to be rampant throughout the enterprise as companies recreate islands of Hadoop-based data along with more islands of storage to support ad hoc cloud usage. Developers create yet more data silos for dev, testing, and deployment.

IBM Spectrum Storage addresses these issues and more through an SDS approach that separates storage capabilities and intelligence from the physical devices. The resulting storage is self-tuning and leverages analytics for efficiency, automation, and optimization. By capitalizing on its automatic data placement capabilities, IBM reports it can meet service levels while reducing storage costs by as much as 90%.
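The automatic data placement idea can be illustrated with a toy tiering policy: put each dataset on the cheapest tier whose performance still suits its access pattern. The tier names, thresholds, and prices below are invented for illustration; Spectrum's real placement analytics are far richer than this.

```python
# Toy policy-based data placement: hotter data lands on faster,
# more expensive tiers; cold data falls through to tape.
# All tiers, thresholds, and $/GB figures are hypothetical.

TIERS = [  # (name, min accesses/day to justify the tier, $/GB-month)
    ("flash", 1000, 0.50),
    ("disk",    10, 0.10),
    ("tape",     0, 0.01),
]

def place(accesses_per_day: int) -> str:
    """Return the cheapest tier fast enough for this access rate."""
    for name, threshold, _cost in TIERS:
        if accesses_per_day >= threshold:
            return name
    return TIERS[-1][0]  # unreachable with a 0-threshold last tier

print(place(5000))  # flash
print(place(50))    # disk
print(place(0))     # tape
```

Even this crude rule shows where the savings come from: data that is rarely touched stops occupying the most expensive media, which is the mechanism behind the large cost-reduction claims.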

Specifically, IBM Spectrum consists of six storage software elements:

  1. IBM Spectrum Control—analytics-driven data management to reduce costs by up to 50%
  2. IBM Spectrum Protect—optimize data protection to reduce backup costs by up to 38%
  3. IBM Spectrum Archive—fast data retention that reduces TCO for archive data by up to 90%
  4. IBM Spectrum Virtualize—virtualization of mixed environment to store up to 5x more data
  5. IBM Spectrum Accelerate—enterprise storage for cloud, which can be deployed in minutes instead of months
  6. IBM Spectrum Scale—high-performance, highly scalable storage for unstructured data

Each of these elements can be mapped back to existing IBM storage solutions. Spectrum Accelerate, for example, uses IBM's XIV capabilities. Spectrum Virtualize is based on IBM's SAN Volume Controller (SVC) technology. Spectrum Scale is based on GPFS, now called Elastic Storage, to handle file and object storage at massive scale yet within a single global namespace. Spectrum Archive, based on IBM's LTFS, allows an organization to treat tape as a low-cost, fully active tier. In effect, with IBM Spectrum, an organization can go from flash cache to tape, all synced worldwide within a single namespace.

A big part of what IBM is doing amounts to repackaging the capabilities it has built into its storage systems and proven in various products like XIV or GPFS or SVC as software components to be used as part of an SDS deployment. This raises some interesting possibilities. For instance, is it cheaper to use Spectrum Accelerate with a commodity storage array or buy the conventional XIV storage product?  The same probably could be asked of Spectrum Virtualize with SVC or Spectrum Archive with LTFS.

DancingDinosaur asked the Spectrum marketing team exactly that question. Their response: With Accelerate you have the flexibility to size the server to the performance needs of the solution; the software cost remains the same regardless of the server you select, while the cost of the server will vary depending on what the client needs. We will make available a sizing guide soon so each client's situation can be modeled based on the solution requirements. In all cases it really depends on the hardware chosen vs. the (IBM) appliance. If the hardware closely matches the hardware of the appliance, then cost differences will be minimal. It all depends on the price the client gets, so yes, in theory, a white box may be lower cost.

With Spectrum Accelerate (XIV), IBM continues, the client can also deploy the software on a cluster of just three servers (minimum) and leverage existing Ethernet networking. This minimum configuration will cost much less than the minimum XIV system configuration. Spectrum Accelerate can also be licensed on a monthly basis, so clients with variable needs, or those deploying to the cloud, can deploy and pay for only what they need when they need it.

It is a little different for the other Spectrum offerings. DancingDinosaur will continue chasing down those details. Stay tuned. DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Follow DancingDinosaur on Twitter, @mainframeblog. Follow more of his IT writing on Technologywriter.com and here.

BMC and Compuware to Drive Down Mainframe Costs

February 20, 2015

This year jumped off to an active start for the mainframe community. The introduction of the z13 in January got things going. Now Compuware and BMC are partnering to integrate some of their mainframe tools to deliver cost-aware workload and performance management. The combined tools promise to reduce mainframe OPEX even as z Systems shops try to leverage their high-value mainframe applications, data, and processing capacity to meet rapidly evolving business challenges.


Not that things had been quiet before, especially if you consider IBM scrambling to reverse successive quarters of poor financial performance with a slew of initiatives. During that time, Compuware went private last fall; about a year earlier BMC went private. Now the two companies are collaborating to deliver tools that will help mainframe shops reduce their software costs. DancingDinosaur has covered previous cost-saving and efficiency initiatives from each of these companies here and here.

Driving this collaboration is the incessant growth of new mainframe workloads, which will likely accelerate with the new z13. Such workload growth continually drives up the Monthly License Charge (MLC) for IBM mainframe software, which in sub-capacity environments is generally determined by the highest rolling four-hour average (R4HA) of mainframe utilization across all applications on each LPAR, as measured in MSUs. IBM is helping with discounts for mobile workloads and its new ICAP and country multiplex pricing, which DancingDinosaur covered here, but more is needed.

The trick requires continually managing those workloads. In effect, IT can most effectively reduce its sizable IBM z Systems software costs by 1) tuning each application to minimize its individual consumption of mainframe resources and 2) orchestrating application workloads to minimize the LPAR utilization peaks they generate collectively at any given time. Good idea, but not easy to implement in practice. You need automated tools.
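The R4HA mechanics behind that advice can be sketched in a few lines. The MSU figures below are invented for illustration, and real billing runs over SMF interval data via IBM's reporting tooling; hourly samples keep the sketch simple.

```python
# Sketch of the rolling four-hour average (R4HA) behind sub-capacity
# MLC pricing: the bill tracks the peak of the 4-hour rolling mean of
# MSU utilization, not the instantaneous peak. Hourly samples are a
# simplification; real SMF data is finer-grained.

def r4ha_peak(msu_samples):
    window = 4
    rolling = [sum(msu_samples[i - window + 1:i + 1]) / window
               for i in range(window - 1, len(msu_samples))]
    return max(rolling)

# A spiky day: one hour at 900 MSU barely moves the four-hour average,
# which is why smoothing peaks across workloads cuts the MLC bill.
samples = [200, 220, 210, 900, 215, 205, 220, 210]
print(max(samples))        # 900    (instantaneous peak)
print(r4ha_peak(samples))  # 386.25 (peak R4HA)
```

The gap between the two printed numbers is the whole game: orchestration tools try to keep short spikes from coinciding, so the rolling average, and hence the monthly charge, stays low even when individual hours run hot.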

According to Frank DeSalvo, former research director at Gartner: “The partnership between BMC and Compuware launches an integrated opportunity for mainframe customers to manage workload inefficiencies in a manner that has not been achievable to-date.” The partnership “helps organizations leverage their IT budgets by enabling them to continuously optimize their mainframe workloads, resulting in cost effective decisions for both current and future spending,” as DeSalvo was quoted in the initial announcement.

Specifically, the Compuware-BMC collaboration brings together three products: BMC Cost Analyzer, BMC MainView, and Compuware Strobe.

  • BMC Cost Analyzer for zEnterprise brings a financially intelligent workload management tool that enables z data centers to identify MLC cost drivers and take appropriate measures to reduce those costs.
  • BMC MainView provides real-time identification of application performance issues, enabling customers to quickly eliminate wasteful MSU consumption.
  • Compuware Strobe delivers deep, granular and highly actionable insight into the behavior of application code in the z systems environment.

The partners integrated the products so they actually work together. One integration, for instance, allows BMC Cost Analyzer to call Compuware Strobe for a detailed analysis of the specific application components driving peak MLC periods, enabling customers to proactively tune the applications that have the greatest impact on their monthly software licensing costs. A second integration, with BMC MainView, allows customers to either automatically or manually invoke Strobe performance analysis, empowering mainframe staffs to perform cost-saving tuning tasks more quickly, efficiently, and consistently.

Courtesy of Compuware (click to enlarge)

BTW, at the same time Compuware introduced the latest version of Strobe, v5.2. It promises deep insight into how application code, including DB2, COBOL 5.1, IMS, and MQ processes, consumes resources in z environments. By providing these insights, and by making it easy for multi-discipline mainframe ops teams to collaborate around them, Strobe 5.2 enables IT to further drive down mainframe costs. At the same time it improves application responsiveness.

Besides the software licensing savings that can result, the organization also benefits from performance gains for these applications. These too can be valuable, since they positively impact end-user productivity and, more importantly, the customer experience.

DancingDinosaur feels that any technology you can use to automate and streamline your systems operations will benefit you because people are always more expensive and less efficient than technology.

Alan Radding is DancingDinosaur. Follow this blog on Twitter, @mainframeblog. View my other IT writing at Technologywriter.com and here.

IBM z Systems as a Cloud Platform

February 13, 2015

DancingDinosaur wrote a long paper for an audience of x86 users. The premise of the paper: the z Systems in many cases could be a better and even lower cost alternative to x86 for a private or hybrid cloud. The following is an excerpt from that paper.

 cloud_computing_providers

BTW, IBM earlier this month announced it signed a 10-year, large-scale services agreement with Shop Direct to move the multi-brand digital retailer to a hybrid cloud model to increase flexibility and quickly respond to changes in demand as it grows, one of many such IBM wins recently. The announcement never mentioned Shop Direct’s previous platform. But it or any company in a similar position could have opted to build its own hybrid (private/public) cloud platform.

A hybrid cloud a company builds today probably runs on the x86 platform and the Windows OS. Other x86-based clouds run Linux. As demand for the organization’s hybrid cloud grows and new capabilities are added traffic increases.  The conventional response is to scale out or scale up, adding more or faster x86 processors to handle more workloads for more users.

So, why not opt for a hybrid cloud running on the z? As a platform, x86 is far from perfect; too unstable and insecure, for starters. By adopting a zEC12 or a z13 to host your hybrid cloud you get one of the fastest general commercial processors on the market and the highest security rating for commercial servers (EAL 5+). But most x86-oriented data centers would balk. Way too expensive would be their initial reaction. Even if they took a moment to look at the numbers, their IT staff would be in open revolt and give you every reason it couldn't work.

The x86 platform, however, is not nearly as inexpensive as is commonly believed, and there are many ways to make the z cost competitive. Due to the eccentricities of Oracle licensing on z Systems, for instance, organizations often can justify the entire cost of the mainframe from the annual Oracle software license savings alone. This can amount to hundreds of thousands of dollars or more each year. The entry-level mainframe, meanwhile, has a list price of $75,000, not much more than an x86 system of comparable MIPS. And that's before you start calculating the cost of the redundancy, failover, and zero downtime that come built into the mainframe, or consider security. Plus, with the z Systems Solution Edition program, IBM is almost giving the mainframe away.

Some x86 shops could think of the mainframe as a potent Linux machine that can handle thousands of Linux instances without breaking a sweat; the staff wouldn't even have to touch z/OS. It also runs Java and Hadoop. And it delivers an astonishingly fast and efficient Linux environment, a level of performance that would require a much greater number of x86 cores to match. Hosting an on-premises or hybrid cloud at enterprise scale takes a lot of cores, and the cost of acquiring, deploying, and managing all those x86 cores will break almost any budget.

Just ask Jim Tussing, Chief Technology Officer for infrastructure and operations at Nationwide Insurance (DancingDinosaur has covered Tussing before): “We had literally 3000 x86 servers deployed that were underutilized,” a common situation in the x86 environment even with VMware or Hyper-V virtualization. At a time when Nationwide was seeking to increase the pace of innovation across its products and channels, new environments were taking weeks or months to provision and deploy, again not unheard of in the x86 world. The x86 environment was choking the company.

So, Nationwide consolidated and virtualized as many x86 servers on a mainframe as possible, creating what amounted to an on-premises and hybrid cloud. The payoff: Nationwide reduced power, cooling, and floor space requirements by 80 percent. And it finally reversed the spiraling expenditure on its distributed server landscape, saving an estimated $15 million over the first three years, money it could redirect into innovation and new products. It also could provision new virtual server instances fast and tap the hybrid cloud for new capabilities.

None of this should be news to readers of DancingDinosaur. However some mainframe shops still face organizational resistance to mainframe computing. Hope this might help reinforce the z case.

DancingDinosaur is Alan Radding, a long-time IT analyst/writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of my IT writing at Technologywriter.com and here.

New Software Pricing for IBM z13

February 6, 2015

Every new mainframe causes IBM to rethink its pricing. This makes sense because mainframe software licensing is complex. The z13 enables different workloads and combinations of uses that merit reexamining the software licensing. But overall, IBM is continuing its strategy to enhance software price/performance with each generation of hardware. This has been the case for as long as DancingDinosaur has been covering the mainframe. (click graphic below to enlarge)

 IBM z13 technology update pricing

DancingDinosaur, along with other mainframe analysts, recently listened to Ray Jones, IBM Vice President, z Systems Sales, go through the new z13 software pricing. In short, expect major structural enhancements in the first half of 2015. Of particular interest are two changes IBM is instituting:

  1. IBM Collocated Application Pricing (ICAP), which lets you run your systems the way that makes sense in your organization
  2. Country Multiplex Pricing, an evolution of Sysplex pricing that allows greater flexibility and simplicity by treating all your mainframes in one country as a single sysplex

Overall, organizations running the z under AWLC should see a 5% discount on average.

But first, let's take a moment to review AWLC (Advanced Workload License Charges). From the start, this monthly license program has been intended to let you grow hardware capacity without necessarily increasing software charges. In general you'll experience a low cost of incremental growth, and you can manage software cost by managing workload utilization and deployment across LPARs and peak hours.

A brief word about MSU. It stands for millions of service units, and it measures the processing capacity of your mainframe. IBM determines the MSU rating of a particular mainframe configuration by some arcane process invisible to most of us. The table above starts with MSU; just use the number IBM has assigned your z configuration.
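For readers new to sub-capacity pricing, what drives the monthly AWLC bill is not the machine's full rated capacity but the peak rolling four-hour average (R4HA) of MSU consumption. The sketch below illustrates that calculation with hypothetical hourly samples; actual reporting comes from IBM's SCRT tool working off SMF records, not homegrown code.

```python
# Illustrative sketch of the peak rolling four-hour average (R4HA) MSU
# calculation that underlies sub-capacity charges. Sample data is
# hypothetical; real reporting uses IBM's SCRT against SMF records.

def peak_r4ha(hourly_msus):
    """Return the peak rolling 4-hour average over hourly MSU samples."""
    window = 4
    averages = [
        sum(hourly_msus[i:i + window]) / window
        for i in range(len(hourly_msus) - window + 1)
    ]
    return max(averages)

# 24 hourly MSU samples for one day: quiet overnight, busy midday.
day = [100] * 8 + [300, 450, 500, 480, 470, 460, 350, 300] + [150] * 8
print(peak_r4ha(day))  # 477.5 -- well below the momentary 500 MSU spike
```

The point of the exercise: a short spike to 500 MSUs does not set the bill; the sustained four-hour average does, which is why shops manage peak-hour workload deployment so carefully.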

OK, now we’re ready to look at ICAP pricing. IBM describes ICAP as the next evolution of z Systems sub-capacity software pricing. ICAP allows workloads to be priced as if in a dedicated environment although technically you have integrated them with other workloads. In short, you can run your systems and deploy your ICAP workloads the way you want to run them. For example, you might want to run a new anti-fraud app or a new instance of MQ and do it on the same LPAR you’re running some other workload.

ICAP is for new workloads you’re bringing onto the z. You have to define the workloads and are responsible for collecting and reporting the CPU time. It can be as simple as creating a text file to report it. However, don’t rush to do this; IBM suggested an ICAP enhancement to the MWRT sub-capacity reporting tool will be coming.

In terms of pricing impact, IBM reports ICAP has no effect on the reported MSUs of other sub-capacity middleware programs (it adjusts MSUs like an offload engine, similar to Mobile Workload Pricing for z/OS). z/OS shops could see 50% of the ICAP-defining programs' MSUs removed, which can translate into real savings. IBM reports that ICAP provides a price benefit similar to zNALC for z/OS, but without the requirement for a separate LPAR. Remember, with ICAP you can deploy your workloads where you see fit.
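To make that 50% adjustment concrete, here is a back-of-the-envelope sketch. The numbers are hypothetical, chosen only to show the shape of the calculation, not actual IBM pricing mechanics.

```python
# Hypothetical illustration of the ICAP MSU adjustment described above:
# 50% of the MSUs attributed to the ICAP-defining workload are removed
# from the reported peak, much as an offload engine or Mobile Workload
# Pricing reduces the billable base.

lpar_peak_msus = 400      # reported R4HA peak for the LPAR (hypothetical)
icap_workload_msus = 120  # MSUs attributed to the new ICAP workload

adjusted_msus = lpar_peak_msus - 0.5 * icap_workload_msus
print(adjusted_msus)  # 340.0 MSUs billed instead of 400
```

Since monthly license charges scale with reported MSUs, shaving 60 MSUs off a 400 MSU peak is exactly the kind of saving the author means by "real savings."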

Under Country Multiplex Pricing, a country multiplex is the collection of all zEnterprise and later machines in a country (applicable to all z196, z114, zEC12, zBC12, and z13 machines), measured like one machine for sub-capacity reporting. It amounts to a new way of measuring and pricing MSUs, as opposed to aggregating under current rules. The result should be the flexibility to move and run work anywhere, the elimination of Sysplex pricing rules, and the elimination of duplicate peaks when workloads move between machines.

In the end, the cost of growth is reduced with one price per product based on growth anywhere in the country. Hardware and software migrations also become greatly simplified because Single Version Charging (SVC) and Cross Systems Waivers (CSW) will no longer be relevant.  And as with ICAP, a new Multiplex sub-capacity reporting tool is coming.
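The "duplicate peaks" point is easiest to see with numbers. In this hypothetical sketch, two machines peak in different hours; summing each machine's own peak (the old way) double-counts capacity that is never used simultaneously, while measuring them as one multiplex counts only the combined peak. (Real measurement uses rolling four-hour averages; simple hourly maxima are used here for clarity.)

```python
# Hypothetical illustration of why multiplex-wide measurement reduces
# billable MSUs: peaks on separate machines in different hours no longer
# add up; only the combined country-wide peak counts.

machine_a = [200, 500, 200, 200]  # hourly MSUs, peak in hour 1
machine_b = [200, 200, 500, 200]  # hourly MSUs, peak in hour 2

old_basis = max(machine_a) + max(machine_b)        # per-machine peaks
combined = [a + b for a, b in zip(machine_a, machine_b)]
new_basis = max(combined)                          # single multiplex peak

print(old_basis, new_basis)  # 1000 vs 700
```

A workload that moves from machine A to machine B mid-month no longer creates a peak on each box; it simply shifts within one measured pool.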

Other savings also remain in play, especially the z/OS mobile pricing discounts, which significantly reduce the level at which mobile activity is calculated for peak-load pricing. With mobile activity expected to grow substantially, these savings could become quite large.

DancingDinosaur is Alan Radding, a veteran mainframe and IT writer and analyst. Follow DancingDinosaur on Twitter, @mainframeblog. See more of my writing at Technologywriter.com and here.

Compuware Topaz Brings Distributed Style to the Mainframe

January 30, 2015

Early in January Compuware launched the first of what it promised would be a wave of tools for the mainframe that leverage the distributed graphical style of working with systems. The company hopes the tool, Topaz, will become a platform that gets people experienced with distributed computing, especially Millennials, hooked on working with the mainframe. It is aiming not just at IT newbies but also at experienced distributed IT people who find the mainframe alien.

Compuware is pitching Topaz as a solution for addressing the problem of the wave of retirements of experienced mainframe veterans. The product promises to help developers, data architects, and other IT professionals discover, visualize, and work with both mainframe and non-mainframe data in a familiar, intuitive manner.  They can work with it without actually having to directly encounter mainframe applications and databases in their native formats.

compuware topaz screen

Topaz Relationship Visualizer (click to enlarge)

DancingDinosaur has received the full variety of opinions on the retiring mainframe veteran issue, ranging from a serious concern to a bogus issue. Apparently the issue differs with each mainframe shop. In this case, demographics ultimately rule, and people knowledgeable about the mainframe (including DancingDinosaur, sadly) are getting older.  Distributed IT folks, however, know how to operate data centers, manage applications, handle data, and run BI and analytics—all the things we want any competent IT shop to do. So, to speed their introduction to the mainframe it makes sense to give them familiar tools that let them work in accustomed ways.

And Topaz definitely has a familiar distributed look-and-feel. Check out a demonstration of it here. What you will see are elements of systems, applications, and data represented graphically. Click an item and the relevant relationships are exposed. Click again to drill down to detail. To move data between hosts just drag and drop the desired files between distributed hosts and the mainframe.  You also can use a single distributed-like editor to work with data on Oracle, SQL Server, IMS, DB2 and others across the enterprise. The actions are simple, intuitive, and feel like any GUI tool.

The new tool should seem familiar. Compuware built Topaz using open source Eclipse. It also made use of ISPF, the mainframe toolset. Read about Eclipse here.

With Topaz Compuware is trying to address a problem IBM has been tackling through its System z Academic Initiative: where will the next generation of mainframers come from? With its contests and university curricula, IBM is trying to captivate young people early with job possibilities and slick technologies, catching them as young as high school.

Compuware is aiming at working IT professionals in the distributed environment. They may not be much younger than their mainframe counterparts, but Compuware is giving them a tool that lets them immediately start doing meaningful work with both distributed and mainframe systems, in a way they immediately grasp.

Topaz treats mainframe and non-mainframe assets in a common manner. As Compuware noted: In an increasingly dynamic big data world it makes less and less sense to treat any platform as an island of information. Topaz takes a huge step in the right direction.

Finally, expect to see Topaz updates and enhancements quarterly. Compuware describes Topaz as an agile development effort, drawing a pointed contrast to the rather languid pace of some mainframe ISVs in getting out updates.  If the company is able to achieve its aggressive release cycle goals that alone may help change perceptions of the mainframe as a staid, somewhat dull platform.

With Topaz Compuware is off to a good start, but you can see where and how the toolset can be expanded upon.  And Compuware even hinted at opening the Topaz platform to other ISVs. Don’t hold your breath, but at the least it may get other mainframe ISVs to speed their own efforts, making the mainframe overall a more dynamic platform. With the z13 IBM raised the innovation bar (see DancingDinosaur here and here). Now other mainframe ISVs must up their game.

DancingDinosaur is Alan Radding, a long-time IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. You also can read more of my writing at Technologywriter.com and here.

IBM z13 Chip Optimized for Top Enterprise Performance

January 23, 2015

With the zEC12 IBM boasted of the fastest commercial chip in the industry. It is not making that boast with the z13. Instead, it claims a 40% total capacity improvement over the zEC12. The reason: IBM wants the z13 to excel at mobile, cloud, and analytics as well as fast extreme scale transaction processing. This kind of performance requires optimization up and down the stack; not just chip speed but I/O processing, memory access, instruction tweaks, and more.

 z13 mobile

Testing mobile transactions on the z13

This is not to suggest that the machine is not fast.  It is.  Timothy Prickett Morgan writing in his 400 blog notes that the z13 chip runs a 22 nm core at 5 GHz, half a GHz slower than the zEC12. The zEC12 processor, the one touted as the fastest commercial processor, was a 32nm core that clocked at 5.5 GHz.  Still, the z13 delivers about a 10 percent performance bump per core thanks, he writes, to other tweaks in the core design, such as better branch prediction and better pipelining in the core. The slightly slower clock speed reduces heat.

Up and down the stack IBM has been optimizing the z13 for maximum performance.

  • 2X performance boost for cryptographic coprocessors
  • 2X increase in channel speed
  • 2X increase in I/O bandwidth
  • 3X increase in memory capacity
  • 2X increase in cache and a new level of cache

At 5 GHz and with all the enhancements IBM has made, the z13 remains the fastest. According to IBM, it is the first system able to process 2.5 billion transactions a day, the equivalent of 100 Cyber Mondays every day of the year. Maybe even more importantly, z13 transactions are persistent, protected, and auditable from end to end, adding assurance as mobile transactions grow to an estimated 40 trillion (that's trillion with a T) per day by 2025.

Given that mobile is shaping up to be the platform of the future, the z13 is the first system to make practical the real-time encryption of all mobile transactions at any scale, notes IBM. Specifically, the z13 speeds real-time encryption of mobile transactions to help protect transaction data and ensure response times consistent with a positive customer experience. For mobile workloads overall, the machine delivers up to 36% better response time, up to 61% better throughput, and up to 17% lower cost per mobile transaction. And IBM discounts mobile transactions running on z/OS.

To boost security performance the machine benefits from 500 new patents including cryptographic encryption technologies that enable more security features for mobile initiated transactions. In general IBM has boosted the speed of encryption up to 2x over the zEC12 to help protect the privacy of data throughout its life cycle.

Combined with the machine's embedded analytics, it can provide real-time insights on all transactions. This capability helps an organization run real-time fraud detection on 100 percent of its business transactions. In terms of analytics, the machine delivers insights up to 17x faster at 13x better price performance than its competitors.

Further boosting performance is the increase in memory. For starters, the machine can handle up to 10 TB of onboard memory to help with z/OS and Linux workloads. To encourage organizations to take advantage of the extra memory, IBM is discounting its cost. Memory has run $1,500/GB, but organizations can populate the z13 with new memory starting at $1,000/GB, and with various discounts you can get memory for as little as $200/GB.
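The quick arithmetic below puts those per-GB figures in context for a hypothetical 1 TB (1,024 GB) memory upgrade; the configuration size is illustrative, not an IBM quote.

```python
# Quick arithmetic on the memory pricing cited above, for a hypothetical
# 1 TB (1,024 GB) upgrade on a z13.

gb = 1024
prior_list = 1500 * gb     # at the prior $1,500/GB rate
z13_starting = 1000 * gb   # at the new-memory starting rate of $1,000/GB
best_discount = 200 * gb   # best case, with various discounts applied

print(prior_list, z13_starting, best_discount)
# $1,536,000 vs $1,024,000 vs $204,800
```

At the deepest discount, a terabyte of mainframe memory costs roughly an eighth of what it would have at the old rate, which is what makes the in-memory strategies below economically plausible.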

So what will you do with a large amount of discounted memory? Start by running more applications in-memory to boost performance.  Do faster table scans in memory to speed response or avoid the need for I/O calls. Speed sorting and analytics by doing it in memory to enable faster, almost real-time decision making. Or you can run more Java without increasing paging and simplify the tuning of DB2, IMS and CICS. Experience 10x faster response time with Flash Express and a 37% increase in throughput compared to disk.

As noted above, IBM optimized just about everything that can be optimized. The z13 provides 320 separate channels dedicated just to driving I/O throughput, as well as performance goodies only your geeks will appreciate, like simultaneous multithreading (SMT), symmetric multiprocessing (SMP), and single instruction, multiple data (SIMD). Overall, about 600 processors (in addition to your configurable cores) speed and streamline processes throughout the machine.

Mainframes have the fastest processors in the industry (none come close), and with the addition of more memory, faster I/O, and capabilities like SMT and SIMD noted above, the z13 clearly is the fastest. For workloads that benefit from this kind of performance, the z13 is where they should run.

DancingDinosaur is Alan Radding, a long-time IT analyst and writer. You can follow DancingDinosaur on Twitter, @mainframeblog. Check out his other IT writing at Technologywriter.com and here.

