
Compuware Triples Down on Promised Quarterly z System Releases

October 14, 2016

Since January 2015 Compuware has been releasing enhancements to its mainframe software portfolio quarterly. The latest quarterly release, dated Oct. 3, delivers REST APIs for ISPW source code management and DevOps release automation; integration of Compuware Abend-AID with Syncsort Ironstream for cross-platform operational insight; and new SEA plug-ins for Topaz Workbench. The SEA plug-ins will help less skilled IBM z/OS developers manage mainframe batch processing along with other z platform tasks.


Compuware’s point is to position the mainframe at the heart of agile DevOps computing. As part of the effort, it needs to deliver slick, modern tools that will appeal to the non-mainframers who are increasingly moving into multi-platform development roles that include the mainframe. These people want to work as if they are dealing with a Windows or Linux machine. They aren’t going to wrestle with arcane mainframe constructs like abends or JCL.  Traditional mainframe dev, test, and code promotion processes are simply too slow to meet the demands of today’s fast-moving markets. The new dev and ops people who are filling out data center ranks haven’t the patience to learn what they view as antiquated mainframe concepts. They need intelligent tools that visualize the issue and let them intuitively click, drag, drop, and swipe their way through whatever needs to be done.

This is driven by the long-expected attrition of veteran mainframers and the mainframe knowledge and application insight they brought. Only the recession that began in 2008 slowed the exit of aging mainframers. Now they are leaving; one mainframe credit card processor reportedly lost 50 mainframe staff in a month.  The only way to replace this kind of experience is with intelligent, easy-to-learn tools and expert automation.

Compuware’s response has been to release new tools and enhancements every quarter. It started with Topaz in 2015. DancingDinosaur covered it in January 2015 here.  The beauty of Topaz lies in its graphical ease-of-use. Data center newbies didn’t need to know z/OS; they could understand what they were seeing and do meaningful work. With each quarterly release Compuware, in one way or another, has advanced this basic premise.

The most recent advances streamline the DevOps process in a variety of ways.  DevOps has emerged as critical as mainframe shops scramble to remain relevant and effective in a rapidly evolving app dev environment. Just look at Bluemix if you want to see where things are heading.

In the first announcement, Compuware extended mainframe DevOps innovation with REST APIs for ISPW SCM and release automation. The new APIs enable large enterprises to flexibly integrate their numerous other mainframe and non-mainframe DevOps tools with ISPW to create their own custom cross-platform DevOps toolchains. Part of that was the acquisition of the assets associated with Itegrations’ source code management (SCM) migration practice and methodology, which will enable Compuware users to more easily migrate to ISPW from Agile-averse products such as CA Endevor, CA Panvalet, CA Librarian, and Micro Focus/Serena ChangeMan, as well as from internally developed SCM systems.
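
To make the toolchain integration concrete, here is a minimal sketch in Python of how an external DevOps tool might drive a code promotion through such a REST API. Compuware has not published endpoint details in this announcement, so the host, paths, payload fields, and assignment ID below are all hypothetical.

    import requests

    ISPW_BASE = "https://ispw.example.com/ispw/api"  # hypothetical host and path
    TOKEN = "changeme"  # in practice, pulled from a secrets store

    def promote_assignment(assignment_id: str, level: str) -> dict:
        """Ask a hypothetical ISPW REST endpoint to promote an assignment."""
        resp = requests.post(
            f"{ISPW_BASE}/assignments/{assignment_id}/tasks/promote",
            headers={"Authorization": f"Bearer {TOKEN}"},
            json={"level": level},  # e.g., DEV -> QA -> PROD
            timeout=30,
        )
        resp.raise_for_status()  # surface HTTP errors to the calling pipeline
        return resp.json()

    if __name__ == "__main__":
        print(promote_assignment("PLAY000001", "QA"))

The specific call matters less than the pattern: any Jenkins, XebiaLabs, or homegrown pipeline that can issue HTTP requests can now participate in mainframe code promotion.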

According to Compuware, these DevOps toolchains are becoming increasingly important for two reasons:

  • Enterprises must aggressively adopt DevOps disciplines in their mainframe environments to fulfill business requirements for digital agility. Traditional mainframe dev, test, and code promotion processes are simply too slow to meet the demands of today’s fast-moving markets and to counter new, digitally nimble market disruptors.
  • Data centers need to better integrate the toolchains that support their newly adopted mainframe DevOps workflows with those that support DevOps across their various other platforms. This is because mainframe applications and data so often function as back-end systems-of-record for front-end web and mobile systems-of-engagement in multi-tier/cross-platform environments.

In the second announcement Compuware integrated Abend-AID and Syncsort’s Ironstream to give fast, clear insight into mainframe issues. Specifically, the integration of Abend-AID and Ironstream enables IT to more quickly discover and act upon correlations between application faults and broader conditions in the mainframe environment. This is particularly important, notes Compuware, as enterprises, out of necessity, shift operational responsibilities for the platform to staffs with limited experience on z/OS. Just put yourself into the shoes of a distributed system manager now dealing with a mainframe. What might appear to be a platform issue may turn out to be a software fault, and vice versa.  The retired 30-year mainframe veterans would probably see it immediately (but not always). Mainframe newcomers need a tool with the intelligence to recognize it for them.

With the last announcement Compuware and Software Engineering of America (SEA) introduced the release of SEA’s JCLplus+ Remote Plug-In and $AVRS Plug-In for Compuware’s Topaz Workbench mainframe IDE. Again think about mainframe neophytes. The new plug-ins for Topaz significantly ease challenging JCL- and output-related tasks, according to Compuware, effectively enabling both expert and novice IT staff to perform those tasks more quickly and more accurately in the context of their other mainframe DevOps activities.

An encouraging aspect of this is that Compuware is not doing this alone. The company is teaming up with SEA and with Syncsort to make this happen. As the mainframe vendors work to make mainframe computing easier and more available to lesser trained people it will be good for the mainframe industry as a whole and maybe even help lower the cost of mainframe operations.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghostwriter. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing here.


Meet the POWER9 Chip Family

September 2, 2016

When you looked at a chip in the past you primarily were concerned with two things: the speed of the chip, usually expressed in GHz, and how much power it consumed. Today the IBM engineers preparing the newest POWER chip, the 14nm POWER9, are tweaking the chip for the different workloads it might run, such as cognitive or cloud, for different deployment options, such as scale-up or scale-out, and for a host of other attributes.  EE Times described it in late August from the Hot Chips conference where it was publicly unveiled.



IBM describes it as a chip family but maybe it’s best described as the product of an entire chip community, the OpenPOWER Foundation. Innovations include CAPI 2.0, New CAPI, Nvidia’s NVLink 2.0, PCIe Gen4, and more. It spans a range of acceleration options from HSDC clusters to extreme virtualization capabilities for the cloud. POWER9 is not just about high speed transaction processing; IBM wants the chip to interpret and reason, ingest and analyze.

POWER has gone far beyond the POWER chips that enabled Watson to (barely) beat the human Jeopardy champions. Going forward, IBM is counting on POWER9 and Watson to excel at cognitive computing, a combination of high speed analytics and self-learning. POWER9 systems should not only be lightning fast but get smarter with each new transaction.

For z System shops, POWER9 offers a glimpse into the design thinking IBM might follow with the next mainframe, probably the z14 that will need comparable performance and flexibility. IBM already has set up the Open Mainframe Project, which hasn’t delivered much yet but is still young. It took the Open POWER group a couple of years to deliver meaningful innovations. Stay tuned.

The POWER9 chip is incredibly dense (below). You can deploy it in either a scale-up or scale-out architecture, with a choice of versions: one for two-socket servers with 8 direct DDR4 ports and another for multiple chips per server with buffered DIMMs.


IBM POWER9 silicon layout

IBM describes the POWER9 as a premier acceleration platform. That means it offers extreme processor/accelerator bandwidth and reduced latency; coherent memory and virtual addressing capability for all accelerators; and robust accelerated compute options through the OpenPOWER community.

It includes state-of-the-art I/O and acceleration attachment signaling (a quick arithmetic check on these numbers follows the lists below):

  • PCIe Gen 4 x 48 lanes – 192 GB/s duplex bandwidth
  • 25G Link x 48 lanes – 300 GB/s duplex bandwidth

And robust accelerated compute options based on open standards, including:

  • On-Chip Acceleration—Gzip x1, 842 Compression x2, AES/SHA x2
  • CAPI 2.0—4x bandwidth of POWER8 using PCIe Gen 4
  • NVLink 2.0—next generation of GPU/CPU bandwidth and integration using 25G Link
  • New CAPI—high bandwidth, low latency and open interface using 25G Link

In scale-out mode it employs direct attached memory through 8 direct DDR4 ports, which deliver:

  • Up to 120 GB/s of sustained bandwidth
  • Low latency access
  • Commodity packaging form factor
  • Adaptive 64B / 128B reads

In scale-up mode it uses buffered memory through 8 buffered channels to provide:

  • Up to 230GB/s of sustained bandwidth
  • Extreme capacity – up to 8TB / socket
  • Superior RAS with chip kill and lane sparing
  • Compatible with POWER8 system memory
  • Agnostic interface for alternate memory innovations
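
Here is the arithmetic check promised above, in Python. It assumes PCIe Gen4 moves roughly 2 GB/s per lane per direction (16 GT/s with 128b/130b encoding) and a 25G lane moves 25 Gb/s, or 3.125 GB/s, per direction; on those assumptions IBM’s duplex figures fall right out.

    def duplex_bandwidth_gbs(lanes: int, gb_per_lane_per_direction: float) -> float:
        """Total duplex bandwidth in GB/s: all lanes, both directions."""
        return lanes * gb_per_lane_per_direction * 2

    # PCIe Gen4 x48 at ~2 GB/s per lane per direction
    print(duplex_bandwidth_gbs(48, 2.0))    # 192.0 GB/s
    # 25G link x48 at 25 Gb/s / 8 = 3.125 GB/s per lane per direction
    print(duplex_bandwidth_gbs(48, 3.125))  # 300.0 GB/s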

POWER9 was publicly introduced at the Hot Chips conference in late August. Commentators writing in EE Times noted that POWER9 could become a breakout chip, seeding new OEM and accelerator partners and rejuvenating IBM’s efforts against Intel in high-end servers. To achieve that kind of performance IBM deploys large chunks of memory, including a 120 Mbyte embedded DRAM in shared L3 cache, while riding a 7 Tbit/second on-chip fabric. POWER9 should deliver as much as 2x the performance of the POWER8 or more when the new chip arrives next year, according to Brian Thompto, a lead architect for the chip, in published reports.

As noted above, IBM will release four versions of POWER9. Two will use eight threads per core and 12 cores per chip, geared for IBM’s Power virtualization environment; two will use four threads per core and 24 cores per chip, targeting Linux. Each will come in two versions: one for two-socket servers with 8 DDR4 ports and another for multiple chips per server with buffered DIMMs.

The diversity of choices, according to Hot Chips observers, could help attract OEMs. IBM has been trying to encourage others to build POWER systems through its OpenPOWER group that now sports more than 200 members. So far, it’s gaining most interest from China where one partner plans to make its own POWER chips. The use of standard DDR4 DIMMs on some parts will lower barriers for OEMs by enabling commodity packaging and lower costs.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing here.




IBM’s Latest Flash Announcements Target Dell/EMC

August 26, 2016

The newest IBM storage, announced here earlier this week, aims to provide small, midsize, and global enterprises with virtualized SDS for primary storage and for cloud or cognitive applications and workloads. Central to the effort is IBM Spectrum Virtualize, which automates Storwize all-flash solutions intended to reduce the cost and complexity of data center and cloud environments. Entry pricing for the new storage starts at $19,000, which IBM describes as cost-conscious.

IBM All-Flash for the midrange

In addition, IBM announced Flash In, a no-cost storage migration program targeting Dell/EMC customers that IBM hopes will bail out of the merged operation.

SDS in the form of IBM Spectrum Virtualize is central to making IBM’s latest all-flash offerings work for the broad set of use cases IBM envisions.  As IBM puts it: organizations today are embracing all-flash storage to deliver speed and response times necessary to support growing data workloads across public, private, and hybrid cloud environments, as well as the emerging demands of cognitive applications and workloads.

IBM Spectrum Virtualize promises to improve storage efficiency through features such as real-time data compression, thin provisioning, and snapshotting across nearly 400 different storage arrays from a multitude of vendors. That means organizations can leverage, even repurpose, physical storage capacity they already have as they scramble to meet the storage needs of new workloads.
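
As a crude, invented-numbers illustration in Python of why those efficiency features matter: compression multiplies effective capacity, while thin provisioning lets you promise hosts more than you physically own. Actual savings depend entirely on the data.

    raw_tb = 100               # physical flash capacity
    compression_ratio = 2.0    # hypothetical 2:1 real-time compression
    allocated_tb = 300         # thin-provisioned capacity promised to hosts
    written_tb = 120           # what hosts have actually written so far

    effective_tb = raw_tb * compression_ratio
    print(f"effective capacity: {effective_tb:.0f} TB")               # 200 TB
    print(f"overcommit: {allocated_tb / raw_tb:.1f}x raw capacity")   # 3.0x
    print(f"effective headroom: {effective_tb - written_tb:.0f} TB")  # 80 TB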

Spectrum Virtualize also optimizes data security, reliability, and operational costs. For example, the software automatically tiers and migrates data from one storage array to another, provides secure data-at-rest encryption, and remotely replicates data for disaster recovery and business continuity.

The announcement centers on two products, the enterprise-class IBM Storwize V7000F and the midsize IBM Storwize V5030F, which promise enterprise-class availability and function in mid-range and entry-level all-flash storage arrays.  At the same time, both offer greater performance and require less time to provision and optimize systems. Coincidentally, IBM has just been recognized, for the third year in a row, as a leader for flash storage in the Gartner Magic Quadrant for Solid-State Arrays (SSA).

Specifically, the all-flash IBM Storwize V7000F improves performance by up to 45 percent and supports four times the clustering for scale-out and scale-up growth to help organizations manage rapidly growing datasets.  The midrange and entry-level all-flash IBM Storwize V5030F offers high performance and availability at a discounted entry point (noted above) to help clients control costs.

The all-flash Storwize V7000F and Storwize V5030F are also built to manage a variety of primary storage workloads, from database management systems, such as SQL Server and MySQL, to digital media sources that include broadcast, real-time streaming, and video surveillance. The new technology can also handle huge data volumes, such as IoT data.

Given the product line confusion that typically characterizes big technology platform mergers, IBM is looking to entice some Dell or, more likely, EMC storage customers to the new Storwize offerings. To that end, IBM is offering what it describes as a no-cost migration initiative for organizations that are not current IBM customers and seeking a smooth transition path from their EMC or Dell storage to the IBM family of all-flash arrays. BTW: EMC is a leading provider of z System storage.

While it is too early to spot any Dell or EMC customer response, one longtime IBM customer, Royal Caribbean Cruises Ltd., has joined the flash storage party. “With ever increasing volumes of customer and operational information, flexible and secure data storage is crucial to keeping our operation afloat (hope the pun was intended) as our company expands to hundreds of destinations worldwide,” said Leonardo Irastorza, Technology Revitalization & Global Shared Services Manager. The cruise line is counting on IBM flash storage to play a critical role, especially when it comes to ensuring exceptional guest experiences across its brands.

And more is coming: IBM released the following statement of direction: IBM intends to enhance IBM Spectrum Virtualize with additional capabilities for flash drive optimization and management. These capabilities are intended to help increase the service life and usability of flash drives, particularly read-intensive flash drives. The planned capabilities will likely include:

  • Data deduplication for workloads and use cases where it complements IBM’s existing industry leading compression technology
  • Improved flash memory management (mainly for garbage collection)
  • Additional flash drive wear management and reporting.

By implementing these capabilities in IBM Spectrum Virtualize they will be available for the IBM Storwize family, FlashSystem V9000, and SAN Volume Controller offerings as well as VersaStack (the IBM/Cisco collaboration) and IBM PurePower systems.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing here.


IBM Discounts z/OS Cloud Activity

August 12, 2016

The latest iteration of IBM’s z/OS workload pricing aims to lower the cost of running cloud workloads.  In a recent announcement, z Systems Workload Pricing for Cloud (zWPC) for z/OS seeks to minimize the impact of new public cloud workload transaction growth on Sub-Capacity license charges. IBM did the same thing with mobile workloads when they started driving up the 4-hour workload averages on the z. As more z workloads interact with public clouds this should start to add up, if it hasn’t already.


Bluemix Garages in the Cloud

As IBM puts it: zWPC applies to any organization that has implemented Sub-Capacity pricing via the basic AWLC or AEWLC pricing mechanisms for the usual MLC software suspects. These include z/OS, CICS, DB2, IMS, MQ and WebSphere Application Server (WAS).  An eligible transaction is one classified as Public Cloud-originated, connecting to a z/OS hosted transactional service and/or data source via a REST or SOAP web service.  Public cloud workloads are defined as transactions processed by named Public cloud application transactions identified as originating from a recognized Public Cloud offering, including but not limited to, Amazon Web Services (AWS), Microsoft Azure, IBM Bluemix, and more.

IBM appears to have simplified how you identify eligible workloads. As the company notes: zWPC does not require you to isolate the public cloud work in separate partitions, but rather offers an enhanced way of reporting. The z/OS Workload Manager (WLM) allows clients to use WLM classification rules to distinguish cloud workloads, effectively easing the data collection requirements for public cloud workload transactions.

So how much will you save? It reportedly reduces eligible hourly values by 60 percent. The discount produces an adjusted Sub-Capacity value for each reporting hour. What that translates into on your monthly IBM software invoice once all the calculations and fine print are considered amounts to a guess at this point. But at least you’ll save something. The first billing eligible under this program starts Dec. 1, 2016.
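
The mechanics have not been spelled out in detail, but if zWPC mirrors the earlier mobile pricing approach, the arithmetic would look roughly like the Python sketch below: 60 percent of cloud-eligible MSUs are deducted from each reporting hour’s value, and the monthly bill follows the adjusted peak. The MSU figures are invented, and the real calculation carries considerably more fine print.

    DISCOUNT = 0.60  # zWPC reportedly reduces eligible hourly values by 60%

    # (total 4-hour rolling average MSUs, cloud-eligible MSUs) per reporting hour
    hourly = [
        (900, 200),
        (1100, 400),  # raw peak hour
        (950, 100),
    ]

    adjusted = [total - DISCOUNT * cloud for total, cloud in hourly]
    print(adjusted)       # [780.0, 860.0, 890.0]
    print(max(adjusted))  # 890.0 -- note the billing peak can shift to a different hour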

DancingDinosaur expects IBM to eventually follow with discounted z/OS workload pricing for IoT and blockchain transactions and maybe even cognitive activity. Right now the volume of IoT and blockchain activity is probably too low to impact anybody’s monthly license charges. Expect those technologies to ramp up in coming years, with many industry pundits projecting huge numbers (think billions and trillions) that will eventually impact the mainframe data center and associated software licensing charges.

Overall, Workload License Charges (WLC) constitute a monthly software license pricing metric applicable to IBM System z servers running z/OS or z/TPF in z/Architecture (64-bit) mode.  The driving principle of WLC amounts to pay-for-what-you-use, a laudable concept. In effect it lowers the cost of incremental growth while further reducing software costs by proactively managing associated peak workload utilization.

Generally, DancingDinosaur applauds anything IBM does to lower the cost of mainframe computing.  Playing with workload software pricing in this fashion, however, seems unnecessary. Am convinced there must be simpler ways to lower software costs without the rigmarole of metering and workload distribution tricks. In fact, a small mini-industry has cropped up among companies offering tools to reduce costs, primarily through various ways to redistribute workloads to avoid peaks.

Modifications to WLC include variable WLC (VWLC) along with the AWLC (Advanced) and EWLC (Entry) variants, which align with most of the z machines introduced over the past couple of years.  The result, according to IBM, is a granular cost structure based on MSU (CPU) capacity that applies to VWLC and associated pricing mechanisms.

From there you can further tweak the cost by deploying Sub-Capacity and Soft Capping techniques.  Defined Capacity (DC), according to IBM, allows the sizing of an LPAR in MSU such that the LPAR will not exceed the designated MSU amount.  Group Capacity Limit (GCL) extends the Defined Capacity principle for a single LPAR to a group of LPARs, allowing MSU resources to be shared accordingly.  BTW, a potential downside of GCL is that one LPAR in the group can consume all available MSUs due to a rogue transaction. Again, an entire mini industry, or maybe not so mini, has emerged to help handle workload and capacity pricing on the z.
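
For anyone who has not watched soft capping in action, here is a toy Python illustration of a Defined Capacity limit tracking the 4-hour rolling average (4HRA). The usage numbers are invented, and real enforcement happens inside WLM at much finer granularity.

    msu_per_hour = [300, 500, 700, 900, 1100, 600]  # hypothetical LPAR usage
    DEFINED_CAPACITY = 650                          # DC setting in MSUs

    def four_hour_avg(samples, i):
        window = samples[max(0, i - 3): i + 1]  # up to the 4 most recent hours
        return sum(window) / len(window)

    for hour in range(len(msu_per_hour)):
        avg = four_hour_avg(msu_per_hour, hour)
        capped = avg >= DEFINED_CAPACITY  # WLM would throttle the LPAR here
        print(f"hour {hour}: 4HRA={avg:.0f} MSU, capped={capped}")

Note that at hour 2 the LPAR burns 700 MSU, above the DC setting, yet runs uncapped, because the cap tracks the rolling average rather than instantaneous usage.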

At some point in most of the conference pricing sessions the eyes of many attendees glaze over.  By Q&A time the few remaining pop up holding a copy of a recent invoice and ask what the hell this or that means and what the f$#%@#$ they can do about it.

Have to admit that DancingDinosaur did not attend the most recent SHARE conference, where pricing workshops can get quite energetic, so cannot attest to the latest fallout. Still, the general trend with mobile and now with cloud pricing discounts should be lower costs.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing here.


IBM Leads in TBR Private and Hybrid Cloud Surveys

August 4, 2016

IBM has been named number one in private clouds by independent technology market research firm Technology Business Research (TBR) as well as number one in TBR’s hybrid cloud environments survey. Ironically, as hard as IBM has been trying to distance itself from its legacy platform heritage, that heritage brings an advantage when it comes to clouds for some customers. “A footprint in legacy IT solutions and management is a strong predictor of private cloud vendor success, as private cloud solutions are typically the first step toward hybrid IT environments,” wrote TBR Cloud Senior Analyst Cassandra Mooshian.


Courtesy of IBM: 1800 FLOWERS Taps IBM Cloud

Coming out on top in IBM’s 2Q16 financials, reported here, were the company’s strategic initiatives, mainly cloud, analytics, and mobile, which generated positive revenue results. The TBR reports provide welcome reinforcement for IBM strategy doubters. As reported by IBM, the annual run rate for cloud as-a-service revenue — a subset of total cloud revenue — increased to $6.7 billion from $4.5 billion in the second quarter of 2015.  Revenues from analytics increased 5 percent.  Revenues from mobile increased 43 percent while security revenue increased 18 percent.

The TBR report also noted IBM leadership in overall vendor adoption for private cloud and in select private cloud segments due to its broad cloud and IT services portfolio, its variety of deployment options, and accompanying integration and optimization support. As a result, the company’s expertise and knowledge of both cloud and legacy technology make it easier for customers to opt for an IBM migration path to both private and hybrid clouds.

TBR also specifically called out IBM’s cloud-friendly capabilities, including the comprehensive portfolio of cloud and hardware assets with security; cloud professional services that can span a customer’s entire IT environment; and a vertical approach to cloud combined with Watson technology. As for hybrid clouds, Kelsey Mason, Cloud Analyst at TBR, noted in the announcement: “Hybrid integration is the next stage in cloud adoption and will be the end state for many enterprise IT environments.” Enterprise hybrid adoption, TBR observed, now matches public adoption of a year ago, which it interprets as signaling a new level of maturity in companies’ cloud strategies.

What really counts, however, are customers who vote with their checkbooks.  Here IBM has been racking up cloud wins. For example, Pratt & Whitney, a United Technologies Corp. company, announced in July that it will move the engine manufacturer’s business, engineering, and manufacturing enterprise systems to a fully managed and supported environment on the IBM Cloud infrastructure.

Said Brian Galovich, vice president and chief information officer, Pratt & Whitney, in the published announcement:  “Working with IBM and moving our three enterprise systems to a managed cloud service will give us the ability to scale quickly and meet the increased demands for computing services, data processing and storage based on Pratt & Whitney’s forecasted growth over the next decade.”

Also in July, Dixons Carphone Group, Europe’s largest telecommunications retail and services company as the result of a 2014 merger, announced plans to migrate to the IBM Cloud, delivered from IBM datacenters in the United Kingdom, to integrate two distinct infrastructures and enable easy scaling to better manage the peaks and valleys of seasonal shopping trends. Specifically, the company expects to migrate about 2,500 server images from both enterprises, with supporting database and middleware components from both infrastructures, to an IBM hybrid cloud platform that comprises a private IBM Cloud with bare metal servers for production workloads and a public IBM Cloud platform for non-production workloads.

As a merged company it saw an opportunity to consolidate the infrastructures by leveraging cloud solutions for flexibility, performance and cost savings. After assessing the long-term values and scalability of multiple cloud providers, the company turned to IBM Cloud for a smooth transition to a hybrid cloud infrastructure. “We can trust IBM Cloud to seamlessly integrate the infrastructures of both companies into one hybrid cloud that will enable us to continue focusing on other parts of the business,” said David Hennessy, IT Director, Dixons Carphone, in the announcement.

As IBM’s 2Q16 report makes clear, once both these companies might have bought new IBM hardware platforms but that’s not the world today. At least they didn’t opt for AWS or Azure.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing here.


IBM’s DeepFlash 150 Completes Its Flash Lineup for Now

July 29, 2016

Two years ago DancingDinosaur wrote about new IBM Flash storage for the mainframe. That was about the DS8870, featuring 6-nines (99.9999) of availability and real-time-compression. Then this past May DancingDinosaur reported on another new IBM all-flash initiative, including the all-flash IBM DS8888 for the z, which also boasts 6-nines availability. Just this week IBM announced it is completing its flash lineup with the IBM DeepFlash 150, intended as a building block for SDS (software defined storage) infrastructures.

IBM DeepFlash 150, courtesy of IBM

As IBM reports, the DeepFlash 150 does not use conventional solid-state drives (SSD). Instead, it relies on a systems-level approach that enables organizations to manage much larger data sets without having to manage individual SSD or disk drives. DeepFlash 150 comes complete with all the hardware necessary for enterprise and hyper-scale storage, including up to 64 purpose-engineered flash cards in a 3U chassis and 12-Gbps SAS connectors for up to eight host servers. The wide range of IBM Spectrum Storage and other SDS solutions available for DeepFlash 150 provides flash-optimized scale out and management along with large capacity for block, file and object storage.

The complication for z System shops is that you access the DeepFlash 150 through IBM Spectrum Scale. Apparently you can’t just plug the DeepFlash 150 into the z the way you would plug in the all-flash DS8888. IBM Spectrum Scale works with Linux on z Systems servers or IBM LinuxONE systems running RHEL or SLES. Check out the documentation here.

As IBM explains in the Red Book titled IBM Spectrum Scale (GPFS) for Linux on z Systems: IBM Spectrum Scale provides a highly available clustering solution that augments the strengths of Linux on z by helping the z data center control costs and achieve higher levels of quality of service. Spectrum Scale, based on IBM General Parallel File System (GPFS) technology, is a high performance shared-disk file management solution that provides fast, reliable access to data from multiple nodes in a cluster environment. Spectrum Scale also allows data sharing in a mixed platform environment, which can provide benefits in cloud or analytics environments by eliminating the need of transferring data across platforms. When it comes to the DeepFlash 150 IBM is thinking about hyperscale data centers.

Hyperscale data centers can’t absorb the costs of constructing, managing, maintaining, and cooling massive hyperscale environments that use conventional mechanical storage, according to IBM. Those costs are driving the search for storage with a smaller physical footprint, lower costs, greater density, and, of course, much higher performance.

Enter DeepFlash 150, which introduces what IBM considers breakthrough economics for active data sets. The basic DeepFlash 150 hardware platform is priced under $1/GB. For big data deployments IBM recommends IBM Spectrum Scale with DeepFlash 150, providing customers with the overlying storage services and functionality critical for optimization of their big data workloads.

But even at $1/GB DeepFlash 150 isn’t going to come cheap. For starters consider how many gigabytes are in the terabytes or petabytes you will want to install. You can do the math. Even at $1/GB this is going to cost. Then you will need IBM Spectrum Scale. With DeepFlash 150 IBM did achieve extreme density of up to 170TB per rack unit, which adds up to a maximum 7PB of flash in a single rack enclosure.
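
Here is that math in Python, using the density and pricing figures above. This is raw hardware list-price arithmetic only, before Spectrum Scale licensing, servers, networking, or any discounting.

    PRICE_PER_GB = 1.0      # IBM's "under $1/GB" entry point, in dollars
    GB_PER_TB = 1000        # decimal units, as storage vendors count

    density_gb_per_u = 170 * GB_PER_TB  # up to 170 TB per rack unit
    rack_units = 42                     # a standard full-height rack

    capacity_gb = density_gb_per_u * rack_units
    print(f"full rack: {capacity_gb / 1_000_000:.2f} PB")      # 7.14 PB
    print(f"flash alone: ${capacity_gb * PRICE_PER_GB:,.0f}")  # $7,140,000

Even the roughly 7PB single-rack maximum IBM cites thus implies seven-figure spending on flash before anything else is added.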

IBM Spectrum Scale and the DeepFlash 150 are intended to support a wide range of file, object, and Hadoop Distributed File System (HDFS) analytics workloads. According to IBM, as a true SDS solution IBM Spectrum Scale can utilize any appropriate hardware and is designed specifically to maximize the benefits of hyperscale storage systems like DeepFlash 150. Using a scale-out architecture, IBM Spectrum Scale can add servers or multiple storage types and incorporate them automatically into a single managed resource to maximize performance, efficiency, and data protection.

Although DeepFlash 150 can be used with a private cloud, IBM seems to be thinking more in terms of hybrid clouds. To address today’s need for seamlessly integrating high-performance enterprise storage such as DeepFlash 150 with the nearly unlimited resources and capabilities of the cloud, IBM Spectrum Scale offers transparent cloud tiering to place data on cloud-based object storage or in a public cloud service. As IBM explains, the transparent cloud tiering feature of IBM Spectrum Scale can connect on-premises storage such as DeepFlash 150 directly to object storage or a commodity-priced cloud service. This allows enterprises to simultaneously leverage the economic, collaboration, and scale benefits of both on-premises and cloud storage while providing a single, powerful view of all data assets.

A Tech Target report on enterprise flash storage profiled 15 flash storage product lines. In general, the products claim maximum read IOPS ranging from 200,000 to 9 million, peak read throughput from 2.4 GBps to 46 GBps, and read latencies from 50 microseconds to 1 millisecond. The guide comes packed with a ton of caveats. And that’s why DancingDinosaur doesn’t think the DeepFlash 150 is the end of IBM’s flash efforts. Stay tuned.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing here.


IBM 2Q 2016 Report—Where’s z System and POWER?

July 22, 2016

“IBM continues to establish itself as the leading cognitive solutions and cloud platform company,” said Ginni Rometty, IBM chairman, president and chief executive officer, in a statement accompanying the latest IBM 2Q financial report. The strategic imperatives grew: second-quarter revenues from its cloud, analytics, and engagement units increased 12 percent year to year.


IBM Quantum Experience delivered via Cloud (Jon Simon/Feature Photo Service for IBM)

Where’s z and POWER? The z and POWER platforms continued to flounder: revenues of $2.0 billion, down 23.2 percent. Revenue reflects z Systems product cycle dynamics; gross profit margin improved in both z Systems and Power. “Product cycle dynamics” refers to the lack of a new z.  In the past year IBM introduced the new LinuxONE and, more recently, a z13s, essentially what used to be known as a Business Class mainframe.

There is no hint, however, of a new z, a z14 that will drive product dynamics upward. IBM showed a POWER roadmap going all the way out to the POWER10 in 2020 but nothing comparable for the z.

DancingDinosaur, a longtime big iron bigot, remains encouraged by IBM’s focus on its strategic initiatives and statements like this: “And we continue to invest for growth with recent breakthroughs in quantum computing, Internet of Things and blockchain solutions for the IBM Cloud.” IBM strategic initiatives in cloud, mobile, IoT, and blockchain will drive new use of the mainframe, especially as the projected volumes of things, transactions, users, and devices skyrocket.

Second-quarter revenues from the company’s strategic imperatives — cloud, analytics and engagement — increased 12 percent year to year.  Cloud revenues (public, private and hybrid) for the quarter increased 30 percent.  Cloud revenue over the trailing 12 months was $11.6 billion.  The annual run rate for cloud as-a-service revenue — a subset of total cloud revenue — increased to $6.7 billion from $4.5 billion in the second quarter of 2015.  Revenues from analytics increased 5 percent.  Revenues from mobile increased 43 percent and from security increased 18 percent.

IBM indirectly is trying to boost the z and the cloud. CSC announced an alliance with IBM in which IBM will provide CSC Cloud Managed Services for z Systems. CSC already includes IBM SoftLayer as part of its “Service-enabled Enterprise” strategy. “Cloud for z” extends that offering and will be of interest to current and potential mainframe customers in healthcare, insurance, and finance. CSC still sees life in the managed mainframe market, and IBM Global Technology Services, a competitor to CSC, apparently is happy to let them sell managed cloud services for mainframes. All this is taking place as IBM scrambles to secure a leadership share of cloud revenue, and any cloud billing CSC brings will help.

Microsoft, like IBM, claimed big cloud momentum on its fourth quarter conference call, according to a report in Fortune Magazine. It was enough to send Microsoft share price up 4% at one point in after hours trading.

As Fortune notes, for Microsoft as for IBM and other legacy IT providers like Oracle, putting up big cloud numbers is mandatory as more companies change the way they buy IT products. Instead of purchasing hundreds or thousands of new servers or storage boxes every few years, more companies are running their software and storing their data on shared public cloud infrastructure, like Microsoft Azure, Amazon Web Services, the Google Compute Platform, or the IBM Cloud.

For reporting purposes, Microsoft combines Azure with other products in its intelligent cloud product segment. Overall, that segment’s revenue grew about 7% year over year to $6.7 billion from about $6.3 billion.

Oracle, too, is facing the same scramble to establish an enterprise cloud presence. Cloud software as a service (SaaS) and platform as a service (PaaS) revenues were $690 million, up 66% in U.S. dollars. Total Cloud revenues, including infrastructure as a service (IaaS), were $859 million, up 49% in U.S. dollars. At the same time, Oracle’s hardware revenue fell by 7% to $1.3 billion, and its software license revenue fell by 2% to $7.6 billion.

“We added more than 1,600 new SaaS customers and more than 2,000 new PaaS customers in Q4” (which ended in June), said Oracle CEO, Mark Hurd. “In Fusion ERP alone, we added more than 800 new cloud customers. Today, Oracle has nearly 2,600 Fusion ERP customers in the Oracle Public Cloud — that’s ten-times more cloud ERP customers than Workday.”

Hewlett Packard Enterprise (HPE) is the last of the big public enterprise platform vendors, along with IBM and Oracle. (Dell is private and acquired EMC.) HPE recently reported its best quarter in years: second quarter net revenue of $12.7 billion, up 1% from the prior-year period. “Today’s results represent our best performance since I joined in 2011,” said Meg Whitman, president and chief executive officer, Hewlett Packard Enterprise. The businesses comprising HPE grew revenue over the prior-year period on an as-reported basis for the first time in five years.

IBM needs to put up some positive numbers. Seventeen consecutive losing quarters is boring. Wouldn’t it be exciting if a turnaround started with a new enterprise z14?

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing here.


Oracle Aims at Intel and IBM POWER

July 8, 2016

In late June Oracle announced the SPARC S7 processor, a new 20nm, 4.27 GHz, 8-core/64-thread SPARC processor targeted for scale-out Cloud workloads that usually go to Intel x86 servers. These are among the same workloads IBM is aiming for with POWER8, POWER9, and eventually POWER10, as reported by DancingDinosaur just a couple of weeks ago.


Oracle 5-year SPARC trajectory (does not include newly announced S series).

According to Oracle, the latest additions to the SPARC platform are built on the new 4.27 GHz, 8-core/64-thread SPARC S7 microprocessor with what Oracle calls Software-in-Silicon features such as Silicon Secured Memory and Data Analytics Accelerators, which enable organizations to run applications of all sizes on the SPARC platform at commodity price points. All existing commercial and custom applications will also run on the new SPARC enterprise cloud services and solutions unchanged while experiencing improvements in security, efficiency, and simplicity.

By comparison, the IBM POWER platform today rests on the POWER8, delivered as a 12-core, 22nm processor. The POWER9, expected in 2017, will be delivered as a 14nm processor with 24 cores and CAPI and NVLink accelerators, which ensure delivery of more performance with greater energy efficiency.  By 2018, the IBM roadmap shows POWER8/9 as a 10nm, maybe even a 7nm, processor based on the existing micro-architecture. And an even beefier POWER10 is expected to arrive around 2020.

At the heart of Oracle’s new scale-out, commodity-priced server sits the S7. According to Oracle, the SPARC S7 delivers balanced compute performance with 8 cores per processor, integrated on-chip DDR4 memory interfaces, a PCIe controller, and coherency links. The cores in the SPARC S7 are optimized for running key enterprise software, including Java applications and database. The SPARC S7–based servers use very high levels of integration that increase bandwidth, reduce latencies, simplify board design, reduce the number of components, and increase reliability, according to Oracle. All this promises an increase in system efficiency with a corresponding improvement in the economics of deploying a scale-out infrastructure when compared to other vendor solutions.

Oracle’s SPARC S7 processor, based on Oracle’s enterprise-class M7 servers, is optimized for horizontally scalable systems with all the key functionality included in the microprocessor chip. Its Software-in-Silicon capabilities, introduced with the SPARC M7 processor, are also available in the SPARC S7 processor to enable improved data protection, cryptographic acceleration, and analytics performance. These features include Security-in-Silicon, which provides Silicon Secured Memory and cryptographic acceleration, and Data Analytics Accelerator (DAX) units, which provide in-memory query acceleration and in-line decompression.

SPARC S7 processor–based servers include single- and dual-processor systems that are complementary to the existing mid-range and high-end systems based on Oracle’s SPARC M7 processor. SPARC S7 processor–based servers include two rack-mountable models. The SPARC S7-2 server uses a compact 1U chassis, and the SPARC S7-2L server is implemented in a larger, more expandable 2U chassis. Uniformity of management interfaces and the adoption of standards also should help reduce administrative costs, while the chassis design provides density, efficiency, and economy as increasingly demanded by modern data centers. Published reports put the cost of the new Oracle systems at just above $11,000 with a single processor, 64GB of memory and two 600GB disk drives, and up to about $50,000 with two processors and a terabyte of memory.

DancingDinosaur doesn’t really have enough data to compare the new Oracle system with the new POWER8 and upcoming POWER9 systems. Neither Oracle nor IBM have provided sufficient details. Oracle doesn’t even offer a roadmap at this point, which might tell you something.

What we do know about the POWER machines is this: POWER9 promises a wealth of improvements in speeds and feeds. Although intended to serve the traditional Power Server market, it also is expanding its analytics capabilities and is being optimized for new deployment models like hyperscale, cloud, and technical computing through scale-out deployment. Available in either clustered or multiple formats, it will feature a shorter pipeline, improved branch execution, and low-latency on-die cache, as well as PCIe Gen4.

According to IBM, you can expect a 3x bandwidth improvement with POWER9 over POWER8 and a 33% speed increase. POWER9 also will continue to speed hardware acceleration and will support next-gen NVLink, improved coherency, enhanced CAPI, and a 25 Gbps high-speed link. Although the 2-socket chip will remain, IBM suggests larger socket counts are coming. It will need that to compete with Intel.

At least IBM showed its POWER roadmap. There is no comparable information from Oracle. At best, DancingDinosaur was able to dig up the following sketchy details: for 2017, a next-gen core with Software-in-Silicon V1 and fully integrated scale-out Software-in-Silicon V1 or V2; for 2018-2019, core enhancements, increased cache, increased bandwidth, and Software-in-Silicon V3.

Both Oracle and IBM have made it clear neither really wants to compete in the low cost, scale out server market. However, as both companies’ large clients turn to scale out, hyperscale Intel-based systems they have no choice but to follow the money. With the OpenPOWER Foundation growing and driving innovation, mainly in the form of accelerators, IBM POWER may have an advantage driving a very competitive price/performance story against Intel. With the exception of Fujitsu as an ally of sorts, Oracle has no comparable ecosystem as far as DancingDinosaur can tell.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing here.


Compuware Continues Mainframe GUI Tool Enhancements

July 1, 2016

Early in 2015 Compuware announced the first in what it promised would be a continuing stream of new mainframe tools and tool enhancements. Did anyone really believe them? Mainframe ISVs are not widely regarded for their fast release cycles. DancingDinosaur reported on it then here and has continued to follow up and report its progress through a handful of new releases. This past week, DancingDinosaur received new Compuware mainframe tool announcements. For a mainframe ISV this is almost unheard of. IBM sometimes releases new mainframe products in intense spurts but then quickly resumes its typical languid release pace.


Screen from Compuware’s ISPW for Continuous Delivery to the Mainframe

Let’s take a look at each of these new releases. First is ISPW Deploy, an advanced mainframe release automation solution that enables large enterprises to bring continuous delivery best practices to their IBM z/OS environments. ISPW Deploy, built on the ISPW technology Compuware acquired in January 2016, facilitates faster and more reliable mainframe software deployment. Specifically, according to Compuware, it helps in three ways, through:

  1. Automation that rapidly moves code through the deployment process, including test staging and approvals, while also providing greatly simplified full or partial rollbacks.
  2. Visualization that enables DevOps managers to quickly pinpoint deployment issues in order to both solve immediate rollout problems and address persistent bottlenecks in code promotion.
  3. Integrations with both third-party solutions and Compuware’s own industry-leading mainframe toolkit that allow IT to build complete SCM-to-production DevOps pipelines and to quickly launch associated remediation support tools if and when deployment issues occur.

Compuware is further empowering enterprises to achieve mainframe agility through integrations. For instance, ISPW and XebiaLabs’ cross-platform continuous delivery solutions together enable IT organizations to orchestrate and visualize their mainframe DevOps processes in a common manner with their broader cross-platform DevOps automation.

The second announcement focused on XebiaLabs, as noted above. The idea here is to deliver cross-platform continuous delivery for the mainframe. As Compuware explained, enterprises using XebiaLabs’ solution suite and Compuware ISPW can now automate and monitor all phases of mainframe DevOps within the same continuous delivery management environment they use for their distributed, web, and cloud platforms. This automation and monitoring includes test/QA, pre-copy staging, and code promotion. The goal, as with all DevOps, is to speed digital agility for mainframe or distributed systems or both.

The third announcement concerned a partnership between Compuware and ConicIT that aims to help a new generation of IT ops staff proactively resolve emerging mainframe issues before they impact application service levels. It does so by integrating ConicIT’s predictive mainframe analytics with Compuware’s Strobe, which provides visually intuitive troubleshooting intelligence. Together, the two companies promise to enable even IT staff with relatively little hands-on mainframe experience to quickly identify and resolve a wide range of application performance problems.

The key to doing this is a reliance on the adoption of intuitive GUI interfaces. Compuware started this with its Topaz tools and has been continuing along this path for two years. Compuware’s CEO, Chris O’Malley, has been harping on these themes almost since he first arrived there.

Compuware customers apparently have gotten the message. As reported: “Market pressures are making it essential for us to deliver quality products and services to our clients more frequently, and the mainframe plays a critical role in that delivery,” according to Craig Danielson, Assistant Vice President for Commerce Bank. “We leverage ISPW to help in this capacity and its new capabilities will provide us the automation and visibility of our software deployment process to help us continuously improve our internal operations and services.” (note: DancingDinosaur did not validate this customer statement.)

Companies will need all the help modern mainframe tools can deliver. Mainframe data centers are facing unprecedented challenges that require unusual speed and agility. In short, they need DevOps fast. And they will have to respond with an increasingly aging core of experienced mainframe staff supplemented by millennials who have to be coaxed and cajoled onto the mainframe with easy graphical tools. If mainframe data centers can’t respond to these challenges (not just cloud, mobile, Linux, and analytics, but also IoT, blockchain, cognitive computing, and whatever else comes along next), how are they going to cope? Already their users, the line of business managers, are turning to shadow IT out of frustration with the slow response from the mainframe data centers. And you know what comes next.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing here.


IBM z13 Helps Avoid Costly Data Breaches

June 24, 2016

A global study sponsored by IBM and conducted by the Ponemon Institute found that the average cost of a data breach for companies surveyed has grown to $4 million, representing a 29 percent increase since 2013. With cybersecurity incidents continuing to increase (64% more security incidents in 2015 than in 2014), the costs are poised to grow.


z13–world’s most secure system

The z13, at least, is one way to keep security costs down. It comes with a cryptographic processor unit available on every core, enabled as a no-charge feature. It also provides EAL5+ support, a regulatory certification for LPARs, which verifies the separation of partitions to further improve security, along with a dozen or so other built-in security features for the z13. For a full list of z13 security features click here. There also is a Redbook, Ultimate Security with the IBM z13, here. A midsize z, the z13s, brings the benefits of mainframe security and mainframe computing to smaller organizations. You can read about the z13s here on DancingDinosaur this past February.

As security threats become more complex, the researchers noted, the cost to companies continues to rise. For example, the study found that companies lose $158 per compromised record. Breaches in highly regulated industries were even more costly, with healthcare reaching $355 per record – a full $100 more than in 2013. And the number of records involved can run from the thousands to the millions.

Wow, why so costly? The researchers try to answer that too: leveraging an incident response team was the single biggest factor associated with reducing the cost of a data breach – saving companies nearly $400,000 on average (or $16 per record). In fact, response activities like incident forensics, communications, legal expenditures and regulatory mandates account for 59 percent of the cost of a data breach. Part of these high costs may be linked to the fact that 70 percent of U.S. security executives report they don’t even have incident response plans in place.

The process of responding to a breach is extremely complex and time consuming if not properly planned for. As described by the researchers, the process of responding to a breach consists of a minimum of four steps. Among the specified steps, a company must:

  • Work with IT or outside security experts to quickly identify the source of the breach and stop any more data leakage
  • Disclose the breach to the appropriate government/regulatory officials, meeting specific deadlines to avoid potential fines
  • Communicate the breach with customers, partners, and stakeholders
  • Set up any necessary hotline support and credit monitoring services for affected customers

And not even included in the researchers’ list are tasks like inventorying and identifying the data records that have been corrupted or destroyed, remediating the damaged data, and validating it against the last known clean backup copy. Am surprised the costs aren’t even higher. Let’s not even talk about the PR damage or loss of customer goodwill. Now, aren’t you glad you have a z13?

That’s not even the worst of it. The study also found the longer it takes to detect and contain a data breach, the more costly it becomes to resolve. While breaches that were identified in less than 100 days cost companies an average of $3.23 million, breaches that were found after the 100-day mark cost over $1 million more on average ($4.38 million). The average time to identify a breach in the study was estimated at 201 days and the average time to contain a breach was estimated at 70 days. The cost of a z13 or even the lower cost z13s could justify itself by averting just one data breach.
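
The study’s averages translate into a rough exposure estimate easily enough. The Python toy below multiplies the per-record figures by a hypothetical breach size and adds the reported slow-detection penalty; it is an illustration, not the Ponemon cost model.

    # Per-record costs from the study: $158 average, $355 for healthcare.
    PER_RECORD = {"average": 158, "healthcare": 355}

    # Breaches found after 100 days averaged $4.38M vs. $3.23M when found sooner.
    SLOW_DETECTION_PENALTY = 4_380_000 - 3_230_000  # ~$1.15M

    def estimate_cost(records: int, industry: str, days_to_detect: int) -> int:
        cost = records * PER_RECORD[industry]
        if days_to_detect > 100:
            cost += SLOW_DETECTION_PENALTY
        return cost

    # A hypothetical 50,000-record healthcare breach found on day 201:
    print(f"${estimate_cost(50_000, 'healthcare', 201):,}")  # $18,900,000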

The researchers also found that companies with predefined Business Continuity Management (BCM) processes in place found and contained breaches more quickly, discovering breaches 52 days earlier and containing them 36 days faster than companies without BCM. Still, the cheapest solution is to avert breaches in the first place.

Not surprisingly, IBM is targeting the incident response business as an up and coming profit center. The company increased its investment in the incident response market with the recent acquisition of Resilient Systems, which just came out with an updated version that graphically displays the relationships between Indicators of Compromise (IOCs) and incidents in an organization’s environment. But the z13 is probably a better investment if you want to avoid data breaches in the first place.

Surprisingly, sometimes your blogger is presented as a mainframe guru. Find the latest here.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing here.

