Posts Tagged ‘hadoop’

IBM Introduces New DS8880 All-Flash Arrays

January 13, 2017

Yesterday IBM introduced three new members of the DS8000 line, each an all-flash product. The new arrays are designed for midrange and large enterprises, where high availability, continuous uptime, and performance are critical.


IBM envisions these boxes for more than the z’s core OLTP workloads. According to the company, they are built to provide the speed and reliability needed for workloads ranging from enterprise resource planning (ERP) and financial transactions to cognitive applications like machine learning and natural language processing, which can be used to uncover trends and patterns that improve decision-making, customer service, and ROI. ERP and financial transactions certainly constitute conventional OLTP, but the cognitive workloads are more analytical and predictive.

The three products:

  • IBM DS8884 F
  • IBM DS8886 F
  • IBM DS8888 F

The F signifies all-flash.  Each was designed with High-Performance Flash Enclosures Gen2. IBM did not just slap flash into existing hard drive enclosures.  Rather, it reports undertaking a complete redesign of the flash-to-z interaction. As IBM puts it: through deep integration between the flash and the z, IBM has embedded software that facilitates data protection, remote replication, and optimization for midrange and large enterprises. The resulting new microcode is ideal for cognitive workloads on z and Power Systems requiring the highest availability and system reliability possible. IBM promises that the boxes will deliver superior performance and uncompromised availability for business-critical workloads. In short, fast enough to catch bad guys before they leave the cash register or teller window. Specifically:

  • The IBM DS8884 F—labelled as the business class offering–boasts the lowest entry cost for midrange enterprises (prices starting at $90,000 USD). It runs an IBM Power Systems S822, which is a 6-core POWER8 processor per S822 with 256 GB Cache (DRAM), 32 Fibre channel/FICON ports, and 6.4 – 154 TB of flash capacity.
  • The IBM DS8886 F, the enterprise class offering for large organizations seeking high performance, sports a 24-core POWER8 processor per S824. It offers 2 TB cache (DRAM), 128 Fibre Channel/FICON ports, and 6.4 – 614.4 TB of flash capacity. That’s over half a petabyte of high performance flash storage.
  • The IBM DS8888 F, labelled the analytics class offering, promises the highest performance for faster insights. It runs on the IBM Power Systems E850 with a 48-core POWER8 processor. It also comes with 2 TB cache (DRAM), 128 Fibre Channel/FICON ports, and 6.4 TB – 1.22 PB of flash capacity. Guess crossing the petabyte level, along with the bigger processor complex, qualifies it as an analytics and cognitive device.

As IBM emphasized in the initial briefing, it engineered these storage devices to surpass the typical big flash storage box. For starters, IBM bypassed the device adapter to connect the z directly to the high performance storage controller. IBM’s goal was to reduce latency and optimize all-flash storage, not simply swap new flash in for ordinary flash or, banish the thought, HDD.

“We optimized the data path,” explained Jeff Barber, IBM Systems VP for HE Storage BLE (DS8, DP&R and SAN). To that end, IBM switched from a 1U to a 4U enclosure, runs on shared-nothing clusters, and boosted throughput performance. The resulting storage, he added, “does database better than anyone; we can run real-time analytics.”  The typical analytics system, a shared system running Hadoop, won’t even come close to these systems, he added. With the DS8888, you can deploy a real-time cognitive cluster with minimal-latency flash.

DancingDinosaur always appreciates hearing from actual users. Working through a network of offices, supported by a team of over 850 people, Health Insurance Institute of Slovenia (Zavod za zdravstveno zavarovanje Slovenije) provides health insurance to approximately two million customers. To successfully manage its new customer-facing applications (such as electronic order processing and electronic receipts), its storage system required additional capacity and performance. After completing research on solutions capable of managing these applications, which included both Hitachi and EMC, the organization deployed the IBM DS8886 along with DB2 for z/OS data server software to provide an integrated data backup and restore system. (Full disclosure: DancingDinosaur has not verified this customer story.)

“As long-time users of IBM storage infrastructure and mainframes, our upgrade to the IBM DS8000 with IBM business partner Comparex was an easy choice. Since then, its high performance and reliability have led us to continually deploy newer DS8000 models as new features and functions have provided us new opportunities,” said Bojan Fele, CIO of Health Insurance Institute of Slovenia. “Our DS8000 implementation has improved our reporting capabilities by reducing time to actionable insights. Furthermore, it has increased employee productivity, ensuring we can better serve our clients.”

For full details and specs on these products, click here.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at and here.


IBM Power System S822LC for HPC Beats Sort Record by 3.3x

November 17, 2016

The new IBM Power System S822LC for High Performance Computing servers set a new benchmark for sorting, taking less than 99 seconds (98.8 seconds) to sort 100 terabytes of data in the Indy GraySort category and improving on last year’s best result, 329 seconds, by a factor of 3.3. The win was a victory not only for the S822LC but for the entire OpenPOWER community. The team of Tencent, IBM, and Mellanox was named the winner of the Sort Benchmark annual global computing competition for 2016.

Power System S822LC for HPC

Specifically, the machine, an IBM Power S822LC for High Performance Computing (HPC), features NVIDIA NVLink technology optimized for the Power architecture and NVIDIA’s latest GPU technology. The new system supports emerging computing methods of artificial intelligence, particularly deep learning. The combination, newly dubbed IBM PowerAI, provides a continued path for Watson, IBM’s cognitive solutions platform, to extend its artificial intelligence expertise in the enterprise by using several deep learning methods to train Watson.

In fact, Tencent Cloud Data Intelligence (the distributed computing platform of Tencent Cloud) won every category in both the GraySort and MinuteSort benchmarks, establishing four new world records and outperforming the 2015 best speeds by 2-5x. Said Zeus Jiang, Vice President of Tencent Cloud and General Manager of Tencent’s Data Platform Department: “In the future, the ability to manage big data will be the foundation of successful Internet businesses.”

To get this level of performance, Tencent runs 512 IBM OpenPOWER LC servers with Mellanox 100Gb interconnect technology, improving the performance of Tencent Cloud big data products with the infrastructure. Online prices for the S822LC start at about $9,600 for a 2-socket, 2U system with up to 20 cores (2.9-3.3GHz), 1 TB memory (32 DIMMs), 230 GB/sec sustained memory bandwidth, 2x SFF (HDD/SSD), 2 TB storage, 5 PCIe slots (4 CAPI enabled), and up to 2 NVIDIA K80 GPUs. Be sure to shop for volume discounts.

The 2016 Sort Benchmark results:

Category             2016 Records (Tencent Cloud)   2015 World Records   Improvement
Daytona GraySort     44.8 TB/min                    15.9 TB/min          2.8x greater performance
Indy GraySort        60.7 TB/min                    18.2 TB/min          3.3x greater performance
Daytona MinuteSort   37 TB/min                      7.7 TB/min           4.8x greater performance
Indy MinuteSort      55 TB/min                      11 TB/min            5x greater performance
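The headline numbers hang together, as a quick back-of-the-envelope check (my arithmetic, not IBM's) shows: 100 TB sorted in 98.8 seconds works out to the 60.7 TB/min Indy GraySort figure in the table, and last year's 329-second run gives its 18.2 TB/min.

```python
# Back-of-the-envelope check of the published sort results.
def throughput_tb_per_min(data_tb: float, seconds: float) -> float:
    """Convert a sort time over a data set into TB/min throughput."""
    return data_tb / (seconds / 60.0)

tb_2016 = throughput_tb_per_min(100, 98.8)   # 2016 Indy GraySort run
tb_2015 = throughput_tb_per_min(100, 329)    # 2015 best result

print(round(tb_2016, 1))      # ~60.7 TB/min, matching the table
print(round(tb_2015, 1))      # ~18.2 TB/min, matching the table
print(round(329 / 98.8, 1))   # ~3.3x improvement
```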

Pretty impressive, huh. As IBM explains it, Tencent Cloud ran the sort on those 512 IBM OpenPOWER servers linked by Mellanox 100Gb interconnect technology. Then Tom Rosamilia, IBM Senior VP, weighed in: “Industry leaders like Tencent are helping IBM and our OpenPOWER partners push performance boundaries for a cognitive era defined by big data and advanced analytics.” The computing record achieved by Tencent Cloud on OpenPOWER turned out to be an important milestone for the OpenPOWER Foundation, too.

Added Amir Prescher, Sr. Vice President, Business Development, at Mellanox Technologies: “Real-time-analytics and big data environments are extremely demanding, and the network is critical in linking together the extra high performance of IBM POWER-based servers and Tencent Cloud’s massive amounts of data.” In effect, Tencent Cloud developed an optimized hardware/software platform to achieve new computing records while demonstrating that Mellanox’s 100Gb/s Ethernet technology can deliver total infrastructure efficiency and improve application performance, which should make it a favorite for big data applications.

Behind all of this was the new IBM Power System S822LC for High Performance Computing servers. Currently the servers feature a new IBM POWER8 chip designed for demanding workloads including artificial intelligence, deep learning, and advanced analytics.  However, a new POWER9 chip has already been previewed and is expected next year.  Whatever the S822LC can do running POWER8, just imagine how much more it will do running POWER9, which IBM describes as a premier acceleration platform. DancingDinosaur covered POWER9 in early Sept. here.

To capitalize on the hardware, IBM is making a new deep learning software toolkit available, PowerAI, which runs on the recently announced IBM Power S822LC server built for artificial intelligence that features NVIDIA NVLink interconnect technology optimized for IBM’s Power architecture. The hardware-software combination provides more than 2X performance over comparable servers with 4 GPUs running AlexNet with Caffe. The same 4-GPU Power-based configuration running AlexNet with BVLC Caffe can also outperform 8 M40 GPU-based x86 configurations, making it the world’s fastest commercially available enterprise systems platform on two versions of a key deep learning framework.

Deep learning is a fast growing machine learning method that extracts information by crunching through millions of pieces of data to detect and rank the most important aspects of the data. Already popular among leading consumer web and mobile application companies, deep learning is quickly being adopted by more traditional enterprises across a wide range of industry sectors: in banking, to advance fraud detection through facial recognition; in automotive, for self-driving automobiles; and in retail, for fully automated call centers with computers that can better understand speech and answer questions. Is your data center ready for deep learning?

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at and here.



IBM z System and Power Fuel Hybrid Cloud

September 30, 2016

DancingDinosaur has been cheerleading the z as a cloud player since 2011 and even before, mainly as an ideal vehicle for private clouds. So it should be no surprise to readers that last week IBM announced z and Power cloud-ready systems, services, and solutions for the hybrid cloud environment. My only question: What took them so long?


Power and z accelerate transformation to hybrid cloud

The world, indeed, is changing fast, and IT data centers especially have to scramble to keep up as they try to blend public cloud, private cloud, and traditional IT data centers and integrate it all seamlessly. “Today’s business environment is very dynamic and filled with disruption. A hybrid cloud model enables clients to continuously adapt while also optimizing on-premises investments and delivering the flexibility clients need across IBM Systems and the cloud,” according to Tom Rosamilia, senior vice president, IBM Systems.

At the heart of IBM’s systems for what it calls the hybrid cloud era are three technologies most of us should already be generally familiar with:

  • z Systems for cloud. IBM z Systems Operational Insights is a new SaaS-based offering that provides analytic insights on cloud operations for new levels of efficiency and application performance. It allows users to make better business and application decisions based on trends and embedded expertise on performance data through a GUI dashboard. This accompanies IBM OMEGAMON Application Performance Management, which provides quick identification of problem components through end-to-end visibility for clients’ hybrid workloads that include z Systems applications.  In addition, the newly available IBM Common Data Provider enables z Systems operational data to be efficiently consumed in near real time by the clients’ cloud or local enterprise operational analytics platform. An OK start, but you can quibble with OMEGAMON as IBM’s performance analysis and management choice. At its core, this is old technology. DancingDinosaur would prefer, say, Watson.
  • Power Systems for cloud. With integrated OpenStack-based cloud management and elastic consumption models, according to IBM, these new enterprise-class IBM Power Systems enable organizations to transform their IT infrastructure to a local cloud for AIX, IBM i and Linux workloads and extend them with rapid access to compute services in the IBM Cloud. DancingDinosaur covered the new LC here.
  • IBM Spectrum Copy Data Management and Protect. This brings a new solution that drives operational and development agility and efficiency across new and traditional applications that allow detailed, easy management of data copies.  Additionally, IBM Spectrum Protect has expanded its extensive hybrid cloud solution integration with cloud object storage options for use in hybrid cloud deployments.

About the only thing missing above is LinuxONE but that will come up below when IBM gets to openness, which is critical to hybrid clouds. In its announcement, IBM also promised a series of new and expanded collaborations with IBM Systems for hybrid cloud environments, including:

  • Canonical: Canonical and IBM are extending their ongoing alliance to make Ubuntu OpenStack available today on LinuxONE, z Systems, Power Systems, and OpenPOWER-based systems, including the new line of LC servers. This enables organizations to leverage Canonical’s portfolio across the three platforms with simplified and automated OpenStack management.
  • Hortonworks: IBM and Hortonworks, a Hadoop platform provider, are jointly entering the marketplace to make the Hortonworks Hadoop distribution available on POWER. Whoopee, Hadoop already runs natively on z.
  • Mirantis: Mirantis and IBM are collaborating to develop reference architectures enabling Mirantis OpenStack to manage compute nodes hosted on IBM Power Systems servers, and to validate a host of core applications to run its OpenStack private cloud. With this integration, Mirantis will now bring its OpenStack based private cloud management to the POWER platform. This enables organizations to leverage the efficiency of IBM Power Systems for data-driven workloads in a seamless and compatible way for their data center through Mirantis’ OpenStack cloud management.
  • NGINX: NGINX’s application delivery platform now supports servers based on IBM’s POWER architecture with the latest release of its commercial load balancer and web accelerator software, NGINX Plus R10. The combination of NGINX Plus and POWER brings new agility to enterprises, allowing them to scale their infrastructure and application delivery solutions across any environment – public, private, and hybrid cloud; containers; and bare metal – providing a consistent user experience.
  • Red Hat: Red Hat and IBM are expanding their long-standing alliance to better help organizations embrace hybrid cloud. Through joint engineering and deeper product collaboration, the two companies plan to deliver solutions built on key components of Red Hat’s portfolio of open source products, including Red Hat Enterprise Linux (RHEL). By the way, RHEL is the #2 Linux distro on the z. RHEL also enables IBM Power Systems as a featured component of Red Hat’s hybrid cloud strategy spanning platform infrastructure located both on and off an enterprise’s premises.

There is enough openness here to do what you need, so DancingDinosaur has no real complaints with IBM’s choices above. Just wish the lineup were more z- and LinuxONE-centric.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer and mainframe bigot. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at and here.


IBM Discounts z/OS Cloud Activity

August 12, 2016

The latest iteration of IBM’s z/OS workload pricing aims to lower the cost of running cloud workloads.  In a recent announcement, z Systems Workload Pricing for Cloud (zWPC) for z/OS seeks to minimize the impact of new public cloud workload transaction growth on Sub-Capacity license charges. IBM did the same thing with mobile workloads when they started driving up the 4-hour workload averages on the z. As more z workloads interact with public clouds, this should start to add up, if it hasn’t already.
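Those 4-hour workload averages matter because Sub-Capacity charges key off the peak rolling 4-hour average (R4HA) of MSU consumption for the month. A simplified sketch of the idea, using hypothetical hourly MSU figures rather than IBM's actual metering, which samples at much finer intervals:

```python
# Simplified illustration of the rolling 4-hour average (R4HA) that
# drives Sub-Capacity MLC charges. The hourly MSU figures are made up.
hourly_msu = [120, 150, 400, 420, 410, 380, 200, 130]  # one-day excerpt

def peak_r4ha(msu_by_hour):
    """Peak of the rolling 4-hour average over the interval."""
    windows = [msu_by_hour[i:i + 4] for i in range(len(msu_by_hour) - 3)]
    return max(sum(w) / 4 for w in windows)

# The single worst 4-hour window, not total usage, sets the bill:
print(peak_r4ha(hourly_msu))  # → 402.5
```

That is why moving a burst of cloud transactions out of the peak window (or discounting them, as zWPC does) translates directly into lower monthly charges.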


Bluemix Garages in the Cloud

As IBM puts it: zWPC applies to any organization that has implemented Sub-Capacity pricing via the basic AWLC or AEWLC pricing mechanisms for the usual MLC software suspects. These include z/OS, CICS, DB2, IMS, MQ and WebSphere Application Server (WAS).  An eligible transaction is one classified as Public Cloud-originated, connecting to a z/OS hosted transactional service and/or data source via a REST or SOAP web service.  Public cloud workloads are defined as transactions processed by named Public cloud application transactions identified as originating from a recognized Public Cloud offering, including but not limited to, Amazon Web Services (AWS), Microsoft Azure, IBM Bluemix, and more.

IBM appears to have simplified how you identify eligible workloads. As the company notes: zWPC does not require you to isolate the public cloud work in separate partitions, but rather offers an enhanced way of reporting. The z/OS Workload Manager (WLM) allows clients to use WLM classification rules to distinguish cloud workloads, effectively easing the data collection requirements for public cloud workload transactions.

So how much will you save? It reportedly reduces eligible hourly values by 60 percent. The discount produces an adjusted Sub-Capacity value for each reporting hour. What that translates into on your monthly IBM software invoice once all the calculations and fine print are considered amounts to a guess at this point. But at least you’ll save something. The first billing eligible under this program starts Dec. 1, 2016.
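The arithmetic of that 60 percent reduction can be sketched with illustrative numbers (the actual zWPC terms, eligibility rules, and fine print are in IBM's announcement):

```python
# Illustrative zWPC arithmetic: eligible public-cloud MSUs in a
# reporting hour are reduced 60% before the Sub-Capacity value is set.
DISCOUNT = 0.60

def adjusted_hourly_msu(total_msu: float, cloud_msu: float) -> float:
    """Hourly MSU value with eligible cloud MSUs discounted 60%."""
    assert 0 <= cloud_msu <= total_msu
    return total_msu - DISCOUNT * cloud_msu

# A reporting hour with 500 MSUs, 200 of them from eligible
# public-cloud transactions (hypothetical figures):
print(adjusted_hourly_msu(500, 200))  # 500 - 0.6 * 200 = 380 MSUs
```

The more of your peak hour that qualifies as cloud-originated, the bigger the dent in the adjusted value that feeds the monthly invoice.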

DancingDinosaur expects IBM to eventually follow with discounted z/OS workload pricing for IoT and blockchain transactions and maybe even cognitive activity. Right now the volume of IoT and blockchain activity is probably too low to impact anybody’s monthly license charges. Expect those technologies to ramp up in coming years, with many industry pundits projecting huge numbers (think billions and trillions) that will eventually impact the mainframe data center and associated software licensing charges.

Overall, Workload License Charges (WLC) constitute a monthly software license pricing metric applicable to IBM System z servers running z/OS or z/TPF in z/Architecture (64-bit) mode.  The driving principle of WLC amounts to pay-for-what-you-use, a laudable concept. In effect, it lowers the cost of incremental growth while further reducing software costs by proactively managing associated peak workload utilization.

Generally, DancingDinosaur applauds anything IBM does to lower the cost of mainframe computing.  Playing with workload software pricing in this fashion, however, seems unnecessary. Am convinced there must be simpler ways to lower software costs without the rigmarole of metering and workload distribution tricks. In fact, a small mini-industry has cropped up among companies offering tools to reduce costs, primarily through various ways to redistribute workloads to avoid peaks.

Variations on WLC, including variable WLC (VWLC), AWLC (Advanced), and EWLC (Entry), align with most of the z machines introduced over the past couple of years.  The result, according to IBM, is a granular cost structure based on MSU (CPU) capacity that applies to VWLC and associated pricing mechanisms.

From there you can further tweak the cost by deploying Sub-Capacity and Soft Capping techniques.  Defined Capacity (DC), according to IBM, allows the sizing of an LPAR in MSU such that the LPAR will not exceed the designated MSU amount.  Group Capacity Limit (GCL) extends the Defined Capacity principle for a single LPAR to a group of LPARs, allowing MSU resources to be shared accordingly.  BTW, a potential downside of GCL is that one LPAR in the group can consume all available MSUs due to a rogue transaction. Again, an entire mini-industry, or maybe not so mini, has emerged to help handle workload and capacity pricing on the z.
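The GCL caveat is easy to see in a toy model. In reality WLM and PR/SM arbitrate the group cap using weights, not user code, so treat this as a simplified sketch of the failure mode, with made-up MSU numbers:

```python
# Toy model of a Group Capacity Limit: LPARs draw on a shared MSU cap,
# so one runaway LPAR can starve the rest of the group.
def allocate_group_cap(demands: dict, group_cap: float) -> dict:
    """Grant each LPAR its demand, in order, until the cap is exhausted."""
    granted, remaining = {}, group_cap
    for lpar, demand in demands.items():
        granted[lpar] = min(demand, remaining)
        remaining -= granted[lpar]
    return granted

# A rogue transaction drives LPAR1 to demand the entire group cap:
print(allocate_group_cap({"LPAR1": 900, "LPAR2": 200, "LPAR3": 150}, 900))
# LPAR1 takes all 900 MSUs; LPAR2 and LPAR3 get nothing.
```

A per-LPAR Defined Capacity alongside the group limit is the usual guard against exactly this scenario.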

At some point in most of the conference pricing sessions the eyes of many attendees glaze over.  By Q&A time the few remaining pop up holding a copy of a recent invoice and ask what the hell this or that means and what the f$#%@#$ they can do about it.

Have to admit that DancingDinosaur did not attend the most recent SHARE conference, where pricing workshops can get quite energetic, so cannot attest to the latest fallout. Still, the general trend with mobile and now with cloud pricing discounts should be lower costs.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at and here.


IBM Leads in TBR Private and Hybrid Cloud Surveys

August 4, 2016

IBM has been named number one in private clouds by independent technology market research firm Technology Business Research (TBR) as well as number one in TBR’s hybrid cloud environments survey. Ironically, as fast as IBM has been trying to distance itself from its legacy platform heritage, that heritage brings an advantage when it comes to clouds for some customers. “A footprint in legacy IT solutions and management is a strong predictor of private cloud vendor success, as private cloud solutions are typically the first step toward hybrid IT environments,” wrote TBR Cloud Senior Analyst Cassandra Mooshian.


Courtesy of IBM: 1800 FLOWERS Taps IBM Cloud

Coming out on top in IBM’s 2Q16 financials, reported here, were the company’s strategic initiatives, mainly cloud, analytics, and mobile, which generated positive revenue results. The TBR reports provide welcome reinforcement for IBM strategy doubters. As reported by IBM, the annual run rate for cloud as-a-service revenue, a subset of total cloud revenue, increased to $6.7 billion from $4.5 billion in the second quarter of 2015.  Revenues from analytics increased 5 percent.  Revenues from mobile increased 43 percent, while security revenue increased 18 percent.

The TBR report also noted IBM leadership in overall vendor adoption for private cloud and in select private cloud segments due to its broad cloud and IT services portfolio, its variety of deployment options, and accompanying integration and optimization support. As a result, the company’s expertise and knowledge of both cloud and legacy technology make it easier for customers to opt for an IBM migration path to both private and hybrid clouds.

TBR also specifically called out IBM’s cloud-friendly capabilities, including the comprehensive portfolio of cloud and hardware assets with security; cloud professional services that can span a customer’s entire IT environment; and a vertical approach to cloud combined with Watson technology. As for hybrid clouds, Kelsey Mason, Cloud Analyst at TBR, noted in the announcement: “Hybrid integration is the next stage in cloud adoption and will be the end state for many enterprise IT environments.” Enterprise hybrid adoption, TBR observed, now matches public adoption of a year ago, which it interprets as signaling a new level of maturity in companies’ cloud strategies.

What really counts, however, are customers who vote with their checkbooks.  Here IBM has been racking up cloud wins. For example, Pratt & Whitney, a United Technologies Corp. company, announced in July that it will move the engine manufacturer’s business, engineering, and manufacturing enterprise systems to a fully managed and supported environment on the IBM Cloud infrastructure.

Said Brian Galovich, vice president and chief information officer, Pratt & Whitney, in the published announcement:  “Working with IBM and moving our three enterprise systems to a managed cloud service will give us the ability to scale quickly and meet the increased demands for computing services, data processing and storage based on Pratt & Whitney’s forecasted growth over the next decade.”

Also in July, Dixons Carphone Group, Europe’s largest telecommunications retail and services company as the result of a 2014 merger, announced plans to migrate to the IBM Cloud from IBM datacenters in the United Kingdom to integrate two distinct infrastructures and enable easy scaling to better manage the peaks and valleys of seasonal shopping trends. Specifically, the company expects to migrate about 2,500 server images, with supporting database and middleware components from both infrastructures, to an IBM hybrid cloud platform comprising a private IBM Cloud with bare metal servers for production workloads and a public IBM Cloud platform for non-production workloads.

As a merged company it saw an opportunity to consolidate the infrastructures by leveraging cloud solutions for flexibility, performance and cost savings. After assessing the long-term values and scalability of multiple cloud providers, the company turned to IBM Cloud for a smooth transition to a hybrid cloud infrastructure. “We can trust IBM Cloud to seamlessly integrate the infrastructures of both companies into one hybrid cloud that will enable us to continue focusing on other parts of the business,” said David Hennessy, IT Director, Dixons Carphone, in the announcement.

As IBM’s 2Q16 report makes clear, once both of these companies might have bought new IBM hardware platforms, but that’s not the world today. At least they didn’t opt for AWS or Azure.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at and here.


IBM’s DeepFlash 150 Completes Its Flash Lineup for Now

July 29, 2016

Two years ago DancingDinosaur wrote about new IBM flash storage for the mainframe. That was about the DS8870, featuring 6-nines (99.9999 percent) availability and real-time compression. Then this past May DancingDinosaur reported on another new IBM all-flash initiative, including the all-flash IBM DS8888 for the z, which also boasts 6-nines availability. Just this week IBM announced it is completing its flash lineup with the IBM DeepFlash 150, intended as a building block for SDS (software defined storage) infrastructures.

IBM DeepFlash 150, courtesy of IBM

As IBM reports, the DeepFlash 150 does not use conventional solid-state drives (SSD). Instead, it relies on a systems-level approach that enables organizations to manage much larger data sets without having to manage individual SSD or disk drives. DeepFlash 150 comes complete with all the hardware necessary for enterprise and hyper-scale storage, including up to 64 purpose-engineered flash cards in a 3U chassis and 12-Gbps SAS connectors for up to eight host servers. The wide range of IBM Spectrum Storage and other SDS solutions available for DeepFlash 150 provides flash-optimized scale out and management along with large capacity for block, file and object storage.

The complication for z System shops is that you access the DeepFlash 150 through IBM Spectrum Scale. Apparently you can’t just plug the DeepFlash 150 into the z the way you would plug in the all-flash DS8888. IBM Spectrum Scale works with Linux on z Systems servers or IBM LinuxONE systems running RHEL or SLES. Check out the documentation here.

As IBM explains in the Red Book titled IBM Spectrum Scale (GPFS) for Linux on z Systems: IBM Spectrum Scale provides a highly available clustering solution that augments the strengths of Linux on z by helping the z data center control costs and achieve higher levels of quality of service. Spectrum Scale, based on IBM General Parallel File System (GPFS) technology, is a high performance shared-disk file management solution that provides fast, reliable access to data from multiple nodes in a cluster environment. Spectrum Scale also allows data sharing in a mixed platform environment, which can provide benefits in cloud or analytics environments by eliminating the need to transfer data across platforms. When it comes to the DeepFlash 150, IBM is thinking about hyperscale data centers.

Hyperscale data centers can’t absorb the costs of constructing, managing, maintaining, and cooling massive hyperscale environments that use conventional mechanical storage, according to IBM. Those costs are driving the search for storage with a smaller physical footprint, lower costs, greater density, and, of course, much higher performance.

Enter DeepFlash 150, which introduces what IBM considers breakthrough economics for active data sets. The basic DeepFlash 150 hardware platform is priced under $1/GB. For big data deployments IBM recommends IBM Spectrum Scale with DeepFlash 150, providing customers with the overlying storage services and functionality critical for optimization of their big data workloads.

But even at $1/GB, DeepFlash 150 isn’t going to come cheap. For starters, consider how many gigabytes are in the terabytes or petabytes you will want to install. You can do the math: even at $1/GB this is going to cost. Then you will need IBM Spectrum Scale. With DeepFlash 150, IBM did achieve extreme density of up to 170TB per rack unit, which adds up to a maximum of 7PB of flash in a single rack enclosure.
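"You can do the math" invites a quick sketch. This is list-price arithmetic only, using decimal units (1 TB = 1,000 GB); real quotes, Spectrum Scale licensing, and volume discounts will move the numbers:

```python
# Rough DeepFlash 150 capacity-cost arithmetic at the quoted $1/GB.
PRICE_PER_GB = 1.00
TB_PER_RACK_UNIT = 170        # density IBM cites for DeepFlash 150

def flash_cost(capacity_tb: float) -> float:
    """Hardware cost in dollars for a given flash capacity in TB."""
    return capacity_tb * 1_000 * PRICE_PER_GB

print(f"${flash_cost(1_000):,.0f}")   # 1 PB of flash → $1,000,000 at list
print(f"${flash_cost(7_000):,.0f}")   # a full 7 PB rack → $7,000,000
print(f"{7_000 / TB_PER_RACK_UNIT:.0f} rack units for 7 PB")  # ~41U
```

So a petabyte runs about a million dollars in raw hardware before software and services, cheap by all-flash standards, but hardly pocket change.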

IBM Spectrum Scale and the DeepFlash 150 are intended to support a wide range of file, object and Hadoop Distributed File System (HDFS) analytics workloads. According to IBM, as a true SDS solution IBM Spectrum Scale can utilize any appropriate hardware and is designed specifically to maximize the benefits of hyper-scale storage systems like DeepFlash 150. Using a scale-out architecture, IBM Spectrum Scale can add servers or multiple storage types and incorporate them automatically into a single managed resource to maximize performance, efficiency, and data protection.

Although DeepFlash 150 can be used with a private cloud IBM seems to be thinking more in terms of hybrid clouds. To address today’s need for seamlessly integrating high-performance enterprise storage such as DeepFlash 150 with the nearly unlimited resources and capabilities of the cloud, IBM Spectrum Scale offers transparent cloud tiering to place data on cloud-based object storage or in a public cloud service. As IBM explains, the transparent cloud tiering feature of IBM Spectrum Scale can connect on-premises storage such as DeepFlash 150 directly to object storage or a commodity-priced cloud service. This allows enterprises to simultaneously leverage the economic, collaboration, and scale benefits of both on-premises and cloud storage while providing a single, powerful view of all data assets.

A Tech Target report on enterprise flash storage profiled 15 flash storage product lines. In general, the products claim maximum read IOPS ranging from 200,000 to 9 million, peak read throughput from 2.4 GBps to 46 GBps, and read latencies from 50 microseconds to 1 millisecond. The guide comes packed with a ton of caveats. And that’s why DancingDinosaur doesn’t think the DeepFlash 150 is the end of IBM’s flash efforts. Stay tuned.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at and here.


IBM Gets Serious about Linux on z Systems

February 12, 2016


It has taken the cloud, open source, and mobile for IBM finally, after more than a decade of Linux on z, to turn the platform into the agile development machine it should have been all along. Maybe z data centers weren’t ready back then, maybe they aren’t all that ready now, but it is starting to happen.


LinuxONE Rockhopper, Refreshed for Hybrid Cloud Innovation

In March, IBM will make its IBM Open Platform (IOP) portfolio available for LinuxONE at no cost. IOP includes a broad set of industry-standard Apache-based capabilities for analytics and big data. The components supported include Apache Spark, Apache HBase, and Apache Hadoop 2.7.1, among others. Continuing its commitment to contributing back to the open source community, IBM has optimized the Open Managed Runtime project (OMR) for LinuxONE, bringing IBM innovations in virtual machine technology for new dynamic scripting languages to enterprise-grade strength.
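The analytics components bundled in IOP all build on the same map-and-reduce pattern. As a minimal illustration, the classic word count looks like this in plain Python (no Hadoop or Spark required here; a real job would run the same logic distributed over HDFS splits):

```python
from collections import Counter
from itertools import chain

# Minimal map/reduce-style word count, the "hello world" of the Hadoop
# and Spark components bundled in IOP. Pure-Python stand-in: the map
# phase tokenizes lines, the reduce phase aggregates counts per word.
lines = [
    "open source on the mainframe",
    "spark and hadoop on the mainframe",
]

mapped = chain.from_iterable(line.split() for line in lines)  # map phase
counts = Counter(mapped)                                      # shuffle + reduce

print(counts["mainframe"])  # -> 2
print(counts["spark"])      # -> 1
```

Swap the list of strings for HDFS blocks and the Counter for a distributed shuffle and you have, in miniature, what the IOP stack does at petabyte scale.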

It doesn’t stop there. IBM has ported the Go programming language to LinuxONE too. Go was developed by Google and is designed for building simple, reliable, and efficient software, making it easier for developers to combine the software tools they know with the speed, security, and scale offered by LinuxONE. IBM expects to begin contributing code to the Go community this summer.

Back in December IBM brought Apple’s Swift programming language to the party, starting with the IBM Watson iOS SDK, which gives developers a Swift API that simplifies integration with many of the Watson Developer Cloud services, including Watson Dialog, Language Translation, Natural Language Classifier, Personality Insights, Speech to Text, Text to Speech, Alchemy Language, and Alchemy Vision. All are available today and can be integrated with just a few lines of code.

Following Apple’s introduction of Swift as the new language for OS X and iOS application development, IBM began partnering with Apple to bring the power of Swift open source programming to the z. This will be closely tied to Canonical’s Ubuntu port to the z expected this summer.

Also, through new work by SUSE to collaborate on technologies in the OpenStack space, SUSE tools will be employed to manage public, private, and hybrid clouds running on LinuxONE.  Open source, OpenStack, open-just-about-everything appears to be the way IBM is pushing the z.

At a presentation last August on Open Source & ISV Ecosystem Enablement for LinuxONE and IBM z, Dale Hoffman, Program Director of IBM’s Linux SW Ecosystem & Innovation Lab, introduced the three ages of mainframe development, the current stage being the third:

  1. Traditional mainframe data center (1964–2014): batch, general ledger, transaction systems, client databases, accounts payable/receivable, inventory, CRM, ERP, Linux & Java
  2. Internet Age (1999–2014): server consolidation, Oracle consolidation, early private clouds, email, Java, web & eCommerce
  3. Cloud/Mobile/Analytics (CAMSS) Age (2015–2020): on/off-premise hybrid cloud, big data & analytics, enterprise mobile apps, security solutions, open source LinuxONE and IBM z ecosystem enablement

Hoffman didn’t suggest what comes after 2020 but we can probably imagine: Cognitive Computing, Internet of Things, Blockchain. At least those are trends starting to ramp up now.

He does, however, draw a picture of the state of Linux on the mainframe today:

  • 27% of total installed capacity run Linux
  • Linux core capacity increased 16% from 2Q14 to 2Q15
  • 40% of customers have Linux cores
  • 80% of the top 100 customers (in terms of installed MIPS) run Linux on the mainframe
  • 67% of new accounts run Linux

To DancingDinosaur, this last point about the high percentage of new z accounts running Linux speaks to where the future of the z is heading.

Maybe as telling are the following:

  • 64% of companies participate in Open Source projects
  • 78% of companies run on open source
  • 88% of companies expect to increase open source contributions in the next 2-3 years
  • 47% to release internal tools & projects as OSS
  • 53% expect to reduce barriers to employee participation in open source
  • 50% report that more than half of their engineers are working on open source projects
  • 66% of companies build software on open source

Remember when open source and Linux first appeared on the z? Data center managers were shocked at the very concept. It was anti-capitalist at the very least, maybe even socialist or communist. Look at the above percentages; open source has become about as mainstream as it gets.

It will be interesting to see how quickly developers move to LinuxONE for their CAMSS projects. IBM hasn’t said anything about the pricing of the refreshed Rockhopper model or about the look and feel of the tools. Until the developers know, DancingDinosaur expects they will continue to work on the familiar x86 tools they are using now.


IBM zSystem Continues Surge in 4Q15

January 22, 2016

DancingDinosaur follows technology, not financial investments, so you’d be an idiot to consider what follows investment advice. It is not. Still, as one who has built a chunk of his career around the mainframe, it is good to see the z System continuing to remain in the black and beating the sexier Power lineup, although I do follow both closely. See the latest IBM financials here.


The IBM z13 System

Specifically, as IBM reported on Tuesday, revenues from z Systems mainframe server products increased 16 percent compared with the year-ago period (up 21 percent adjusting for currency). Total delivery of z Systems computing power, as measured in MIPS (millions of instructions per second), increased 28 percent.

Almost as good, revenues from Power Systems were up 4 percent compared with the 2014 period (up 8 percent adjusting for currency). Power revenues have been up most of the year, although they got a little blurry in the accounting.

In the storage market, which is getting battered by software defined storage (SDS) on one hand and cloud-based storage on the other, IBM reported revenues from System Storage decreased 11 percent (down 7 percent adjusting for currency). The storage revenues probably won’t bounce back fast, at least not without IBM bringing out radically new storage products. That storage rival EMC got acquired by Dell should signal that the storage market, as the traditional enterprise players knew it, is drastically different. For now object storage, SDS, and even flash won’t replace the kind of revenue IBM used to see from DS8000 disk systems or TS enterprise tape libraries loaded with mechanical robotics.

Getting more prominence are IBM’s strategic imperatives, which have been a company priority all year: cloud, mobile, analytics, security, IoT, and cognitive computing. Q4 revenues from these strategic imperatives, as reported by IBM, increased 10 percent year-to-year (up 16 percent adjusting for currency). For the full year, revenues from strategic imperatives increased 17 percent (up 26 percent adjusting for currency and the divested System x business) to $28.9 billion and now represent 35 percent of total IBM consolidated revenue.

For the full year, total cloud revenues (public, private and hybrid) increased 43 percent (up 57 percent adjusting for currency and the divested System x business) to $10.2 billion.  Revenues for cloud delivered as a service — a subset of the total cloud revenue — increased 50 percent to $4.5 billion; and the annual as-a-service run rate increased to $5.3 billion from $3.5 billion in the fourth quarter of 2014.
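Those as-a-service figures are easy to sanity-check with a little arithmetic (taking IBM's reported percentages at face value):

```python
# Sanity-checking IBM's reported as-a-service figures (all in $ billions).
as_a_service_2015 = 4.5                        # full-year revenue, "up 50 percent"
as_a_service_2014 = as_a_service_2015 / 1.50   # implies ~$3.0B the year before

run_rate_q4_2015 = 5.3
run_rate_q4_2014 = 3.5
run_rate_growth = (run_rate_q4_2015 - run_rate_q4_2014) / run_rate_q4_2014

print(f"implied 2014 as-a-service revenue: ${as_a_service_2014:.1f}B")
print(f"Q4 run-rate growth: {run_rate_growth:.0%}")  # roughly 51%
```

The numbers hang together: a 50 percent annual gain and a roughly 51 percent jump in the exit run rate tell the same story.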

Meanwhile, revenues from business analytics increased 7 percent (up 16 percent adjusting for currency) to $17.9 billion.  Revenues from mobile more than tripled and from security increased 5 percent (up 12 percent adjusting for currency).

Commenting on IBM’s latest financials was Timothy Prickett Morgan, who frequently writes on IBM’s platforms. Citing statements to analysts by Martin Schroeter, IBM’s chief financial officer, Morgan suggested that low profit margins, which other financial analysts complained about, put pressure on the z13 product line that launched early in the year. After a fast start, apparently, the z13 is now experiencing a slowdown in the upgrade cycle. It’s at this point that DancingDinosaur usually expects to see a new z, typically a business class version of the latest mainframe, the z13 in this case, but that does not appear to be in the offing. About the closest IBM got was the Rockhopper model of the LinuxONE, a z optimized only for Linux, cloud, mobile, and analytics.

Morgan also noted that IBM added about 50 new mainframe customers for the year on an installed base of about 6,000 active customers. DancingDinosaur has been tracking that figure for years, and it has not fluctuated much. And I’m never sure how to count the handful of IT shops that run a z in the IBM cloud. But 5,000-6,000 active z shops still sounds about right.

Power Systems has now grown four quarters in a row and was up 8 percent at constant currency. This has to be a relief to the company, which has committed over $1 billion to Power. IBM attributes some of this growth to its enthusiastic embrace of Linux on POWER8, but Morgan complains of having no sense of how much of the Power Systems pie is driven by scale-out Linux machines intended to compete against Intel Xeon servers. Power also is starting to get some boost from the OpenPOWER Foundation, whose members started to ship products in the past few months. It’s probably minimal revenue now, but over time it should grow.

For those of us who are counting on z and Power to be around for a while longer, the latest financials should be encouraging.


Making the IBM Mainframe Agile and Swift

December 7, 2015

Do you remember what the mainframe was like when you started on the mainframe career path? Today IBM blurs distinctions between the mainframe and distributed environments through Linux and Java as well as cloud and mobile delivery models. Heck, you can run Windows natively on x86 cards in a zBX cabinet managed from a console on the z itself. Maybe it’s not the most efficient way to do it, and expect better approaches from IBM, but it is doable now.

DevOps in the SDLC, Courtesy Seasoft

More than just interact, the z and distributed environment must productively and seamlessly integrate and interoperate to produce a streamlined development, test, and deployment process. Compounding the challenge: they must do it fast. Organizations can no longer wait for six-month or nine-month release cycles to introduce new capabilities. If capabilities cannot be introduced in just a few weeks max, opportunities and revenue can be lost.  Agile and batch teams have no choice; they must work together.

This calls for data center adoption of DevOps, a combination of development, testing, and operations. Already IBM has instituted DevOps on the z System. The idea of bringing agile and batch together—it almost sounds like an oxymoron or the punchline from a bad computer systems joke—no longer is farfetched. Welcome to the world of hybrid computing where what was once considered disparate and incompatible systems are being brought together, often on the same platform.

The latest generations of the mainframes have been fully hybrid-capable platforms, starting with the z10. They are capable of running mixed workloads concurrently, some of which previously belonged in the distributed platform world only. Today, a mainframe shop with the latest z13 can run traditional z/OS COBOL workloads right alongside Java and Linux workloads. Those with a zBX extension cabinet can even run Windows workloads too under the same unified mainframe management console.

If that sounds a little too kludgy for you, just jump into the cloud. From Bluemix in the cloud you can get to DevOps and find just about everything you need already there, including IBM’s StrongLoop acquisition for API management and microservices.

So now the idea of bringing batch and agile computing together on the mainframe platform doesn’t seem so farfetched.  And it won’t stop there. IBM has been doing its enterprise thing with Apple for about a year. Expect more coming.

That said, an agile mainframe/distributed DevOps environment will become increasingly necessary. How often do you release software? Previously, if an IT organization released new software every year or even every 18 months, customers were satisfied. Not anymore. Today you can’t wait six months before the organization risks falling behind. LOB managers and customers won’t wait. There are too many competitors waiting for any chance to seize an advantage. Slow system refreshes and software updates just play into these competitors’ hands.

DevOps also is essential to the organization’s mobile strategy. Companies in every industry segment are deploying new mobile apps as fast as they can and then almost immediately updating them. For many of these mobile apps the mainframe is the back end, if not the middleware too. Each mobile request for information or to make a purchase or to schedule something triggers numerous back end processes that quickly make their way to the mainframe. It had gotten to the point where IBM had to discount mobile processing on the z or it would hinder mobile growth. DancingDinosaur covered it here.

Helping to drive mobile on the z, of course, is IBM’s relationship with Apple. Over the past year the two companies have been bringing out combined enterprise-mobile applications. Now Apple just announced that it is making its popular programming language, Swift, open source. It shouldn’t take much to get it onto Bluemix. Back in Sept. 2014  IBM announced it already had a preliminary version working through Bluemix.

Although Swift is known mainly for mobile client development, today it is described as combining the performance and efficiency of compiled languages with the simplicity and interactivity of popular scripting languages.  Apple’s Swift strategy seems coming right out of IBM’s recent playbook of embracing open source communities. You can get started at the Swift website, here.


Syncsort’s 2015 State of the Mainframe: Little Has Changed

November 30, 2015

Syncsort’s annual survey of almost 200 mainframe shops found that 83 percent of respondents cited security and availability as key strengths of the mainframe. Are you surprised? You can view the detailed results here for yourself.


Courtesy: Syncsort

Security and availability have been hallmarks of the z for decades. Even Syncsort’s top mainframe executive, Harvey Tessler, could point to little unexpected in the latest results: “Nothing surprising. At least no big surprises. Expect the usual reliability, security,” he noted. BTW, in mid-November Clearlake Capital Group, L.P. (Clearlake) announced that it had completed the acquisition of Syncsort Incorporated. Apparently no immediate changes are planned.

The 2015 study also confirmed a few more recent trends that DancingDinosaur has long suspected. More than two-thirds (67 percent) of respondents cited integration with other standalone computing platforms such as Linux, UNIX, or Windows as a key strength of the mainframe.

Similarly, the majority (79 percent) analyze real-time transactional data from the mainframe with a tool that resides directly on the mainframe. That, in fact, may be the most surprising response. Mainframe shops (or more likely the line-of-business managers they work with) are notorious for moving data off the mainframe for analytics, usually to distributed x86 platforms. The study showed respondents are also turning to platforms such as Splunk (11.8 percent), Hadoop (8.6 percent), and Spark (1.6 percent) to supplement their real-time data analysis.

Many of the respondents no doubt will continue to do so, but it makes little sense in 2015 with a modern z System running a current configuration. In truth, it makes little sense from either a performance or a cost standpoint to move data off the z to perform analytics elsewhere. The z runs Hadoop and Spark natively. With your data and key analytics apps already on the z, why incur both the high overhead and high latency entailed in moving data back and forth to run on what is probably a slower platform anyway?
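The overhead argument is easy to quantify with a rough transfer-time estimate. The link speed here is an assumption for illustration, not a measured figure, and real ETL pipelines add further delay on top of the raw wire time:

```python
# Rough cost of shipping data off-platform for analytics versus analyzing
# it in place. Assumes an ideal, zero-overhead network transfer; real
# numbers depend on the network and the ETL pipeline, but the shape of
# the argument holds.
def transfer_hours(data_tb, link_gbps=10):
    """Hours to move data_tb terabytes over a link_gbps network link."""
    bits = data_tb * 1e12 * 8           # decimal TB -> bits
    seconds = bits / (link_gbps * 1e9)  # ideal transfer at line rate
    return seconds / 3600

print(f"{transfer_hours(10):.1f} hours to move 10 TB over 10 GbE")  # ~2.2
```

Hours of copying before the first query even runs, twice per round trip, is the latency being paid for the privilege of analyzing data on a platform it did not need to leave.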

The only possible reason might be that the mainframe shop doesn’t run Linux on the mainframe at all. That can be easily remedied, however, especially now with the introduction of Ubuntu Linux for the z. C’mon, it’s late 2015; modernize your z for the cloud-mobile-analytics world and stop wasting time and resources jumping back and forth to distributed systems that will run natively on the z today.

More encouraging is the interest of the respondents in big data and analytics. “The survey demonstrates that many big companies are using the mainframe as the back-end transaction hub for their Big Data strategies, grappling with the same data, cost, and management challenges they used it to tackle before, but applying it to more complex use cases with more and dauntingly large and diverse amounts of data,” said Denny Yost, associate publisher and editor-in-chief for Enterprise Systems Media, which partnered with Syncsort on the survey. The results show that respondents’ interest in the mainframe’s ability to serve as a hub for emerging big data analytics platforms is also growing.

On other issues, almost one-quarter of respondents ranked as very important the ability of the mainframe to run other computing platforms such as Linux on an LPAR or z/VM virtual machines as a key strength of the mainframe at their company. Over one-third of respondents ranked as very important the ability of the mainframe to integrate with other standalone computing platforms such as Linux, UNIX, or Windows as a key strength of the mainframe at their company.

Maybe more surprising: only 70% of the respondents ranked as very important their organization’s use of the mainframe for performing large-scale transaction processing or hosting mission-critical applications. Given that the respondents appeared to come from large, traditional mainframe shops, you might have expected those numbers to be closer to 85-90%. Go figure.

When asked to rank their organization’s use of the mainframe to supplement or replace non-mainframe servers (i.e. RISC or x86-based servers) just 10% of the respondents considered it important. Clearly the hybrid mainframe-based data center is not a priority with these respondents.

So, what are they looking to improve in the next 12 months? The respondents’ top three initiatives are:

  1. Meeting Security and Compliance Requirements
  2. Reducing CPU usage and related costs
  3. Meeting Service Level Agreements (SLAs)

These aren’t the most ambitious goals DancingDinosaur has ever encountered but they should be quite achievable in 2016.

