Posts Tagged ‘Power Systems’

OpenCAPI, Gen-Z, CCIX Initiate a New Computing Era

October 20, 2016

The next generation data center will be a more open, cooperative, and faster place, judging from the remarkably similar makeup of three open consortia: OpenCAPI, Gen-Z, and CCIX. CCIX allows processors based on different instruction set architectures to extend their cache coherency to accelerators, interconnect, and I/O.

OpenCAPI provides a way to attach accelerators and I/O devices with coherence and virtual addressing to eliminate software inefficiency associated with the traditional I/O subsystem, and to attach advanced memory technologies.  The focus of OpenCAPI is on attached devices primarily within a server. Gen-Z, announced around the same time, is a new data access technology that primarily enables read and write operations among disaggregated memory and storage.


Rethink the Datacenter

It’s quite likely that your next data center will use all three. The OpenCAPI group includes AMD, Dell EMC, Google, Hewlett Packard Enterprise, IBM, Mellanox Technologies, Micron, NVIDIA and Xilinx. Their new specification promises to enable up to 10X faster server performance with the first products expected in the second half of 2017.

The Gen-Z consortium consists of Advanced Micro Devices, Broadcom, Huawei Technologies, Red Hat, Micron, Xilinx, Samsung, IBM, and Cray. Other founding members are Cavium, IDT, Mellanox Technologies, Microsemi, Seagate, SK Hynix, and Western Digital. They plan to develop a scalable computing interconnect and protocol that will enable systems to keep up with the rapidly rising tide of data that is being generated and that needs to be analyzed. This will require the rapid movement of high volumes of data between memory and storage.

The CCIX initial members include Amphenol Corp., Arteris Inc., Avery Design Systems, Atos, Cadence Design Systems, Inc., Cavium, Inc., Integrated Device Technology, Inc., Keysight Technologies, Inc., Micron Technology, Inc., NetSpeed Systems, Red Hat Inc., Synopsys, Inc., Teledyne LeCroy, Texas Instruments, and TSMC.

The basic problem all three address is how to get the growing volume and variety of new hardware to communicate quickly and work together. In effect each group, from its own particular perspective, aims to boost the performance and interoperability of data center servers, devices, and components engaged in generating and handling myriad data and tasked with analyzing large amounts of that data. The problem will only be compounded as IoT, blockchain, and cognitive computing ramp up.

To a large extent, this results from the inability of Moore’s Law to continue to double the number of transistors indefinitely. Future advances must rely on different sorts of hardware tweaks and designs to deliver greater price/performance.

Then in Aug. 2016 IBM announced a related chip breakthrough.  It unveiled the industry’s first 7 nm chip that could hold more than 20 billion tiny switches or transistors for improved computing power. The new chips could help meet demands of future cloud computing and Big Data systems, cognitive computing, mobile products and other emerging technologies, according to IBM.

Most chips today in servers and other devices use microprocessors between 14 and 22 nanometers (nm). The 7nm technology represents at least a 50 percent power/performance improvement. IBM intends to apply the new chips to analyze DNA, viruses, and exosomes. IBM expects to test this lab-on-a-chip technology starting with prostate cancer.

The point of this digression into chips and Moore’s Law is to suggest the need for tools and interfaces like OpenCAPI, Gen-Z, and CCIX. As the use cases for ultra fast data analytics expand along with the expected proliferation of devices, speed becomes critical. How long do you want to wait for an analysis of your prostate or breast cells? If the cells are dear to you, every nanosecond matters.

For instance, OpenCAPI provides an open, high-speed pathway for different types of technology – advanced memory, accelerators, networking and storage – to more tightly integrate their functions within servers. This data-centric approach to server design puts the compute power closer to the data and removes inefficiencies in traditional system architectures, helping eliminate system bottlenecks and significantly improving server performance. In some cases OpenCAPI enables system designers to access memory with sub-500 nanosecond latency.

IBM plans to introduce POWER9-based servers that leverage the OpenCAPI specification in the second half of 2017. Similarly, expect other members of the OpenPOWER Foundation to introduce OpenCAPI enabled products in the same time frame. In addition, Google and Rackspace’s new server under development, codenamed Zaius and announced at the OpenPOWER Summit in San Jose, will leverage POWER9 processor technology and is planned to provide the OpenCAPI interface in its design. Also, Mellanox plans to enable the new specification capabilities in its future products and Xilinx plans to support OpenCAPI enabled FPGAs.

As reported at the Gen-Z announcement, “The formation of these new consortia (CCIX, OpenCAPI, and Gen-Z), backed by more than 30 industry-leading global companies, supports the premise that the datacenter of the future will require open standards. We look forward to collaborating with CCIX and OpenCAPI as this new ecosystem takes shape,” said Kurtis Bowman, Gen-Z Consortium president. Welcome to the 7nm computing era.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghostwriter. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing here.


IBM z System and Power Fuel Hybrid Cloud

September 30, 2016

DancingDinosaur has been cheerleading the z as a cloud player since 2011 and even before, mainly as an ideal vehicle for private clouds. So it should be no surprise to readers that last week IBM announced z and Power cloud-ready systems, services, and solutions for the hybrid cloud environment. My only question: What took them so long?


Power and z accelerate transformation to hybrid cloud

The world, indeed, is changing fast, and IT data centers especially have to scramble to keep up as they try to blend public cloud, private cloud, and traditional IT data centers and integrate it all seamlessly. “Today’s business environment is very dynamic and filled with disruption. A hybrid cloud model enables clients to continuously adapt while also optimizing on-premises investments and delivering the flexibility clients need across IBM Systems and the cloud,” according to Tom Rosamilia, senior vice president, IBM Systems.

At the heart of IBM’s systems for what it calls the hybrid cloud era are three technologies with which we should already be generally familiar:

  • z Systems for cloud. IBM z Systems Operational Insights is a new SaaS-based offering that provides analytic insights on cloud operations for new levels of efficiency and application performance. It allows users to make better business and application decisions based on trends and embedded expertise on performance data through a GUI dashboard. This accompanies IBM OMEGAMON Application Performance Management, which provides quick identification of problem components through end-to-end visibility for clients’ hybrid workloads that include z Systems applications.  In addition, the newly available IBM Common Data Provider enables z Systems operational data to be efficiently consumed in near real time by the clients’ cloud or local enterprise operational analytics platform. An OK start, but you can quibble with OMEGAMON as IBM’s performance analysis and management choice. At its core, this is old technology. DancingDinosaur would prefer, say, Watson.
  • Power Systems for cloud. With integrated OpenStack-based cloud management and elastic consumption models, according to IBM, these new enterprise-class IBM Power Systems enable organizations to transform their IT infrastructure to a local cloud for AIX, IBM i and Linux workloads and extend them with rapid access to compute services in the IBM Cloud. DancingDinosaur covered the new LC here.
  • IBM Spectrum Copy Data Management and Protect. This brings a new solution that drives operational and development agility and efficiency across new and traditional applications by allowing detailed, easy management of data copies. Additionally, IBM Spectrum Protect has expanded its extensive hybrid cloud solution integration with cloud object storage options for use in hybrid cloud deployments.

About the only thing missing above is LinuxONE, but that will come up below when IBM gets to openness, which is critical to hybrid clouds. In its announcement, IBM also promised a series of new and expanded collaborations with IBM Systems for hybrid cloud environments, including:

  • Canonical: Canonical and IBM are extending their ongoing alliance to make Ubuntu OpenStack available today on LinuxONE, z Systems, Power Systems, and OpenPOWER-based systems, including the new line of LC servers. This enables organizations to leverage Canonical’s portfolio across the three platforms with simplified and automated OpenStack management.
  • Hortonworks: IBM and Hortonworks, a Hadoop platform provider, are jointly entering the marketplace to make the Hortonworks Hadoop distribution available on POWER. Whoopee, Hadoop already runs natively on z.
  • Mirantis: Mirantis and IBM are collaborating to develop reference architectures enabling Mirantis OpenStack to manage compute nodes hosted on IBM Power Systems servers, and to validate a host of core applications to run its OpenStack private cloud. With this integration, Mirantis will now bring its OpenStack based private cloud management to the POWER platform. This enables organizations to leverage the efficiency of IBM Power Systems for data-driven workloads in a seamless and compatible way for their data center through Mirantis’ OpenStack cloud management.
  • NGINX: NGINX’s application delivery platform now supports servers based on IBM’s POWER architecture with the latest release of its commercial load balancer and web accelerator software, NGINX Plus R10. The combination of NGINX Plus and POWER brings new agility to enterprises, allowing them to scale their infrastructure and application delivery solutions across any environment – public, private, and hybrid cloud; containers; and bare metal – providing a consistent user experience.
  • Red Hat: Red Hat and IBM are expanding their long-standing alliance to better help organizations embrace hybrid cloud. Through joint engineering and deeper product collaboration, the two companies plan to deliver solutions built on key components of Red Hat’s portfolio of open source products, including Red Hat Enterprise Linux (RHEL). By the way, RHEL is the #2 Linux distro on the z. RHEL also enables IBM Power Systems as a featured component of Red Hat’s hybrid cloud strategy spanning platform infrastructure located both on and off an enterprise’s premises.

There is enough openness to enable you to do what you need, so DancingDinosaur has no real complaints with IBM’s choices above. Just wish it were more z- and LinuxONE-centric.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer and mainframe bigot. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing here.


Revamped IBM Power Systems LC Takes on x86

September 9, 2016

To hear IBM, its revamped and refreshed Power Systems LC lineup will undermine x86 (Intel), HPE, Dell/EMC, and any other purveyor of x86-based systems. Backed by accelerators provided by OpenPOWER community members, IBM appears ready to extend the x86 battle to on-premises, cloud, and hybrid cloud deployments. It promises to deliver better performance at lower cost for all the hot workloads too: artificial intelligence, deep learning, high performance data analytics, and compute-heavy workloads.


Two POWER8 processors, 1U config, priced 30% less than an x86 server

Almost a year ago, in Oct. 2015, DancingDinosaur covered IBM’s previous Power Systems LC announcement here. The LC designation stands for Linux Community, and the company is tapping accelerators and more from the OpenPOWER community, just as it did with its recent announcement of POWER9, expected in 2017, here.

The new Power LC systems feature a set of community-delivered technologies IBM has dubbed POWERAccel, a family of I/O technologies designed to deliver composable system performance enabled by accelerators. For GPU acceleration the NVIDIA NVLink delivers nearly 5x better integration between POWER processors and the NVIDIA GPUs. For FPGA acceleration IBM tapped its own CAPI architecture to integrate accelerators that run natively as part of the application.

This week’s Power Systems LC announcement features three new machines:

  • S821LC (pictured above)—includes 2 POWER8 sockets in a 1U enclosure, intended for environments requiring dense computing.
  • S822LC for Big Data—brings 2 POWER8 sockets for big data workloads and adds big data acceleration through CAPI and GPUs.
  • S822LC for High Performance Computing—incorporates the new POWER8 processor with NVIDIA NVLink to deliver 2.8x the bandwidth to GPU accelerators and up to 4 integrated NVIDIA Pascal GPUs.

POWER8 with NVLink delivers 2.8x the bandwidth compared to a PCIe data pipe. According to figures provided by IBM comparing the price-performance of the Power S822LC for HPC (20-core, 256 GB, 4x Pascal) with a Dell C4130 (20-core, 256 GB, 4x K80), measured in total queries per hour (qph), the Power System delivered 2.1x better price-performance. The Power Systems server cost more ($66,612) vs. the Dell ($57,615), but the Power System delivered 444 qph vs. Dell’s 185 qph.
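
That 2.1x claim checks out arithmetically. A minimal sketch in Python, using only the figures IBM quoted above (vendor comparison numbers, not independent benchmarks):

```python
# Price-performance = queries per hour (qph) per dollar of system cost.
power_cost, power_qph = 66_612, 444   # Power S822LC for HPC
dell_cost, dell_qph = 57_615, 185     # Dell C4130

power_qph_per_dollar = power_qph / power_cost
dell_qph_per_dollar = dell_qph / dell_cost

print(f"Power: {power_qph_per_dollar:.6f} qph/$")
print(f"Dell:  {dell_qph_per_dollar:.6f} qph/$")
print(f"Advantage: {power_qph_per_dollar / dell_qph_per_dollar:.1f}x")  # ~2.1x
```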

The story plays out similarly for big data workloads running MongoDB on the IBM Power S821LC for big data (20-core, 128 GB) vs. an HP DL380 (20-core, 128 GB). Here the system cost (server, OS, MongoDB annual subscription) came to $24,870 for IBM Power and $29,915 for HP. Power provided 40% more performance at a 31% lower hardware/maintenance cost.

When it comes to the cloud the new IBM Power Systems LC offerings get even more interesting from a buyer’s standpoint. IBM declared the cloud a strategic imperative about 2 years ago and needs to demonstrate adoption that can rival the current cloud leaders: AWS, Google, and Microsoft (Azure). To that end IBM has started to tack on free cloud usage.

For example, during the industry analyst launch briefing IBM declared: Modernize your Power infrastructure for the Cloud, get access to IBM Cloud for free, and cut your current operating costs by 50%. Whether you’re talking on-premises cloud or hybrid infrastructure, the freebies just keep coming. The free built-in cloud deployment service options include:

  • Cloud Provisioning and Automation
  • Infrastructure as a Service
  • Cloud Capacity Pools across Data Centers
  • Hybrid Cloud with Bluemix
  • Automation for DevOps
  • Database as a Service

These cover both on-premises infrastructure, where you can transform your traditional infrastructure with automation, self-service, and elastic consumption models, and hybrid infrastructure, where you can securely extend to the public cloud with rapid access to compute services and API integration. Other freebies include open source automation, installation and configuration recipes, cross data center inventory, performance monitoring via the IBM Cloud, optional DR as a service for Power, and free access and capacity flexibility with SoftLayer (12 month starter pack).

Will the new LC line and its various cloud freebies get the low cost x86 monkey off IBM’s back? That’s the hope in Armonk. The new LC servers can be acquired at a lower price and can deliver 80% more performance per dollar spent over x86-based systems, according to IBM. This efficiency enables businesses and cloud service providers to lower costs and combat data center sprawl.

DancingDinosaur has developed TCO and ROI analyses comparing mainframe and Power systems to x86 for a decade, maybe more.  A few managers get it, but most, or their staff, have embedded bias and will never accept non-x86 machines. To them, any x86 system always is cheaper regardless of the specs and the math. Not sure even free will change their minds.

The new Power Systems LC lineup is price-advantaged over comparatively configured Intel x86-based servers, costing 30% less in some configurations.  Online LC pricing begins at $5999. Additional models with smaller configurations sport lower pricing through IBM Business Partners. All but the HPC machine are available immediately. The HPC machine will ship Sept. 26.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing here.


Meet the POWER9 Chip Family

September 2, 2016

When you looked at a chip in the past you primarily were concerned with two things: the speed of the chip, usually expressed in GHz, and how much power it consumed. Today the IBM engineers preparing the newest POWER chip, the 14nm POWER9, are tweaking the chip for the different workloads it might run, such as cognitive or cloud, for different deployment options, such as scale-up or scale-out, and for a host of other attributes. EE Times described it in late August from the Hot Chips conference where it was publicly unveiled.

IBM POWER9 bandwidth, courtesy of IBM


IBM describes it as a chip family but maybe it’s best described as the product of an entire chip community, the OpenPOWER Foundation. Innovations include CAPI 2.0, New CAPI, NVIDIA’s NVLink 2.0, PCIe Gen4, and more. It spans a range of acceleration options from hyperscale data center (HSDC) clusters to extreme virtualization capabilities for the cloud. POWER9 is not just about high speed transaction processing; IBM wants the chip to interpret and reason, ingest and analyze.

POWER has gone far beyond the POWER chips that enabled Watson to (barely) beat the human Jeopardy champions. Going forward, IBM is counting on POWER9 and Watson to excel at cognitive computing, a combination of high speed analytics and self-learning. POWER9 systems should not only be lightning fast but get smarter with each new transaction.

For z System shops, POWER9 offers a glimpse into the design thinking IBM might follow with the next mainframe, probably the z14 that will need comparable performance and flexibility. IBM already has set up the Open Mainframe Project, which hasn’t delivered much yet but is still young. It took the Open POWER group a couple of years to deliver meaningful innovations. Stay tuned.

The POWER9 chip is incredibly dense (below). You can deploy it in either a scale-up or scale-out architecture, with a choice of one version for two-socket servers with 8 DDR4 ports and another for multiple chips per server with buffered DIMMs.


IBM POWER9 silicon layout

IBM describes the POWER9 as a premier acceleration platform. That means it offers extreme processor/accelerator bandwidth and reduced latency; coherent memory and virtual addressing capability for all accelerators; and robust accelerated compute options through the OpenPOWER community.

It includes State-of-the-Art I/O and Acceleration Attachment Signaling:

  • PCIe Gen 4 x 48 lanes – 192 GB/s duplex bandwidth
  • 25G Link x 48 lanes – 300 GB/s duplex bandwidth
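
Those two duplex figures follow from simple per-lane arithmetic. A back-of-envelope sketch, assuming the nominal per-lane rates (roughly 2 GB/s per direction for PCIe Gen 4 once encoding overhead is rounded away, and a raw 25 Gbit/s per lane for the 25G link):

```python
# Duplex bandwidth = lanes x per-lane rate (one direction) x 2 directions.
LANES = 48

pcie_gen4_per_lane = 2.0                 # GB/s per lane per direction (nominal)
print(f"PCIe Gen 4 x{LANES}: {LANES * pcie_gen4_per_lane * 2:.0f} GB/s duplex")  # 192

link25_per_lane = 25 / 8                 # 25 Gbit/s -> 3.125 GB/s per direction
print(f"25G Link x{LANES}: {LANES * link25_per_lane * 2:.0f} GB/s duplex")       # 300
```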

And robust accelerated compute options based on open standards, including:

  • On-Chip Acceleration—Gzip x1, 842 Compression x2, AES/SHA x2
  • CAPI 2.0—4x bandwidth of POWER8 using PCIe Gen 4
  • NVLink 2.0—next generation of GPU/CPU bandwidth and integration using 25G Link
  • New CAPI—high bandwidth, low latency and open interface using 25G Link

In scale-out mode it employs direct attached memory through 8 direct DDR4 ports, which deliver:

  • Up to 120 GB/s of sustained bandwidth
  • Low latency access
  • Commodity packaging form factor
  • Adaptive 64B / 128B reads

In scale-up mode it uses buffered memory through 8 buffered channels to provide:

  • Up to 230GB/s of sustained bandwidth
  • Extreme capacity – up to 8TB / socket
  • Superior RAS with chip kill and lane sparing
  • Compatible with POWER8 system memory
  • Agnostic interface for alternate memory innovations
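
Dividing those totals across the eight channels gives a feel for what each port or channel carries, assuming bandwidth and capacity spread evenly (simple arithmetic on the numbers in the two lists above):

```python
CHANNELS = 8

# Scale-out: direct-attached DDR4 ports
print(f"Scale-out: {120 / CHANNELS:.0f} GB/s sustained per DDR4 port")        # 15
# Scale-up: buffered channels
print(f"Scale-up: {230 / CHANNELS:.1f} GB/s sustained per buffered channel")  # 28.8
print(f"Scale-up: {8 / CHANNELS:.0f} TB capacity per channel (8 TB/socket)")  # 1
```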

POWER9 was publicly introduced at the Hot Chips conference in late August. Commentators writing in EE Times noted that POWER9 could become a breakout chip, seeding new OEM and accelerator partners and rejuvenating IBM’s efforts against Intel in high-end servers. To achieve that kind of performance IBM deploys large chunks of memory, including a 120 Mbyte embedded DRAM in shared L3 cache, while riding a 7 Tbit/second on-chip fabric. POWER9 should deliver as much as 2x the performance of POWER8 or more when the new chip arrives next year, according to Brian Thompto, a lead architect for the chip, in published reports.

As noted above, IBM will release four versions of POWER9. Two will use eight threads per core and 12 cores per chip, geared for IBM’s Power virtualization environment; two will use four threads per core and 24 cores per chip, targeting Linux. Each will come in two versions — one for two-socket servers with 8 DDR4 ports and another for multiple chips per server with buffered DIMMs.
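
One side note on those configurations: both core/thread variants multiply out to the same hardware thread count per chip. A trivial check:

```python
smt8 = 8 * 12   # 8 threads/core x 12 cores (PowerVM-oriented variant)
smt4 = 4 * 24   # 4 threads/core x 24 cores (Linux-oriented variant)
assert smt8 == smt4 == 96
print(f"Threads per chip, either way: {smt8}")
```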

The diversity of choices, according to Hot Chips observers, could help attract OEMs. IBM has been trying to encourage others to build POWER systems through its OpenPOWER group that now sports more than 200 members. So far, it’s gaining most interest from China where one partner plans to make its own POWER chips. The use of standard DDR4 DIMMs on some parts will lower barriers for OEMs by enabling commodity packaging and lower costs.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing here.




IBM’s DeepFlash 150 Completes Its Flash Lineup for Now

July 29, 2016

Two years ago DancingDinosaur wrote about new IBM Flash storage for the mainframe. That was about the DS8870, featuring 6-nines (99.9999) of availability and real-time-compression. Then this past May DancingDinosaur reported on another new IBM all-flash initiative, including the all-flash IBM DS8888 for the z, which also boasts 6-nines availability. Just this week IBM announced it is completing its flash lineup with the IBM DeepFlash 150, intended as a building block for SDS (software defined storage) infrastructures.

IBM DeepFlash 150, courtesy of IBM

As IBM reports, the DeepFlash 150 does not use conventional solid-state drives (SSD). Instead, it relies on a systems-level approach that enables organizations to manage much larger data sets without having to manage individual SSD or disk drives. DeepFlash 150 comes complete with all the hardware necessary for enterprise and hyper-scale storage, including up to 64 purpose-engineered flash cards in a 3U chassis and 12-Gbps SAS connectors for up to eight host servers. The wide range of IBM Spectrum Storage and other SDS solutions available for DeepFlash 150 provides flash-optimized scale out and management along with large capacity for block, file and object storage.

The complication for z System shops is that you access the DeepFlash 150 through IBM Spectrum Scale. Apparently you can’t just plug the DeepFlash 150 into the z the way you would plug in the all flash DS8888. IBM Spectrum Scale works with Linux on z Systems servers or IBM LinuxONE systems running RHEL or SLES. Check out the documentation here.

As IBM explains in the Redbook titled IBM Spectrum Scale (GPFS) for Linux on z Systems: IBM Spectrum Scale provides a highly available clustering solution that augments the strengths of Linux on z by helping the z data center control costs and achieve higher levels of quality of service. Spectrum Scale, based on IBM General Parallel File System (GPFS) technology, is a high performance shared-disk file management solution that provides fast, reliable access to data from multiple nodes in a cluster environment. Spectrum Scale also allows data sharing in a mixed platform environment, which can provide benefits in cloud or analytics environments by eliminating the need to transfer data across platforms. When it comes to the DeepFlash 150, IBM is thinking about hyperscale data centers.

Hyperscale data centers can’t absorb the costs of constructing, managing, maintaining and cooling massive hyperscale environments that use conventional mechanical storage, according to IBM. Those costs are driving the search for storage with a smaller physical footprint, lower costs, greater density, and, of course, much higher performance.

Enter DeepFlash 150, which introduces what IBM considers breakthrough economics for active data sets. The basic DeepFlash 150 hardware platform is priced under $1/GB. For big data deployments IBM recommends IBM Spectrum Scale with DeepFlash 150, providing customers with the overlying storage services and functionality critical for optimization of their big data workloads.

But even at $1/GB DeepFlash 150 isn’t going to come cheap. For starters consider how many gigabytes are in the terabytes or petabytes you will want to install. You can do the math. Even at $1/GB this is going to cost. Then you will need IBM Spectrum Scale. With DeepFlash 150 IBM did achieve extreme density of up to 170TB per rack unit, which adds up to a maximum 7PB of flash in a single rack enclosure.
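
Since the post invites you to do the math, here is a rough sketch using the quoted figures; the $1/GB price and 170TB-per-rack-unit density come from IBM, while the 42U rack size is an assumption for illustration:

```python
PRICE_PER_GB = 1.0          # USD, IBM's quoted upper bound for the hardware
GB_PER_PB = 1_000_000       # decimal units, as storage vendors count

print(f"1 PB at $1/GB: ${PRICE_PER_GB * GB_PER_PB:,.0f}")         # $1,000,000

TB_PER_U = 170              # IBM's quoted density per rack unit
RACK_UNITS = 42             # assumed standard full-height rack
print(f"{TB_PER_U} TB/U x {RACK_UNITS}U = "
      f"{TB_PER_U * RACK_UNITS / 1000:.1f} PB per rack")           # ~7.1 PB
```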

IBM Spectrum Scale and the DeepFlash 150 are intended to support a wide range of file, object and Hadoop Distributed File System (HDFS) analytics workloads. According to IBM, as a true SDS solution IBM Spectrum Scale can utilize any appropriate hardware and is designed specifically to maximize the benefits of hyperscale storage systems like DeepFlash 150. Using a scale-out architecture, IBM Spectrum Scale can add servers or multiple storage types and incorporate them automatically into a single managed resource to maximize performance, efficiency, and data protection.

Although DeepFlash 150 can be used with a private cloud, IBM seems to be thinking more in terms of hybrid clouds. To address today’s need for seamlessly integrating high-performance enterprise storage such as DeepFlash 150 with the nearly unlimited resources and capabilities of the cloud, IBM Spectrum Scale offers transparent cloud tiering to place data on cloud-based object storage or in a public cloud service. As IBM explains, the transparent cloud tiering feature of IBM Spectrum Scale can connect on-premises storage such as DeepFlash 150 directly to object storage or a commodity-priced cloud service. This allows enterprises to simultaneously leverage the economic, collaboration, and scale benefits of both on-premises and cloud storage while providing a single, powerful view of all data assets.

A TechTarget report on enterprise flash storage profiled 15 flash storage product lines. In general, the products claim maximum read IOPS ranging from 200,000 to 9 million, peak read throughput from 2.4 GBps to 46 GBps, and read latencies from 50 microseconds to 1 millisecond. The guide comes packed with a ton of caveats. And that’s why DancingDinosaur doesn’t think the DeepFlash 150 is the end of IBM’s flash efforts. Stay tuned.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing here.


IBM 2Q 2016 Report—Where’s z System and POWER?

July 22, 2016

“IBM continues to establish itself as the leading cognitive solutions and cloud platform company,” said Ginni Rometty, IBM chairman, president and chief executive officer, in a statement accompanying the latest IBM 2Q financial report. The strategic imperatives grew: second-quarter revenues from its cloud, analytics, and engagement units increased 12 percent year to year.


IBM Quantum Experience delivered via Cloud (Jon Simon/Feature Photo Service for IBM)

Where’s z and POWER? The z and POWER platforms continued to flounder: revenues of $2.0 billion, down 23.2 percent. Revenue reflects z Systems product cycle dynamics; gross profit margin improved in both z Systems and Power. “Product cycle dynamics” refers to the lack of a new z. In the past year IBM introduced the new LinuxONE and, more recently, a new z13s, essentially what used to be known as a Business Class mainframe.

There is no hint, however, of a new z, a z14 that will drive product dynamics upward. IBM showed a POWER roadmap going all the way out to the POWER10 in 2020 but nothing comparable for the z.

DancingDinosaur, a longtime big iron bigot, remains encouraged by IBM’s focus on its strategic initiatives and statements like this: “And we continue to invest for growth with recent breakthroughs in quantum computing, Internet of Things and blockchain solutions for the IBM Cloud.” IBM strategic initiatives in cloud, mobile, IoT, and blockchain will drive new use of the mainframe, especially as the projected volumes of things, transactions, users, and devices skyrocket.

Second-quarter revenues from the company’s strategic imperatives — cloud, analytics and engagement — increased 12 percent year to year.  Cloud revenues (public, private and hybrid) for the quarter increased 30 percent.  Cloud revenue over the trailing 12 months was $11.6 billion.  The annual run rate for cloud as-a-service revenue — a subset of total cloud revenue — increased to $6.7 billion from $4.5 billion in the second quarter of 2015.  Revenues from analytics increased 5 percent.  Revenues from mobile increased 43 percent and from security increased 18 percent.

IBM indirectly is trying to boost the z and the cloud. CSC and IBM announced an alliance in which IBM will provide CSC Cloud Managed Services for z Systems. CSC already includes IBM SoftLayer as part of its “Service-enabled Enterprise” strategy. “Cloud for z” extends that offering and will be of interest to current and potential mainframe customers in healthcare, insurance, and finance. CSC still sees life in the managed mainframe market, and IBM Global Technology Services, a competitor to CSC, apparently is happy to let them sell managed cloud services for mainframes. All this is taking place as IBM scrambles to secure a leadership share of cloud revenue, and any cloud billing CSC brings will help.

Microsoft, like IBM, claimed big cloud momentum on its fourth quarter conference call, according to a report in Fortune Magazine. It was enough to send Microsoft share price up 4% at one point in after hours trading.

As Fortune notes, for Microsoft as for IBM and other legacy IT providers like Oracle, putting up big cloud numbers is mandatory as more companies change the way they buy IT products. Instead of purchasing hundreds or thousands of new servers or storage boxes every few years, more companies are running their software and storing their data on shared public cloud infrastructure, like Microsoft Azure, Amazon Web Services, the Google Compute Platform, or the IBM Cloud.

For reporting purposes, Microsoft combines Azure with other products in its intelligent cloud product segment. Overall, that segment’s revenue grew about 7% year over year to $6.7 billion from about $6.3 billion.

Oracle, too, is facing the same scramble to establish an enterprise cloud presence. Cloud software as a service (SaaS) and platform as a service (PaaS) revenues were $690 million, up 66% in U.S. dollars. Total Cloud revenues, including infrastructure as a service (IaaS), were $859 million, up 49% in U.S. dollars. At the same time, Oracle’s hardware revenue fell by 7% to $1.3 billion, and its software license revenue fell by 2% to $7.6 billion.

“We added more than 1,600 new SaaS customers and more than 2,000 new PaaS customers in Q4” (which ended in June), said Oracle CEO, Mark Hurd. “In Fusion ERP alone, we added more than 800 new cloud customers. Today, Oracle has nearly 2,600 Fusion ERP customers in the Oracle Public Cloud — that’s ten-times more cloud ERP customers than Workday.”

Hewlett Packard Enterprise (HPE) is the last of the big public enterprise platform vendors, along with IBM and Oracle. (Dell is private and acquired EMC). HPE recently reported its best quarter in years. Second quarter net revenue of $12.7 billion, up 1% from the prior-year period. “Today’s results represent our best performance since I joined in 2011,” said Meg Whitman, president and chief executive officer, Hewlett Packard Enterprise. The businesses comprising HPE grew revenue over the prior-year period on an as-reported basis for the first time in five years.

IBM needs to put up some positive numbers. Seventeen consecutive losing quarters is boring. Wouldn’t it be exciting if a turnaround started with a new enterprise z14?

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing here.


Oracle Aims at Intel and IBM POWER

July 8, 2016

In late June Oracle announced the SPARC S7 processor, a new 20nm, 4.27 GHz, 8-core/64-thread SPARC processor targeted for scale-out Cloud workloads that usually go to Intel x86 servers. These are among the same workloads IBM is aiming for with POWER8, POWER9, and eventually POWER10, as reported by DancingDinosaur just a couple of weeks ago.


Oracle 5-year SPARC trajectory (does not include newly announced S series).

According to Oracle, the latest additions to the SPARC platform are built on the new 4.27 GHz, 8-core/64-thread SPARC S7 microprocessor with what Oracle calls Software-in-Silicon features such as Silicon Secured Memory and Data Analytics Accelerators, which enable organizations to run applications of all sizes on the SPARC platform at commodity price points. All existing commercial and custom applications will also run on the new SPARC enterprise cloud services and solutions unchanged while experiencing improvements in security, efficiency, and simplicity.

By comparison, the IBM POWER platform includes the POWER8, delivered as a 12-core, 22nm processor. The POWER9, expected in 2017, will be delivered as a 14nm processor with 24 cores and CAPI and NVLink accelerators, which ensure delivery of more performance with greater energy efficiency. By 2018, the IBM roadmap shows POWER8/9 as a 10nm, maybe even a 7nm, processor, based on the existing micro-architecture. And an even beefier POWER10 is expected to arrive around 2020.

At the heart of Oracle’s new scale-out, commodity-priced server is the S7. According to Oracle, the SPARC S7 delivers balanced compute performance with 8 cores per processor, integrated on-chip DDR4 memory interfaces, a PCIe controller, and coherency links. The cores in the SPARC S7 are optimized for running key enterprise software, including Java applications and database. The SPARC S7–based servers use very high levels of integration that increase bandwidth, reduce latencies, simplify board design, reduce the number of components, and increase reliability, according to Oracle. All this promises an increase in system efficiency with a corresponding improvement in the economics of deploying a scale-out infrastructure when compared to other vendor solutions.

Oracle’s SPARC S7 processor, based on Oracle enterprise class M7 servers, is optimized for horizontally scalable systems with all the key functionality included in the microprocessor chip. Its Software-in-Silicon capabilities, introduced with the SPARC M7 processor, are also available in the SPARC S7 processor to enable improved data protection, cryptographic acceleration, and analytics performance. These features include Security-in-Silicon, which provides Silicon Secured Memory and cryptographic acceleration, and Data Analytics Accelerator (DAX) units, which provide in-memory query acceleration and in-line decompression.

SPARC S7 processor–based servers include single- and dual-processor systems that are complementary to the existing mid-range and high-end systems based on Oracle’s SPARC M7 processor. SPARC S7 processor–based servers include two rack-mountable models. The SPARC S7-2 server uses a compact 1U chassis, and the SPARC S7-2L server is implemented in a larger, more expandable 2U chassis. Uniformity of management interfaces and the adoption of standards also should help reduce administrative costs, while the chassis design provides density, efficiency, and economy as increasingly demanded by modern data centers. Published reports put the cost of the new Oracle systems at just above $11,000 with a single processor, 64GB of memory and two 600GB disk drives, and up to about $50,000 with two processors and a terabyte of memory.

DancingDinosaur doesn’t really have enough data to compare the new Oracle system with the new POWER8 and upcoming POWER9 systems. Neither Oracle nor IBM has provided sufficient details. Oracle doesn’t even offer a roadmap at this point, which might tell you something.

What we do know about the POWER machines is this: POWER9 promises a wealth of improvements in speeds and feeds. Although intended to serve the traditional Power Server market, it also is expanding its analytics capabilities and is being optimized for new deployment models like hyperscale, cloud, and technical computing through scale-out deployment. Available in either clustered or multiple formats, it will feature a shorter pipeline, improved branch execution, and low-latency on-die cache as well as PCIe Gen 4.

According to IBM, you can expect a 3x bandwidth improvement with POWER9 over POWER8 and a 33% speed increase. POWER9 also will continue to speed hardware acceleration and support next gen NVLink, improved coherency, enhanced CAPI, and a new 25 Gbps high speed link. Although the 2-socket chip will remain, IBM suggests larger socket counts are coming. It will need that to compete with Intel.

At least IBM showed its POWER roadmap. There is no comparable information from Oracle. At best, DancingDinosaur was able to dig up the following sketchy details for 2017-2019: for 2017, a next-gen core, Software-in-Silicon V1, and scale-out fully integrated Software-in-Silicon V1 or V2; for 2018-2019, core enhancements, increased cache, increased bandwidth, and Software-in-Silicon V3.

Both Oracle and IBM have made it clear neither really wants to compete in the low cost, scale-out server market. However, as both companies’ large clients turn to scale-out, hyperscale Intel-based systems, they have no choice but to follow the money. With the OpenPOWER Foundation growing and driving innovation, mainly in the form of accelerators, IBM POWER may have an advantage driving a very competitive price/performance story against Intel. With the exception of Fujitsu as an ally of sorts, Oracle has no comparable ecosystem as far as DancingDinosaur can tell.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing here.


IBM Fires a Shot at Intel with its Latest POWER Roadmap

June 17, 2016

In case you worry that IBM will abandon hardware in the pursuit of its strategic initiatives focusing on cloud, mobile, analytics and more; well, stop worrying. With the announcement of its POWER Roadmap at the OpenPOWER Summit earlier this spring, it appears POWER will be around for years to come. But IBM is not abandoning the strategic initiatives either; the new Roadmap promises to support new types of workloads, such as real time analytics, Linux, hyperscale data centers, and more along with support for the current POWER workloads.


Pictured above: POWER9 Architecture, courtesy of IBM

Specifically, IBM is offering a denser roadmap, not tied to a single technology and not even tied solely to IBM. It draws on innovations from a handful of the members of the OpenPOWER Foundation as well as support from Google. The new roadmap also signals IBM’s intention to make a serious run at Intel’s near monopoly on enterprise server processors by offering comparable or better price, performance, and features.

Google, for example, reports porting many of its popular web services to run on Power systems; its toolchain has been updated to output code for x86, ARM, or Power architectures with the flip of a configuration flag. Google, which strives to be everything to everybody, now has a highly viable alternative to Intel in terms of performance and price with POWER. At the OpenPOWER Summit early in the spring, Google made it clear it plans to build scale-out server solutions based on OpenPower.

Don’t even think, however, that Google is abandoning Intel. The majority of its systems are Intel-oriented. Still, POWER and the OpenPOWER community will provide a directly competitive processing alternative. To underscore the situation, Google and Rackspace announced they were working together on POWER9 server blueprints for the Open Compute Project, designs that reportedly are compatible with the 48V Open Compute racks Google and Facebook, another hyperscale data center operator, already are working on.

Google represents another proof point that OpenPOWER is ready for hyperscale data centers. DancingDinosaur, however, is most interested in what is coming from OpenPOWER that is new and sexy for enterprise data centers, where most DancingDinosaur readers are focused. Of course, they still need ever better performance and scalability too. In that regard OpenPOWER has much for them in the works.

For starters, POWER8 is currently delivered as a 12-core, 22nm processor. POWER9, expected in 2017, will be delivered as a 14nm processor with 24 cores and CAPI and NVLink accelerators. That is sure to deliver more performance with greater energy efficiency. By 2018, the IBM roadmap shows POWER8/9 as a 10nm, maybe even 7nm, processor, based on the existing micro-architecture.

The real POWER future, arriving around 2020, will feature a new micro-architecture, sport new features and functions, and bring new technology. Expect much, if not almost all, of the new functions to come from various OpenPOWER Foundation partners.

POWER9, only a year or so out, promises a wealth of improvements in speeds and feeds. Although intended to serve the traditional Power Server market, it also is expanding its analytics capabilities and bringing new deployment models for hyperscale, cloud, and technical computing through scale-out deployment. This will include deployment in either clustered or multiple formats. It will feature a shorter pipeline, improved branch execution, and low-latency on-die cache as well as PCIe Gen 4.

Expect a 3x bandwidth improvement with POWER9 over POWER8 and a 33% speed increase. POWER9 also will continue to speed hardware acceleration and support next gen NVLink, improved coherency, enhanced CAPI, and a new 25 Gbps high speed link. Although the 2-socket chip will remain, IBM suggests larger socket counts are coming. It will need that to compete with Intel.

As a data center manager, will a POWER9 machine change your data center dynamics? Maybe, you decide: a dual-socket POWER9 server with 32 DDR4 memory slots, two NVLink slots, three PCIe Gen 4 x16 slots, and a total of 44 cores. That’s a lot of computing power in one rack.

Now IBM just has to crank out similar advances for the next z System (a z14 maybe?) through the Open Mainframe Project.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing here.


IBM Advances SSD with Phase-Change Memory Breakthrough

May 20, 2016

Facing an incessant demand to speed data through computers, the latest IBM storage memory advance, announced earlier this week, will ratchet up the speed another notch or two. Scientists at IBM Research have demonstrated storing 3 bits of data per cell using phase-change memory (PCM). Until now, PCM had been tried but had never caught on for a variety of reasons. By storing 3 bits per cell, IBM can boost PCM capacity and speed and lower the cost.


IBM multi-bit PCM chip connected to a standard integrated circuit board.

Pictured above, the chip consists of a 2 × 2 Mcell array with a 4-bank interleaved architecture, IBM explained. The memory array size is 2 × 1000 μm × 800 μm. The PCM cells are based on doped-chalcogenide alloy and were integrated into the prototype chip serving as a characterization vehicle in 90 nm CMOS baseline technology.

Although PCM has been around for some years, only with this latest advance is it attracting the industry’s attention as a potential universal memory technology based on its combination of read/write speed, endurance, non-volatility, and density. Specifically, PCM doesn’t lose data when powered off, unlike DRAM, and the technology can endure at least 10 million write cycles, compared to an average flash USB stick, which tops out at 3,000 write cycles. Primary use cases will be capturing massive volumes of data expected from mobile devices and the Internet of Things.

PCM, in effect, adds another tier to the storage/memory hierarchy, coming in between DRAM and Flash at the upper levels of the storage performance pyramid. The IBM researchers envision both standalone PCM and hybrid applications, which combine PCM and flash storage together. For example, PCM can act as an extremely fast cache by storing a mobile phone’s operating system and enabling it to launch in seconds. For enterprise data centers, IBM envisions entire databases could be stored in PCM for blazing fast query processing of time-critical online applications, such as financial transactions.

As reported by CNET, PCM fits neatly between DRAM and flash. DRAM is 5-10x faster at retrieving data than PCM, while PCM is about 70x faster than flash. IBM reportedly expects PCM to be cheaper than DRAM, eventually becoming as cheap as flash (of course flash keeps getting cheaper too). PCM’s ability to hold three bits of data rather than two bits, PCM’s previous best, enables packing more data into a chip, which lowers the cost of PCM storage and boosts its competitive position against technologies like flash and DRAM.
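
To put those ratios in concrete terms, here is a rough sketch that anchors the hierarchy to a typical flash read latency of about 100 microseconds; that anchor is an assumption for illustration, since actual devices vary widely:

```python
# Implied latency hierarchy from the ratios reported above.
flash_us = 100.0                  # assumed typical flash read latency
pcm_us = flash_us / 70            # PCM ~70x faster than flash
dram_fast, dram_slow = pcm_us / 10, pcm_us / 5   # DRAM 5-10x faster than PCM

print(f"Flash: {flash_us:.0f} us")
print(f"PCM:   {pcm_us:.2f} us")                     # ~1.4 us
print(f"DRAM:  {dram_fast:.2f}-{dram_slow:.2f} us")  # ~0.14-0.29 us
```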

“Phase change memory is the first instantiation of a universal memory with properties of both DRAM and flash, thus answering one of the grand challenges of our industry,” wrote Haris Pozidis, key researcher and manager of non-volatile memory research at IBM Research, in the published announcement. “Reaching 3 bits per cell is a significant milestone because at this density the cost of PCM will be significantly less than DRAM and closer to flash.”

IBM explains how PCM works: PCM materials exhibit two stable states, the amorphous (without a clearly defined structure) and crystalline (with structure) phases, of low and high electrical conductivity, respectively. In digital systems, data is stored as a 0 or a 1. To store a 0 or a 1 on a PCM cell, a high or medium electrical current is applied to the material. A 0 can be programmed to be written in the amorphous phase or a 1 in the crystalline phase, or vice versa. Then to read the bit back, a low voltage is applied.
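
Extending that scheme from one bit to three means programming one of 2^3 = 8 distinguishable conductance levels per cell and reading the level back against thresholds. The sketch below is purely illustrative, not IBM's actual drift-immune scheme (described next), and its helper names are hypothetical:

```python
BITS_PER_CELL = 3
LEVELS = 2 ** BITS_PER_CELL        # 8 distinct conductance levels

def program(bits: int) -> float:
    """Map a 3-bit value to a normalized conductance level in [0, 1]."""
    assert 0 <= bits < LEVELS
    return bits / (LEVELS - 1)

def read(level: float) -> int:
    """Recover the 3-bit value by rounding to the nearest programmed level."""
    return min(LEVELS - 1, max(0, round(level * (LEVELS - 1))))

# A small conductance drift still decodes correctly; larger drift would not,
# which is why IBM needed drift-immune metrics and drift-tolerant coding.
for value in range(LEVELS):
    assert read(program(value) + 0.02) == value
print(f"All {LEVELS} levels decode correctly despite small drift")
```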

To achieve multi-bit storage IBM scientists have developed two innovative enabling technologies: 1) a set of drift-immune cell-state metrics and 2) drift-tolerant coding and detection schemes. These new cell-state metrics measure a physical property of the PCM cell that remains stable over time, and are thus insensitive to drift, which affects the stability of the cell’s electrical conductivity with time. The other measures provide additional robustness of the stored data. As a result, the cell state can be read reliably over long time periods after the memory is programmed, thus offering non-volatility.

Combined, these advancements address the key challenges of multi-bit PCM—drift, variability, temperature sensitivity and endurance cycling, according to IBM. From there, the experimental multi-bit PCM chip used by IBM scientists is connected to a standard integrated circuit board.

Expect to see PCM first in Power Systems. At the 2016 OpenPOWER Summit in San Jose, CA, last month, IBM scientists demonstrated PCM attached to POWER8-based servers (made by IBM and TYAN® Computer Corp.) via the CAPI (Coherent Accelerator Processor Interface) protocol, which speeds the data to storage or memory. This technology leverages the low latency and small access granularity of PCM, the efficiency of the OpenPOWER architecture, and the efficiency of the CAPI protocol, an example of the OpenPOWER Foundation in action. Pozidis suggested PCM could be ready by 2017; maybe, but don’t bet on it. IBM still needs to line up chip makers to produce it in commercial quantities, among other things.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing here.



IBM Drives Platforms to the Cloud

April 29, 2016

IBM hasn’t been shy about its shift of focus from platforms and systems to cloud, mobile, analytics, and cognitive computing. But it didn’t hit home until last week’s release of 1Q2016 financials, which mentioned the z System just once. For the quarter IBM systems hardware and operating systems software revenues (lumped into one category, almost an afterthought) rang up $1.7 billion, down 21.8 percent.

This is ugly, and DancingDinosaur isn’t even a financial analyst. After the z System showed attractive revenue growth through all of 2015, suddenly it’s part of a loss. You can’t even find the actual numbers for z or Power in the new report format. As IBM notes: the company has revised its financial reporting structure to reflect the transformation of the business and provide investors with increased visibility into the company’s operating model by disclosing additional information on its strategic imperatives revenue by segment. BTW, IBM did introduce new advanced storage this week, which was part of the Systems Hardware loss too. DancingDinosaur will take up the storage story here next week.


But the 1Q2016 report was last week. To further emphasize its shift, IBM this week announced that it was boosting support of OpenStack’s RefStack project, which is intended to advance common language between clouds and facilitate interoperability across clouds. DancingDinosaur applauds that, but if you are a z data center manager you better take note that the z, along with all the IBM platforms, mainly Power and storage, is being pushed to the back of the bus behind IBM’s strategic imperatives.

DancingDinosaur supports the strategic initiatives, and you can throw blockchain and IoT in with them too. These initiatives will ultimately save the mainframe data center. All the transactions and data swirling around and through these initiatives eventually need to land in a safe, secure, utterly reliable place where they can be processed in massive volume, kept accessible, highly available, and protected for subsequent use, for compliance, and for a variety of other purposes. That place most likely will be the z data center. It might be on premise or in the cloud, but if organizations need rock solid transaction performance, security, availability, scalability, and such they will want the z, which will do it better and be highly price competitive. In short, the z data center provides the ideal back end for all the various activities going on through IBM’s strategic initiatives.

The z also has a clear connection to OpenStack. Two years ago IBM announced expanding its support of open technologies by providing advanced OpenStack integration and cloud virtualization and management capabilities across IBM’s entire server portfolio through IBM Cloud Manager with OpenStack. According to IBM, Cloud Manager with OpenStack will provide support for the latest OpenStack release, dubbed Icehouse at that time, and full access to the complete core OpenStack API set to help organizations ensure application portability and avoid vendor lock-in. It also extends cloud management support to the z, in addition to Power Systems, PureFlex/Flex Systems, System x (which was still around then)  or any other x86 environment. It also would provide support for IBM z/VM on the z, and PowerVC for PowerVM on Power Systems to add more scalability and security to its Linux environments.

At the same time IBM also announced it was beta testing a dynamic, hybrid cloud solution on the IBM Cloud Manager with OpenStack platform. That would allow workloads requiring additional infrastructure resources to expand from an on premise cloud to remote cloud infrastructure.  Since that announcement, IBM has only gotten more deeply enamored with hybrid clouds.  Again, the z data center should have a big role as the on premise anchor for hybrid clouds.

With the more recent announcement, RefStack, officially launched last year and to which IBM is the lead contributor, becomes a critical pillar of IBM’s commitment to ensuring an open cloud – helping to advance the company’s long-term vision of mitigating vendor lock-in and enabling developers to use the best combination of cloud services and APIs for their needs. The new functionality includes improved usability, stability, and other upgrades, ensuring better cohesion and integration of any cloud workloads running on OpenStack.

RefStack testing ensures core operability across the OpenStack ecosystem, and passing RefStack is a prerequisite for all OpenStack certified cloud platforms. By working on cloud platforms that are OpenStack certified, developers will know their workloads are portable across IBM Cloud and the OpenStack community. For now RefStack acts as the primary resource for cloud providers to test OpenStack compatibility; it also maintains a central repository and API for test data, allowing community members visibility into interoperability across OpenStack platforms.

One way or another, your z data center will have to coexist with hybrid clouds and the rest of IBM’s strategic imperatives or face being displaced. With RefStack and the other OpenStack tools this should not be too hard. In the meantime, prepare your z data center for new incoming traffic from the strategic imperatives, Blockchain, IoT, Cognitive Computing, and whatever else IBM deems strategic next.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing here.
