Posts Tagged ‘Power Systems’

IBM’s DeepFlash 150 Completes Its Flash Lineup for Now

July 29, 2016

Two years ago DancingDinosaur wrote about new IBM Flash storage for the mainframe. That was about the DS8870, featuring 6-nines (99.9999) of availability and real-time-compression. Then this past May DancingDinosaur reported on another new IBM all-flash initiative, including the all-flash IBM DS8888 for the z, which also boasts 6-nines availability. Just this week IBM announced it is completing its flash lineup with the IBM DeepFlash 150, intended as a building block for SDS (software defined storage) infrastructures.

IBM DeepFlash 150, courtesy of IBM

As IBM reports, the DeepFlash 150 does not use conventional solid-state drives (SSD). Instead, it relies on a systems-level approach that enables organizations to manage much larger data sets without having to manage individual SSD or disk drives. DeepFlash 150 comes complete with all the hardware necessary for enterprise and hyper-scale storage, including up to 64 purpose-engineered flash cards in a 3U chassis and 12-Gbps SAS connectors for up to eight host servers. The wide range of IBM Spectrum Storage and other SDS solutions available for DeepFlash 150 provides flash-optimized scale out and management along with large capacity for block, file and object storage.

The complication for z System shops is that you access the DeepFlash 150 through IBM Spectrum Scale. Apparently you can’t just plug the DeepFlash 150 into the z the way you would plug in the all flash DS8888. IBM Spectrum Scale works with Linux on z Systems servers or IBM LinuxONE systems running RHEL or SLES. Check out the documentation here.

As IBM explains in the Red Book titled IBM Spectrum Scale (GPFS) for Linux on z Systems: IBM Spectrum Scale provides a highly available clustering solution that augments the strengths of Linux on z by helping the z data center control costs and achieve higher levels of quality of service. Spectrum Scale, based on IBM General Parallel File System (GPFS) technology, is a high performance shared-disk file management solution that provides fast, reliable access to data from multiple nodes in a cluster environment. Spectrum Scale also allows data sharing in a mixed platform environment, which can provide benefits in cloud or analytics environments by eliminating the need of transferring data across platforms. When it comes to the DeepFlash 150 IBM is thinking about hyperscale data centers.

Hyperscale data centers can’t absorb the costs of constructing, managing, maintaining and cooling massive hyperscale environments that use conventional mechanical storage, according to IBM. Those costs are driving the search for storage with a smaller physical footprint, lower costs, greater density, and, of course, much higher performance.

Enter DeepFlash 150, which introduces what IBM considers breakthrough economics for active data sets. The basic DeepFlash 150 hardware platform is priced under $1/GB. For big data deployments IBM recommends IBM Spectrum Scale with DeepFlash 150, providing customers with the overlying storage services and functionality critical for optimization of their big data workloads.

But even at $1/GB DeepFlash 150 isn’t going to come cheap. For starters consider how many gigabytes are in the terabytes or petabytes you will want to install. You can do the math. Even at $1/GB this is going to cost. Then you will need IBM Spectrum Scale. With DeepFlash 150 IBM did achieve extreme density of up to 170TB per rack unit, which adds up to a maximum 7PB of flash in a single rack enclosure.
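For perspective, here is some rough Python arithmetic using the figures IBM quotes above: $1/GB, 170TB per rack unit, and 7PB per rack. The decimal conversion (1TB = 1,000GB) is an assumption for illustration, and the result covers the hardware only, not Spectrum Scale licensing.

```python
# Rough sizing math for DeepFlash 150, using the figures quoted above.
# Assumes decimal units (1 TB = 1,000 GB) purely for illustration.

PRICE_PER_GB = 1.00          # USD, IBM's "under $1/GB" hardware figure
TB_PER_RACK_UNIT = 170       # IBM's quoted density
MAX_PB_PER_RACK = 7          # maximum flash in a single rack enclosure

def cost_for_capacity(terabytes: float) -> float:
    """Hardware-only cost estimate at $1/GB; excludes Spectrum Scale licensing."""
    return terabytes * 1_000 * PRICE_PER_GB

for tb in (TB_PER_RACK_UNIT, 1_000, MAX_PB_PER_RACK * 1_000):  # 1 rack unit, 1 PB, full rack
    print(f"{tb:>6,} TB -> ${cost_for_capacity(tb):>12,.0f}")

# A full 7 PB rack works out to roughly $7 million in flash hardware alone,
# which is why even at $1/GB "this is going to cost."
```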

IBM Spectrum Scale and the DeepFlash 150 are intended to support a wide range of file, object and Hadoop Distributed File System (HDFS) analytics workloads. According to IBM, as a true SDS solution IBM Spectrum Scale can utilize any appropriate hardware and is designed specifically to maximize the benefits of hyper-scale storage systems like DeepFlash 150. Using a scale-out architecture, IBM Spectrum Scale can add servers or multiple storage types and incorporate them automatically into a single managed resource to maximize performance, efficiency, and data protection.

Although DeepFlash 150 can be used with a private cloud IBM seems to be thinking more in terms of hybrid clouds. To address today’s need for seamlessly integrating high-performance enterprise storage such as DeepFlash 150 with the nearly unlimited resources and capabilities of the cloud, IBM Spectrum Scale offers transparent cloud tiering to place data on cloud-based object storage or in a public cloud service. As IBM explains, the transparent cloud tiering feature of IBM Spectrum Scale can connect on-premises storage such as DeepFlash 150 directly to object storage or a commodity-priced cloud service. This allows enterprises to simultaneously leverage the economic, collaboration, and scale benefits of both on-premises and cloud storage while providing a single, powerful view of all data assets.

A Tech Target report on enterprise flash storage profiled 15 flash storage product lines. In general, the products claim maximum read IOPS ranging from 200,000 to 9 million, peak read throughput from 2.4 GBps to 46 GBps, and read latencies from 50 microseconds to 1 millisecond. The guide comes packed with a ton of caveats. And that’s why DancingDinosaur doesn’t think the DeepFlash 150 is the end of IBM’s flash efforts. Stay tuned.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM 2Q 2016 Report—Where’s z System and POWER?

July 22, 2016

“IBM continues to establish itself as the leading cognitive solutions and cloud platform company,” said Ginni Rometty, IBM chairman, president and chief executive officer, in a statement accompanying the latest IBM 2Q financial report. The strategic imperatives grew: second-quarter revenues from the cloud, analytics, and engagement units increased 12 percent year to year.

IBM Quantum Experience delivered via the cloud, IBM Quantum Computing Lab, Yorktown Heights, NY, April 29, 2016 (Jon Simon/Feature Photo Service for IBM)

Where’s z and POWER? The z and POWER platforms continued to flounder: revenues of $2.0 billion, down 23.2 percent. Revenue reflects z Systems product cycle dynamics; gross profit margin improved in both z Systems and Power. “Product cycle dynamics” refers to the lack of a new z. In the past year IBM introduced the new LinuxONE and, more recently, a new z13s, essentially what used to be known as a Business Class mainframe.

There is no hint, however, of a new z, a z14 that will drive product dynamics upward. IBM showed a POWER roadmap going all the way out to the POWER10 in 2020 but nothing comparable for the z.

DancingDinosaur, a longtime big iron bigot, remains encouraged by IBM’s focus on its strategic initiatives and statements like this: “And we continue to invest for growth with recent breakthroughs in quantum computing, Internet of Things and blockchain solutions for the IBM Cloud.” IBM strategic initiatives in cloud, mobile, IoT, and blockchain will drive new use of the mainframe, especially as the projected volumes of things, transactions, users, and devices skyrocket.

Second-quarter revenues from the company’s strategic imperatives — cloud, analytics and engagement — increased 12 percent year to year.  Cloud revenues (public, private and hybrid) for the quarter increased 30 percent.  Cloud revenue over the trailing 12 months was $11.6 billion.  The annual run rate for cloud as-a-service revenue — a subset of total cloud revenue — increased to $6.7 billion from $4.5 billion in the second quarter of 2015.  Revenues from analytics increased 5 percent.  Revenues from mobile increased 43 percent and from security increased 18 percent.
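A quick bit of Python confirms what those quoted figures imply; the second calculation (implied prior-year trailing cloud revenue) is DancingDinosaur’s arithmetic, not a number IBM reported.

```python
# Quick check of the growth implied by IBM's quoted cloud figures (2Q 2016).
prior_run_rate = 4.5    # $B, as-a-service annual run rate, 2Q 2015
current_run_rate = 6.7  # $B, as-a-service annual run rate, 2Q 2016

growth = (current_run_rate - prior_run_rate) / prior_run_rate
print(f"As-a-service run rate growth: {growth:.0%}")   # roughly 49% year over year

# Trailing-12-month cloud revenue of $11.6B against 30% total cloud growth
# implies prior-year trailing cloud revenue of about $8.9B (derived, not reported).
prior_cloud = 11.6 / 1.30
print(f"Implied prior-year trailing cloud revenue: ${prior_cloud:.1f}B")
```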

IBM indirectly is trying to boost the z and the cloud. CSC and IBM announced an alliance in which IBM will provide CSC Cloud Managed Services for z Systems. CSC already includes IBM SoftLayer as part of its “Service-enabled Enterprise” strategy. “Cloud for z” extends that offering and will be of interest to current and potential mainframe customers in healthcare, insurance, and finance. CSC still sees life in the managed mainframe market, and IBM Global Technology Services, a competitor to CSC, apparently is happy to let them sell managed cloud services for mainframes. All this is taking place as IBM scrambles to secure a leadership share of cloud revenue, and any cloud billing CSC brings will help.

Microsoft, like IBM, claimed big cloud momentum on its fourth quarter conference call, according to a report in Fortune Magazine. It was enough to send Microsoft’s share price up 4% at one point in after-hours trading.

As Fortune notes, for Microsoft as for IBM and other legacy IT providers like Oracle, putting up big cloud numbers is mandatory as more companies change the way they buy IT products. Instead of purchasing hundreds or thousands of new servers or storage boxes every few years, more companies are running their software and storing their data on shared public cloud infrastructure, like Microsoft Azure, Amazon Web Services, the Google Compute Platform, or the IBM Cloud.

For reporting purposes, Microsoft combines Azure with other products in its intelligent cloud product segment. Overall, that segment’s revenue grew about 7% year over year to $6.7 billion from about $6.3 billion.

Oracle, too, is facing the same scramble to establish an enterprise cloud presence. Cloud software as a service (SaaS) and platform as a service (PaaS) revenues were $690 million, up 66% in U.S. dollars. Total Cloud revenues, including infrastructure as a service (IaaS), were $859 million, up 49% in U.S. dollars. At the same time, Oracle’s hardware revenue fell by 7% to $1.3 billion, and its software license revenue fell by 2% to $7.6 billion.

“We added more than 1,600 new SaaS customers and more than 2,000 new PaaS customers in Q4” (which ended in June), said Oracle CEO, Mark Hurd. “In Fusion ERP alone, we added more than 800 new cloud customers. Today, Oracle has nearly 2,600 Fusion ERP customers in the Oracle Public Cloud — that’s ten-times more cloud ERP customers than Workday.”

Hewlett Packard Enterprise (HPE) is the last of the big public enterprise platform vendors, along with IBM and Oracle. (Dell is private and acquired EMC.) HPE recently reported its best quarter in years. Second quarter net revenue was $12.7 billion, up 1% from the prior-year period. “Today’s results represent our best performance since I joined in 2011,” said Meg Whitman, president and chief executive officer, Hewlett Packard Enterprise. The businesses comprising HPE grew revenue over the prior-year period on an as-reported basis for the first time in five years.

IBM needs to put up some positive numbers. Seventeen consecutive losing quarters is boring. Wouldn’t it be exciting if a turnaround started with a new enterprise z14?

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

Oracle Aims at Intel and IBM POWER

July 8, 2016

In late June Oracle announced the SPARC S7 processor, a new 20nm, 4.27 GHz, 8-core/64-thread SPARC processor targeted for scale-out Cloud workloads that usually go to Intel x86 servers. These are among the same workloads IBM is aiming for with POWER8, POWER9, and eventually POWER10, as reported by DancingDinosaur just a couple of weeks ago.

Oracle 5-year SPARC trajectory (does not include newly announced S series).

According to Oracle, the latest additions to the SPARC platform are built on the new 4.27 GHz, 8-core/64-thread SPARC S7 microprocessor with what Oracle calls Software-in-Silicon features such as Silicon Secured Memory and Data Analytics Accelerators, which enable organizations to run applications of all sizes on the SPARC platform at commodity price points. All existing commercial and custom applications will also run on the new SPARC enterprise cloud services and solutions unchanged while experiencing improvements in security, efficiency, and simplicity.

By comparison, the IBM POWER platform starts with the POWER8, which is delivered as a 12-core, 22nm processor. The POWER9, expected in 2017, will be delivered as a 14nm processor with 24 cores and CAPI and NVlink accelerators, which ensure delivery of more performance with greater energy efficiency. By 2018, the IBM roadmap shows POWER8/9 as a 10nm, maybe even a 7nm, processor, based on the existing micro-architecture. And an even beefier POWER10 is expected to arrive around 2020.

At the heart of Oracle’s new scale-out, commodity-priced server is the S7 processor. According to Oracle, the SPARC S7 delivers balanced compute performance with 8 cores per processor, integrated on-chip DDR4 memory interfaces, a PCIe controller, and coherency links. The cores in the SPARC S7 are optimized for running key enterprise software, including Java applications and database. The SPARC S7–based servers use very high levels of integration that increase bandwidth, reduce latencies, simplify board design, reduce the number of components, and increase reliability, according to Oracle. All this promises an increase in system efficiency with a corresponding improvement in the economics of deploying a scale-out infrastructure when compared to other vendor solutions.

Oracle’s SPARC S7 processor, based on Oracle enterprise class M7 servers, is optimized for horizontally scalable systems with all the key functionality included in the microprocessor chip. Its Software-in-Silicon capabilities, introduced with the SPARC M7 processor, are also available in the SPARC S7 processor to enable improved data protection, cryptographic acceleration, and analytics performance. These features include Security-in-Silicon, which provides Silicon Secured Memory and cryptographic acceleration, and Data Analytics Accelerator (DAX) units, which provide in-memory query acceleration and in-line decompression.

SPARC S7 processor–based servers include single- and dual-processor systems that are complementary to the existing mid-range and high-end systems based on Oracle’s SPARC M7 processor. SPARC S7 processor–based servers include two rack-mountable models. The SPARC S7-2 server uses a compact 1U chassis, and the SPARC S7-2L server is implemented in a larger, more expandable 2U chassis. Uniformity of management interfaces and the adoption of standards also should help reduce administrative costs, while the chassis design provides density, efficiency, and economy as increasingly demanded by modern data centers. Published reports put the cost of the new Oracle systems at just above $11,000 with a single processor, 64GB of memory and two 600GB disk drives, and up to about $50,000 with two processors and a terabyte of memory.
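Working just from the published price points quoted above, a little Python gives a rough per-core and per-GB view of the two configurations; since the entry price is reported only as “just above $11,000,” treat these as approximations rather than Oracle list prices.

```python
# Rough per-core and per-GB arithmetic from the published S7 price points above.
# Core counts follow the 8-cores-per-processor figure in the announcement.
configs = {
    "SPARC S7, 1 processor, 64 GB": {"price": 11_000, "cores": 8,  "mem_gb": 64},
    "SPARC S7, 2 processors, 1 TB": {"price": 50_000, "cores": 16, "mem_gb": 1_024},
}

for name, c in configs.items():
    print(f"{name}: ${c['price'] / c['cores']:,.0f}/core, "
          f"${c['price'] / c['mem_gb']:,.0f}/GB memory")
```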

DancingDinosaur doesn’t really have enough data to compare the new Oracle system with the new POWER8 and upcoming POWER9 systems. Neither Oracle nor IBM has provided sufficient details. Oracle doesn’t even offer a roadmap at this point, which might tell you something.

What we do know about the POWER machines is this: POWER9 promises a wealth of improvements in speeds and feeds. Although intended to serve the traditional Power Server market, it also is expanding its analytics capabilities and is being optimized for new deployment models like hyperscale, cloud, and technical computing through scale-out deployment. Available for either clustered or multiple formats, it will feature a shorter pipeline, improved branch execution, and low latency on the die cache as well as PCI gen 4.

According to IBM, you can expect a 3x bandwidth improvement with POWER9 over POWER8 and a 33% speed increase. POWER9 also will continue to speed hardware acceleration and support next-gen NVlink, improved coherency, and enhanced CAPI, and will introduce a 25 Gbps high-speed link. Although the 2-socket chip will remain, IBM suggests larger socket counts are coming. It will need that to compete with Intel.

At least IBM showed its POWER roadmap. There is no comparable information from Oracle. At best, DancingDinosaur was able to dig up the following sketchy details: for 2017, a next-gen core, Software-in-Silicon V1, and scale-out fully integrated Software-in-Silicon V1 or V2; for 2018-2019, core enhancements, increased cache, increased bandwidth, and Software-in-Silicon V3.

Both Oracle and IBM have made it clear neither really wants to compete in the low cost, scale out server market. However, as both companies’ large clients turn to scale out, hyperscale Intel-based systems they have no choice but to follow the money. With the OpenPOWER Foundation growing and driving innovation, mainly in the form of accelerators, IBM POWER may have an advantage driving a very competitive price/performance story against Intel. With the exception of Fujitsu as an ally of sorts, Oracle has no comparable ecosystem as far as DancingDinosaur can tell.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM Fires a Shot at Intel with its Latest POWER Roadmap

June 17, 2016

In case you worry that IBM will abandon hardware in the pursuit of its strategic initiatives focusing on cloud, mobile, analytics and more; well, stop worrying. With the announcement of its POWER Roadmap at the OpenPOWER Summit earlier this spring, it appears POWER will be around for years to come. But IBM is not abandoning the strategic initiatives either; the new Roadmap promises to support new types of workloads, such as real time analytics, Linux, hyperscale data centers, and more along with support for the current POWER workloads.

Pictured above: POWER9 Architecture, courtesy of IBM

Specifically, IBM is offering a denser roadmap, one not tied to a single process technology and not even tied solely to IBM. It draws on innovations from a handful of the members of the OpenPOWER Foundation as well as support from Google. The new roadmap also signals IBM’s intention to make a serious run at Intel’s near monopoly on enterprise server processors by offering comparable or better price, performance, and features.

Google, for example, reports porting many of its popular web services to run on Power systems; its toolchain has been updated to output code for x86, ARM, or Power architectures with the flip of a configuration flag. Google, which strives to be everything to everybody, now has a highly viable alternative to Intel in terms of performance and price with POWER. At the OpenPOWER Summit early in the spring, Google made it clear it plans to build scale-out server solutions based on OpenPower.

Don’t even think, however, that Google is abandoning Intel. The majority of its systems are Intel-oriented. Still, POWER and the OpenPOWER community will provide a directly competitive processing alternative.  To underscore the situation Google and Rackspace announced they were working together on Power9 server blueprints for the Open Compute Project, designs that reportedly are compatible with the 48V Open Compute racks Google and Facebook, another hyperscale data center, already are working on.

Google represents another proof point that OpenPOWER is ready for hyperscale data centers. DancingDinosaur, however, really is interested most in what is coming from OpenPOWER that is new and sexy for enterprise data centers, since most DancingDinosaur readers are focused on the enterprise data center. Of course, they still need ever better performance and scalability too. In that regard OpenPOWER has much for them in the works.

For starters, POWER8 is currently delivered as a 12-core, 22nm processor. POWER9, expected in 2017, will be delivered as a 14nm processor with 24 cores and CAPI and NVlink accelerators. That is sure to deliver more performance with greater energy efficiency. By 2018, the IBM roadmap shows POWER8/9 as a 10nm, maybe even 7nm, processor, based on the existing micro-architecture.

The real POWER future, arriving around 2020, will feature a new micro-architecture, sport new features and functions, and bring new technology. Expect much, if not almost all, of the new functions to come from various OpenPOWER Foundation partners.

POWER9, only a year or so out, promises a wealth of improvements in speeds and feeds. Although intended to serve the traditional Power Server market, it also is expanding its analytics capabilities and bringing new deployment models for hyperscale, cloud, and technical computing through scale-out deployment. This will include deployment in either clustered or multiple formats. It will feature a shorter pipeline, improved branch execution, and low latency on the die cache as well as PCI gen 4.

Expect a 3x bandwidth improvement with POWER9 over POWER8 and a 33% speed increase. POWER9 also will continue to speed hardware acceleration and support next-gen NVlink, improved coherency, and enhanced CAPI, and will introduce a 25 Gbps high-speed link. Although the 2-socket chip will remain, IBM suggests larger socket counts are coming. It will need that to compete with Intel.

As a data center manager, will a POWER9 machine change your data center dynamics? Maybe, you decide: a dual-socket POWER9 server with 32 DDR4 memory slots, two NVlink slots, three PCIe gen-4 x16 slots, and a total of 44 cores. That’s a lot of computing power in one rack.
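Here is a rough sketch of what that box adds up to. The SMT8 threading and 64GB DIMM size are assumptions for illustration (POWER8 supports up to SMT8, and IBM has not published POWER9 memory configurations), not announced specs.

```python
# Back-of-the-envelope view of the dual-socket POWER9 box described above.
# SMT8 threading and 64 GB DIMMs are assumptions for illustration, not IBM specs.
cores = 44
smt_levels = (1, 2, 4, 8)            # POWER8 supports SMT1/2/4/8; assume POWER9 is similar
ddr4_slots = 32
dimm_size_gb = 64                    # hypothetical DIMM size

for smt in smt_levels:
    print(f"SMT{smt}: {cores * smt} hardware threads")

print(f"Memory with {dimm_size_gb} GB DIMMs in all {ddr4_slots} slots: "
      f"{ddr4_slots * dimm_size_gb // 1024} TB")
```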

Now IBM just has to crank out similar advances for the next z System (a z14 maybe?) through the Open Mainframe Project.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM Advances SSD with Phase-Change Memory Breakthrough

May 20, 2016

Facing an incessant demand to speed data through computers, the latest IBM storage memory advance, announced earlier this week, will ratchet up the speed another notch or two. Scientists at IBM Research have demonstrated storing 3 bits of data per cell using phase-change memory (PCM). Until now, PCM had been tried but had never caught on for a variety of reasons. By storing 3 bits per cell, IBM can boost PCM capacity and speed and lower the cost.

IBM multi-bit PCM chip connected to a standard integrated circuit board.

Pictured above, the chip consists of a 2 × 2 Mcell array with a 4-bank interleaved architecture, IBM explained. The memory array size is 2 × 1000 μm × 800 μm. The PCM cells are based on doped-chalcogenide alloy and were integrated into the prototype chip serving as a characterization vehicle in 90 nm CMOS baseline technology.

Although PCM has been around for some years, only with this latest advance is it attracting the industry’s attention as a potential universal memory technology based on its combination of read/write speed, endurance, non-volatility, and density. Specifically, PCM doesn’t lose data when powered off, unlike DRAM, and the technology can endure at least 10 million write cycles, compared to an average flash USB stick, which tops out at 3,000 write cycles. Primary use cases will be capturing massive volumes of data expected from mobile devices and the Internet of Things.

PCM, in effect, adds another tier to the storage/memory hierarchy, coming in between DRAM and Flash at the upper levels of the storage performance pyramid. The IBM researchers envision both standalone PCM and hybrid applications, which combine PCM and flash storage together. For example, PCM can act as an extremely fast cache by storing a mobile phone’s operating system and enabling it to launch in seconds. For enterprise data centers, IBM envisions entire databases could be stored in PCM for blazing fast query processing of time-critical online applications, such as financial transactions.

As reported by CNET, PCM fits neatly between DRAM and flash. DRAM is 5-10x faster at retrieving data than PCM, while PCM is about 70x faster than flash. IBM reportedly expects PCM to be cheaper than DRAM, eventually becoming as cheap as flash (of course flash keeps getting cheaper too). PCM’s ability to hold three bits of data rather than two, its previous best, enables packing more data into a chip, which lowers the cost of PCM storage and boosts its competitive position against technologies like flash and DRAM.

“Phase change memory is the first instantiation of a universal memory with properties of both DRAM and flash, thus answering one of the grand challenges of our industry,” wrote Haris Pozidis, key researcher and manager of non-volatile memory research at IBM Research, in the published announcement. “Reaching 3 bits per cell is a significant milestone because at this density the cost of PCM will be significantly less than DRAM and closer to flash.”

IBM explains how PCM works: PCM materials exhibit two stable states, the amorphous (without a clearly defined structure) and crystalline (with structure) phases, of low and high electrical conductivity, respectively. In digital systems, data is stored as a 0 or a 1. To store a 0 or a 1 on a PCM cell, a high or medium electrical current is applied to the material. A 0 can be programmed to be written in the amorphous phase or a 1 in the crystalline phase, or vice versa. Then to read the bit back, a low voltage is applied.
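A minimal sketch of what 3 bits per cell implies: the cell must be programmed to one of 2^3 = 8 distinguishable levels. The evenly spaced, drift-free levels below are an idealization for illustration, not IBM’s actual cell-state metrics.

```python
# Minimal sketch of multi-bit cell encoding: 3 bits per cell means the cell
# must hold one of 2**3 = 8 distinguishable resistance levels. The evenly
# spaced levels below are an idealization, not IBM's actual cell-state metrics.

BITS_PER_CELL = 3
LEVELS = 2 ** BITS_PER_CELL          # 8 programmable levels

def program(value: int) -> float:
    """Map a 3-bit value to an idealized normalized conductance level."""
    assert 0 <= value < LEVELS
    return value / (LEVELS - 1)

def read_back(conductance: float) -> int:
    """Threshold the measured conductance back to the nearest stored value."""
    return round(conductance * (LEVELS - 1))

for v in range(LEVELS):
    assert read_back(program(v)) == v
print(f"{BITS_PER_CELL} bits/cell -> {LEVELS} levels; "
      f"50% more bits per cell than the previous 2-bit best")
```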

To achieve multi-bit storage IBM scientists have developed two innovative enabling technologies: 1) a set of drift-immune cell-state metrics and 2) drift-tolerant coding and detection schemes. These new cell-state metrics measure a physical property of the PCM cell that remains stable over time, and are thus insensitive to drift, which affects the stability of the cell’s electrical conductivity with time. The other measures provide additional robustness of the stored data. As a result, the cell state can be read reliably over long time periods after the memory is programmed, thus offering non-volatility.

Combined, these advancements address the key challenges of multi-bit PCM—drift, variability, temperature sensitivity, and endurance cycling, according to IBM. The experimental multi-bit PCM chip used by IBM scientists is connected to a standard integrated circuit board.

Expect to see PCM first in Power Systems. At the 2016 OpenPOWER Summit in San Jose, CA, last month, IBM scientists demonstrated PCM attached to POWER8-based servers (made by IBM and TYAN® Computer Corp.) via the CAPI (Coherent Accelerator Processor Interface) protocol, which speeds the data to storage or memory. This technology leverages the low latency and small access granularity of PCM, the efficiency of the OpenPOWER architecture, and the efficiency of the CAPI protocol, an example of the OpenPower Foundation in action. Pozidis suggested PCM could be ready by 2017; maybe but don’t bet on it. IBM still needs to line up chip makers to produce it in commercial quantities among other things.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

 

IBM Drives Platforms to the Cloud

April 29, 2016

IBM hasn’t been shy about its shift of focus from platforms and systems to cloud, mobile, analytics, and cognitive computing. But it didn’t hit home until last week’s release of 1Q2016 financials, which mentioned the z System just once. For the quarter IBM systems hardware and operating systems software revenues (lumped into one category, almost an after-thought) rang up $1.7 billion, down 21.8 percent.

This is ugly, and DancingDinosaur isn’t even a financial analyst. After the z System showed attractive revenue growth through all of 2015, suddenly it’s part of a loss. You can’t even find the actual numbers for z or Power in the new report format. As IBM notes: the company has revised its financial reporting structure to reflect the transformation of the business and provide investors with increased visibility into the company’s operating model by disclosing additional information on its strategic imperatives revenue by segment. BTW, IBM did introduce new advanced storage this week, which was part of the Systems Hardware loss too. DancingDinosaur will take up the storage story here next week.

But the 1Q2016 report was last week. To further emphasize its shift IBM this week announced that it was boosting support of OpenStack’s RefStack project, which is intended to advance common language between clouds and facilitate interoperability across clouds. DancingDinosaur applauds that, but if you are a z data center manager you better take note that the z, along with all the IBM platforms, mainly Power and storage, is being pushed to the back of the bus behind IBM’s strategic imperatives.

DancingDinosaur supports the strategic initiatives and you can throw blockchain and IoT in with them too. These initiatives will ultimately save the mainframe data center. All the transactions and data swirling around and through these initiatives eventually need to land in a safe, secure, utterly reliable place where they can be processed in massive volume, kept accessible, highly available, and protected for subsequent use, for compliance, and for a variety of other purposes. That place most likely will be the z data center. It might be on premise or in the cloud but if organizations need rock solid transaction performance, security, availability, scalability, and such they will want the z, which will do it better and be highly price competitive. In short, the z data center provides the ideal back end for all the various activities going on through IBM’s strategic initiatives.

The z also has a clear connection to OpenStack. Two years ago IBM announced expanding its support of open technologies by providing advanced OpenStack integration and cloud virtualization and management capabilities across IBM’s entire server portfolio through IBM Cloud Manager with OpenStack. According to IBM, Cloud Manager with OpenStack will provide support for the latest OpenStack release, dubbed Icehouse at that time, and full access to the complete core OpenStack API set to help organizations ensure application portability and avoid vendor lock-in. It also extends cloud management support to the z, in addition to Power Systems, PureFlex/Flex Systems, System x (which was still around then)  or any other x86 environment. It also would provide support for IBM z/VM on the z, and PowerVC for PowerVM on Power Systems to add more scalability and security to its Linux environments.

At the same time IBM also announced it was beta testing a dynamic, hybrid cloud solution on the IBM Cloud Manager with OpenStack platform. That would allow workloads requiring additional infrastructure resources to expand from an on premise cloud to remote cloud infrastructure.  Since that announcement, IBM has only gotten more deeply enamored with hybrid clouds.  Again, the z data center should have a big role as the on premise anchor for hybrid clouds.

With the more recent announcement RefStack, officially launched last year and to which IBM is the lead contributor, becomes a critical pillar of IBM’s commitment to ensuring an open cloud – helping to advance the company’s long-term vision of mitigating vendor lock-in and enabling developers to use the best combination of cloud services and APIs for their needs. The new functionality includes improved usability, stability, and other upgrades, ensuring better cohesion and integration of any cloud workloads running on OpenStack.

RefStack testing ensures core operability across the OpenStack ecosystem, and passing RefStack is a prerequisite for all OpenStack certified cloud platforms. By working on cloud platforms that are OpenStack certified, developers will know their workloads are portable across IBM Cloud and the OpenStack community. For now RefStack acts as the primary resource for cloud providers to test OpenStack compatibility; it also maintains a central repository and API for test data, allowing community members visibility into interoperability across OpenStack platforms.
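Here is a minimal sketch of what that portability means for a developer, using the openstacksdk Python library; the cloud name "mycloud" is a hypothetical entry in a local clouds.yaml, and the point is that the same calls should work against any RefStack-certified OpenStack cloud, IBM’s or another provider’s.

```python
# Minimal sketch of OpenStack interoperability from a developer's point of view,
# using the openstacksdk library. "mycloud" is a hypothetical clouds.yaml entry.
import openstack

conn = openstack.connect(cloud="mycloud")    # credentials resolved from clouds.yaml

for server in conn.compute.servers():        # standard Compute API, portable across clouds
    print(server.name, server.status)

for image in conn.image.images():            # standard Image API
    print(image.name)
```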

One way or another, your z data center will have to coexist with hybrid clouds and the rest of IBM’s strategic imperatives or face being displaced. With RefStack and the other OpenStack tools this should not be too hard. In the meantime, prepare your z data center for new incoming traffic from the strategic imperatives, Blockchain, IoT, Cognitive Computing, and whatever else IBM deems strategic next.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

Ubuntu Linux (beta) for the z System is Available Now

April 8, 2016

As recently as February, DancingDinosaur was lauding IBM’s bolstering of the z System for Linux and support for the latest styles of app dev. As part of that it expected Ubuntu Linux for z by the summer. It arrived early. You can download it for LinuxONE and the z now, here.

Of course, the z has run Linux for over a decade. That was a customized version that required a couple of extra steps, mainly recompiling, if x86 Linux apps were to run seamlessly. This time Canonical and the Ubuntu community have committed to work with IBM to ensure that Ubuntu works seamlessly with IBM LinuxONE, z Systems, and Power Systems. The goal is to enable IBM’s enterprise platforms to play nicely with the latest app dev goodies, including NFV, containers, KVM, OpenStack, big data analytics, DevOps, and even IoT. To that end, all three parties (Canonical, the Ubuntu community, and IBM) commit to provide reference architectures, supported solutions, and cloud offerings, now and in the future.

Ubuntu is emerging as the platform of choice for organizations running scale-out, next-generation workloads in the cloud. According to Canonical, Ubuntu dominates public cloud guest volume and production OpenStack deployments with up to 70% market share. Global brands running Ubuntu at scale in the cloud include AT&T, Walmart, Deutsche Telecom, Bloomberg, Cisco and others.

The z and LinuxONE machines play right into this. They can support thousands of Linux images with no-fail high availability, security, and performance. When POWER9 processors come to market it gets even better. At a recent OpenPOWER gathering the POWER9 generated tremendous buzz, with Google discussing its intention of building a new data center server based on an open POWER9 design that conforms to Facebook’s Open Compute Project server.

These systems will be aimed initially at hyperscale data centers. OpenPOWER processors combined with acceleration technology have the potential to fundamentally change server and data center design today and into the future.

“OpenPOWER provides a great platform for the speed and flexibility needs of hyperscale operators as they demand ever-increasing levels of scalability,” according to Aaron Sullivan, Open Compute Project Incubation Committee Member and Distinguished Engineer at Rackspace. This is true today, and with POWER9, a reportedly 14nm processor coming around 2017, it will be even more so then. This particular roadmap looks out to 2020, when POWER10, a 10nm processor, is expected with the task of delivering extreme analytics optimization.

But for now, what is available for the z isn’t exactly chopped liver. Ubuntu is delivering scale-out capabilities for the latest development approaches to run on the z and LinuxONE. As Canonical promises: Ubuntu offers the best of open source for IBM’s enterprise customers along with unprecedented performance, security and resiliency. The latest Ubuntu version, Ubuntu 16.04 LTS, is in beta and available to all IBM LinuxOne and z Systems customers. See the link above. Currently SUSE and Red Hat are the leading Linux distributions among z data centers. SUSE also just announced a new distro of openSUSE Linux for the z to be called openSUSE Factory.

Also this week the OpenPOWER Foundation held its annual meeting where it introduced technology to boost data center infrastructures with more choices, essentially allowing increased data workloads and analytics to drive better business results. I am hoping that the Open Mainframe Project will emulate the OpenPOWER group and in a year or two start introducing technology to boost mainframe computing along the same lines.

For instance OpenPOWER introduced more than 10 new OpenPOWER servers, offering expanded services for high performance computing and server virtualization. Or this: IBM, in collaboration with NVIDIA and Wistron, revealed plans to release its second-generation OpenPOWER high performance computing server, which includes support for the NVIDIA Tesla Accelerated Computing platform. The server will leverage POWER8 processors connected directly to the new NVIDIA Tesla P100 GPU accelerators via the NVIDIA NVLink, a high-speed interconnect technology.

In the same batch of announcements TYAN announced its GT75-BP012, a 1U, POWER8-based server solution with the ppc64 architecture. The ppc64 architecture is optimized for 64-bit big-endian PowerPC and Power Architecture processors. Also of interest to DancingDinosaur readers may be the variation of ppc64 that enables a pure little-endian mode on the POWER8, allowing x86 Linux-based software to be ported with minimal effort. BTW, the OpenPOWER-based platform reportedly offers exceptional capability for in-memory computing in a 1U implementation, part of the overall trend toward smaller, denser, and more efficient systems. The latest TYAN offerings will only drive more of it.
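A quick Python illustration of why that little-endian mode matters for porting: the same 32-bit integer lays out differently in memory on big-endian ppc64 than on little-endian x86 or ppc64le.

```python
# Why endianness matters for porting: the same 32-bit integer has a different
# byte layout on big-endian ppc64 than on little-endian x86 or ppc64le.
import struct

value = 0x01020304
print("big-endian (ppc64):            ", struct.pack(">I", value).hex())  # 01020304
print("little-endian (x86 / ppc64le): ", struct.pack("<I", value).hex())  # 04030201

# Code that reads raw binary data, network packets, or on-disk formats with
# hard-coded x86 assumptions breaks on big-endian ppc64; a little-endian
# POWER8 mode (ppc64le) sidesteps that whole class of porting bugs.
```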

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

IBM zSystem and Power Score in IDC 4Q 2015 Rankings

March 18, 2016

IBM retained the number 3 spot with 14.1% share for the quarter as revenue increased 8.9% year-over-year to $2.2 billion in 4Q15. More impressively, IBM experienced strong growth for POWER Systems and double-digit growth for its z System mainframes in the quarter, according to IDC. You can check out the IDC announcement here. IDC credits z and POWER for IBM’s strong platform finish in 2015.

zSystem-based LinuxONE

DancingDinosaur has expected these results and been reporting IBM’s z System and POWER System successes for the past year. You can check it out here (z13s) and here (LinuxOne) and here (Power Systems LC).

Along with deservedly crowing about its latest IDC ranking IBM added:  z Systems saw double digit growth due to a number of new portfolio enhancements. The next-generation z13 mainframe, optimized for digital businesses and hybrid cloud environments, is designed to handle mobile transactions securely and at scale, while enabling clients to run analytics on the system and in real time. IBM expanded its commitment to offering open-source on the mainframe by launching a line of Linux-only systems in August of 2015. LinuxONE is based on the latest generation of z Systems technology and enables popular open-source tools and software on the mainframe. IBM also added what amounts to a Business Class z with the z13s to go along with a Business Class dedicated Linux z, the LinuxONE Rockhopper.

Meanwhile, IBM has started to get some uptake for its Open Mainframe Project. In addition to announcing support from the usual mainframe suspects—IBM, CA, Compuware, SUSE, BMC, and others—it also announced its first projects. These include an effort to find ways to leverage new software and tools in the Linux environment that can better take advantage of the mainframe’s speed, security, scalability, and availability. DancingDinosaur is hoping that in time the Open Mainframe Project will produce the kind of results the OpenPOWER Foundation has recently generated for the POWER platform.

IBM attributes the growing traction of Linux running on POWER Systems in large part to optimized solutions such as DB2 BLU, SAP HANA, and other industry big data software, built on POWER Systems running Linux. In October 2015, IBM expanded its Linux on Power Systems portfolio with the LC line of servers. These servers are infused with OpenPOWER Foundation technology and bring the higher performance of the POWER CPU to the broad Linux community. The POWER-based LC line along with the z-based LinuxONE Rockhopper should give any data center manager looking to run a large, efficient Linux server farm a highly cost-competitive option that can rival or even beat the x86 option. And given that both platforms will handle Docker containers and microservices and support all of today’s popular development tools there is no reason to stick with x86.

From a platform standpoint, IBM appears to be in sync with what IDC is reporting: Datacenter buildout continues, and the main beneficiary this quarter is the density-optimized segment of the market, where growth easily outpaced the overall server market. Density-optimized servers achieved a 30.2% revenue growth rate this quarter, contributing a full 2 percentage points to the overall 5.2% revenue growth in the market.

“The fourth quarter (of 2015) was a solid close to a strong year of growth in the server market, driven by on premise refresh deployments as well as continued hyperscale cloud deployments,” said Kuba Stolarski, Research Director, Servers and Emerging Technologies at IDC. “As the cyclical refresh of 2015 comes to an end, the market focus has begun to shift towards software-defined infrastructure and hybrid environment management, as organizations begin to transform their IT infrastructure as well as prepare for the compute demands expected over the next few years from next-gen IT domains such as IoT and cognitive analytics. In the short term, 2016 looks to be a year of accelerated cloud infrastructure expansion with existing footprints filling out and new cloud datacenter buildouts across the globe.”

After a seemingly endless string of dismal quarters DancingDinosaur is encouraged by what IBM is doing now with the z, POWER Systems, and its strategic initiatives. With its strategic focus on cloud, mobile, big data analytics, cognitive computing, and IoT as well as its support for the latest approaches to software development, tools, and languages, IBM should be well positioned to continue its platform success in 2016.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

Spectrum Suite Returns IBM to the Storage Game

January 29, 2016

The past four quarters haven’t been kind to IBM storage as the storage group racked up consecutive quarterly revenue losses. The Spectrum Suite V 1.0 is IBM’s latest software defined storage (SDS) initiative, one of the hottest trends in storage. The product release promises to start turning things around for IBM storage.

IBM Mobile Storage; Jamie Thomas, GM Storage (Jared Lazarus/Feature Photo Service for IBM)

Driving interest in SDS is the continuing rapid adoption of new workloads, new applications, and new ways of storing and consuming data. The best thing about the Spectrum Suite is the way IBM is now delivering it—as a broad set of storage software capabilities that touch every type of storage operation. It doesn’t much matter which workloads or applications are driving it or what kind of storage you need. Seventy percent of clients report deploying object storage, and 60% already are committed to SDS. Over three-quarters of software defined infrastructure (SDI) adopters also indicated a strong preference for single-vendor storage solutions. This all bodes well for IBM’s Spectrum Suite.

Also working in IBM’s favor is the way storage has traditionally been delivered. Even within one enterprise there can be multiple point solutions from different vendors or even incompatible solutions from the same vendor. Companies need to transition among storage software offerings as business needs change, which entails adding and removing software licenses. This always is complex and may even lead to dramatic cost gyrations due to different licensing metrics and different vendor policies.  On top of that, procurement may not play along so quickly, leaving the organization with a gap in functionality.  Then there are the typical inconsistent user interfaces among offerings, which invariably reduces productivity and may increase errors.

Add to that the usual hassles of learning different products with different interfaces and different ways to run new storage processes. As a result, a switch to SDS may not be as smooth or efficient as you hoped, and it probably won’t be cheap.

IBM is counting on these storage complications, outlined above, and more to give it a distinct advantage in the SDS market. IBM should know; the company has been one of the offenders creating similar complications as it cobbled together a wide array of storage products with different interfaces and management processes over the years.

With the new Spectrum Storage Suite IBM finally appears to have gotten it right. IBM is offering a simplified and predictable licensing model for the entire Spectrum Storage family. Pricing is pegged to the capacity being used, regardless of what that capacity is and how it is being used. Block, file, object—doesn’t matter; the same per-terabyte pricing applies. IBM estimates that alone can save up to 40% compared to licensing different software capabilities separately. Similarly, there are no software licensing hassles when migrating from one form of storage or data type to another. Even the cost won’t change unless you add capacity. Then, you pay the same per-terabyte cost for the additional capacity.

The Spectrum Suite and its licensing model work for mainframe shops running Linux on z and LinuxONE. Sorry, no z/OS yet.

The new Spectrum Storage approach has advantages when running a storage shop. There are no unexpected charges when using new capabilities and IBM isn’t charging for non-production uses like dev and test.

Finally, you will find a consistent user interface across all storage components in the Spectrum suite. That was never the case with IBM’s underlying storage hardware products, but Spectrum SDS makes those differences irrelevant. The underlying hardware array doesn’t really matter; admins will rarely ever have to touch it.

The storage capabilities included in IBM Spectrum Storage Suite V1.0 should be very familiar to you from the traditional IBM storage products you probably are currently using. They include:

  • IBM Spectrum Accelerate, Version 11.5.3
  • IBM Spectrum Archive Enterprise Edition, Version 1.2 (Linux edition)
  • IBM Spectrum Control Advanced Edition 5.2
  • IBM Spectrum Protect Suite 7.1
  • IBM Spectrum Scale Advanced and Standard Editions (Protocols) V4.2
  • IBM Spectrum Virtualize Software for SAN Volume Controller, Version 7.6
  • IBM Spectrum Virtualize Software for SAN Volume Controller, Version 7.6 – Real-time Compression
  • IBM Spectrum Virtualize Software for SAN Volume Controller, Version 7.6 – Encryption Software

With Spectrum Storage you can, for example, run SAN storage, storage-rich servers, and a tape library. Add up the storage capacity for each and pay the per-terabyte licensing cost. Re-allocate the existing capacity between the different types of storage and your charges don’t change. Pretty nifty, huh? To DancingDinosaur, who has sat through painful discussions of complicated IBM software pricing slopes, this is how you spell relief. Maybe there really is a new IBM coming that actually gets it.
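A small Python sketch of that licensing model follows; the per-terabyte price is a placeholder rather than an IBM figure, and the point is simply that the charge tracks total capacity, not how the capacity is allocated.

```python
# Sketch of the capacity-based licensing model described above. The per-terabyte
# price is a placeholder; IBM does not publish one here. The point is that the
# charge depends only on total capacity, not on how it is allocated.
PRICE_PER_TB = 100.0   # hypothetical license cost, USD per TB

def license_cost(capacities_tb: dict) -> float:
    return sum(capacities_tb.values()) * PRICE_PER_TB

before = {"SAN": 500, "storage-rich servers": 300, "tape library": 1200}
after = {"SAN": 700, "storage-rich servers": 500, "tape library": 800}  # re-allocated

assert sum(before.values()) == sum(after.values())      # same 2,000 TB total
print(license_cost(before) == license_cost(after))      # True: charges unchanged
```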

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

IBM zSystem Continues Surge in 4Q15

January 22, 2016

DancingDinosaur follows technology, not financial investments, so you’d be an idiot if you considered what follows as investment advice. It is not.  Still, as one who has built a chunk of his career around the mainframe, it is good to see the z System continuing to remain in the black and beating the sexier Power lineup although I do follow both closely. See the latest IBM financials here.

The IBM z13 System

Specifically, as IBM reported on Tuesday, revenues from z Systems mainframe server products increased 16 percent compared with the year-ago period (up 21 percent adjusting for currency). Total delivery of z Systems computing power, as measured in MIPS (millions of instructions per second), increased 28 percent.

Almost as good, revenues from Power Systems were up 4 percent compared with the 2014 period (up 8 percent adjusting for currency). Power revenues have been up most of the year although they got a little blurry in the accounting.

In the storage market, which is getting battered by software defined storage (SDS) on one hand and cloud-based storage on the other, IBM reported revenues from System Storage decreased 11 percent (down 7 percent adjusting for currency). The storage revenues probably won’t bounce back fast, at least not without IBM bringing out radically new storage products. That storage rival EMC got acquired by Dell should be some kind of signal that the storage market, as the traditional enterprise players knew it, is drastically different. For now object storage, SDS, and even flash won’t replace the kind of revenue IBM used to see from DS8000 disk systems or TS enterprise tape libraries loaded with mechanical robotics.

Getting more prominence are IBM’s strategic initiatives, which have been a company priority all year. Strategic initiatives include cloud, mobile, analytics, security, IoT, and cognitive computing. Q4 revenues, as reported by IBM, from these strategic imperatives — cloud, analytics, and engagement — increased 10 percent year-to-year (up 16 percent adjusting for currency). For the full year, revenues from strategic imperatives increased 17 percent (up 26 percent adjusting for currency and the divested System x business) to $28.9 billion and now represent 35 percent of total IBM consolidated revenue.

For the full year, total cloud revenues (public, private and hybrid) increased 43 percent (up 57 percent adjusting for currency and the divested System x business) to $10.2 billion.  Revenues for cloud delivered as a service — a subset of the total cloud revenue — increased 50 percent to $4.5 billion; and the annual as-a-service run rate increased to $5.3 billion from $3.5 billion in the fourth quarter of 2014.

Meanwhile, revenues from business analytics increased 7 percent (up 16 percent adjusting for currency) to $17.9 billion.  Revenues from mobile more than tripled and from security increased 5 percent (up 12 percent adjusting for currency).

Commenting on IBM’s latest financials was Timothy Prickett Morgan, who frequently writes on IBM’s platforms. Citing statements to analysts by Martin Schroeter, IBM’s chief financial officer, Morgan suggested that low profit margins, which other financial analysts complained about, put pressure on the System z13 product line that launched early in the year. After a fast start, apparently, the z13 is now experiencing a slowdown in the upgrade cycle. It’s at this point that DancingDinosaur usually expects to see a new z, typically a business class version of the latest mainframe, the z13 in this case, but that does not appear to be in the offing. About the closest IBM got to that was the RockHopper model of the LinuxOne, a z optimized only for Linux, cloud, mobile, and analytics.

Morgan also noted that IBM added about 50 new mainframe customers for the year on an installed base of about 6,000 active customers. DancingDinosaur has been tracking that figure for years and it has not fluctuated much in recent years. And I am never sure how to count the handful of IT shops that run a z in the IBM cloud. But 5,000-6,000 active z shops still sounds about right.

Power Systems has also grown four quarters in a row and was up 8 percent at constant currency. This has to be a relief to the company, which has committed over $1 billion to Power. IBM attributes some of this growth to its enthusiastic embrace of Linux on Power8, but Morgan complains of having no sense of how much of the Power Systems pie is driven by scale-out Linux machines intended to compete against Intel Xeon servers. Power also is starting to get some boost from the OpenPOWER Foundation, whose members started to ship products in the past few months. It’s probably minimal revenue now but over time it should grow.

For those of us who are counting on z and Power to be around for a while longer, the latest financials should be encouraging.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

