Posts Tagged ‘Power Systems’

IBM Power System S822LC for HPC Beat Sort Record by 3.3x

November 17, 2016

The new IBM Power System S822LC for High Performance Computing servers set a new sorting benchmark, taking just 98.8 seconds to sort 100 terabytes of data in the Indy GraySort category and improving on last year’s best result, 329 seconds, by a factor of 3.3. The win proved a victory not only for the S822LC but for the entire OpenPOWER community. The team of Tencent, IBM, and Mellanox was named the winner of the Sort Benchmark annual global computing competition for 2016.
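As a quick sanity check (DancingDinosaur’s arithmetic, using only the figures above), the record implies a sustained sort rate of roughly 60 TB per minute:

```python
# Rough arithmetic behind the Indy GraySort record: 100 TB sorted in
# 98.8 seconds, versus the 2015 best of 329 seconds.
data_tb = 100.0     # terabytes sorted
time_2016 = 98.8    # seconds (2016 Tencent/IBM/Mellanox run)
time_2015 = 329.0   # seconds (2015 record)

throughput = data_tb / (time_2016 / 60.0)  # TB sorted per minute
speedup = time_2015 / time_2016            # improvement factor

print(f"throughput: {throughput:.1f} TB/min")  # about 60.7 TB/min
print(f"speedup: {speedup:.1f}x")              # about 3.3x
```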

Power System S822LC for HPC

Specifically, the machine, an IBM Power S822LC for High Performance Computing (HPC), features NVIDIA NVLink technology optimized for the Power architecture and NVIDIA’s latest GPU technology. The new system supports emerging computing methods of artificial intelligence, particularly deep learning. The combination, newly dubbed IBM PowerAI, provides a continued path for Watson, IBM’s cognitive solutions platform, to extend its artificial intelligence expertise in the enterprise by using several deep learning methods to train Watson.

In fact, Tencent Cloud Data Intelligence (the distributed computing platform of Tencent Cloud) won every category in both the GraySort and MinuteSort benchmarks, establishing four new world records and outperforming the 2015 best speeds by 2-5x. Said Zeus Jiang, Vice President of Tencent Cloud and General Manager of Tencent’s Data Platform Department: “In the future, the ability to manage big data will be the foundation of successful Internet businesses.”

To reach this level of performance Tencent ran 512 IBM OpenPOWER LC servers with Mellanox 100Gb interconnect technology, using the infrastructure to improve the performance of Tencent Cloud big data products. Online prices for the S822LC start at about $9,600 for a 2-socket, 2U system with up to 20 cores (2.9-3.3 GHz), 1 TB of memory (32 DIMMs), 230 GB/sec sustained memory bandwidth, 2x SFF (HDD/SSD), 2 TB of storage, 5 PCIe slots (4 CAPI enabled), and up to 2 NVIDIA K80 GPUs. Be sure to shop for volume discounts.

The 2016 Sort Benchmark results:

Benchmark            2016 Record (Tencent Cloud)   2015 World Record   Improvement
Daytona GraySort     44.8 TB/min                   15.9 TB/min         2.8x greater performance
Indy GraySort        60.7 TB/min                   18.2 TB/min         3.3x greater performance
Daytona MinuteSort   37 TB/min                     7.7 TB/min          4.8x greater performance
Indy MinuteSort      55 TB/min                     11 TB/min           5x greater performance
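The improvement factors follow directly from the 2016 and 2015 throughput figures; a quick sketch to recompute them:

```python
# Recompute each improvement factor as 2016 throughput / 2015 throughput (TB/min).
records = {
    "Daytona GraySort":   (44.8, 15.9),
    "Indy GraySort":      (60.7, 18.2),
    "Daytona MinuteSort": (37.0, 7.7),
    "Indy MinuteSort":    (55.0, 11.0),
}
for name, (tb_2016, tb_2015) in records.items():
    print(f"{name}: {tb_2016 / tb_2015:.1f}x greater performance")
```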

Pretty impressive, huh? As IBM explains it, Tencent Cloud used 512 IBM OpenPOWER servers and Mellanox 100Gb interconnect technology, improving the performance of Tencent Cloud big data products with the infrastructure. Tom Rosamilia, IBM Senior VP, weighed in: “Industry leaders like Tencent are helping IBM and our OpenPOWER partners push performance boundaries for a cognitive era defined by big data and advanced analytics.” The computing record achieved by Tencent Cloud on OpenPOWER turned out to be an important milestone for the OpenPOWER Foundation too.

Added Amir Prescher, Sr. Vice President, Business Development, at Mellanox Technologies: “Real-time analytics and big data environments are extremely demanding, and the network is critical in linking together the extra high performance of IBM POWER-based servers and Tencent Cloud’s massive amounts of data.” In effect, Tencent Cloud developed an optimized hardware/software platform to achieve new computing records while demonstrating that Mellanox’s 100Gb/s Ethernet technology can deliver total infrastructure efficiency and improve application performance, which should make it a favorite for big data applications.

Behind all of this was the new IBM Power System S822LC for High Performance Computing servers. Currently the servers feature a new IBM POWER8 chip designed for demanding workloads including artificial intelligence, deep learning, and advanced analytics. However, a new POWER9 chip has already been previewed and is expected next year. Whatever the S822LC can do running POWER8, just imagine how much more it will do running POWER9, which IBM describes as a premier acceleration platform. DancingDinosaur covered POWER9 in early September here.

To capitalize on the hardware, IBM is making a new deep learning software toolkit available, PowerAI, which runs on the recently announced IBM Power S822LC server built for artificial intelligence that features NVIDIA NVLink interconnect technology optimized for IBM’s Power architecture. The hardware-software combination provides more than 2X performance over comparable servers with 4 GPUs running AlexNet with Caffe. The same 4-GPU Power-based configuration running AlexNet with BVLC Caffe can also outperform 8 M40 GPU-based x86 configurations, making it the world’s fastest commercially available enterprise systems platform on two versions of a key deep learning framework.

Deep learning is a fast-growing machine learning method that extracts information by crunching through millions of pieces of data to detect and rank the most important aspects of the data. Popularized by leading consumer web and mobile application companies, deep learning is quickly being adopted by more traditional enterprises across a wide range of industry sectors: in banking to advance fraud detection through facial recognition, in automotive for self-driving automobiles, and in retail for fully automated call centers with computers that can better understand speech and answer questions. Is your data center ready for deep learning?

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

 

Can SDS and Flash Resurrect IBM Storage?

November 4, 2016

Storage has consistently contributed red ink to IBM’s ongoing string of quarterly losses, but the company is betting on cloud storage, an all-flash strategy, and software defined storage (SDS) to turn things around. Any turnaround, however, is closely tied to the success of IBM’s strategic imperatives, especially cloud, analytics, and cognitive computing, which have emerged as bright spots amid the continuing quarterly losses.


Climate study needs large amounts of fast data access

As a result, IBM needs to respond to two challenges created by its customers: 1) changes like the increased adoption of cloud, analytics, and most recently cognitive computing, and 2) customers’ need to reduce the cost of their IT infrastructure. The problem as IBM sees it: how do I simultaneously optimize the traditional application infrastructure and free up money to invest in a new generation application infrastructure, especially if I expect to move forward into the cognitive era at some point? IBM’s answer is to invest in flash and SDS.

A few years ago DancingDinosaur was skeptical that flash deployment would lower storage costs except in situations where low cost IOPS was critical. Today, between the falling cost of flash and new ways to deploy increasingly cheap flash, DancingDinosaur believes flash storage can save IT real money.

According to the Evaluator Group and cited by IBM, flash and hybrid cloud technologies are dramatically changing the way companies deploy storage and design applications. As new applications are created–often for mobile or distributed access–the ability to store data in the right place, on the right media, and with the right access capability will become even more important.

In response, companies are adding cloud to lower costs, flash to increase performance, and SDS to add flexibility. IBM is integrating these capabilities together with security and data management for faster return on investment.  Completing the IBM pitch, the company offers choice among on-premise storage, SDS, or storage as a cloud service.

In an announcement earlier this week IBM introduced seven products:

  • IBM Spectrum Virtualize 7.8 with transparent cloud tiering
  • IBM Spectrum Scale 4.2.2 with cloud data sharing
  • IBM Spectrum Virtualize family flash enhancements
  • IBM Storwize family upgrades
  • IBM DS8880 High Performance Flash Enclosure Gen2
  • IBM DeepFlash Elastic Storage Server
  • VersaStack—a joint IBM-Cisco initiative

In short, these announcements address hybrid cloud enablement as a standard feature for new and existing users of Spectrum Virtualize, and enable data sharing to the cloud through Spectrum Scale, which can sync file and object data across on-premises and cloud storage to connect cloud-native applications. Plus, IBM’s high density, highly scalable all-flash storage now sports a new high density expansion enclosure that includes new 7TB and 15TB flash drives.

IBM Storwize, too, is included, now able to grow up to 8x larger than before without disruption. That means up to 32PB of flash storage in only four racks to meet the needs of fast-growing cloud workloads in space-constrained data centers. Similarly, IBM’s new DeepFlash Elastic Storage Server (ESS) offers up to 8x better performance than HDD-based solutions for big data and analytics workloads. Built with IBM Spectrum Scale, ESS includes virtually unlimited scaling, enterprise security features, and unified file, object, and HDFS support.

The z can play in this party too. IBM’s DS8888 now delivers 2x better performance and 3x more efficient use of rack space for mission-critical applications such as credit card and banking transactions as well as airline reservations running on IBM’s z System or IBM Power Systems. DancingDinosaur first reported on the all flash z, the DS8888, when it was introduced last May.

Finally, hybrid cloud enablement comes to existing and new on-premises storage through IBM Spectrum Virtualize, which brings hybrid cloud capabilities for block storage to the Storwize family, FlashSystem V9000, SVC, and VersaStack, the IBM-Cisco collaboration.

Behind every SDS deployment lies some actual physical storage. Many opt for generic, low cost white box storage to save money. As part of IBM’s latest SDS offerings you can choose among nearly 400 storage systems from IBM and others. DancingDinosaur doubts any of those others are white box products, but at least they give you some non-IBM options to potentially lower your storage costs.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghostwriter. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM 3Q16 Results Telegraph a New z System in 2017

October 27, 2016

DancingDinosaur usually doesn’t like to read too much into the statements of IBM suits at financial briefings. This has been especially true since IBM introduced a new presentation format this year to downplay its platform business and emphasize its strategic imperatives. (Disclaimer: DancingDinosaur is NOT a financial analyst but a technology analyst.)

But this quarter the CFO said flat out: “Our z Systems results reflect a product cycle dynamic, seven quarters into the z13 cycle; revenue was down while margins continue to expand. We continue to add new clients to the platform and we are introducing new technologies like block chain. We announced new services to make it easier to build and test block chain networks in a secure environment as we build our block chain platform it’s been engineered to run on multiple platforms but is optimized for scale, security and resilience on both the IBM mainframe and the IBM cloud.”

LinuxONE Emperor

If you parse the first sentence–“reflect a product cycle dynamic”–he is not too subtly hinting that IBM needs a z System refresh if it wants to stop the financial losses on z. You don’t have to be a genius to expect a new z, probably the z14, in 2017. Pictured above is the LinuxONE Emperor, a z optimized to run Linux. The same suit said: “We’ve been shifting our platform to address Linux, and in the third quarter Linux grew at a double digit rate, faster than the market.” So we can probably guess that the z14 (or whatever it will be called) will run z/OS, followed shortly by a LinuxONE version to further expand the z System’s Linux footprint.

Timothy Prickett Morgan picked that up too and more. He expects a z14 processor complex will be announced next year around the same time that the Power9 chip ships. In both cases, Power and z customers who can wait will wait, or, if they are smart, will demand very steep discounts on current Power8 hardware to make up for the price/performance improvements that are sure to accompany the upcoming Power9 and z machines.

When it comes to revenue, 3Q16 was at best flat, but actually was down again overall. The bright spot again was IBM’s strategic imperatives. As the suit stated: “In total, we continue to deliver double-digit revenue growth in our strategic imperatives led by our cloud business.” Specifically, cognitive solutions were up 5% and, within that, solution software was up 8%.

Overall, growth in IBM’s strategic imperatives rose 15%. Over the last 12 months, strategic imperatives delivered nearly $32 billion in revenue and now represent 40% of IBM. The suit also emphasized strong performance in IBM’s cloud offerings which increased over 40%, led by the company’s as-a-service offerings. IBM ended the third quarter with an as-a-service run rate of $7.5 billion, up from $6.7 billion last quarter. Most of that was attributed to organic growth, not acquisitions. Also strong was IBM’s revenue performance in security and mobile. In addition, the company experienced growth in its analytic offerings, up 14% this quarter with contributions from the core analytics platform, especially the Watson platform, Watson Health, and Watson IoT.

IBM apparently is convinced that cognitive computing, defined as using data and adding intelligence into products and services to help companies make better decisions, is the wave of the future. As the company sees it, real value lies in providing cognitive capabilities via the IBM cloud. A critical element of its strategy is IBM’s industry focus. Initially industry platforms will address two substantial opportunity areas, financial services and block chain solutions. You can probably add healthcare too.

Blockchain may emerge as the sleeper, although DancingDinosaur has long been convinced that blockchain is ideal for z shops—the z already handles the transactions and delivers the reliability, scalability, availability, and security to do it right.  As IBM puts it, “we believe block chain has the potential to do for trusted transactions what the Internet did for information.” Specifically, IBM is building a complete block chain platform and is now working with over 300 clients to pioneer block chain for business, including CLS, which settles $5 trillion per day in the currency markets, to implement a distributed ledger in support of its payment netting service, and Bank of Tokyo Mitsubishi, for smart contracts to manage service level agreements and automate multi party transactions.

Says Morgan: “IBM is very enthusiastic about using Blockchain in commercial transaction processing settings, and has 40 clients testing it out on mainframes, but this workload will take a long time to grow. Presumably, IBM will also push Blockchain on Power as well.”  Morgan may be right about blockchain coming to Power, but it is a natural for the z right now, whether as a new z14 or a new z-based LinuxONE machine.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghostwriter. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

OpenCAPI, Gen-Z, CCIX Initiate a New Computing Era

October 20, 2016

The next generation data center will be a more open, cooperative, and faster place judging from the remarkably similar makeup of three open consortia: OpenCAPI, Gen-Z, and CCIX. CCIX allows processors based on different instruction set architectures to extend their cache coherency to accelerators, interconnects, and I/O.

OpenCAPI provides a way to attach accelerators and I/O devices with coherence and virtual addressing to eliminate software inefficiency associated with the traditional I/O subsystem, and to attach advanced memory technologies.  The focus of OpenCAPI is on attached devices primarily within a server. Gen-Z, announced around the same time, is a new data access technology that primarily enables read and write operations among disaggregated memory and storage.


Rethink the Datacenter

It’s quite likely that your next data center will use all three. The OpenCAPI group includes AMD, Dell EMC, Google, Hewlett Packard Enterprise, IBM, Mellanox Technologies, Micron, NVIDIA and Xilinx. Their new specification promises to enable up to 10X faster server performance with the first products expected in the second half of 2017.

The Gen-Z consortium consists of Advanced Micro Devices, Broadcom, Huawei Technologies, Red Hat, Micron, Xilinx, Samsung, IBM, and Cray. Other founding members are Cavium, IDT, Mellanox Technologies, Microsemi, Seagate, SK Hynix, and Western Digital. They plan to develop a scalable computing interconnect and protocol that will enable systems to keep up with the rapidly rising tide of data being generated and analyzed. This will require the rapid movement of high volumes of data between memory and storage.

The CCIX initial members include Amphenol Corp., Arteris Inc., Avery Design Systems, Atos, Cadence Design Systems, Inc., Cavium, Inc., Integrated Device Technology, Inc., Keysight Technologies, Inc., Micron Technology, Inc., NetSpeed Systems, Red Hat Inc., Synopsys, Inc., Teledyne LeCroy, Texas Instruments, and TSMC.

The basic problem all three address revolves around how to make the volume and variety of new hardware forge fast communications and work together. In effect each group, from its own particular perspective, aims to boost the performance and interoperability of data center servers, devices, and components engaged in generating and handling myriad data and tasked with analyzing large amounts of that data. This will only be compounded as IoT, blockchain, and cognitive computing ramp up.

To a large extent, this results from the inability of Moore’s Law to continue doubling transistor counts indefinitely. Future advances must rely on different sorts of hardware tweaks and designs to deliver greater price/performance.

Then in Aug. 2016 IBM announced a related chip breakthrough.  It unveiled the industry’s first 7 nm chip that could hold more than 20 billion tiny switches or transistors for improved computing power. The new chips could help meet demands of future cloud computing and Big Data systems, cognitive computing, mobile products and other emerging technologies, according to IBM.

Most chips today in servers and other devices use microprocessors between 14 and 22 nanometers (nm). The 7nm technology represents at least a 50 percent power improvement. IBM intends to apply the new chips to analyze DNA, viruses, and exosomes. IBM expects to test this lab-on-a-chip technology starting with prostate cancer.

The point of this digression into chips and Moore’s Law is to suggest the need for tools and interfaces like OpenCAPI, Gen-Z, and CCIX. As the use cases for ultra-fast data analytics expand along with the expected proliferation of devices, speed becomes critical. How long do you want to wait for an analysis of your prostate or breast cells? If the cells are dear to you, every nanosecond matters.

For instance, OpenCAPI provides an open, high-speed pathway for different types of technology – advanced memory, accelerators, networking and storage – to more tightly integrate their functions within servers. This data-centric approach to server design puts the compute power closer to the data and removes inefficiencies in traditional system architectures, helping eliminate system bottlenecks and significantly improving server performance. In some cases OpenCAPI enables system designers to access memory with sub-500 nanosecond latency.

IBM plans to introduce POWER9-based servers that leverage the OpenCAPI specification in the second half of 2017. Similarly, expect other members of the OpenPOWER Foundation to introduce OpenCAPI-enabled products in the same time frame. In addition, Google and Rackspace’s new server under development, codenamed Zaius and announced at the OpenPOWER Summit in San Jose, will leverage POWER9 processor technology and plans to provide the OpenCAPI interface in its design. Also, Mellanox plans to enable the new specification capabilities in its future products, and Xilinx plans to support OpenCAPI-enabled FPGAs.

As reported at the Gen-Z announcement, “The formation of these new consortia (CCIX, OpenCAPI, and Gen-Z), backed by more than 30 industry-leading global companies, supports the premise that the datacenter of the future will require open standards. We look forward to collaborating with CCIX and OpenCAPI as this new ecosystem takes shape,” said Kurtis Bowman, Gen-Z Consortium president. Welcome to the 7nm computing era.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghostwriter. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM z System and Power Fuel Hybrid Cloud

September 30, 2016

DancingDinosaur has been cheerleading the z as a cloud player since 2011 and even before, mainly as an ideal vehicle for private clouds. So it should be no surprise to readers that last week IBM announced z and Power cloud-ready systems, services, and solutions for the hybrid cloud environment. My only question: What took them so long?

hybrid-cloud-systems

Power and z accelerate transformation to hybrid cloud

The world, indeed, is changing fast, and IT data centers especially have to scramble to keep up as they try to blend public cloud, private cloud, and traditional IT data centers and integrate it all seamlessly. “Today’s business environment is very dynamic and filled with disruption. A hybrid cloud model enables clients to continuously adapt while also optimizing on-premises investments and delivering the flexibility clients need across IBM Systems and the cloud,” according to Tom Rosamilia, senior vice president, IBM Systems.

At the heart of IBM’s systems for what it calls the hybrid cloud era are three technologies with which we should already be generally familiar:

  • z Systems for cloud. IBM z Systems Operational Insights is a new SaaS-based offering that provides analytic insights on cloud operations for new levels of efficiency and application performance. It allows users to make better business and application decisions based on trends and embedded expertise on performance data through a GUI dashboard. This accompanies IBM OMEGAMON Application Performance Management, which provides quick identification of problem components through end-to-end visibility for clients’ hybrid workloads that include z Systems applications.  In addition, the newly available IBM Common Data Provider enables z Systems operational data to be efficiently consumed in near real time by the clients’ cloud or local enterprise operational analytics platform. An OK start, but you can quibble with OMEGAMON as IBM’s performance analysis and management choice. At its core, this is old technology. DancingDinosaur would prefer, say, Watson.
  • Power Systems for cloud. With integrated OpenStack-based cloud management and elastic consumption models, according to IBM, these new enterprise-class IBM Power Systems enable organizations to transform their IT infrastructure to a local cloud for AIX, IBM i and Linux workloads and extend them with rapid access to compute services in the IBM Cloud. DancingDinosaur covered the new LC here.
  • IBM Spectrum Copy Data Management and Protect. This brings a new solution that drives operational and development agility and efficiency across new and traditional applications that allow detailed, easy management of data copies.  Additionally, IBM Spectrum Protect has expanded its extensive hybrid cloud solution integration with cloud object storage options for use in hybrid cloud deployments.

About the only thing missing above is LinuxONE but that will come up below when IBM gets to openness, which is critical to hybrid clouds. In its announcement, IBM also promised a series of new and expanded collaborations with IBM Systems for hybrid cloud environments, including:

  • Canonical: Canonical and IBM are extending their ongoing alliance to make Ubuntu OpenStack available today on LinuxONE, z Systems, Power Systems, and OpenPOWER-based systems, including the new line of LC servers. This enables organizations to leverage Canonical’s portfolio across the three platforms with simplified and automated OpenStack management.
  • Hortonworks: IBM and Hortonworks, a Hadoop platform provider, are jointly entering the marketplace to make the Hortonworks Hadoop distribution available on POWER. Whoopee; Hadoop already runs native on z.
  • Mirantis: Mirantis and IBM are collaborating to develop reference architectures enabling Mirantis OpenStack to manage compute nodes hosted on IBM Power Systems servers, and to validate a host of core applications to run its OpenStack private cloud. With this integration, Mirantis will now bring its OpenStack based private cloud management to the POWER platform. This enables organizations to leverage the efficiency of IBM Power Systems for data-driven workloads in a seamless and compatible way for their data center through Mirantis’ OpenStack cloud management.
  • NGINX: NGINX’s application delivery platform now supports servers based on IBM’s POWER architecture with the latest release of its commercial load balancer and web accelerator software, NGINX Plus R10. The combination of NGINX Plus and POWER brings new agility to enterprises, allowing them to scale their infrastructure and application delivery solutions across any environment – public, private, and hybrid cloud; containers; and bare metal – providing a consistent user experience.
  • Red Hat: Red Hat and IBM are expanding their long-standing alliance to better help organizations embrace hybrid cloud. Through joint engineering and deeper product collaboration, the two companies plan to deliver solutions built on key components of Red Hat’s portfolio of open source products, including Red Hat Enterprise Linux (RHEL). By the way, RHEL is the #2 Linux distro on the z. RHEL also enables IBM Power Systems as a featured component of Red Hat’s hybrid cloud strategy spanning platform infrastructure located both on and off an enterprise’s premises.

There is enough openness to enable you to do what you need so DancingDinosaur has no real complaints with IBM’s choices above. Just wish it was more z- and LinuxONE-centric.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer and mainframe bigot. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

Revamped IBM Power Systems LC Takes on x86

September 9, 2016

To hear IBM tell it, its revamped and refreshed Power Systems LC lineup will undermine x86 (Intel), HPE, Dell/EMC, and any other purveyor of x86-based systems. Backed by accelerators provided by OpenPOWER community members, IBM appears ready to extend the x86 battle to on premises, in the cloud, and the hybrid cloud. It promises to deliver better performance at lower cost for all the hot workloads too: artificial intelligence, deep learning, high performance data analytics, and compute-heavy workloads.


Two POWER8 processors, 1U config, priced 30% less than an x86 server

Almost a year ago, in October 2015, DancingDinosaur covered IBM’s previous Power Systems LC announcement here. The LC designation stands for Linux Community, and the company is tapping accelerators and more from the OpenPOWER community, just as it did with its recent announcement of POWER9, expected in 2017, here.

The new Power LC systems feature a set of community-delivered technologies IBM has dubbed POWERAccel, a family of I/O technologies designed to deliver composable system performance enabled by accelerators. For GPU acceleration, NVIDIA NVLink delivers nearly 5x better integration between POWER processors and NVIDIA GPUs. For FPGA acceleration, IBM tapped its own CAPI architecture to integrate accelerators that run natively as part of the application.

This week’s Power Systems LC announcement features three new machines:

  • S821LC (pictured above)—includes 2 POWER8 sockets in a 1U enclosure, intended for environments requiring dense computing.
  • S822LC for Big Data—brings 2 POWER8 sockets for big data workloads and adds big data acceleration through CAPI and GPUs.
  • S822LC for HPC—intended for high performance computing, it incorporates the new POWER8 processor with NVIDIA NVLink to deliver 2.8x the bandwidth to GPU accelerators and up to 4 integrated NVIDIA Pascal GPUs.

POWER8 with NVLink delivers 2.8x the bandwidth of a PCIe data pipe. According to figures provided by IBM comparing the price-performance of the Power S822LC for HPC (20-core, 256 GB, 4x Pascal) with a Dell C4130 (20-core, 256 GB, 4x K80), measured in total queries per hour (qph), the Power System delivered 2.1x better price-performance. The Power Systems server costs more ($66,612) vs. the Dell ($57,615), but the Power System delivered 444 qph vs. Dell’s 185 qph.
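IBM’s 2.1x claim checks out if price-performance is read as queries per hour per dollar; a minimal sketch of that arithmetic (variable names are illustrative):

```python
# Price-performance as queries-per-hour per dollar, from IBM's published figures.
ibm_cost, ibm_qph = 66612, 444      # Power S822LC for HPC
dell_cost, dell_qph = 57615, 185    # Dell C4130

ratio = (ibm_qph / ibm_cost) / (dell_qph / dell_cost)
print(f"{ratio:.1f}x better price-performance for Power")  # about 2.1x
```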

The story plays out similarly for big data workloads running MongoDB on the IBM Power S822LC for big data (20-core, 128 GB) vs. an HP DL380 (20-core, 128 GB). Here the system cost (server, OS, MongoDB annual subscription) came to $24,870 for IBM Power and $29,915 for HP. Power provided 40% more performance at a 31% lower hardware/maintenance cost.

When it comes to the cloud, the new IBM Power Systems LC offerings get even more interesting from a buyer’s standpoint. IBM declared the cloud a strategic imperative about two years ago and needs to demonstrate adoption that can rival the current cloud leaders: AWS, Google, and Microsoft (Azure). To that end IBM has started to tack on free cloud usage.

For example, during the industry analyst launch briefing IBM declared: Modernize your Power infrastructure for the Cloud, get access to IBM Cloud for free and cut your current operating costs by 50%. Whether you’re talking on-premises cloud or hybrid infrastructure the freebies just come. The free built-in cloud deployment service options include:

  • Cloud Provisioning and Automation
  • Infrastructure as a Service
  • Cloud Capacity Pools across Data Centers
  • Hybrid Cloud with Bluemix
  • Automation for DevOps
  • Database as a Service

These cover both on-premises infrastructure, where you can transform your traditional infrastructure with automation, self-service, and elastic consumption models, and hybrid infrastructure, where you can securely extend to the public cloud with rapid access to compute services and API integration. Other freebies include open source automation, installation and configuration recipes, cross data center inventory, performance monitoring via the IBM Cloud, optional DR as a service for Power, and free access and capacity flexibility with SoftLayer (12-month starter pack).

Will the new LC line and its various cloud freebies get the low cost x86 monkey off IBM’s back? That’s the hope in Armonk. The new LC servers can be acquired at a lower price and can deliver 80% more performance per dollar spent over x86-based systems, according to IBM. This efficiency enables businesses and cloud service providers to lower costs and combat data center sprawl.

DancingDinosaur has developed TCO and ROI analyses comparing mainframe and Power systems to x86 for a decade, maybe more. A few managers get it, but most, or their staff, have an embedded bias and will never accept non-x86 machines. To them, any x86 system is always cheaper regardless of the specs and the math. Not sure even free will change their minds.

The new Power Systems LC lineup is price-advantaged over comparably configured Intel x86-based servers, costing 30% less in some configurations. Online LC pricing begins at $5999. Additional models with smaller configurations sport lower pricing through IBM Business Partners. All but the HPC machine are available immediately; the HPC machine will ship Sept. 26.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

Meet the POWER9 Chip Family

September 2, 2016

When you looked at a chip in the past you were primarily concerned with two things: the speed of the chip, usually expressed in GHz, and how much power it consumed. Today the IBM engineers preparing the newest POWER chip, the 14nm POWER9, are tweaking the chip for the different workloads it might run, such as cognitive or cloud, different deployment options, such as scale-up or scale-out, and a host of other attributes. EE Times described it in late August from the Hot Chips conference, where it was publicly unveiled.

ibm power9 bandwidth

IBM POWER9 chip

IBM describes it as a chip family, but maybe it’s best described as the product of an entire chip community, the OpenPOWER Foundation. Innovations include CAPI 2.0, New CAPI, Nvidia’s NVLink 2.0, PCIe Gen4, and more. It spans a range of acceleration options, from HSDC clusters to extreme virtualization capabilities for the cloud. POWER9 is not just about high speed transaction processing; IBM wants the chip to interpret and reason, ingest and analyze.

POWER has gone far beyond the POWER chips that enabled Watson to (barely) beat the human Jeopardy champions. Going forward, IBM is counting on POWER9 and Watson to excel at cognitive computing, a combination of high speed analytics and self-learning. POWER9 systems should not only be lightning fast but get smarter with each new transaction.

For z System shops, POWER9 offers a glimpse into the design thinking IBM might follow with the next mainframe, probably the z14 that will need comparable performance and flexibility. IBM already has set up the Open Mainframe Project, which hasn’t delivered much yet but is still young. It took the Open POWER group a couple of years to deliver meaningful innovations. Stay tuned.

The POWER9 chip is incredibly dense (below). You can deploy it in either a scale-up or scale-out architecture, with a choice of two-socket servers with 8 direct DDR4 ports or servers with multiple chips and buffered DIMMs.

power9 chip

IBM POWER9 silicon layout

IBM describes the POWER9 as a premier acceleration platform. That means it offers extreme processor/accelerator bandwidth and reduced latency; coherent memory and virtual addressing capability for all accelerators; and robust accelerated compute options through the OpenPOWER community.

It includes State-of-the-Art I/O and Acceleration Attachment Signaling:

  • PCIe Gen 4 x 48 lanes – 192 GB/s duplex bandwidth
  • 25G Link x 48 lanes – 300 GB/s duplex bandwidth

And robust accelerated compute options based on open standards, including:

  • On-Chip Acceleration—Gzip x1, 842 Compression x2, AES/SHA x2
  • CAPI 2.0—4x bandwidth of POWER8 using PCIe Gen 4
  • NVLink 2.0—next generation of GPU/CPU bandwidth and integration using 25G Link
  • New CAPI—high bandwidth, low latency and open interface using 25G Link
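
Those signaling numbers fall out of simple lane arithmetic (a sketch using raw per-lane signaling rates; encoding overhead, which the headline figures ignore, trims real payload bandwidth slightly):

```python
def duplex_bandwidth_gb_s(lanes: int, gbit_per_lane: float) -> float:
    """Aggregate duplex bandwidth in GB/s from raw per-lane signaling."""
    per_direction = lanes * gbit_per_lane / 8  # bits -> bytes
    return 2 * per_direction                   # duplex: count both directions

pcie_gen4 = duplex_bandwidth_gb_s(48, 16)  # PCIe Gen 4 signals at 16 GT/s/lane
link_25g = duplex_bandwidth_gb_s(48, 25)   # 25G Link runs 25 Gb/s per lane
print(pcie_gen4, link_25g)  # 192.0 300.0
```

So both headline figures are straight lane-count multiplications of the raw link rate.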

In scale-out mode it employs direct attached memory through 8 direct DDR4 ports, which deliver:

  • Up to 120 GB/s of sustained bandwidth
  • Low latency access
  • Commodity packaging form factor
  • Adaptive 64B / 128B reads

In scale-up mode it uses buffered memory through 8 buffered channels to provide:

  • Up to 230GB/s of sustained bandwidth
  • Extreme capacity – up to 8TB / socket
  • Superior RAS with chip kill and lane sparing
  • Compatible with POWER8 system memory
  • Agnostic interface for alternate memory innovations
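
Dividing those totals across the eight channels gives a feel for the per-channel numbers (a rough sketch; actual per-channel bandwidth depends on DIMM speed and access patterns):

```python
# Implied per-channel sustained bandwidth for the two POWER9 memory modes.
channels = 8
scale_out_total_gb_s = 120   # direct-attached DDR4, sustained
scale_up_total_gb_s = 230    # buffered channels, sustained

print(scale_out_total_gb_s / channels)  # 15.0 GB/s per direct DDR4 port
print(scale_up_total_gb_s / channels)   # 28.75 GB/s per buffered channel
```

The buffered channels nearly double per-channel throughput, which is the price-of-admission for the 8TB/socket capacity claim.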

POWER9 was publicly introduced at the Hot Chips conference in late August. Commentators writing in EE Times noted that POWER9 could become a breakout chip, seeding new OEM and accelerator partners and rejuvenating IBM’s efforts against Intel in high-end servers. To achieve that kind of performance IBM deploys large chunks of memory, including a 120 MB embedded DRAM shared L3 cache, while riding a 7 Tbit/second on-chip fabric. POWER9 should deliver as much as 2x the performance of POWER8, or more, when the new chip arrives next year, according to Brian Thompto, a lead architect for the chip, in published reports.

As noted above, IBM will release four versions of POWER9. Two will use eight threads per core and 12 cores per chip, geared for IBM’s Power virtualization environment; two will use four threads per core and 24 cores per chip, targeting Linux. Each will come in two versions: one for two-socket servers with 8 DDR4 ports and another for multiple chips per server with buffered DIMMs.

The diversity of choices, according to Hot Chips observers, could help attract OEMs. IBM has been trying to encourage others to build POWER systems through its OpenPOWER group that now sports more than 200 members. So far, it’s gaining most interest from China where one partner plans to make its own POWER chips. The use of standard DDR4 DIMMs on some parts will lower barriers for OEMs by enabling commodity packaging and lower costs.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

 

 

IBM’s DeepFlash 150 Completes Its Flash Lineup for Now

July 29, 2016

Two years ago DancingDinosaur wrote about new IBM flash storage for the mainframe. That was about the DS8870, featuring six nines (99.9999%) of availability and real-time compression. Then this past May DancingDinosaur reported on another new IBM all-flash initiative, including the all-flash IBM DS8888 for the z, which also boasts six nines availability. Just this week IBM announced it is completing its flash lineup with the IBM DeepFlash 150, intended as a building block for SDS (software defined storage) infrastructures.

IBM DeepFlash 150, courtesy of IBM

As IBM reports, the DeepFlash 150 does not use conventional solid-state drives (SSD). Instead, it relies on a systems-level approach that enables organizations to manage much larger data sets without having to manage individual SSD or disk drives. DeepFlash 150 comes complete with all the hardware necessary for enterprise and hyper-scale storage, including up to 64 purpose-engineered flash cards in a 3U chassis and 12-Gbps SAS connectors for up to eight host servers. The wide range of IBM Spectrum Storage and other SDS solutions available for DeepFlash 150 provides flash-optimized scale out and management along with large capacity for block, file and object storage.

The complication for z System shops is that you access the DeepFlash 150 through IBM Spectrum Scale. Apparently you can’t just plug the DeepFlash 150 into the z the way you would plug in the all flash DS8888. IBM Spectrum Scale works with Linux on z Systems servers or IBM LinuxONE systems running RHEL or SLES. Check out the documentation here.

As IBM explains in the Redbook titled IBM Spectrum Scale (GPFS) for Linux on z Systems: IBM Spectrum Scale provides a highly available clustering solution that augments the strengths of Linux on z by helping the z data center control costs and achieve higher levels of quality of service. Spectrum Scale, based on IBM General Parallel File System (GPFS) technology, is a high performance shared-disk file management solution that provides fast, reliable access to data from multiple nodes in a cluster environment. Spectrum Scale also allows data sharing in a mixed platform environment, which can provide benefits in cloud or analytics environments by eliminating the need to transfer data across platforms. When it comes to the DeepFlash 150, IBM is thinking about hyperscale data centers.

Hyperscale data centers can’t absorb the costs of constructing, managing, maintaining and cooling massive hyperscale environments that use conventional mechanical storage, according to IBM. Those costs are driving the search for storage with a smaller physical footprint, lower costs, greater density, and, of course, much higher performance.

Enter DeepFlash 150, which introduces what IBM considers breakthrough economics for active data sets. The basic DeepFlash 150 hardware platform is priced under $1/GB. For big data deployments IBM recommends IBM Spectrum Scale with DeepFlash 150, providing customers with the overlying storage services and functionality critical for optimization of their big data workloads.

But even at $1/GB DeepFlash 150 isn’t going to come cheap. For starters consider how many gigabytes are in the terabytes or petabytes you will want to install. You can do the math. Even at $1/GB this is going to cost. Then you will need IBM Spectrum Scale. With DeepFlash 150 IBM did achieve extreme density of up to 170TB per rack unit, which adds up to a maximum 7PB of flash in a single rack enclosure.
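
That math is easy to sketch (assuming the $1/GB base hardware price applies across a full rack and using decimal storage units; Spectrum Scale licensing would be extra):

```python
# Rough hardware cost of a fully populated DeepFlash 150 rack at $1/GB.
price_per_gb = 1.00        # IBM's quoted base hardware price ($/GB)
density_tb_per_u = 170     # claimed flash density per rack unit
rack_capacity_pb = 7       # claimed maximum per rack enclosure

rack_units_needed = rack_capacity_pb * 1_000 / density_tb_per_u  # ~41U
hardware_cost = rack_capacity_pb * 1_000_000 * price_per_gb      # PB -> GB

print(f"~{rack_units_needed:.0f}U of flash, about ${hardware_cost:,.0f}")
```

A fully loaded rack lands around $7 million in flash hardware alone, and the density figures hang together: 7PB at 170TB per rack unit consumes roughly 41U of a standard 42U rack.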

IBM Spectrum Scale and the DeepFlash 150 are intended to support a wide range of file, object and Hadoop Distributed File System (HDFS) analytics workloads. According to IBM, as a true SDS solution IBM Spectrum Scale can utilize any appropriate hardware and is designed specifically to maximize the benefits of hyperscale storage systems like DeepFlash 150. Using a scale-out architecture, IBM Spectrum Scale can add servers or multiple storage types and incorporate them automatically into a single managed resource to maximize performance, efficiency, and data protection.

Although DeepFlash 150 can be used with a private cloud, IBM seems to be thinking more in terms of hybrid clouds. To address today’s need for seamlessly integrating high-performance enterprise storage such as DeepFlash 150 with the nearly unlimited resources and capabilities of the cloud, IBM Spectrum Scale offers transparent cloud tiering to place data on cloud-based object storage or in a public cloud service. As IBM explains, the transparent cloud tiering feature of IBM Spectrum Scale can connect on-premises storage such as DeepFlash 150 directly to object storage or a commodity-priced cloud service. This allows enterprises to simultaneously leverage the economic, collaboration, and scale benefits of both on-premises and cloud storage while providing a single, powerful view of all data assets.

A Tech Target report on enterprise flash storage profiled 15 flash storage product lines. In general, the products claim maximum read IOPS ranging from 200,000 to 9 million, peak read throughput from 2.4 GBps to 46 GBps, and read latencies from 50 microseconds to 1 millisecond. The guide comes packed with a ton of caveats. And that’s why DancingDinosaur doesn’t think the DeepFlash 150 is the end of IBM’s flash efforts. Stay tuned.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM 2Q 2016 Report—Where’s z System and POWER?

July 22, 2016

“IBM continues to establish itself as the leading cognitive solutions and cloud platform company,” said Ginni Rometty, IBM chairman, president and chief executive officer, in a statement accompanying the latest IBM 2Q financial report. The strategic imperatives grew: second-quarter revenues from its cloud, analytics, and engagement units increased 12 percent year to year.

IBM Quantum Computing Lab - Friday, April 29, 2016, Yorktown Heights, NY (Jon Simon/Feature Photo Service for IBM)

IBM Quantum Experience delivered via Cloud (Jon Simon/Feature Photo Service for IBM)

Where’s z and POWER? The z and POWER platforms continued to flounder: revenues of $2.0 billion, down 23.2 percent. Revenue reflects z Systems product cycle dynamics; gross profit margin improved in both z Systems and Power. “Product cycle dynamics” refers to the lack of a new z. In the past year IBM introduced the new LinuxONE and, more recently, a new z13s, essentially what used to be known as a Business Class mainframe.

There is no hint, however, of a new z, a z14 that will drive product dynamics upward. IBM showed a POWER roadmap going all the way out to the POWER10 in 2020 but nothing comparable for the z.

DancingDinosaur, a longtime big iron bigot, remains encouraged by IBM’s focus on its strategic initiatives and statements like this: “And we continue to invest for growth with recent breakthroughs in quantum computing, Internet of Things and blockchain solutions for the IBM Cloud.” IBM strategic initiatives in cloud, mobile, IoT, and blockchain will drive new use of the mainframe, especially as the projected volumes of things, transactions, users, and devices skyrocket.

Second-quarter revenues from the company’s strategic imperatives — cloud, analytics and engagement — increased 12 percent year to year.  Cloud revenues (public, private and hybrid) for the quarter increased 30 percent.  Cloud revenue over the trailing 12 months was $11.6 billion.  The annual run rate for cloud as-a-service revenue — a subset of total cloud revenue — increased to $6.7 billion from $4.5 billion in the second quarter of 2015.  Revenues from analytics increased 5 percent.  Revenues from mobile increased 43 percent and from security increased 18 percent.

IBM indirectly is trying to boost both the z and the cloud. CSC announced an alliance with IBM in which IBM will provide CSC Cloud Managed Services for z Systems. CSC already includes IBM SoftLayer as part of its “Service-enabled Enterprise” strategy. “Cloud for z” extends that offering and will be of interest to current and potential mainframe customers in healthcare, insurance, and finance. CSC still sees life in the managed mainframe market, and IBM Global Technology Services, a competitor to CSC, apparently is happy to let them sell managed cloud services for mainframes. All this is taking place as IBM scrambles to secure a leadership share of cloud revenue, and any cloud billing CSC brings will help.

Microsoft, like IBM, claimed big cloud momentum on its fourth quarter conference call, according to a report in Fortune Magazine. It was enough to send Microsoft share price up 4% at one point in after hours trading.

As Fortune notes, for Microsoft as for IBM and other legacy IT providers like Oracle, putting up big cloud numbers is mandatory as more companies change the way they buy IT products. Instead of purchasing hundreds or thousands of new servers or storage boxes every few years, more companies are running their software and storing their data on shared public cloud infrastructure, like Microsoft Azure, Amazon Web Services, the Google Compute Platform, or the IBM Cloud.

For reporting purposes, Microsoft combines Azure with other products in its intelligent cloud product segment. Overall, that segment’s revenue grew about 7% year over year to $6.7 billion from about $6.3 billion.

Oracle, too, is facing the same scramble to establish an enterprise cloud presence. Cloud software as a service (SaaS) and platform as a service (PaaS) revenues were $690 million, up 66% in U.S. dollars. Total Cloud revenues, including infrastructure as a service (IaaS), were $859 million, up 49% in U.S. dollars. At the same time, Oracle’s hardware revenue fell by 7% to $1.3 billion, and its software license revenue fell by 2% to $7.6 billion.

“We added more than 1,600 new SaaS customers and more than 2,000 new PaaS customers in Q4” (which ended in June), said Oracle CEO, Mark Hurd. “In Fusion ERP alone, we added more than 800 new cloud customers. Today, Oracle has nearly 2,600 Fusion ERP customers in the Oracle Public Cloud — that’s ten-times more cloud ERP customers than Workday.”

Hewlett Packard Enterprise (HPE) is the last of the big public enterprise platform vendors, along with IBM and Oracle (Dell is private and acquired EMC). HPE recently reported its best quarter in years: second quarter net revenue of $12.7 billion, up 1% from the prior-year period. “Today’s results represent our best performance since I joined in 2011,” said Meg Whitman, president and chief executive officer, Hewlett Packard Enterprise. The businesses comprising HPE grew revenue over the prior-year period on an as-reported basis for the first time in five years.

IBM needs to put up some positive numbers. Seventeen consecutive quarters of declining revenue is boring. Wouldn’t it be exciting if a turnaround started with a new enterprise z14?

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

Oracle Aims at Intel and IBM POWER

July 8, 2016

In late June Oracle announced the SPARC S7 processor, a new 20nm, 4.27 GHz, 8-core/64-thread SPARC processor targeted for scale-out Cloud workloads that usually go to Intel x86 servers. These are among the same workloads IBM is aiming for with POWER8, POWER9, and eventually POWER10, as reported by DancingDinosaur just a couple of weeks ago.

oracle roadmap trajectory

Oracle 5-year SPARC trajectory (does not include newly announced S series).

According to Oracle, the latest additions to the SPARC platform are built on the new 4.27 GHz, 8-core/64-thread SPARC S7 microprocessor with what Oracle calls Software-in-Silicon features such as Silicon Secured Memory and Data Analytics Accelerators, which enable organizations to run applications of all sizes on the SPARC platform at commodity price points. All existing commercial and custom applications will also run on the new SPARC enterprise cloud services and solutions unchanged while experiencing improvements in security, efficiency, and simplicity.

By comparison, the IBM POWER platform currently includes the POWER8, delivered as a 12-core, 22nm processor. The POWER9, expected in 2017, will be delivered as a 14nm processor with 24 cores and CAPI and NVLink accelerators, which should deliver more performance with greater energy efficiency. By 2018, the IBM roadmap shows a 10nm, maybe even a 7nm, processor based on the existing micro-architecture. And an even beefier POWER10 is expected to arrive around 2020.

At the heart of Oracle’s new scale-out, commodity-priced servers sits the S7. According to Oracle, the SPARC S7 delivers balanced compute performance with 8 cores per processor, integrated on-chip DDR4 memory interfaces, a PCIe controller, and coherency links. The cores in the SPARC S7 are optimized for running key enterprise software, including Java applications and databases. The SPARC S7–based servers use very high levels of integration that increase bandwidth, reduce latencies, simplify board design, reduce the number of components, and increase reliability, according to Oracle. All this promises an increase in system efficiency with a corresponding improvement in the economics of deploying a scale-out infrastructure when compared to other vendor solutions.

Oracle’s SPARC S7 processor, based on Oracle’s enterprise-class M7 design, is optimized for horizontally scalable systems with all the key functionality included in the microprocessor chip. Its Software-in-Silicon capabilities, introduced with the SPARC M7 processor, are also available in the SPARC S7 processor to enable improved data protection, cryptographic acceleration, and analytics performance. These features include Security-in-Silicon, which provides Silicon Secured Memory and cryptographic acceleration, and Data Analytics Accelerator (DAX) units, which provide in-memory query acceleration and in-line decompression.

SPARC S7 processor–based servers include single- and dual-processor systems that are complementary to the existing mid-range and high-end systems based on Oracle’s SPARC M7 processor. SPARC S7 processor–based servers include two rack-mountable models. The SPARC S7-2 server uses a compact 1U chassis, and the SPARC S7-2L server is implemented in a larger, more expandable 2U chassis. Uniformity of management interfaces and the adoption of standards also should help reduce administrative costs, while the chassis design provides density, efficiency, and economy as increasingly demanded by modern data centers. Published reports put the cost of the new Oracle systems at just above $11,000 with a single processor, 64GB of memory and two 600GB disk drives, and up to about $50,000 with two processors and a terabyte of memory.

DancingDinosaur doesn’t really have enough data to compare the new Oracle system with the new POWER8 and upcoming POWER9 systems. Neither Oracle nor IBM have provided sufficient details. Oracle doesn’t even offer a roadmap at this point, which might tell you something.

What we do know about the POWER machines is this: POWER9 promises a wealth of improvements in speeds and feeds. Although intended to serve the traditional Power Server market, it also is expanding its analytics capabilities and is being optimized for new deployment models like hyperscale, cloud, and technical computing through scale-out deployment. Available in either clustered or multiple-chip formats, it will feature a shorter pipeline, improved branch execution, and a low-latency on-die cache, as well as PCIe Gen 4.

According to IBM, you can expect a 3x bandwidth improvement with POWER9 over POWER8 and a 33% speed increase. POWER9 also will continue to speed hardware acceleration and support next-gen NVLink, improved coherency, enhanced CAPI, and a 25 Gbps high-speed link. Although the 2-socket chip will remain, IBM suggests larger socket counts are coming. It will need that to compete with Intel.

At least IBM showed its POWER roadmap. There is no comparable information from Oracle. At best, DancingDinosaur was able to dig up the following sketchy details: for 2017, a next-gen core, Software-in-Silicon V1, and scale-out fully integrated with Software-in-Silicon V1 or V2; for 2018-2019, core enhancements, increased cache, increased bandwidth, and Software-in-Silicon V3.

Both Oracle and IBM have made it clear neither really wants to compete in the low cost, scale-out server market. However, as both companies’ large clients turn to scale-out, hyperscale Intel-based systems, they have no choice but to follow the money. With the OpenPOWER Foundation growing and driving innovation, mainly in the form of accelerators, IBM POWER may have an advantage driving a very competitive price/performance story against Intel. With the exception of Fujitsu as an ally of sorts, Oracle has no comparable ecosystem as far as DancingDinosaur can tell.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

