Posts Tagged ‘Power Systems’

Latest IBM Initiatives Drive Power Advantages over x86

November 20, 2015

This past week IBM announced a multi-year strategic collaboration with Xilinx that aims to enable higher performance and more energy-efficient data center applications through Xilinx FPGA-enabled workload acceleration on IBM POWER-based systems. The goal is to deliver open acceleration infrastructures, software, and middleware for applications like machine learning, network functions virtualization (NFV), genomics, high performance computing (HPC), and big data analytics. In the process, IBM hopes to put x86 systems at an even greater price/performance disadvantage.


Courtesy of IBM

At the same time IBM and several fellow OpenPOWER Foundation members revealed new technologies, collaborations, and developer resources to enable clients to analyze data more deeply and at higher speed. The new offerings center on the tight integration of IBM’s open and licensable POWER processors with accelerators and dedicated high performance processors optimized for computationally intensive software code. The accelerated POWER-based offerings come at a time when many companies are seeking the best platform for Internet of Things, machine learning, and other performance-hungry applications.

The combination of collaborations and alliances is clearly aimed at establishing Power as the high performance leader for the new generation of workloads. According to IBM, independent software vendors already are leveraging IBM Flash Storage attached via CAPI to create very large memory spaces for in-memory analytics, enabling the same query workloads to run on a fraction of the number of servers required by commodity x86 solutions. These breakthroughs enable POWER8-based systems to continue where the promise of Moore’s Law falls short, delivering performance gains through OpenPOWER ecosystem-driven, full stack innovation. DancingDinosaur covered efforts to expand Moore’s Law on the z a few weeks back here.

The new workloads present different performance challenges. To begin, heterogeneous workloads are becoming increasingly prevalent, forcing data centers to turn to application accelerators just to keep up with throughput and latency demands at low power. The Xilinx All Programmable FPGAs promise the power efficiency that makes accelerators practical to deploy throughout the data center. Combine IBM’s open and licensable POWER architecture with Xilinx FPGAs and you get compelling performance, performance/watt, and lower total cost of ownership for this new generation of data center workloads.

As part of the IBM and Xilinx strategic collaboration, IBM Systems Group developers will create solution stacks for POWER-based servers, storage, and middleware systems with Xilinx FPGA accelerators for data center architectures such as OpenStack, Docker, and Spark. IBM will also develop and qualify Xilinx accelerator boards for IBM Power Systems servers. Xilinx, for its part, is developing and will release POWER-based versions of its SDAccel™ software-defined development environment and libraries for the OpenPOWER developer community.

But there is more than this one deal. IBM is promising new products, collaborations and further investments in accelerator-based solutions on top of the POWER processor architecture.  Most recently announced were:

The coupling of NVIDIA® Tesla® K80 GPUs, the flagship offering of the NVIDIA Tesla Accelerated Computing Platform, with Watson’s POWER-based architecture to accelerate Watson’s Retrieve and Rank API capabilities to 1.7x their normal speed. This speed-up can further improve the cost-performance of Watson’s cloud-based services.

On the networking front Mellanox announced the world’s first smart network switch, the Switch-IB 2, capable of delivering an estimated 10x system performance improvement. NEC also announced availability of its ExpEther Technology suited for POWER architecture-based systems, along with plans to leverage IBM’s CAPI technology to deliver additional accelerated computing value in 2016.

Finally, two OpenPOWER members, E4 Computer Engineering and Penguin Computing, revealed new systems based on the OpenPOWER design concept and incorporating IBM POWER8 and NVIDIA Tesla GPU accelerators. IBM also reported having ported a series of key IBM Internet of Things, Spark, Big Data, and Cognitive applications to take advantage of the POWER architecture with accelerators.

The announcements include the names of partners and products but product details were in short supply as were cost and specific performance details. DancingDinosaur will continue to chase those down.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing here.

DancingDinosaur will not post the week of Thanksgiving. Have a delicious holiday.

IBM Enhances the DS8000 Storage Family for New Challenges

October 30, 2015

Earlier this month IBM introduced a family of business-critical hybrid data storage systems that span a wide range of price points. The family is powered by the next generation of IBM’s proven DS8000 storage platform and delivers critical application acceleration, six-nines (99.9999%) availability, and industry-leading capabilities like integrated high performance flash. New tape storage products will follow in November and December.


DS8880, courtesy of IBM (click to enlarge)

The company sees demand for the new storage being driven by cloud, mobile, analytics, and security. As IBM continues to encourage data centers to expand into new workloads, it is introducing a new family of business-critical hybrid flash data systems primarily to support the latest requirements of z System- and Power-based data centers. If your shop hasn’t started to experience a ramp up of new workloads it likely will soon enough.

The new storage family, all based on POWER8 and the DS8000 software stack, currently consists of three models:

  1. The entry model, the DS8884, delivers fast hybrid flash starting at under $50K. It offers up to 12 cores, 256 GB total system memory, 64 16Gb FCP/FICON ports, and 768 HDDs/SSDs plus 120 flash cards in a 19-inch, 40U rack.
  2. The DS8886 brings a 2x performance boost: up to 48 cores, 2 TB total system memory, 128 16Gb FCP/FICON ports, and 1536 HDDs/SSDs plus 240 flash cards packed into a 19-inch, 46U rack.
  3. The high-end DS8888 is, according to IBM, the industry’s fastest Tier 1 subsystem. It is all-flash, offering up to 96 cores, 2 TB total system memory, 128 16Gb FCP/FICON ports, and 480 flash cards packed into a 19-inch, 40U rack. It won’t be available until spring 2016.

Built on the DS8000 software stack, the new storage brings unparalleled integration with IBM z System. The systems are especially tuned for insight and cloud environments. They also deliver top efficiency and maximum utilization of resources, including staff productivity, space utilization, and lower cost through streamlined operations and a 30% reduction in footprint vs. 33-34 inch racks.

The DS8880 family comes with two license options. The base function license provides logical configuration support for fixed-block (FB) volumes, Original Equipment License (OEL), IBM Database Protection, Thin Provisioning, Encryption Authorization, Easy Tier, and I/O Priority Manager. The z Synergy Service Function license brings PAV and Hyper-PAV, FICON and High Performance FICON (zHPF), IBM z/OS Distributed Data Backup, and a range of copy services functions including FlashCopy, Metro Mirror, Global Mirror, Metro/Global Mirror, z/Global Mirror and z/Global Mirror Resync, and Multi-Target PPRC.

The DS8880 family also provides 99.9999% uptime, an increase over the typical industry benchmark of 99.999%. That extra nine cuts the allowed downtime from roughly five minutes per year to about half a minute. Even the most mission-critical application can probably live with that.
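The arithmetic behind those availability figures is simple: each additional nine cuts the allowed downtime by a factor of ten. A quick back-of-envelope sketch (illustrative only, using an average Gregorian year):

```python
# Allowed downtime per year at a given availability level.
SECONDS_PER_YEAR = 365.2425 * 24 * 3600  # average Gregorian year

def downtime_seconds(availability: float) -> float:
    """Seconds of downtime per year at the given availability (e.g. 0.99999)."""
    return SECONDS_PER_YEAR * (1.0 - availability)

five_nines = downtime_seconds(0.99999)   # a bit over five minutes per year
six_nines = downtime_seconds(0.999999)   # roughly half a minute per year
```

At five nines a system may be down a little over five minutes a year; at six nines, about 31.5 seconds.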

The High-Performance Flash Enclosure for the DS8880 family redefines what IBM considers a true enterprise hybrid flash data system should be, especially in terms of performance for critical applications. Usually, hybrid systems combine flash and traditional spinning drives for deployment across the variety of mixed workloads of private or public clouds, while more costly all-flash storage is reserved for the few applications that require the most extreme performance. Now IBM recommends hybrid configurations for consolidating virtually all workloads, since the DS8880 preserves the flexibility to deliver flash performance exactly where and when it is needed. Easy Tier handles this automatically, optimizing application performance dynamically across any DS8880 configuration without requiring administrators to manually tune and retune applications and storage.

The DS8880 also supports a wide variety of enterprise server and virtual server platforms, but not all are created equal. It includes special integration with z Systems and IBM Power Systems. This is due to the advanced microcode that has been developed and enhanced in lockstep with the mainframe’s I/O architecture over the past several decades. For Power shops the DS8880 copy services are tightly integrated with IBM PowerHA SystemMirror for AIX and IBM i, which add another level of assurance for users who need 24×7 business continuity for their critical Power systems.

For shops dealing with VMware, the DS8880 includes interoperability with VMware vStorage APIs for Array Integration, VMware vCenter Site Recovery Manager, and a VMware vCenter plug-in that allows users to offload storage management operations in VMware environments to the DS8880. Should you prefer to go the other direction, the DS8880 supports IBM Storage Management Console for VMware vCenter to help VMware administrators independently monitor and control their storage resources from the VMware vSphere Client GUI.

If you didn’t notice, there have been a series of interesting announcements coming out of IBM Insight, which wrapped up yesterday in Las Vegas. DancingDinosaur intends to recap some of the most interesting announcements in case you missed them.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing here.

IBM z System Shines in 3Q15 Quarterly Report

October 23, 2015

IBM posted another down quarter this past Monday, maybe the thirteenth in a row; it’s easy to lose track. Yet again, though, the IBM z System provided a bright spot: a 15 percent revenue increase compared with the year-ago period. Last quarter the z also came up a winner. Still, the investment analysts went crazy, the stock tumbled, and wild scenarios, no doubt inspired by Dell’s acquisition of EMC, began circulating.


IBM z13

However, don’t expect IBM to be going away anytime soon. DancingDinosaur is a technology analyst and writer, absolutely not a financial analyst (his wife handles the checkbook). If you look at what has been going on with z System and POWER over the past two years from a technology standpoint, these platforms are here for the long haul. Most of the top 100 companies rely on a mainframe. Linux on z has become a factor in roughly 70 percent of the leading shops. When DancingDinosaur last ran the numbers there were still about 5,000-6,000 active mainframe shops, and the numbers aren’t dropping nearly as fast as some pundits would have you believe.

The z13 and LinuxONE are very powerful mainframes, the most powerful in the industry by any number of measures. And they are a dramatically different breed of enterprise platform, capable of concurrently running mixed workloads—OLTP, mobile, cloud, analytics—with top performance, scalability, and rock solid security. The Open Mainframe Project, run in conjunction with the Linux Foundation, means that IBM is no longer going it alone with the mainframe. A similar joint effort, the OpenPOWER Foundation, began delivering results within a year.

The Dell-EMC comparison is not a valid one. EMC’s primary business was storage and the business at the enterprise level has changed dramatically. It has changed for IBM too; the company’s revenues from System Storage decreased 19 percent. But storage was never as important to the company as the z, which had long been its cash cow, now diminished for sure but still worth the investment. The dozens and dozens of acquisitions EMC made never brought it much in terms of synergy. IBM, at least, has its strategic imperatives plan that is making measurable progress.

IBM’s strategic imperatives, in fact, were the only business doing as well as the z. Strategic imperatives revenue was up 27 percent year-to-year; cloud revenue was up more than 65 percent year-to-date. Total cloud revenue hit $9.4 billion over the trailing 12 months. Cloud delivered as a service had an annual run rate of $4.5 billion vs. $3.1 billion in third-quarter 2014. Business analytics revenue was up 19 percent year-to-date. It will be interesting to see what cognitive computing and Watson can produce.

Besides storage, the other dim spot in the IBM platform story is Power Systems. Revenues from Power Systems were down 3 percent compared with the 2014 period. DancingDinosaur, long a fan of Power Systems, anticipates the platform will turn positive next quarter or in the first quarter of 2016 as some of the new technology and products coming, in part, from the OpenPOWER Foundation begin to attract new customers and ring up sales. The new Power Systems LC server family should attract interest for hybrid cloud, hyperscale data centers, and open solutions, hopefully bringing new customers. With online pricing starting around $6,600 the LC machines should be quite competitive against x86 boxes of comparable capabilities.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing here.


IBM Power Systems LC Aims to Expand the Power Systems Market

October 8, 2015

IBM is rapidly trying to capitalize on its investment in POWER technology and the OpenPOWER Foundation to expand the POWER franchise. The company is offering the Power Systems LC server family; LC stands for Linux Community. The family addresses how processing will be used in the immediate future, specifically in hybrid cloud, hyperscale data centers, and open solutions. You could probably throw in IoT and big data/real-time analytics too, although those weren’t specifically mentioned in any of the LC announcement materials or briefings.


Courtesy of IBM:  the new Power S822LC (click to enlarge)

The LC server family comes with a new IBM go-to-market strategy. As IBM put it: buy servers the way you want to buy them, online with simple pricing and one-click purchase (coming soon). You can choose standard configurations or have a configuration customized to meet your unique needs through IBM’s global ecosystem of partners and providers. The same goes for the selection of service and support options from an array of IBM technology partners.

There appear to be three basic configurations at this point:

  1. Power Systems S812LC: designed for entry and small Hadoop workloads
  2. Power Systems S822LC for Commercial Computing: ideal for data in the cloud and flexible capacity for MSPs
  3. Power Systems S822LC for High Performance Computing: for cluster deployments across a broad range of industries

According to the latest S812LC spec sheet, the IBM 8348 Power System S812LC server with POWER8 processors is optimized for data and Linux. It is designed to deliver superior performance and throughput for high-value Linux workloads such as industry applications, open source, big data, and LAMP. It incorporates OpenPOWER Foundation innovations for organizations that want to run their big data, Java, open source, and industry applications on a platform designed and optimized for data and Linux. Modular in design, the Power S812LC is simple to order and can scale from single racks to hundreds.

The Power S812LC server supports one processor socket, offering 8-core 3.32 GHz or 10-core 2.92 GHz POWER8 configurations in a 19-inch rack-mount, 2U drawer configuration. All the cores are activated. The server provides 32 DIMM memory slots. Supported memory features are 4 GB (#EM5A), 8 GB (#EM5E), 16 GB (#EM5C), and 32 GB (#EM5D), allowing for a maximum system memory of 1024 GB.

The LC Server family will leverage a variety of innovations that have been brought out by various members of the OpenPOWER Foundation over the last few months.  These include innovations from Wistron, redislabs, Tyan, Nvidia, Mellanox, Ubuntu, and Nallatech in the areas of big data, GPU acceleration, HPC, and cloud. And, of course, IBM’s CAPI.

No actual pricing was provided. In response to a question from DancingDinosaur about whether the arrival of products from the OpenPOWER Foundation was driving down Power Systems prices, the response was curt: “We haven’t seen the drag down,” said an IBM manager. Oh well, so much for an imminent price war over Power Systems.

However, IBM reported today that, based on its own internal testing, a new Power Systems LC server can complete select Apache Spark workloads – including analyzing Twitter feeds, streaming web page views, and other data-intensive analytics – for less than half the cost of an Intel E5-2699 v3 processor-based server on average, providing clients with 2.3x better performance per dollar spent. Additionally, the efficient design of a Power Systems LC server allows it to run 94% more Spark social media workloads in the same rack space as a comparable Intel-based server.
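The performance-per-dollar claim is just a ratio of relative throughput to relative price, so it can be sanity-checked with trivial arithmetic. The numbers below are illustrative, not IBM’s actual test data:

```python
def perf_per_dollar_ratio(perf_ratio: float, cost_ratio: float) -> float:
    """Relative price/performance of system A vs. system B.

    perf_ratio: A's throughput divided by B's.
    cost_ratio: A's price divided by B's.
    """
    return perf_ratio / cost_ratio

# Equal Spark throughput at half the price already doubles performance per dollar.
equal_perf_half_cost = perf_per_dollar_ratio(1.0, 0.5)  # 2.0

# IBM's 2.3x figure is consistent with, for example, matching throughput
# at roughly 43% (1/2.3) of the x86 server's cost.
ibm_claim = perf_per_dollar_ratio(1.0, 1 / 2.3)  # 2.3
```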

These new systems are exactly what is needed to make the POWER platform viable over the long term, and it can’t be just an IBM show. With OpenPOWER Foundation members delivering innovations there is no telling what can be done in terms of computing with POWER9 and POWER10 when they come.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing here.

IBM z System After Moore’s Law

October 2, 2015

The last z System that conformed to the expectations of Moore’s Law was the zEC12. IBM could boast that it had the fastest commercial processor available. The subsequent z13 didn’t match it in processor speed. The z13 chip runs its 22 nm core at 5 GHz, half a GHz slower than the zEC12, which ran its 32 nm core at 5.5 GHz. Did you even notice?


In 2007 an IBM scientist holds a 3-D integrated stacked chip

In 2015, the z13 delivers about a 10 percent performance bump per core thanks to the latest tweaks in the core design, such as better branch prediction and better pipelining. And even at half a GHz slower, the z13 was the first system to process 2.5 billion transactions a day. Even more importantly for enterprise data centers, z13 transactions are persistent, protected, and auditable from end to end, adding assurance as mobile transactions grow to an estimated 40 trillion per day by 2025.

IBM clearly isn’t bemoaning the decline of Moore’s Law. In fact, it has been looking beyond silicon for the processing of the future.  This week it announced a major engineering breakthrough that could accelerate carbon nanotubes for the replacement of silicon transistors to power future computing. The breakthrough allows a new way to shrink transistor contacts without reducing the performance of carbon nanotube devices, essentially opening a path to dramatically faster, smaller, and more powerful computer chips beyond the capabilities of traditional semiconductors. Guess we can stop worrying about Moore’s Law.

Without Moore’s Law, IBM optimized just about everything on the z13 that could be optimized. It provides 320 separate channels dedicated to drive I/O throughput as well as such performance goodies as simultaneous multithreading (SMT), symmetric multiprocessing (SMP), and single instruction, multiple data (SIMD). Overall about 600 processors (in addition to your configurable cores) speed and streamline processes throughout the machine. Moore’s Law, in effect, has been bypassed. As much as the industry enjoyed the annual doubling of capacity and corresponding lower price/performance it doesn’t need Moore’s Law to meet today’s insatiable demand for processing power.

The company will be doing similar things with the POWER processor. Today we have the POWER8. Coming is the POWER9 followed by the POWER10. The POWER9 reportedly will arrive in 2017 at 14nm, feature a new micro-architecture, and be optimized with CAPI and NVLINK. POWER10, reportedly, arrives around 2020 optimized for extreme analytics.

As IBM explains its latest breakthrough, carbon nanotubes represent a new class of semiconductor materials that consist of single atomic sheets of carbon rolled up into a tube. The carbon nanotubes form the core of a transistor device whose superior electrical properties promise several generations of technology scaling beyond the physical limits of silicon.

The new processor technology, IBM reports, overcomes a major hurdle that silicon and any other semiconductor transistor technologies face when scaling down. In the transistor, two things scale: the channel and its two contacts. As devices become smaller, the increased contact resistance of carbon nanotubes hindered performance gains. The latest development could overcome contact resistance all the way to the 1.8 nanometer node – four technology generations away.

Carbon nanotube chips could greatly improve the capabilities of high performance computers, enabling, for example, big data to be analyzed faster, increasing the power and battery life of mobile devices, and allowing cloud data centers to deliver services more efficiently and economically. Even cognitive computing and Internet of Things can benefit.

Until now, vendors have been able to shrink silicon transistors, but they are approaching a point of physical limitation, which is why Moore’s Law is running out of steam. Previously, IBM demonstrated that carbon nanotube transistors can operate as effective switches at channel dimensions of less than ten nanometers. IBM’s new contact approach overcomes the contact resistance of incorporating carbon nanotubes into semiconductor devices, which could result in smaller chips with greater performance and lower power consumption.

As transistors shrink in size, electrical resistance within the contacts increases, which limits performance. To overcome this resistance, IBM researchers gave up traditional contact schemes and created a metallurgical process akin to microscopic welding that chemically binds the metal atoms to the carbon atoms at the ends of nanotubes. This end-bonded contact scheme allows the contacts to be shrunken below 10 nanometers without impacting performance. This brings the industry a step closer to the goal of a carbon nanotube technology within the decade, says IBM.

Let’s hope this works as expected. If not, IBM has other possibilities already in its research labs. DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing here.

IBM Ranked #1 in Midrange Servers and Enterprise Network Storage

August 13, 2015

Although the financial markets may be beating up IBM the technology world continues to acclaim IBM technology and products. Most recently, IBM ranked on top in the CRN Annual Report Card (ARC) Survey recognizing the best-in-class vendors in the categories of partnership, support, and product innovation.  But the accolades don’t stop there.


Courtesy of IBM (click to enlarge)

IBM was named a leader in four key cloud services categories—hosting, overall cloud professional services, cloud consulting services, and systems integration—by the independent technology market research firm Technology Business Research, Inc. (TBR).  This summer Gartner also named IBM as a leader in Security Information and Event Management (SIEM) in the latest Gartner Magic Quadrant for SIEM, this for the seventh consecutive year. Gartner also named IBM as a Leader in the 2015 Magic Quadrant for Mobile Application Development Platforms, specifically calling out the IBM MobileFirst Platform.

The CRN award addresses the technology channel. According to IBM, the company and its business partners are engaging with clients in new ways of working, building the infrastructure, and deploying innovative solutions for the digital era. This should come as no surprise to anyone reading this blog; the z13 was designed expressly to be a digital platform for the cloud, mobile, and big data era. IBM’s z and Power Systems servers and storage solutions were designed specifically to address the challenges these areas present.

Along the same lines, IBM’s commitment to open alliances has continued unabated this year, starting with its focus on innovation platforms designed for big data and superior cloud economics, which continue to be the cornerstone of IBM Power Systems. The company also plays a leading role in the OpenPOWER Foundation and the Linux Foundation, as well as ramping up communities around the Internet of Things (developerWorks Recipes) and the open cloud (developerWorks Open). The last two were topics DancingDinosaur tackled recently, here and here.

The TBR report, entitled Hosted Private & Professional Services Cloud Benchmark, provides a market synopsis and growth estimates for 29 cloud providers in the first quarter of 2015. In that report, TBR cited IBM as:

  • The undisputed growth leader in overall professional cloud services
  • The leader in hosted private cloud and managed cloud services
  • A leader in OpenStack vendor acquisitions and OpenStack cloud initiatives
  • A growth leader in cloud consulting services, bridging the gap between technology and strategy consulting
  • A growth leader in cloud systems integration services

According to the report: IBM’s leading position across all categories remains unchallenged as the company’s established SoftLayer and Bluemix portfolios, coupled with in-house cloud and solutions integration expertise, provide enterprises with end-to-end solutions.

Wall Street analysts and pundits clearly look at IBM differently than IT analysts.  The folks who look at IBM’s technology, strategy, and services, like those at Gartner, TBR, and the CRN report card, tell a different story. Who do you think has it right?

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing here.

IBM Simplifies Internet of Things with developerWorks Recipes

August 6, 2015

IBM has a penchant for working through communities going back as far as Eclipse and probably before. Last week DancingDinosaur looked at the developerWorks Open community. Now let’s look at the IBM’s developerWorks Recipes community intended to address the Internet of Things (IoT).


TI SensorTag

The Recipes community will try to help developers – from novice to experienced – quickly and easily learn how to connect IoT devices to the cloud and how to use data coming from those connected devices. For example, one recipe walks you through connecting the TI SimpleLink SensorTag (pictured above) to the IBM IoT Foundation service in a few simple steps. By following these steps a developer, according to IBM, should be able to connect the SensorTag to the IBM quickstart cloud service in less than three minutes. Think of recipes as simplified development patterns—so simple that almost anyone can follow them. (Wanted to try it myself but didn’t have a tag. Still, it looked straightforward enough.)
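Under the hood, the quickstart connection is plain MQTT. The sketch below assembles the pieces a generic MQTT client (such as paho-mqtt) would need; the broker hostname, client-ID format, topic layout, and JSON envelope reflect the IBM IoT Foundation quickstart conventions as commonly documented, so treat them as assumptions and verify against the recipe itself:

```python
import json

# Assumed IBM IoT Foundation quickstart conventions (verify against the recipe):
QUICKSTART_HOST = "quickstart.messaging.internetofthings.ibmcloud.com"  # assumed
QUICKSTART_PORT = 1883  # plain MQTT; quickstart requires no authentication (assumed)

def quickstart_client_id(device_type: str, device_id: str) -> str:
    # Quickstart device client IDs take the form d:quickstart:<type>:<id> (assumed).
    return "d:quickstart:{}:{}".format(device_type, device_id)

def event_topic(event: str = "status") -> str:
    # Device events are published as JSON on iot-2/evt/<event>/fmt/json (assumed).
    return "iot-2/evt/{}/fmt/json".format(event)

def sensortag_payload(temp_c: float, humidity: float) -> str:
    # Wrap readings in the {"d": {...}} envelope the quickstart dashboard expects.
    return json.dumps({"d": {"temp": temp_c, "humidity": humidity}})

# With these pieces, an MQTT client (e.g. paho-mqtt) would connect to
# QUICKSTART_HOST:QUICKSTART_PORT using the client ID above and publish
# sensortag_payload(...) to event_topic().
```

The actual recipe handles the Bluetooth side of reading the SensorTag; the above covers only the cloud-facing publish step.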

IoT is growing fast. Gartner forecasts 4.9 billion connected things in use in 2015, up 30% from 2014, reaching 25 billion by 2020. In terms of revenue, this is huge. IDC predicts the worldwide IoT market will grow from $655.8 billion in 2014 to $1.7 trillion in 2020, a compound annual growth rate (CAGR) of 16.9%. For IT people who figure out how to do this, the opportunity will be boundless. Every organization will want to connect its devices to other devices via IoT. The developerWorks Recipes community seems like a perfect way to get started.

IoT isn’t exactly new. Manufacturers have cobbled together machine-to-machine (M2M) networks. Banks and retailers have assembled networks of ATMs and POS terminals. DancingDinosaur has been writing about IoT for mainframe shops for several years. Now developerWorks Recipes promises a way for just about anyone to set up their own IoT project easily and quickly while leveraging the cloud in the process. There is only a handful of recipes now, but the community provides a mechanism for adding recipes, so expect the catalog to grow steadily. And developers are certain to take existing recipes and improvise on them.

IBM has been trying to simplify development for cloud, mobile, and IoT since the launch of Bluemix last year. By connecting their IoT devices to IBM Bluemix, which today boasts more than 100 open-source tools and services, users can run advanced analytics, utilize machine learning, and tap into additional Bluemix services to accelerate the adoption of IoT and more.

As easy as IBM makes IoT development sound, this is a nascent effort industry-wide. There is a crying need for standards at every level to facilitate interoperability and data exchange among the many disparate devices, networks, and applications that will make up IoT. Multiple organizations have initiated standards efforts, but it will take some time to sort it all out.

And then there is the question of security. In a widely reported experiment by Wired Magazine, hackers were able to gain control of a popular smart vehicle. Given that cars are expected to be a major medium for IoT and every manufacturer is rushing to jam as much smart componentry into its vehicles as possible, you can only hope every automaker is also scrambling for security solutions.

Home appliances represent another fat, lucrative market for manufacturers that want to embed intelligent devices and IoT into all their products. What if hackers access your automatic garage door opener? Or worse yet, what if they turn off your coffee maker and water heater? Could you start the day without a hot shower and a cup of freshly brewed coffee and still function?

Running IoT through secure clouds like the IBM Cloud is part of the solution. And industry-specific clouds intended for IoT already are being announced, much like the Internet exchanges of a decade or two ago. Still, more work needs to be done on security and interoperability standards if IoT is to work seamlessly and broadly to achieve the trillions of dollars of economic value projected for it.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing here.

z Systems and Cloud Lead IBM 2Q Results

July 24, 2015

DancingDinosaur generally steers clear of writing about reported quarterly revenue. Given the general focus of this blog on enterprise and cloud computing, however, IBM’s recent 2Q15 report can’t be ignored. Although it continued IBM’s recent string of negative quarterly results, the z and cloud proved to be bright spots.


Strong IBM cloud performance, Q2 2015 (click to enlarge)

As IBM reported on Monday: Revenues from z Systems mainframe server products increased 9 percent compared with the year-ago period (up 15 percent adjusting for currency).  Total delivery of z Systems computing power, as measured in MIPS, increased 24 percent.  Revenues from Power Systems were down 1 percent compared with the 2014 period (up 5 percent adjusting for currency).

It’s not clear when and how Power Systems will come back. IBM has opened up the Power platform through the OpenPOWER Foundation, a good move in theory, which DancingDinosaur applauds. Still, much depends on the Foundation gaining momentum and on individual members rolling out successful Power-based products. The roadmap for POWER8, POWER9, and beyond looks promising, but how fast products will arrive is unclear. There also is the potential for commoditization of the Power platform, a welcome development in many quarters, but commoditization’s impact on future revenue is not clear either.

Cloud revenue was up more than 70 percent, adjusting for currency and divested businesses; up more than 50 percent as reported, according to IBM. Given that cloud, along with mobile and analytics, has been designated strategic by IBM this is an encouraging development. The company’s cloud strategy is starting to bear fruit.

The big question hanging over every vendor’s cloud strategy is how to make money at it. One of the appealing aspects of the cloud in terms of cost and pricing for IT-using organizations is what amounts to a race to the bottom. With pricing immediately apparent and lower pricing just a click away, the cloud has become a feast for bottom grazers to whom the lowest price is all that matters. For companies like IBM and Oracle, which also has declared cloud a strategic initiative, and other large legacy enterprise platform providers, the challenge is to stay competitive on price while differentiating their offerings in other ways. Clearly IBM has some unique cloud offerings in Watson, Bluemix, and others, but can they deliver enough revenue fast enough to offset the decline in legacy platform revenue? Remember, x86 is off IBM’s menu.

Timothy Prickett Morgan, who writes frequently about IBM technology, also had plenty to say about IBM’s 2Q15 announcement, as did a zillion other financial and industry analysts. To begin, he noted the irony of IBM promoting cloud computing, primarily an x86 phenomenon, while trying to convince people that Power-based systems are cost competitive, which they can be, and will do a better job for many of those workloads, correct again.

Morgan also makes an interesting point in regard to the z: “IBM doesn’t have to push the System z mainframe so much as keep it on a Moore’s Law curve of its own and keep the price/performance improving to keep those customers in the mainframe fold.” That’s harder than it may seem; DancingDinosaur addressed the Moore’s Law issue last week here. As Morgan notes, with well over $1 trillion in software assets running on the mainframe, the 6,000 or so enterprises that use mainframes are unlikely to move off the platform because of the cost, disruption, and risk such a move would entail. Just ask Union Pacific Railroad, which has been engaged in a slow-motion migration off the mainframe that may never actually end. Morgan concludes: “IBM can count on a certain level of money from the System z line that it just cannot with the Power Systems line.”

As noted above, how much revenue Power can generate for IBM depends on how fast the OpenPOWER Foundation members introduce products that expand the market and how many Power processors SoftLayer can absorb as the business unit expands its global footprint. There also is the question of how many POWER8 servers Rackspace, a much larger cloud provider than SoftLayer, will take, and whether the Rackspace initiative will catch on elsewhere.

In any event, IBM’s 2Q15 report showed enough positive momentum to encourage IT platform enthusiasts. For its part, DancingDinosaur is expecting a business class z13 in the coming months and more.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing here.

IBM Extends Moore’s Law with First 7nm Test Chip

July 17, 2015

In an announcement last week, IBM effectively extended Moore’s Law for at least another generation of chips, maybe two. This contradicts what leading vendors, including IBM, have been saying for years about the imminent diminishing returns of Moore’s Law, which postulated that chips would double in capacity every 18-24 months. Moore’s Law has driven the price/performance curve the industry has experienced for the past several decades.


Click to enlarge, courtesy of IBM

The announcement, ironically, coincides with IBM’s completion of the sale of its semiconductor fabrication business to GLOBALFOUNDRIES, which IBM paid to take the costly facilities off its hands. To pull off the 7nm achievement, IBM partnered with a handful of players, including a public-private partnership with New York State and a joint development alliance with GLOBALFOUNDRIES, Samsung, and equipment suppliers. The team is based at SUNY Poly’s NanoTech Complex in Albany.

To achieve the higher performance, lower power, and scaling benefits promised by 7nm technology, the IBM researchers turned to two main innovations, the use of silicon germanium (SiGe) channel transistors and Extreme Ultraviolet (EUV) lithography integration at multiple levels, in effect bypassing conventional semiconductor manufacturing approaches.

Don’t expect to see new systems featuring these 7nm chips very soon. The announcement made no mention of any timetable for producing commercial products based on this technology. As Timothy Prickett Morgan, who writes extensively on IBM POWER Systems technology, observed: the use of silicon germanium for portions of the transistors cuts back on power consumption for the very fast switching necessary for improving circuit performance, and the circuits are etched using extreme ultraviolet (EUV) lithography. These technologies may be difficult and expensive to put into production.

In the meantime, IBM notes that microprocessors utilizing 22nm and 14nm technology run today’s servers, cloud data centers, and mobile devices, and 10nm technology is already well on the way to maturity. The 7nm chips promise even more: at least a 50% power/performance improvement for the next generation of mainframe and POWER systems, which will fuel the big data, cloud, and mobile era, and soon the Internet of Things too.

The z13 delivers unbeatable performance today. With the zEC12 IBM boasted of the fastest commercial chip in the industry, 5.5 GHz on a 32 nm process. It did not make that boast with the z13. Instead, the z13 runs a 22 nm core at 5 GHz but still delivers a 40% total capacity improvement over the zEC12.

It does this by optimizing the stack from top to bottom, with 600 processors and 320 separate channels dedicated just to driving I/O throughput. The reason for not cranking up the clock speed on the z13, according to IBM, was the plateauing of Moore’s Law: the company couldn’t get enough of a boost for the tradeoffs it would have had to make. Nobody seems to be complaining about giving up that half GHz. Today the machine can process 2.5 billion transactions a day.

The ride up the Moore’s Law curve has been very enjoyable for all. Companies took the additional processing power to build onto the chip more capabilities that otherwise would have required additional processors.  The result: more performance and more capabilities at lower cost. But all good things come to an end.

This 7nm breakthrough doesn’t necessarily restore Moore’s Law. At this point, the best we can guess is that it temporarily moves the price/performance curve to a new plane. Until we know the economics of mass fabrication in the 7nm silicon germanium world, we can’t tell whether we’ll see a doubling as before, or maybe just a half or a quarter, or whether it could even triple. We just don’t know.

For the past decade, Morgan reports, depending on the architecture, the thermal limits of systems imposed a clock speed ceiling on processors, and aside from some nominal instructions-per-clock (IPC) improvements with each recent microarchitecture change, clock speeds and performance for a processor stayed more or less flat. This is why vendors went parallel with their CPU architectures, in effect adding cores to expand throughput rather than increasing clock speed to boost performance on a smaller number of cores. Some, like IBM, also learned to optimize at every level of the stack. As the z13 demonstrates, lots of little improvements do add up.
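
The arithmetic behind that shift is simple: for an embarrassingly parallel workload, peak throughput scales roughly as cores x clock x instructions per clock, so doubling the core count at a flat clock delivers what a thermally impossible clock doubling would have. A minimal sketch, using purely hypothetical core counts and IPC figures, not any particular chip's specs:

```python
def peak_throughput(cores, clock_ghz, ipc):
    """Peak instructions per second for a perfectly parallel workload:
    cores x clock x instructions-per-clock."""
    return cores * clock_ghz * 1e9 * ipc

# Hypothetical figures for illustration only.
six_core = peak_throughput(cores=6, clock_ghz=3.5, ipc=4)
twelve_core = peak_throughput(cores=12, clock_ghz=3.5, ipc=4)
print(twelve_core / six_core)  # doubling cores at a flat clock doubles peak: 2.0
```

Real workloads rarely scale this perfectly (Amdahl’s Law sees to that), which is why stack-level optimization of the kind the z13 exemplifies matters too.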

Things won’t stop here. As Morgan observes, even as the GLOBALFOUNDRIES sale was being finalized, IBM Research and the Microelectronics Division were working with GLOBALFOUNDRIES, Samsung, and chip-making equipment suppliers, who collaborate through the SUNY Polytechnic Institute’s Colleges of Nanoscale Science and Engineering in Albany, to chart a path to 10 nm and then 7 nm processes.

The next step, he suggests, could be 4 nm, but no one is sure it can be done in a way that is economically feasible. If it can’t, IBM already has previewed other materials that show promise.

Moore’s Law has been a wonderful ride for the entire industry. Let’s wish the chipmakers the best as they aim for ever more powerful processors.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing here.

IBM POWER8 Tops STAC-A2 Benchmark in Win for OpenPOWER

June 25, 2015

In mid-March the Securities Technology Analysis Center (STAC) released the first audited STAC-A2 benchmark results for a server using the IBM POWER8 architecture. STAC provides technology research and testing tools based on community-source standards. The March benchmark results showed that an IBM POWER8-based server can deliver more than twice the performance of the best x86 server when running standard financial industry workloads.

IBM Power System S824

This is not just IBM blowing its own horn. The STAC Benchmark Council consists of over 200 major financial firms and other algorithm-driven enterprises as well as more than 50 leading technology vendors. Its mission is to explore technical challenges and solutions in financial services and to develop technology benchmark standards useful to financial organizations.

The POWER8 system not only delivered more than twice the performance of the nearest x86 system, but it also set four new performance records for financial workloads, two of which apparently were new public records. This marked the first time the IBM POWER8 architecture had gone through STAC-A2 testing.

The community-developed STAC-A2 benchmark set represents a class of financial risk analytics workloads characterized by Monte Carlo simulation and Greeks computations. The Greeks computations cover theta, rho, delta, gamma, cross-gamma, model vega, and correlation vega, together referred to as the Greeks. Quality is assessed for single assets by comparing the Greeks obtained from the Monte Carlo simulation with Greeks obtained from a Heston closed-form formula for vanilla puts and calls. Suffice to say, this is an extremely CPU-intensive set of computations. For more detail, click here.
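
To give a rough flavor of this kind of workload (this is not STAC’s actual benchmark code, and the market parameters are invented for illustration), here is a minimal Monte Carlo estimate of one Greek, delta, for a vanilla call under a simple lognormal model, using central finite differences with common random numbers:

```python
import math
import random

def mc_call_delta(s0, k, r, sigma, t, n_paths=100_000, bump=0.01, seed=42):
    """Estimate a vanilla call's delta by central finite differences over a
    Monte Carlo price, reusing the same random draws (common random numbers)
    so most of the sampling noise cancels in the difference."""
    rng = random.Random(seed)
    draws = [rng.gauss(0.0, 1.0) for _ in range(n_paths)]
    drift = (r - 0.5 * sigma ** 2) * t
    vol = sigma * math.sqrt(t)

    def price(spot):
        # Discounted average payoff over the shared set of terminal draws.
        payoffs = (max(spot * math.exp(drift + vol * z) - k, 0.0) for z in draws)
        return math.exp(-r * t) * sum(payoffs) / n_paths

    h = s0 * bump
    return (price(s0 + h) - price(s0 - h)) / (2 * h)

delta = mc_call_delta(s0=100.0, k=100.0, r=0.02, sigma=0.2, t=1.0)
print(round(delta, 2))  # close to the Black-Scholes delta (~0.58) for these inputs
```

The full STAC-A2 Greeks suite repeats this kind of simulation across many assets, bump directions, and second-order sensitivities under the Heston model, which is why it hammers the CPU, the memory interface, and the math libraries all at once.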

In this case, results were compared to other publicly released results of warm runs on the Greeks benchmark (STAC-A2.β2.GREEKS.TIME.WARM). The two-socket POWER8 server, outfitted with two 12-core 3.52 GHz POWER8 processor cards, achieved:

  • 2.3x performance over the comparable x86 setup, an Intel white box with two Xeon E5-2699 v3 (Haswell EP) @ 2.30GHz.
  • 1.7x the performance of the best-performing x86 solution, an Intel white box with two Intel Xeon E5-2699 v3 processors (Haswell EP) @ 2.30GHz and one Intel Xeon Phi 7120A coprocessor.
  • Only 10% less performance than the best-performing solution, a Supermicro server with two 10-core Intel Xeon E5-2690 v2 @ 3.0GHz (Ivy Bridge) and one NVIDIA K80 GPU accelerator.

The Power server also set new records for path scaling (STAC-A2.β2.GREEKS.MAX_PATHS) and asset capacity (STAC-A2.β2.GREEKS.MAX_ASSETS). Compared to the best four-socket x86-based solution, a server comprising four Xeon E7-4890 v2 (Ivy Bridge EX) parts running at 2.80 GHz, the POWER8 server delivered:

  • Double the throughput.
  • A 16 percent increase in asset capacity.

The STAC test system consisted of an IBM Power System S824 server with two 12-core 3.52 GHz POWER8 processor cards, equipped with 1TB of DRAM and running Red Hat Enterprise Linux version 7. The solution stack included the IBM-authored STAC-A2 Pack for Linux on Power Systems (Rev A), which used IBM XL, a suite for C/C++ developers that includes the C++ compiler, the Mathematical Acceleration Subsystem (MASS) libraries, and the Engineering and Scientific Subroutine Library (ESSL).

POWER8 processors are based on high performance, multi-threaded cores, with each core of the Power System S824 server running up to eight simultaneous threads at 3.52 GHz. With POWER8, IBM also is able to tap the innovations of the OpenPOWER Foundation, including CAPI and a variety of accelerators that have started to ship.

The S824 also brings a very high bandwidth memory interface that runs at 192 GB/s per socket, almost three times the speed of a typical x86 processor. These factors, along with a balanced system structure including a large 8MB-per-core L3 cache, are the primary reasons financial computing workloads run significantly faster on POWER8-based systems than on alternatives, according to IBM.
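
A back-of-the-envelope sketch shows why that bandwidth gap matters for scan-heavy analytics. The 192 GB/s figure is IBM’s; the x86 figure and the working-set size below are assumptions chosen only to match the “almost three times” comparison above:

```python
def scan_time_seconds(working_set_gb, bandwidth_gb_per_s):
    """Lower bound on one full pass over an in-memory working set,
    assuming the scan is purely memory-bandwidth-bound."""
    return working_set_gb / bandwidth_gb_per_s

WORKING_SET_GB = 500  # hypothetical in-memory data set

power8_pass = scan_time_seconds(WORKING_SET_GB, 192)  # POWER8 socket, IBM's figure
x86_pass = scan_time_seconds(WORKING_SET_GB, 64)      # assumed x86 socket at ~1/3
print(f"{power8_pass:.1f}s vs {x86_pass:.1f}s per pass")  # 2.6s vs 7.8s per pass
```

For a workload that must sweep the whole data set on every risk recalculation, that per-pass difference compounds quickly.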

Sumit Gupta, vice president of HPC and OpenPOWER operations at IBM, reports that STAC-A2 gives a much more accurate view of expected performance than micro benchmarks or simple code loops. This is especially important when the challenge is big data.

In his blog on the topic, Gupta elaborated on the big data challenge in the financial industry and the POWER8 advantages. STAC-A2 is a set of standard benchmarks that help estimate the relative performance of full systems running complete financial applications. This enables clients in the financial industry to evaluate how systems will perform on real applications. “Those are the kind of results that matter—real results for real client challenges,” Gupta wrote.

Gupta went on to note that the S824 also has a very high bandwidth memory interface. Combined with the large L3 cache noted above, it can run financial applications noticeably faster than alternatives. Combine the STAC results with data recently published by Cabot Partners and you have convincing proof that IBM POWER8-based systems have taken the performance lead in the financial services space (and elsewhere). The Cabot Partners report evaluates functionality, performance, and price/performance across several industries, including life sciences, financial services, oil and gas, and analytics, while referencing standard benchmarks as well as application-oriented benchmark data.

Having sat through numerous briefings on POWER8 performance, DancingDinosaur felt reassured, but he doesn’t have to actually run these workloads. It is encouraging, however, to see proof in the form of third-party benchmarks like STAC and reports from Cabot Partners. Check out Cabot’s OpenPOWER report here.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing here.

