Posts Tagged ‘Flash’

Arcati 2017 Mainframe Survey—Cognitive a No-Show

February 2, 2017

DancingDinosaur checks into Arcati’s annual mainframe survey every few years. You can access a copy of the 2017 report here.  Some of the data doesn’t change much, a few percentage points here or there. For example, 75% of the respondents consider the mainframe too expensive. OK, people have been saying that for years.

On the other hand, 65% of the respondents’ mainframes are involved with web services. Half also run Java-based mainframe apps, up from 30% last year, while 17% more are planning to run Java with their mainframe this year. Similarly, 35% of respondents report running Linux on the mainframe, up from 22% last year, and another 13% expect to add Linux this year. Driving this are the cost and management benefits that result from consolidating distributed Linux workloads on the z. Yes, things are changing.


The biggest surprise for DancingDinosaur, however, revolved around IBM’s latest strategic initiatives, especially cognitive computing and blockchain. Other strategic initiatives may include, depending on who is briefing you at the moment: security, data analytics, cloud, hybrid cloud, and mobile. These strategic imperatives, especially cognitive computing, are expected to drive IBM’s revenue. In the latest statement, reported last week in DancingDinosaur, strategic imperatives amounted to 41% of revenue. Cloud revenue and cloud-as-a-service also rose considerably, 35% and 61% respectively.

When DancingDinosaur searched the accompanying Arcati vendor report (over 120 vendors with brief descriptions) for cognitive, only GT Software came up. IBM didn’t even mention cognitive in its vendor listing, which admittedly was skimpy. The case was the same with blockchain: only one vendor, Atos, mentioned it, and the IBM listing said nothing about blockchain. More vendors, however, noted supporting one or more of the other supposed strategic initiatives.

Overall, the Arcati survey is quite positive about the mainframe. The survey found that 50 percent of sites viewed their mainframe as a legacy system (down from last year’s 62 percent). However, 22 percent (up from 16 percent last year) viewed mainframe as strategic, with 28 percent (up from 22 percent) viewing mainframes as both strategic and legacy.

Reinforcing the value of the mainframe, the survey found 78 percent of sites experienced some kind of increase in capacity. With increased demand for mainframe resources (data and processing), it should not be surprising that 81 percent of respondents report an increase in technology costs. Yet, 38 percent of sites report their people costs have decreased or stayed the same.

Unfortunately, the survey also found that 70 percent of respondents thought there was a cultural barrier between mainframe and other IT professionals. That did not discourage respondents from pointing out the mainframe advantages: 100 percent highlighted the benefit of the mainframe’s availability, 83 percent highlighted security, 75 percent identified scalability, and 71 percent picked manageability as a mainframe benefit.

Social media also figures in mainframe work. Respondents found social media (Facebook, Twitter, YouTube) useful for their work on the mainframe. Twenty-seven percent report using social media (up slightly from 25 percent last year), with the rest not using it at all, despite IBM offering Facebook pages dedicated to IMS, CICS, and DB2. DancingDinosaur, only an occasional FB visitor, will check it out and report.

In terms of how mainframes are being used, the Arcati survey found that 25 percent of sites are planning to use big data; five percent of sites have adopted the mainframe for DevOps, while 48 percent are planning to use mainframe DevOps going forward. Similarly, 14 percent of respondents already are reusing APIs while another 41 percent are planning to.

Arcati points out another interesting finding: the survey showed a 55:45 percent split in favor of distributed systems. So, you might expect the spend on the two types of platform to be similar. Yet, the survey found that 87 percent of an organization’s IT spend was going to distributed systems! Apparently mainframes aren’t as expensive as people think. Or, to put it another way, the cost of owning and operating distributed systems with mainframe-caliber QoS amounts to a lot more than people are admitting.
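
To make the 87 percent figure concrete, here is a quick back-of-the-envelope sketch. It assumes the 55:45 split represents each platform's share of the workload and the 87 percent represents distributed systems' share of IT spend; both are assumptions about how Arcati framed the numbers, so treat the output as illustrative only.

```python
# Back-of-the-envelope: spend per unit of workload share.
# Assumes 55:45 is workload share and 87% of IT spend goes to distributed systems.
distributed_share, mainframe_share = 0.55, 0.45
distributed_spend, mainframe_spend = 0.87, 0.13

dist_ratio = distributed_spend / distributed_share   # ~1.58 spend units per share unit
mf_ratio = mainframe_spend / mainframe_share         # ~0.29 spend units per share unit

print(f"Distributed: {dist_ratio:.2f}, mainframe: {mf_ratio:.2f}, "
      f"relative cost: {dist_ratio / mf_ratio:.1f}x")
```

On those assumptions, distributed systems soak up roughly five and a half times more spend per unit of workload than the mainframe does, which is the point in fewer words.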

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM Introduces New DS8880 All-Flash Arrays

January 13, 2017

Yesterday IBM introduced three new members of the DS8000 line, each an all-flash product.  The new, all-flash storage products are designed for midrange and large enterprises, where high availability, continuous up-time, and performance are critical.


IBM envisions these boxes for more than the z’s core OLTP workloads. According to the company, they are built to provide the speed and reliability needed for workloads ranging from enterprise resource planning (ERP) and financial transactions to cognitive applications like machine learning and natural language processing. The solutions are designed to support cognitive workloads, which can be used to uncover trends and patterns that help improve decision-making, customer service, and ROI. ERP and financial transactions certainly constitute conventional OLTP but the cognitive workloads are more analytical and predictive.

The three products:

  • IBM DS8884 F
  • IBM DS8886 F
  • IBM DS8888 F

The F signifies all-flash.  Each was designed with High-Performance Flash Enclosures Gen2. IBM did not just slap flash into existing hard drive enclosures.  Rather, it reports undertaking a complete redesign of the flash-to-z interaction. As IBM puts it: through deep integration between the flash and the z, IBM has embedded software that facilitates data protection, remote replication, and optimization for midrange and large enterprises. The resulting new microcode is ideal for cognitive workloads on z and Power Systems requiring the highest availability and system reliability possible. IBM promises that the boxes will deliver superior performance and uncompromised availability for business-critical workloads. In short, fast enough to catch bad guys before they leave the cash register or teller window. Specifically:

  • The IBM DS8884 F, labelled the business class offering, boasts the lowest entry cost for midrange enterprises (prices starting at $90,000 USD). It runs on an IBM Power Systems S822 with a 6-core POWER8 processor, 256 GB of cache (DRAM), 32 Fibre Channel/FICON ports, and 6.4–154 TB of flash capacity.
  • The IBM DS8886 F, the enterprise class offering for large organizations seeking high performance, runs on an IBM Power Systems S824 with a 24-core POWER8 processor. It offers 2 TB of cache (DRAM), 128 Fibre Channel/FICON ports, and 6.4–614.4 TB of flash capacity. That’s over half a petabyte of high performance flash storage.
  • The IBM DS8888 F, labelled an analytics class offering, promises the highest performance for faster insights. It runs on an IBM Power Systems E850 with a 48-core POWER8 processor. It also comes with 2 TB of cache (DRAM), 128 Fibre Channel/FICON ports, and 6.4 TB–1.22 PB of flash capacity. Guess crossing the petabyte level, along with the bigger processor complex, qualifies it as an analytics and cognitive device.

As IBM emphasized in the initial briefing, it engineered these storage devices to surpass the typical big flash storage box. For starters, IBM bypassed the device adapter to connect the z directly to the high performance storage controller. IBM’s goal was to reduce latency and optimize all-flash storage, not simply swap new flash in for ordinary flash or, banish the thought, HDD.

“We optimized the data path,” explained Jeff Barber, IBM Systems VP for HE Storage BLE (DS8, DP&R, and SAN). To that end, IBM switched from a 1U to a 4U enclosure, runs on shared-nothing clusters, and boosted throughput performance. The resulting storage, he added, “does database better than anyone; we can run real-time analytics.” The typical analytics system, a shared system running Hadoop, won’t even come close to these systems, he added. With the DS8888, you can deploy a real-time cognitive cluster with minimal-latency flash.

DancingDinosaur always appreciates hearing from actual users. Working through a network of offices, supported by a team of over 850 people, Health Insurance Institute of Slovenia (Zavod za zdravstveno zavarovanje Slovenije) provides health insurance to approximately two million customers. In order to successfully manage its new customer-facing applications (such as electronic order processing and electronic receipts), its storage system required additional capacity and performance. After completing research on solutions capable of managing these applications, which included both Hitachi and EMC, the organization deployed the IBM DS8886 along with DB2 for z/OS data server software to provide an integrated data backup and restore system. (Full disclosure: DancingDinosaur has not verified this customer story.)

“As long-time users of IBM storage infrastructure and mainframes, our upgrade to the IBM DS8000 with IBM business partner Comparex was an easy choice. Since then, its high performance and reliability have led us to continually deploy newer DS8000 models as new features and functions have provided us new opportunities,” said Bojan Fele, CIO of Health Insurance Institute of Slovenia. “Our DS8000 implementation has improved our reporting capabilities by reducing time to actionable insights. Furthermore, it has increased employee productivity, ensuring we can better serve our clients.”

For full details and specs on these products, click here.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

Can SDS and Flash Resurrect IBM Storage?

November 4, 2016

Amid IBM’s ongoing string of disappointing quarterly results, storage has consistently contributed to the red ink, but the company is betting on cloud storage, an all-flash strategy, and software defined storage (SDS) to turn things around. Any turnaround, however, is closely tied to the success of IBM’s strategic imperatives, which have emerged as bright spots amid the continuing revenue declines, especially cloud, analytics, and cognitive computing.


Climate study needs large amounts of fast data access

As a result, IBM needs to respond to two challenges created by its customers: 1) changes like the increased adoption of cloud, analytics, and most recently cognitive computing, and 2) the need by customers to reduce the cost of the IT infrastructure. The problem as IBM sees it is this: how do I simultaneously optimize the traditional application infrastructure and free up money to invest in a new generation application infrastructure, especially if I expect to move forward into the cognitive era at some point? IBM’s answer is to invest in flash and SDS.

A few years ago DancingDinosaur was skeptical that flash deployment would lower storage costs except in situations where low-cost IOPS was critical. Today, between the falling cost of flash and new ways to deploy increasingly cheaper flash, DancingDinosaur believes flash storage can save IT real money.

According to the Evaluator Group, as cited by IBM, flash and hybrid cloud technologies are dramatically changing the way companies deploy storage and design applications. As new applications are created, often for mobile or distributed access, the ability to store data in the right place, on the right media, and with the right access capability will become even more important.

In response, companies are adding cloud to lower costs, flash to increase performance, and SDS to add flexibility. IBM is integrating these capabilities with security and data management for faster return on investment. Completing the IBM pitch, the company offers a choice among on-premises storage, SDS, and storage as a cloud service.

In an announcement earlier this week IBM introduced the following products:

  • IBM Spectrum Virtualize 7.8 with transparent cloud tiering
  • IBM Spectrum Scale 4.2.2 with cloud data sharing
  • IBM Spectrum Virtualize family flash enhancements
  • IBM Storwize family upgrades
  • IBM DS8880 High Performance Flash Enclosure Gen2
  • IBM DeepFlash Elastic Storage Server
  • VersaStack—a joint IBM-Cisco initiative

In short, these announcements address hybrid cloud enablement. Transparent cloud tiering becomes a standard feature for new and existing users of Spectrum Virtualize, while Spectrum Scale gains cloud data sharing that can sync file and object data across on-premises and cloud storage to connect cloud-native applications. In addition, the high-density, highly scalable all-flash storage now sports a new high-density expansion enclosure that includes new 7TB and 15TB flash drives.

IBM Storwize, too, is included, now able to grow up to 8x larger than before without disruption. That means up to 32PB of flash storage in only four racks to meet the needs of fast-growing cloud workloads in space-constrained data centers. Similarly, IBM’s new DeepFlash Elastic Storage Server (ESS) offers up to 8x better performance than HDD-based solutions for big data and analytics workloads. Built with IBM Spectrum Scale, the ESS includes virtually unlimited scaling, enterprise security features, and unified file, object, and HDFS support.

The z can play in this party too. IBM’s DS8888 now delivers 2x better performance and 3x more efficient use of rack space for mission-critical applications such as credit card and banking transactions as well as airline reservations running on IBM z Systems or IBM Power Systems. DancingDinosaur first reported on the all-flash DS8888 when it was introduced last May.

Finally, IBM Spectrum Virtualize delivers hybrid cloud enablement for existing and new on-premises storage, bringing hybrid cloud capabilities for block storage to the Storwize family, FlashSystem V9000, SVC, and VersaStack, the IBM-Cisco collaboration.

Behind every SDS deployment lies physical storage of some type. Many opt for generic, low-cost white box storage to save money. As part of IBM’s latest SDS offerings you can choose among any of nearly 400 storage systems from IBM and others. It’s doubtful any of those others are white box products, but at least they give you some non-IBM options that could lower your storage costs.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghostwriter. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM Advances SSD with Phase-Change Memory Breakthrough

May 20, 2016

Facing an incessant demand to speed data through computers, the latest IBM storage memory advance, announced earlier this week, will ratchet up the speed another notch or two. Scientists at IBM Research have demonstrated storing 3 bits of data per cell using phase-change memory (PCM). Until now, PCM had been tried but had never caught on for a variety of reasons. By storing 3 bits per cell, IBM can boost PCM capacity and lower its cost.


IBM multi-bit PCM chip connected to a standard integrated circuit board.

Pictured above, the chip consists of a 2 × 2 Mcell array with a 4-bank interleaved architecture, IBM explained. The memory array size is 2 × 1000 μm × 800 μm. The PCM cells are based on a doped-chalcogenide alloy and were integrated into the prototype chip, which serves as a characterization vehicle in 90 nm CMOS baseline technology.

Although PCM has been around for some years, only with this latest advance is it attracting the industry’s attention as a potential universal memory technology based on its combination of read/write speed, endurance, non-volatility, and density. Specifically, PCM doesn’t lose data when powered off, unlike DRAM, and the technology can endure at least 10 million write cycles, compared to an average flash USB stick, which tops out at 3,000 write cycles. Primary use cases will be capturing the massive volumes of data expected from mobile devices and the Internet of Things.

PCM, in effect, adds another tier to the storage/memory hierarchy, coming in between DRAM and flash at the upper levels of the storage performance pyramid. The IBM researchers envision both standalone PCM and hybrid applications, which combine PCM and flash storage. For example, PCM can act as an extremely fast cache by storing a mobile phone’s operating system and enabling it to launch in seconds. For enterprise data centers, IBM envisions entire databases stored in PCM for blazing fast query processing of time-critical online applications, such as financial transactions.

As reported by CNET, PCM fits neatly between DRAM and flash. DRAM is 5-10x faster at retrieving data than PCM, while PCM is about 70x faster than flash. IBM reportedly expects PCM to be cheaper than DRAM, eventually becoming as cheap as flash (of course, flash keeps getting cheaper too). PCM’s ability to hold three bits of data rather than two, its previous best, enables packing more data into a chip, which lowers the cost of PCM storage and boosts its competitive position against technologies like flash and DRAM.
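
Using the CNET figures as rough midpoints, here is a quick sketch of how the tiers stack up; the 7.5x midpoint is an assumption for the arithmetic, not an IBM number.

```python
# Rough relative access-time comparison, normalizing DRAM to 1.
# Assumes the midpoint of "DRAM is 5-10x faster than PCM" and "PCM is ~70x faster than flash."
dram = 1.0
pcm = dram * 7.5        # midpoint of the 5-10x range
flash = pcm * 70        # ~70x slower than PCM

print(f"DRAM : {dram:>6.1f}")
print(f"PCM  : {pcm:>6.1f}  (sits between DRAM and flash)")
print(f"Flash: {flash:>6.1f}  (~{flash / dram:.0f}x slower than DRAM)")
```

On these rough numbers flash sits roughly 500x behind DRAM, which is the gap PCM is meant to bridge.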

“Phase change memory is the first instantiation of a universal memory with properties of both DRAM and flash, thus answering one of the grand challenges of our industry,” wrote Haris Pozidis, key researcher and manager of non-volatile memory research at IBM Research, in the published announcement. “Reaching 3 bits per cell is a significant milestone because at this density the cost of PCM will be significantly less than DRAM and closer to flash.”

IBM explains how PCM works: PCM materials exhibit two stable states, the amorphous (without a clearly defined structure) and crystalline (with structure) phases, of low and high electrical conductivity, respectively. In digital systems, data is stored as a 0 or a 1. To store a 0 or a 1 on a PCM cell, a high or medium electrical current is applied to the material. A 0 can be programmed to be written in the amorphous phase or a 1 in the crystalline phase, or vice versa. Then to read the bit back, a low voltage is applied.
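
Extending that idea to 3 bits per cell means distinguishing 2^3 = 8 conductivity levels instead of two. The toy model below is purely illustrative, not IBM’s actual cell-state metrics or coding scheme; it just shows the write-to-a-level, read-by-threshold idea.

```python
# Toy model of a 3-bit (8-level) PCM cell; illustrative only, not IBM's scheme.
# Writing programs the cell to one of 8 target conductivity levels; reading
# decodes by finding the nearest target (threshold detection).
LEVELS = [i / 7 for i in range(8)]       # 8 normalized conductivity targets, 0.0 .. 1.0

def write_cell(bits: int) -> float:
    """Program the cell to the conductivity level encoding a 3-bit value."""
    assert 0 <= bits < 8
    return LEVELS[bits]

def read_cell(conductivity: float) -> int:
    """Decode by picking the nearest target level."""
    return min(range(8), key=lambda i: abs(LEVELS[i] - conductivity))

stored = write_cell(0b101)
drifted = stored - 0.03                  # small conductivity drift over time
print(read_cell(drifted) == 0b101)       # True, as long as drift stays within
                                         # half a level spacing (~0.07 here)
```

The drift-immune metrics and drift-tolerant coding described next are, in effect, IBM’s way of keeping that read-back comparison reliable as the cell’s conductivity drifts over time.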

To achieve multi-bit storage IBM scientists have developed two innovative enabling technologies: 1) a set of drift-immune cell-state metrics and 2) drift-tolerant coding and detection schemes. These new cell-state metrics measure a physical property of the PCM cell that remains stable over time, and are thus insensitive to drift, which affects the stability of the cell’s electrical conductivity with time. The other measures provide additional robustness of the stored data. As a result, the cell state can be read reliably over long time periods after the memory is programmed, thus offering non-volatility.

Combined, these advancements address the key challenges of multi-bit PCM (drift, variability, temperature sensitivity, and endurance cycling), according to IBM. The experimental multi-bit PCM chip used by the IBM scientists is connected to a standard integrated circuit board.

Expect to see PCM first in Power Systems. At the 2016 OpenPOWER Summit in San Jose, CA, last month, IBM scientists demonstrated PCM attached to POWER8-based servers (made by IBM and TYAN® Computer Corp.) via the CAPI (Coherent Accelerator Processor Interface) protocol, which speeds data to storage or memory. This combination leverages the low latency and small access granularity of PCM, the efficiency of the OpenPOWER architecture, and the efficiency of the CAPI protocol, an example of the OpenPOWER Foundation in action. Pozidis suggested PCM could be ready by 2017; maybe, but don’t bet on it. IBM still needs to line up chip makers to produce it in commercial quantities, among other things.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

 

Mainframe Cloud Storage Attracts Renewed Interest at Share

March 4, 2016

Maybe it was Share 2016, which runs through today in San Antonio, that attracted both EMC and Oracle to introduce updated products that specifically target mainframe storage. Given that IBM has been struggling in the storage area, who would have guessed the newfound interest in mainframe storage? Or maybe these vendors sense a vulnerability.


Courtesy of EMC

EMC Corporation, for instance, announced new capabilities for its EMC VMAX and EMC Disk Library for mainframe storage products. With VMAX support for mainframe, in both the VMAX3 and the new VMAX All Flash products, mainframe shops can modernize, automate and consolidate disparate data center technologies within a simplified, high-performance data services platform. The additional capabilities of VMAX3 extend its automated performance tiering functionality to the mainframe.

The VMAX family, according to EMC, now offers twice the processing power in a third of the footprint for mainframe customers. Furthermore, in modernizing data protection for the mainframe, the company also announced what it refers to as the first-to-market scale-out automated snapshot solution for mainframe storage, called zDP (Data Protector for z Systems). It also announced updates to its EMC Disk Library for mainframe (DLm) technology that give two virtual tape systems the ability to read from, write to, and update the same data.

Not to be ignored at Share, Oracle announced its new StorageTek Virtual Storage Manager (VSM) 7 System, calling it the most secure and scalable data protection solution for mainframe and heterogeneous systems, with the additional capability to provide fully automated tiering directly to the public cloud. Specifically, Oracle reports the StorageTek VSM 7 System delivers 34x more capacity, significantly higher scalability (up to 256 StorageTek VSM 7 Systems), data deduplication, and native cloud tiering that gives mainframe and heterogeneous storage users the ability to access additional capacity on demand. Furthermore, Oracle’s StorageTek VSM 7 System has been architected to integrate with Oracle Storage Cloud Service (Object Storage) and Oracle Storage Cloud Service (Archive Service) to provide storage administrators with a built-in cloud strategy, making cloud storage as accessible as on-premises storage.

BTW, DancingDinosaur has not independently validated the specifications of either the new EMC or Oracle products. Links to their announcements are provided above should you want to perform further due diligence. Still, what we’re seeing here is that all enterprise data center systems vendors are sensing that with the growing embrace of cloud computing there is money to be made in modifying or augmenting their mainframe storage systems to accommodate cloud storage in a variety of ways. “Data center managers are starting to realize the storage potential of cloud, and the vendors are starting to connect the dots,” says Greg Schulz, principal of StorageIO.

Until recently cloud storage was not a first-tier option for mainframe shops, in large part because cloud computing didn’t support FICON and still doesn’t. “Mainframe data shops would have to piece together the cloud storage. Now, with so much intelligence built into the storage devices, the necessary smart gateways, controllers, and bridges can be built in,” noted Schulz. Mainframe storage managers can put their FICON data in the cloud without the cloud specifically supporting FICON. What makes this possible is that all these capabilities are abstracted, the same as with any software defined storage. Nobody on the mainframe side has to worry about anything; the vendors will take care of it through software or sometimes through firmware, either in the data center storage device or in the cloud gateway or controller.

Along with cloud storage comes all the other goodies of the latest, most advanced storage, namely automated tiering and fast flash storage. For a mainframe data center, the cloud can simply be just one more storage tier, cheaper in some cases, faster but maybe a bit pricier (flash storage) in others. And flash, in terms of IOPS price/performance, shouldn’t be significantly more expensive if storage managers are using it appropriately.

IBM initially staked out the mainframe storage space decades ago, first on premises and later in the cloud. StorageTek and EMC certainly are not newcomers to mainframe storage. DancingDinosaur expects to see similar announcements from HDS any day now.

It’s telling that both vendors above, EMC and Oracle, specifically cited mainframe storage although their announcements were primarily cloud focused. The strategy for mainframe storage managers at this point should be to leverage this rekindled interest in mainframe storage, especially mainframe storage in the cloud, to get the very best deals possible.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

IBM zSystem Continues Surge in 4Q15

January 22, 2016

DancingDinosaur follows technology, not financial investments, so you’d be an idiot if you considered what follows as investment advice. It is not. Still, as one who has built a chunk of his career around the mainframe, it is good to see the z System remaining in the black and beating the sexier Power lineup, although I do follow both closely. See the latest IBM financials here.


The IBM z13 System

Specifically, as IBM reported on Tuesday, revenues from z Systems mainframe server products increased 16 percent compared with the year-ago period (up 21 percent adjusting for currency). Total delivery of z Systems computing power, as measured in MIPS (millions of instructions per second), increased 28 percent.
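
For readers who don’t follow the “adjusting for currency” convention, here is an illustrative sketch of what the adjustment does. Only the 16 and 21 percent endpoints come from IBM; the revenue units and exchange rates below are invented purely to make the arithmetic visible.

```python
# Illustrative only: how ~21% growth in local currencies can show up as ~16% in dollars.
# The revenue units and exchange rates are invented; only the 16%/21% endpoints are IBM's.
rev_prior_local = 100.0        # prior-year revenue in local currencies (arbitrary units)
rev_now_local = 121.0          # this year's revenue in local currencies: +21% real growth

fx_prior = 1.00                # prior-year exchange rate to USD
fx_now = 0.959                 # roughly 4% weaker local currencies vs. the dollar (invented)

reported = (rev_now_local * fx_now) / (rev_prior_local * fx_prior) - 1
constant_currency = rev_now_local / rev_prior_local - 1

print(f"Reported (as-converted) growth: {reported:.1%}")           # ~16%
print(f"Constant currency growth:       {constant_currency:.1%}")  # 21%
```

Constant currency, in other words, strips out the exchange-rate headwind so the underlying business trend is visible.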

Almost as good, revenues from Power Systems were up 4 percent compared with the 2014 period (up 8 percent adjusting for currency). Power revenues have been up most of the year although they got a little blurry in the accounting.

In the storage market, which is getting battered by software defined storage (SDS) on one hand and cloud-based storage on the other, IBM reported revenues from System Storage decreased 11 percent (down 7 percent adjusting for currency). The storage revenues probably won’t bounce back fast, at least not without IBM bringing out radically new storage products. That storage rival EMC got acquired by Dell should be some kind of signal that the storage market, as the traditional enterprise players knew it, is drastically different. For now object storage, SDS, and even flash won’t replace the kind of revenue IBM used to see from DS8000 disk systems or TS enterprise tape libraries loaded with mechanical robotics.

Getting more prominence are IBM’s strategic imperatives, which have been a company priority all year. The strategic imperatives include cloud, mobile, analytics, security, IoT, and cognitive computing. Q4 revenues from these strategic imperatives (cloud, analytics, and engagement), as reported by IBM, increased 10 percent year-to-year (up 16 percent adjusting for currency). For the full year, revenues from strategic imperatives increased 17 percent (up 26 percent adjusting for currency and the divested System x business) to $28.9 billion and now represent 35 percent of total IBM consolidated revenue.

For the full year, total cloud revenues (public, private and hybrid) increased 43 percent (up 57 percent adjusting for currency and the divested System x business) to $10.2 billion.  Revenues for cloud delivered as a service — a subset of the total cloud revenue — increased 50 percent to $4.5 billion; and the annual as-a-service run rate increased to $5.3 billion from $3.5 billion in the fourth quarter of 2014.

Meanwhile, revenues from business analytics increased 7 percent (up 16 percent adjusting for currency) to $17.9 billion.  Revenues from mobile more than tripled and from security increased 5 percent (up 12 percent adjusting for currency).

Commenting on IBM’s latest financials was Timothy Prickett Morgan, who frequently writes on IBM’s platforms. Citing statements to analysts by Martin Schroeter, IBM’s chief financial officer, Morgan suggested that low profit margins, which other financial analysts complained about, put pressure on the System z13 product line that launched early in the year. After a fast start, apparently, the z13 is now experiencing a slowdown in the upgrade cycle. It’s at this point that DancingDinosaur usually expects to see a new z, typically a business class version of the latest mainframe, the z13 in this case, but that does not appear to be in the offing. About the closest IBM got to that was the Rockhopper model of the LinuxONE, a z optimized only for Linux, cloud, mobile, and analytics.

Morgan also noted that IBM added about 50 new mainframe customers for the year on an installed base of about 6,000 active customers. DancingDinosaur has been tracking that figure for years, and it has not fluctuated much. And DancingDinosaur is never sure how to count the handful of IT shops that run a z in the IBM cloud. But 5,000-6,000 active z shops still sounds about right.

Power Systems also has grown four quarters in a row and was up 8 percent at constant currency. This has to be a relief to the company, which has committed over $1 billion to Power. IBM attributes some of this growth to its enthusiastic embrace of Linux on POWER8, but Morgan complains of having no sense of how much of the Power Systems pie is driven by scale-out Linux machines intended to compete against Intel Xeon servers. Power also is starting to get a boost from the OpenPOWER Foundation, whose members started to ship products in the past few months. It’s probably minimal revenue now, but over time it should grow.

For those of us who are counting on z and Power to be around for a while longer, the latest financials should be encouraging.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

IBM Systems Sets 2016 Priorities

December 14, 2015

Despite its corporate struggles, IBM Systems, the organization that replaced the IBM Systems and Technology Group (STG), had a pretty good year in 2015. It started the year by launching the z13, which was optimized for the cloud and mobile economy. No surprise there. IBM made no secret that cloud, mobile, and analytics were its big priorities. Over the year it also added cognitive computing and software defined storage to its priorities.

But that list of priorities might leave out IBM’s biggest achievement of 2015. This week IBM announced receiving a major multi-year research grant for IBM scientists to advance the building blocks for a universal quantum computer. The award was made by the U.S. Intelligence Advanced Research Projects Activity (IARPA) program. This may not come to commercial fruition in our working lives, but it has the potential to change computing more radically than anything we have yet envisioned. And it certainly will put a different spin on worries about Moore’s Law.

Three Types of Quantum Computing

Right now, according to IBM, the workhorse of the quantum computer is the quantum bit (qubit). Many scientists are tackling the challenge of building qubits, but quantum information is extremely fragile and requires special techniques to preserve the quantum state. This fragility of qubits played a key part in one of the preposterous but exciting plots on the TV show Scorpion. The major hurdles include creating qubits of high quality and packaging them together in a scalable form so they can perform complex calculations in a controllable way – limiting the errors that can result from heat and electromagnetic radiation.

IBM scientists made a great stride in that direction earlier this year by demonstrating critical breakthroughs in detecting quantum errors by combining superconducting qubits in lattices on computer chips, a quantum circuit design IBM says is the only physical architecture that can scale to larger dimensions.

To return to a more mundane subject, revenue, during 2015 DancingDinosaur reported the positive contributions the z System made to IBM’s revenue, one of the company’s few positive revenue performers. Turned out DancingDinosaur missed one contributor since it doesn’t track constant currency. If you look at constant currency, which smooths out fluctuations in currency valuations, IBM Power Systems have been on an upswing for the last 3 quarters: up 1% in Q1, up 5% in Q2, up 2% in Q3.   DancingDinosaur expects both z and Power to contribute to IBM revenue in upcoming quarters.

Looking ahead to 2016, IBM identified the following priorities:

  • Develop an API ecosystem that monetizes big data and cognitive workloads, built on the cloud as part of becoming a better service provider.
  • Win the architectural battle with OpenPOWER and POWER8 – designed for data and the cognitive era. (Unspoken, beat x86.)
  • Extend z Systems for new mobile, cloud and in-line analytics workloads.
  • Capture new developers, markets and buyers with open innovation on IBM LinuxONE, the most advanced and trusted enterprise Linux system.
  • Shift the IBM storage portfolio to a Flash and the software defined model that disrupts the industry by enabling new workloads, very high speed, and data virtualization for improved data economics.
  • Engage clients through a digital-first Go-to-Market model

These are all well and good. About the only thing missing is any mention of the IBM Open Mainframe Project that was announced in August as a partnership with the Linux Foundation. DancingDinosaur is still hoping that will generate the kind of innovative products for the z that the OpenPOWER initiative has started to produce. DancingDinosaur covered that announcement here. Hope they haven’t given up already. Just have to remember to be patient; it took about a year to start getting tangible results from the OpenPOWER consortium.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

Expect this to be the final DancingDinosaur for 2015.  Be back the week of Jan. 4

POWER Systems for Cloud & Linux at IBM Edge2015

April 23, 2015

In October, IBM introduced a new range of POWER systems capable of handling massive amounts of computational data faster at nearly 20 percent better price/performance than comparable Intel Xeon v3 processor-based systems, delivering to clients a superior alternative to closed, commodity-based data center servers. DancingDinosaur covered it last October here. Expect this theme to play out big at IBM Edge2015 in Las Vegas, May 10-15. Just a sampling of a few of the many POWER sessions makes that clear:

Courtesy of Studio Stence, Power S824L

(lCV1655) Linux on Power and Linux on Intel: Side By Side, IT Economics Positioning; presenter Susan Proietti Conti

Based on real cases studied by the IBM Eagle team for many customers in different industries and geographies, this session explains where and when Linux on Power provides a competitive alternative to Linux on Intel. The session also highlights the IT economic value of architecture choices provided by the Linux/KVM/Power stack, based on open technologies brought by POWER8 and managed through OpenStack. DancingDinosaur periodically covers studies like these here and here.

(lCV1653) Power IT Economics Advantages for Cloud Service Providers and Private Cloud Deployment; presenter Susan Proietti Conti

Since the announcement of POWER8 and building momentum of the OpenPOWER consortium, there are new reasons for cloud service providers to look at Power technology to support their offerings. As an alternative open-based technology to traditional proprietary technologies, Power offers many competitive advantages that can be leveraged for cloud service providers to deliver IaaS services and other types of service delivery. This session illustrates what Power offers by highlighting client examples and the results of IT economics studies performed for different cloud service providers.

(lSY2653) Why POWER8 Is the Platform of Choice for Linux; presenter Gary Andrews

Linux is the platform of choice for running next generation workloads. With POWER8, IBM is investing heavily in Linux and is adding major enhancements to the Power platform to make it the server of choice for running Linux workloads. This session discusses the new features and how they can help you run your business faster and at lower cost on the Power platform. Andrews also points out many advanced features of Linux on Power that you can’t get with Linux on x86. He shows how competitive comparisons and performance tests demonstrate that POWER8 increases its lead over the latest x86 processor family. In short, attend this session to understand the competitive advantages that POWER8 on Linux can deliver compared to Linux on x86.

(pBA1244) POWER8: Built for Big Data; presenter William Starke

Starke explains how IBM technologies from semiconductors through micro-architecture, system design, system software, and database and analytic software culminate in the POWER8 family of products optimized around big data analytics workloads. He shows how the optimization across these technologies delivers order-of-magnitude improvements via several example scenarios.

 (pPE1350) Best Practices Guide to Get Maximum Performance from IBM POWER8; presenter Archana Ravindar

This session presents a set of best practices that have been tried and tested in various application domains to get the maximum performance from an application on a POWER8 processor. Performance improvement can be gained at various levels: the system level, where system parameters can be tuned; the application level, where some parameters can be tuned since there is no one-size-fits-all scenario; and the compiler level, where options for every kind of application have been shown to improve performance. Some options are unique to IBM and give an edge over the competition in gaming applications. In cases where applications are still under development, Ravindar presents guidelines to ensure the code runs fastest on Power.

DancingDinosaur supports strategies that enable data centers to reuse existing resources, like this one: (pCV2276) Developing a POWERful Cloud Strategy; presenter Susan Schreitmueller

Here you get to examine decision points for how and when to use an existing Power infrastructure in a cloud environment. This session covers on-premises and off-premises, single vs. multi-tenant hosting, and security concerns. You also review IaaS, PaaS, and hybrid cloud solutions incorporating existing assets into a cloud infrastructure. Discover provisioning techniques to go from months to days and then to hours for new instances.

One session DancingDinosaur hasn’t found yet is whether it is less costly for an enterprise to virtualize a couple of thousand Linux virtual machines on one of the new IBM Power servers pictured above or on the z13 as an Enterprise Linux server purchased under the System z Solution Edition Program. Hmm, will have to ask around about that. But either way you’d end up with very low cost VMs compared to x86.

Of course, save time for the free evening entertainment. In addition to Penn & Teller, a pair of magicians, and rocker Grace Potter, there will be a weird but terrific group, 2Cellos, as well.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing on Technologywriter.com and here. Please join DancingDinosaur at IBM Edge2015. You will find me hanging out wherever people gather around available power outlets to recharge mobile devices.

IBM’s z13 Redefines Mainframe Performance, Economics, and Versatility

January 14, 2015

With the introduction today of the new IBM z13, the latest rev of the 50-year-old mainframe product line, it will be hard for IT people to persist in the mistaken belief that the mainframe can’t handle today’s workloads or that it is too expensive. Built around an 8-core, 22nm processor, the IBM z13’s 141 configurable cores (any mix of CP, IFL, zIIP, ICF, SAP) deliver a 40% total capacity improvement over the zEC12.


The z13 looks like the zEC12 but under the hood it’s far more powerful

The IBM z13 will handle up to 8,000 virtual enterprise-grade Linux servers per system, more than 50 per core. Remember when Nationwide Insurance consolidated 3,000 x86 servers, mainly running Linux, on a System z and saved $15 million over three years, a figure later revised considerably higher? They got a lot of press out of that, including from DancingDinosaur as recently as last May. With the IBM z13, Nationwide could consolidate more than twice the number of Linux servers at a lower cost, and the resulting savings would be higher still.
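
A quick sanity check on those density claims, a sketch using only the figures quoted in this post:

```python
# Sanity check on the z13 Linux consolidation figures quoted above.
max_linux_vms = 8000          # IBM's stated ceiling per z13 system
configurable_cores = 141      # z13 configurable cores (any mix of CP, IFL, zIIP, ICF, SAP)
nationwide_servers = 3000     # x86 servers Nationwide consolidated on an earlier System z

print(f"VMs per core: {max_linux_vms / configurable_cores:.0f}")   # ~57, i.e. "more than 50"
print(f"Multiple of Nationwide's consolidation: {max_linux_vms / nationwide_servers:.1f}x")
```

At roughly 57 virtual servers per core, the 8,000-VM ceiling is about 2.7 times the size of Nationwide’s original consolidation, which is where the “more than twice” comes from.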

If you consider Linux VMs synonymous with cloud services, the new machine will enable superior cloud services at up to 32% lower cost than an x86-based cloud. It also will cost up to 60% less than a public cloud over three years. In almost every metric, the IBM z13 delivers more capacity or performance at lower cost.

IBM delivered an almost constant stream of innovations that work to optimize performance and reduce cost. For example, it boosted single-thread capacity by 10% over the zEC12. It also delivers 3x more memory to help both z/OS and Linux workloads. The added memory, combined with a new cache design, improved I/O bandwidth, and compression, will boost analytics on the machine. In fact, with the z13 you can do in-memory analytics if you want it.

The one thing it doesn’t do is boast the fastest commercial processor in terms of sheer speed. The zEC12 processor still is the fastest, but with all the optimizations and enhancements IBM has built in, the z13 should beat the zEC12 in handling the workloads organizations most want to run. For instance, the z13 performs 2x faster than the most common server processors, offers 300 percent more memory and 100 percent more bandwidth, and delivers vector processing analytics to speed mobile transactions. As a result, the z13 transaction engine is capable of analyzing transactions in real time.

Similarly, simultaneous multi-threading delivers more throughput for Linux and zIIP-eligible workloads while larger caches optimize data serving. It also improved on-chip hardware compression, which saves disk space and cuts data transfer time.  Also, there is new workload container pricing and new multiplex pricing, both of which again will save money.

In addition, IBM optimized this machine for both mobile and analytics, as well as for cloud. This is the new versatility of this redefined mainframe. Last year, IBM discounted the cost of mobile transactions on the z. The new machine continues to optimize for mobile with consolidated REST APIs for all z/OS transactions through z/OS Connect while seamlessly channeling z/OS transactions to mobile devices with the MobileFirst Platform. It also ensures end-to-end security from mobile device to mainframe with z/OS, RACF, and MobileFirst products.
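
To give a flavor of what consolidated REST APIs through z/OS Connect look like from the mobile side, here is a minimal sketch of a client calling a z/OS transaction exposed as a REST service. The host name, path, credentials, and JSON fields are hypothetical placeholders, not an actual z/OS Connect configuration; a real deployment defines its own service URIs and payloads, with RACF governing access behind the scenes.

```python
# Minimal sketch: invoking a z/OS transaction exposed as a REST service via z/OS Connect.
# The URL, credentials, and payload fields below are hypothetical placeholders.
import requests

ZOS_CONNECT_URL = "https://zosconnect.example.com:9443/claims/v1/submit"   # hypothetical URI

payload = {"policyNumber": "A1234567", "claimAmount": 250.00}              # hypothetical fields

resp = requests.post(
    ZOS_CONNECT_URL,
    json=payload,
    auth=("mobileapp", "secret"),   # placeholder credentials; real access is governed by RACF
    verify=True,                    # keep TLS verification on for end-to-end security
    timeout=10,
)
resp.raise_for_status()
print(resp.json())                  # JSON response mapped from the backing z/OS transaction
```

The point is that the mobile app speaks plain HTTPS and JSON, while z/OS Connect handles the mapping to the underlying z/OS transaction.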

For analytics, IBM continues to optimize Hadoop and expand the analytics portfolio on the z13. Specifically, the massive memory capability, up to 10TB, opens new opportunities for in-memory computing. The ability to perform analytics by combining data from different data sources and do it in-memory and in real-time within the platform drives more efficiencies, such as eliminating the need for ETL and the need to move data between platforms, as had previously often been the case. Now, just use Hadoop on z to explore data there within the secure zone of the mainframe. This opens a wide variety of analytics workloads, anything from fraud prevention to customer retention.

In addition to improved price/performance overall, IBM announced Technology Update Pricing for z13, including AWLC price reductions for z13 that deliver 5% price/performance on average in addition to performance gains in software exploitation of z13. DancingDinosaur will dig deeper into the new z13 software pricing in a subsequent post.

And the list of new and improved capabilities with the z13 just keeps going on and on. With security, IBM has accelerated the speed of encryption up to 2x over the zEC12 to help protect the privacy of data throughout its life cycle. It also extended enhanced public key support for constrained digital environments using Elliptic Curve Cryptography (ECC), which helps applications like Chrome, Firefox, and Apple’s iMessage. In addition, the z13 sports a few I/O enhancements, such as being the first system to use a standards-based approach for enabling Forward Error Correction in a complete end-to-end solution.

Finally, IBM has not abandoned hybrid computing, where you can mix a variety of blades, including x86 Windows blades and others in the zBX extension cabinet. With the z13 IBM introduced the new Mod 004 zBX cabinet, an upgrade from the previous Mod 002 and 003.

DancingDinosaur expects the introduction of the z13, along with structural organization changes, will drive System z quarterly financial performance back into the black as soon as deliveries roll. And if IBM stays consistent with past behavior, within a year or so you can expect a scaled-down, lower-cost business class version of the z13, although it may not be called business class. Stay tuned; it should be an exciting year.

DancingDinosaur is Alan Radding, a long-time IT analyst and writer. You can follow him on Twitter, @mainframeblog, or check out more of his writing and analysis at Technologywriter.com or here.

IBM POWER8 CAPI for Efficient Top Performance

August 21, 2014

IBM’s Power Systems POWER8 Coherent Accelerator Processor Interface (CAPI) is not for every IT shop running Power Systems. However, for those that aim to attach devices to their POWER8 systems over the PCIe interface and want fast, efficient performance, CAPI will be unbeatable. Steve Fields, IBM Distinguished Engineer and Director of Power Systems Design, introduces it here. Some of it gets pretty geeky, but slides #12-17 make the key points.

DancingDinosaur first covered CAPI here, in April, shortly after its introduction. At that point it looked like CAPI would be a game changer, and nothing since suggests otherwise. As we described it then, CAPI sits directly on the POWER8 board and works with the same memory addresses that the processor uses. Pointers dereference the same as in the host application. CAPI, in effect, removes OS and device driver overhead by presenting an efficient, robust, durable, and, most importantly, direct interface. In the process, it offloads complexity.

In short, CAPI provides:

  • An SMP coherence protocol transported over the PCI Express interface
  • Isolation and filtering through the support unit in the processor (“CAPP”)
  • Caching and address translation managed through the standard POWER Service Layer in the accelerator device
  • Accelerator Functional Units that operate as part of the application at the user (direct) level, just like a CPU

What you end up with is a coherent connected accelerator for just a fraction of the development effort otherwise required. As such, CAPI enables more efficient accelerator development. It can reduce the typical seven-step I/O model flow (1-Device Driver Call, 2-Copy or Pin Source Data, 3-MMIO Notify Accelerator, 4-Acceleration, 5-Poll/Int Completion, 6-Copy or Unpin Result Data, 7-Return From Device Driver Completion) to just three steps (1-shared memory/notify accelerator, 2-acceleration, and 3-shared memory completion). The result is an easier, more natural programming model with traditional thread-level programming and no need to restructure the application to accommodate long-latency I/O. Finally, it enables apps otherwise not possible, such as those requiring pointer chasing (e.g., Java garbage collection).
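
The difference between the two flows is easier to see side by side. The sketch below is conceptual pseudocode, not the real device-driver or CAPI programming interface; each stub simply stands in for one of the steps listed above.

```python
# Conceptual contrast of the seven-step I/O model and the three-step CAPI model.
# These are stubs, not real driver or CAPI calls; each one just records a step.
steps = []

def step(name):
    steps.append(name)

class Accelerator:
    def run(self, data):
        step("acceleration")
        return [x * 2 for x in data]            # stand-in for the accelerated function

def traditional_io(acc, src):
    step("device driver call")                  # 1
    buf = list(src)                             # 2: copy or pin source data
    step("copy/pin source data")
    step("MMIO notify accelerator")             # 3
    out = acc.run(buf)                          # 4
    step("poll/interrupt completion")           # 5
    step("copy/unpin result data")              # 6
    step("return from device driver")           # 7
    return out

def capi_io(acc, src):
    step("shared memory / notify accelerator")  # 1 (accelerator sees the app's addresses)
    out = acc.run(src)                          # 2
    step("shared memory completion")            # 3
    return out

for flow in (traditional_io, capi_io):
    steps.clear()
    flow(Accelerator(), [1, 2, 3])
    print(f"{flow.__name__}: {len(steps)} steps -> {steps}")
```

Either flow produces the same result; the point is how much of the traditional path is overhead that the CAPI model removes.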

Other advantages include an open ecosystem for accelerators built using Field Programmable Gate Arrays (FPGA). The number and size of FPGAs can be based on application requirements, and FPGAs can attach to other components, such as private DRAM, flash memory, or a high-speed network.

Driving the need for CAPI is the insatiable demand for performance.  For that, acceleration is required, which is complicated and resource-intensive to build. So IBM created CAPI, not just for pure compute but for any network-attached or storage-attached I/O. In the end it eliminates the overhead of the I/O subsystem, allowing the focus to be on the workload.

In one example IBM reported it was able to attach an IBM Flash appliance to POWER8 via the CAPI interface. As a result it could generate Read/Write commands from applications and eliminate 97% of code path length, a savings of 20-30 cores per 1M IOPS. In another test IBM reported being able to leverage CAPI to integrate flash into a server; the memory-like semantics allowed the flash to replace DRAM for many in-memory workloads. The result: 5x cost savings plus large density and energy improvements. Furthermore, by eliminating the I/O subsystem overhead from high IOPS flash access, it freed the CPU to focus on the application workload.

Finally, in a Monte Carlo simulation of 1 million iterations, a POWER8 core with FPGA and CAPI ran a full execution of the Heston pricing model for a single security 250x faster than the POWER8 core alone. It also proved easier to code, reducing the lines of C code to write by 40x compared to non-CAPI FPGA.
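
For context on the kind of workload being offloaded, here is a bare-bones Monte Carlo option-pricing loop. It uses a plain geometric Brownian motion model rather than the full Heston stochastic-volatility model IBM ran, and the parameters are invented, but the structure (a million independent simulated paths reduced to one price) is what the FPGA-plus-CAPI combination accelerates.

```python
# Bare-bones Monte Carlo pricer: 1 million simulated paths for a European call.
# Uses geometric Brownian motion, not the Heston model IBM cites; parameters invented.
import math
import random

S0, K, r, sigma, T = 100.0, 105.0, 0.02, 0.25, 1.0   # spot, strike, rate, vol, years
N = 1_000_000                                         # the "1 million iterations"

random.seed(42)
payoff_sum = 0.0
for _ in range(N):
    z = random.gauss(0.0, 1.0)                                    # one random draw per path
    ST = S0 * math.exp((r - 0.5 * sigma**2) * T + sigma * math.sqrt(T) * z)
    payoff_sum += max(ST - K, 0.0)                                # call option payoff

price = math.exp(-r * T) * payoff_sum / N
print(f"Estimated option price: {price:.4f}")
```

Each of the million paths is independent, which is exactly why this kind of workload maps so naturally onto an FPGA pipeline fed directly through CAPI.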

IBM is just getting started with CAPI. Coming up next will be CAPI working with Linux, mainly for use with analytics. Once Linux comes into the picture, expect more PCIe card vendors to deliver products that leverage CAPI. AIX too comes into the picture down the road.

Plan to attend IBM Enterprise2014 in Las Vegas, Oct. 6-19. Here is one intriguing CAPI presentation that will be there: Light up performance of your LAMP apps with a stack optimized for Power, by Alise Spence, Andi Gutmans, and Antonio Rosales. It will discuss how to leverage CAPI with POWER8 to create what they call a “killer stack” that brings together continuous delivery with exceptional performance at a competitive price. Other CAPI sessions also are in the works for Enterprise2014.

DancingDinosaur (Alan Radding) definitely is attending IBM Enterprise2014. You can follow DancingDinosaur on Twitter, @mainframeblog, or check out Technologywriter.com. Upcoming posts will look more closely at Enterprise2014 and explore some session content.

