Posts Tagged ‘IOPs’

IBM Revamps V5000

April 5, 2019

On April 2nd IBM announced several key enhancements across the Storwize V5000 portfolio, along with new models. These new models include the V5010E, V5030E, and V5100 (the E stands for Express). To further complicate the story, the new hardware utilizes Broadwell, Intel’s 14 nanometer die shrink of its Haswell microarchitecture. Broadwell did not completely replace the full range of CPUs from Intel’s previous Haswell microarchitecture, but IBM is using it widely in the new V5000 models.

IBM NVMe Flash Core Module

And the results can be impressive. From a scale-out perspective the V5010E supports a single controller configuration, while the V5030E and V5100 both support up to two-controller clusters. This provides for a maximum of 392 drives in the V5010E and a massive 1520 drives in either the V5030E or V5100 dual-controller clusters. The V5030E includes a Broadwell DE 1.9GHz six-core processor in each of its two canisters. Each canister supports a maximum of 32GB of RAM. Better still, the V5100 boasts a single Skylake 1.7GHz eight-core processor in each canister. RAM is increased to a total of 576GB for the entire controller, or 288GB maximum per canister.

For the next generation of Storwize V5000 platforms, IBM is encouraging the name Gen3. Gen3 encompasses 8 new MTMs (Machine Type Models) based on 3 hardware models: the V5010E, V5030E, and V5100. The V5100 comes in two models, a hybrid (HDD and flash) and the all-flash V5100F. Each of these 4 types is available with a 1-year or 3-year warranty.

The V5000E models are based on the Gen2 hardware, with various enhancements, including more memory options on the V5010E. The V5100 models are all new hardware and bring the same NVMe Flash Core Modules (FCM) that are available on the V7000 and FlashSystem 9100 products, completing the Storwize family’s transition to all-NVMe arrays. If you haven’t seen or heard about IBM’s FCM technology, introduced last year to optimize NVMe: FCMs are a family of high-performance flash drives that utilize the NVMe protocol, a PCIe Gen3 interface, and high-speed NAND memory to provide high throughput and IOPS at very low latency. FCMs are available in 4.8TB, 9.6TB, and 19.2TB capacities. Hardware-based data compression and self-encryption are built in.

The all-flash (F) variants of the V5000 can also attach SAS expansion enclosures to extend capacity using SAS-based flash drives, allowing expansion up to 1520 drives. Those drives, however, are not interchangeable with the new FCM drives. The E variants allow attachment of SAS 2.5” and 3.5” HDDs, with the V5010E expandable to 392 drives and the others up to 1520.
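To put those drive maximums in perspective, here is a quick back-of-envelope calculation. The arithmetic is DancingDinosaur’s, not IBM’s; a real configuration depends on the mix of NVMe and SAS enclosures plus RAID and spare overhead, so treat the result as a rough ceiling.

```python
# Rough upper bound on raw capacity for a dual-controller V5100 cluster,
# assuming (hypothetically) every one of the 1520 supported drive slots
# held the largest 19.2TB FCM. Real limits depend on enclosure mix,
# RAID, and spares.
max_drives = 1520          # dual-controller V5100 cluster maximum
largest_drive_tb = 19.2    # biggest FCM capacity announced

raw_tb = max_drives * largest_drive_tb
print(f"~{raw_tb:,.0f} TB raw (~{raw_tb / 1000:.1f} PB)")  # ~29,184 TB, ~29.2 PB
```

And that is before the FCMs’ built-in hardware compression, which stretches effective capacity further.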

Inbuilt host attachments come in the form of 10GbE ports for iSCSI workloads, with optional 16Gb Fibre Channel (SCSI or FC-NVMe) as well as additional 10GbE or 25GbE iSCSI. The V5100 models can also use the iSER protocol (an iSCSI translation layer for operation over RDMA transports, such as InfiniBand) over the 25GbE ports for clustering capability, with plans to support NVMe over Fabrics on Ethernet. In terms of cache memory, the V5000E products are expandable up to 64GB per controller (I/O group) and the V5100 can support up to 576GB per controller. Similarly, IBM issued a statement of direction covering all 25GbE port types across the entire Spectrum Virtualize family of products.

As Lloyd Dean, IBM Senior Certified Executive IT Architect, noted, the new lineup for the V5000 is impressive; the quantity of drives and the storage available per model will “blow your mind”. How mind blowing will depend, of course, on your configuration and IBM’s pricing. As usual, IBM talks about affordability, comparative cost, and storage efficiency but rarely states a price. It did once, 3 years ago: list price then for the V5010 was $9,250 including hardware, software, and a one-year warranty, according to a published report. Today IBM will likely steer you to cloud pricing, which may or may not be a bargain depending on how the deal is structured and priced. With the cloud, everything is in the details.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at technologywriter.com.

Can SDS and Flash Resurrect IBM Storage?

November 4, 2016

As part of IBM’s ongoing string of quarterly losses, storage has consistently contributed to the red ink, but the company is betting on cloud storage, an all-flash strategy, and software defined storage (SDS) to turn things around. Any turn-around, however, is closely tied to the success of IBM’s strategic imperatives, which have emerged as bright spots amid the continuing quarterly losses, especially cloud, analytics, and cognitive computing.


Climate study needs large amounts of fast data access

As a result, IBM needs to respond to two challenges created by its customers: 1) changes like the increased adoption of cloud, analytics, and most recently cognitive computing, and 2) the need by customers to reduce the cost of their IT infrastructure. The problem as IBM sees it is this: How do I simultaneously optimize the traditional application infrastructure and free up money to invest in a new generation application infrastructure, especially if I expect to move forward into the cognitive era at some point? IBM’s answer is to invest in flash and SDS.

A few years ago DancingDinosaur was skeptical, for example, that flash deployment would lower storage costs except in situations where low cost IOPS were critical. Today, between the falling cost of flash and new ways to deploy increasingly cheaper flash, DancingDinosaur believes flash storage can save IT real money.

According to Evaluator Group research cited by IBM, flash and hybrid cloud technologies are dramatically changing the way companies deploy storage and design applications. As new applications are created, often for mobile or distributed access, the ability to store data in the right place, on the right media, and with the right access capability will become even more important.

In response, companies are adding cloud to lower costs, flash to increase performance, and SDS to add flexibility. IBM is integrating these capabilities together with security and data management for faster return on investment.  Completing the IBM pitch, the company offers choice among on-premise storage, SDS, or storage as a cloud service.

In an announcement earlier this week IBM introduced six products and a joint initiative:

  • IBM Spectrum Virtualize 7.8 with transparent cloud tiering
  • IBM Spectrum Scale 4.2.2 with cloud data sharing
  • IBM Spectrum Virtualize family flash enhancements
  • IBM Storwize family upgrades
  • IBM DS8880 High Performance Flash Enclosure Gen2
  • IBM DeepFlash Elastic Storage Server
  • VersaStack—a joint IBM-Cisco initiative

In short, these announcements address hybrid cloud enablement as a standard feature for new and existing users of Spectrum Virtualize, plus data sharing to the cloud through Spectrum Scale, which can sync file and object data across on-premises and cloud storage to connect cloud-native applications. In addition, high-density, highly scalable all-flash storage now sports a new high-density expansion enclosure that includes new 7TB and 15TB flash drives.

IBM Storwize, too, is included, now able to grow up to 8x larger than previously without disruption. That means up to 32PB of flash storage in only four racks to meet the needs of fast-growing cloud workloads in space-constrained data centers. Similarly, IBM’s new DeepFlash Elastic Storage Server (ESS) offers up to 8x better performance than HDD-based solutions for big data and analytics workloads. Built with IBM Spectrum Scale, ESS includes virtually unlimited scaling, enterprise security features, and unified file, object, and HDFS support.
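A quick sanity check on that 32PB-in-four-racks claim, assuming the new 15TB drives mentioned above; the drives-per-rack figure below is derived by DancingDinosaur, not quoted by IBM.

```python
# Plausibility check: how many 15TB flash drives does 32PB in 4 racks imply?
total_pb, racks, drive_tb = 32, 4, 15

drives = total_pb * 1000 / drive_tb        # PB -> TB, then divide by drive size
print(f"~{drives:.0f} drives total, ~{drives / racks:.0f} drives per rack")
# ~2133 drives total, ~533 drives per rack
```

Roughly 533 drives per rack is dense, but believable for a high-density expansion enclosure.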

The z can play in this party too. IBM’s DS8888 now delivers 2x better performance and 3x more efficient use of rack space for mission-critical applications such as credit card and banking transactions as well as airline reservations running on IBM’s z System or IBM Power Systems. DancingDinosaur first reported on the all flash z, the DS8888, when it was introduced last May.

Finally hybrid cloud enablement for existing and new on-premises storage enhancements through IBM Spectrum Virtualize, which brings hybrid cloud capabilities for block storage to the Storwize family, FlashSystem V9000, SVC, and VersaStack, the IBM-Cisco collaboration.

Behind every SDS deployment lies some actual physical storage of some type. Many opt for generic, low cost white box storage to save money. As part of IBM’s latest SDS offerings you can choose among nearly 400 storage systems from IBM and others. DancingDinosaur doubts any of those others are white box products, but at least they give you some non-IBM options to potentially lower your storage costs.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghostwriter. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

Mainframe Cloud Storage Attracts Renewed Interest at Share

March 4, 2016

Maybe it was Share 2016, which runs through today in San Antonio, that attracted both EMC and Oracle to introduce updated products specifically targeting mainframe storage. Given that IBM has been struggling in the storage area, who would have guessed at this newfound interest in mainframe storage? Or maybe these vendors sense a vulnerability.


Courtesy of EMC

EMC Corporation, for instance, announced new capabilities for its EMC VMAX and EMC Disk Library for mainframe storage products. With VMAX support for mainframe, in both the VMAX3 and the new VMAX All Flash products, mainframe shops can modernize, automate and consolidate disparate data center technologies within a simplified, high-performance data services platform. The additional capabilities of VMAX3 extend its automated performance tiering functionality to the mainframe.

The VMAX family, according to EMC, now offers twice the processing power in a third of the footprint for mainframe customers. Furthermore, in modernizing data protection for the mainframe, the company also announced what it refers to as the first-to-market scale-out automated snapshot solution for mainframe storage, called zDP (Data Protector for z Systems). It also announced updates to its EMC Disk Library for mainframe (DLm) technology that give two virtual tape systems the ability to read from, write to, and update the same data.

Not to be ignored at Share, Oracle announced its new StorageTek Virtual Storage Manager (VSM) 7 System, calling it the most secure and scalable data protection solution for mainframe and heterogeneous systems, with the additional capability to provide fully automated tiering directly to the public cloud. Specifically, Oracle reports the StorageTek VSM 7 System delivers 34x more capacity, significantly higher scalability to 256 StorageTek VSM 7 Systems, data deduplication, and native cloud tiering that provides mainframe and heterogeneous storage users the ability to access additional capacity on demand. Furthermore, Oracle’s StorageTek VSM 7 System has been architected to integrate with Oracle Storage Cloud Service – Object Storage and Oracle Storage Cloud Service – Archive Service to provide storage administrators with a built-in cloud strategy, making cloud storage as accessible as on-premises storage.

BTW, DancingDinosaur has not independently validated the specifications of either the new EMC or Oracle products. Links to their announcements are provided above should you want to perform further due diligence. Still, what we’re seeing here is that all enterprise data center systems vendors are sensing that with the growing embrace of cloud computing there is money to be made in modifying or augmenting their mainframe storage systems to accommodate cloud storage in a variety of ways. “Data center managers are starting to realize the storage potential of cloud, and the vendors are starting to connect the dots,” says Greg Schulz, principal of StorageIO.

Until recently cloud storage was not a first-tier option for mainframe shops, in large part because cloud computing didn’t support FICON and still doesn’t. “Mainframe data shops would have to piece together the cloud storage. Now, with so much intelligence built into the storage devices the necessary smart gateways, controllers, and bridges can be built in,” noted Schulz. Mainframe storage managers can put their FICON data in the cloud without the cloud specifically supporting FICON. What makes this possible is that all these capabilities are abstracted, same as any software defined storage. Nobody on the mainframe side has to worry about anything; the vendors will take care of it through software or sometimes through firmware, either in the data center storage device or in the cloud gateway or controller.

Along with cloud storage comes all the other goodies of the latest, most advanced storage, namely automated tiering and fast flash storage. For a mainframe data center, the cloud can simply be just one more storage tier, cheaper in some cases, faster but maybe a bit pricier (flash storage) in others. And flash, in terms of IOPS price/performance, shouldn’t be significantly more expensive if storage managers are using it appropriately.

IBM initially staked out the mainframe storage space decades ago, first on premises and later in the cloud. StorageTek and EMC certainly are not newcomers to mainframe storage. DancingDinosaur expects to see similar announcements from HDS any day now.

It’s telling that both vendors above, EMC and Oracle, specifically cited mainframe storage although their announcements were primarily cloud focused. The strategy for mainframe storage managers at this point should be to leverage this rekindled interest in mainframe storage, especially mainframe storage in the cloud, to get the very best deals possible.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

IBM POWER8 CAPI for Efficient Top Performance

August 21, 2014

IBM’s Power Systems POWER8 Coherent Accelerator Processor Interface (CAPI) is not for every IT shop running Power Systems. However, for those that aim to attach devices to their POWER8 systems over the PCIe interface and want fast, efficient performance, CAPI will be unbeatable. Steve Fields, IBM Distinguished Engineer and Director of Power Systems Design, introduces it here. Some of it gets pretty geeky but slides #12-17 make the key points.

DancingDinosaur first covered CAPI here, in April, shortly after its introduction. At that point it looked like CAPI would be a game changer and nothing since suggests otherwise. As we described it then, CAPI sits directly on the POWER8 board and works with the same memory addresses that the processor uses. Pointers dereference the same as in the host application. CAPI, in effect, removes OS and device driver overhead by presenting an efficient, robust, durable and, most importantly, direct interface. In the process, it offloads complexity.

In short, CAPI provides:

  • SMP coherence protocol transported over the PCI Express interface
  • Isolation and filtering through a support unit in the processor (the CAPP)
  • Caching and address translation through the standard POWER Service Layer in the accelerator device
  • Accelerator functional units that operate as part of the application at the user (direct) level, just like a CPU

What you end up with is a coherent connected accelerator for just a fraction of the development effort otherwise required. As such, CAPI enables more efficient accelerator development. It can reduce the typical seven-step I/O model flow (1-Device Driver Call, 2-Copy or Pin Source Data, 3-MMIO Notify Accelerator, 4-Acceleration, 5-Poll/Int Completion, 6-Copy or Unpin Result Data, 7-Return From Device Driver Completion) to just three steps (1-shared memory/notify accelerator, 2-acceleration, and 3-shared memory completion). The result is an easier, more natural programming model with traditional thread-level programming and no need to restructure the application to accommodate long latency I/O. Finally, it enables apps otherwise not possible, such as those requiring pointer chasing (e.g., Java garbage collection).
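To make the contrast concrete, here is a minimal sketch of the two flows. Every function here is a hypothetical no-op stand-in, not part of IBM’s actual CAPI libraries or any real driver API; the point is simply the step count on each path.

```python
# Illustrative only: the seven-step conventional I/O flow vs. the
# three-step CAPI flow described above. All functions are stand-ins.

def step(name):
    print(name)  # stand-in for real work; just records the step

def traditional_io():
    step("1. device driver call")
    step("2. copy or pin source data")
    step("3. MMIO notify accelerator")
    step("4. acceleration")
    step("5. poll/interrupt on completion")
    step("6. copy or unpin result data")
    step("7. return from device driver")

def capi_io():
    step("1. shared memory / notify accelerator")  # accelerator uses host addresses
    step("2. acceleration")
    step("3. completion lands in shared memory")

traditional_io()
capi_io()
```

Four of the seven conventional steps exist only to shuttle data and control between address spaces; CAPI’s shared, coherent address space makes them unnecessary.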

Other advantages include an open ecosystem for accelerators built using Field Programmable Gate Arrays (FPGA). The number and size of FPGAs can be based on application requirements, and FPGAs can attach to other components, such as private DRAM, flash memory, or a high-speed network.

Driving the need for CAPI is the insatiable demand for performance.  For that, acceleration is required, which is complicated and resource-intensive to build. So IBM created CAPI, not just for pure compute but for any network-attached or storage-attached I/O. In the end it eliminates the overhead of the I/O subsystem, allowing the focus to be on the workload.

In one example IBM reported it was able to attach an IBM Flash appliance to POWER8 via the CAPI interface. As a result it could generate Read/Write commands from applications and eliminate 97% of code path length, a savings of 20-30 cores per 1M IOPS. In another test IBM reported being able to leverage CAPI to integrate flash into a server; the memory-like semantics allowed the flash to replace DRAM for many in-memory workloads. The result: 5x cost savings plus large density and energy improvements. Furthermore, by eliminating the I/O subsystem overhead from high IOPS flash access, it freed the CPU to focus on the application workload.
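Those core savings pass a quick smell test. The clock rate and per-I/O cycle cost below are DancingDinosaur’s assumptions for illustration, not IBM-published figures.

```python
# Back-of-envelope check on "20-30 cores saved per 1M IOPS" when 97% of
# the I/O code path is eliminated. Both constants are assumptions.
clock_hz = 4.0e9           # assumed POWER8-class core clock
cycles_per_io = 100_000    # assumed cost of the conventional I/O code path
iops = 1_000_000

cores_full = iops * cycles_per_io / clock_hz   # ~25 cores for the full path
cores_capi = cores_full * (1 - 0.97)           # ~0.75 cores after CAPI
print(f"conventional: ~{cores_full:.0f} cores, CAPI: ~{cores_capi:.2f} cores")
```

At those assumptions the savings come to roughly 24 cores per million IOPS, squarely within IBM’s 20-30 core claim.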

Finally, in a Monte Carlo simulation of 1 million iterations, a POWER8 core with FPGA and CAPI ran a full execution of the Heston pricing model for a single security 250x faster than the POWER8 core alone. It also proved easier to code, reducing the lines of C code to write by 40x compared to non-CAPI FPGA.

IBM is just getting started with CAPI. Coming up next will be CAPI working with Linux, mainly for use with analytics. Once Linux comes into the picture, expect more PCIe card vendors to deliver products that leverage CAPI. AIX too comes into the picture down the road.

Plan to attend IBM Enterprise2014 in Las Vegas, Oct. 6-10. Here is one intriguing CAPI presentation that will be there: Light up performance of your LAMP apps with a stack optimized for Power, by Alise Spence, Andi Gutmans, and Antonio Rosales. It will discuss how to leverage CAPI with POWER8 to create what they call a “killer stack” that brings together continuous delivery with exceptional performance at a competitive price. Other CAPI sessions also are in the works for Enterprise2014.

DancingDinosaur (Alan Radding) definitely is attending IBM Enterprise2014. You can follow DancingDinosaur on Twitter, @mainframeblog, or check out Technologywriter.com. Upcoming posts will look more closely at Enterprise2014 and explore some session content.

Flash Economics and Implementation Take Front Stage at Edge2014

April 4, 2014

The Edge2014 Guide to the Technical Sessions is now online and accessible to the public, here. There are more must-see sessions than any human can attend in the few days of the conference and still have time for the Sheryl Crow concert. If you haven’t registered for Edge2014 in Las Vegas, May 19-23 at the Venetian, just do it here.

So what’s in the guide? Descriptions of 450+ technical sessions, according to IBM. Over the next few weeks DancingDinosaur will look at a few of the session tracks. Let’s start this week with flash, a technology that keeps getting better and cheaper and more useful in more and more ways.

Begin with the economics of flash. Initially flash was considered very expensive, and it was if you considered it only on a cost/gigabyte basis and compared it to hard disk drives. Since then, flash costs have dropped but, more importantly, organizations are using it in ways where cost/gigabyte isn’t the relevant metric. Instead, there are new ways to understand flash. Let’s look at five flash sessions coming to Edge2014.

The New Cost Metrics of Implementing Flash to Save Money

Presenter: Matt Key—Flash storage can be cheaper to implement than disk storage. This session explores the reasons and cost justification for implementing flash vs. disk without focusing on low cost/IOPS, which was the initial justification for supposedly costly flash. The session also examines the boundaries where other technologies such as RAM, disk, and tape are still a better fit.

After you have learned the metrics to justify an investment in flash here are a couple of sessions that will show you how to best take advantage of it.

Where to Use Flash in the Data Center

Presenters: Woody Hutsell, Chris Breaux—they will review data center economics and then explore the main reasons to use flash in the data center. For example, flash is best used to accelerate applications and infrastructure, reduce cost through lower space, power, and cooling requirements, and create new business opportunities, mainly through its speed and efficiency. Any workload that can benefit from cheap IOPS is another place to use flash.

Common IBM FlashSystem Implementation Strategies

Presenter: Erik Eyberg—covers similar ground but focuses on the variety of ways flash is being deployed: primary data storage, tiering, mirroring, and many others. Specifically, the session will cover three common FlashSystem deployment strategies for both tactical and strategic flash deployments, plus a few customer stories illustrating their effectiveness.

The next sessions described below don’t fit easy categorization, but they are intriguing nonetheless.

A Business Overview of Software Defined Flash

Presenter: David Gimpl—takes on this newly emerging flash topic, software defined storage (SDS) as applied to all flash storage arrays. In these cases, flash creates a new entity Gimpl refers to as software defined flash. Here he’ll describe the properties of the low latency, high IOPS flash medium coupled with the feature-rich advanced capabilities that provide Tier 1 storage for your business. This session should be cutting edge.

DancingDinosaur has long been a fan of VDI but except for a handful of specialized use cases it hasn’t gained widespread adoption.  Something was missing. The System z should be especially good at VDI workloads, given its ability to support tens of thousands of virtual desktops. Maybe flash will provide the missing ingredient.

Simplifying the desktop virtualization data problem with IBM FlashSystem

Presenter: Rawley Burbridge—IBM offers a wide range of complete solutions for deploying desktop virtualization environments, but data storage is still often a costly and complex component to configure and deploy. The macro-efficient, high-performance data storage offered by the IBM FlashSystem portfolio helps to simplify the often complex storage requirements for VDI environments and reduce data costs to less than those of a physical PC. This session will explore the methods and benefits of utilizing IBM FlashSystem for your desktop virtualization deployments.

So here are five interesting sessions from over 30 in just the flash category alone. Plan to register for Edge2014. You will learn things that should more than pay for your trip and have a good time in the process. And don’t forget the Sheryl Crow concert.

Next week is the kickoff of Mainframe50, the start of the 50th anniversary celebration of the mainframe. The event itself is sold out but you needn’t be left out; it is being streamed live on Livestream, so you can attend from wherever you are.

Follow DancingDinosaur on Twitter, @mainframeblog.  Will be tweeting from the Mainframe50 event and others.

Expect Flash to be Big at Edge 2014

March 26, 2014

You can almost hear the tom-toms thumping as IBM picks up the beat for flash storage and its FlashSystem, and for good reason. Almost everything companies want to do these days requires fast, efficient storage. Everything waits for data—applications, servers, algorithms, virtually any IT resource. And fast data, of course, depends on the responsiveness of the storage. Flash’s time has arrived.

To get the responsiveness they needed, companies previously loaded up on conventional DASD, spinning disks that top out at 15K RPM, or cheaper DASD at 5400 RPM. To coax sufficient I/O per second (IOPS) out of them, they ganged together massive amounts of DASD just to get more parallel spindles to compensate for each drive’s low IOPS. Sure the disks were cheap, but the cost per IOPS was still sky high, especially considering all the overhead and inefficiency they had to absorb.
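Run the numbers and the spindle-ganging problem jumps out. The per-drive IOPS figures below are common rules of thumb, not vendor specs, and the target is arbitrary.

```python
# How many spindles does it take to reach a given IOPS target?
# Per-drive figures are rough rules of thumb for small random I/O.
per_drive_iops = {"15K RPM": 180, "5400 RPM": 60}
target_iops = 100_000

for drive, iops in per_drive_iops.items():
    spindles = -(-target_iops // iops)   # ceiling division
    print(f"{drive}: ~{spindles} spindles to reach {target_iops:,} IOPS")
# 15K RPM: ~556 spindles; 5400 RPM: ~1667 spindles
```

Hundreds of drives just for the IOPS, regardless of how much capacity you actually need.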

But in this era of big data analytics, where an organization’s very competitiveness depends on absorbing massive amounts of data fast, that old approach doesn’t work anymore. You can’t aggregate enough spindles to handle the huge amounts of machine-generated, sensor, or meter data, not to mention data created by millions, possibly even billions, of people on Facebook, Twitter, and everywhere else, to keep up with the data flow. You can’t possibly come up with meaningful results fast enough to be effective. Opportunities will fly past you.

Furthermore, traditional high performance storage comes at a high price, not just in the acquisition cost of large volumes of spinning disk but also in the inefficiency of its deployment. Sure, the cost per gigabyte may be low, but aggregating spindles by the ton while leaving large chunks of capacity unused will quickly offset any gains from a low cost per gigabyte. In short, traditional storage, especially high performance storage, imposes economic limits on the usefulness and scalability of many analytics environments.

Since data access depends on the response of storage, flash has emerged as the way to achieve high IOPS at a low cost, and with the cost of flash storage dropping steadily it will only become a better deal going forward. Expect to hear a lot about IBM FlashSystem storage at Edge 2014. As IBM points out, it can eliminate wait times and accelerate critical applications for faster decision making, which translates into faster time to results.

Specifically, IBM reports its FlashSystem delivers:

  • 45x performance improvement with 10x more durability
  • 115x better energy efficiency with 315x superior density
  • 19x more efficient $/IOPS.

Here’s how: according to IBM, both the initial acquisition costs and the ongoing operational costs of FlashSystem storage, such as staffing and environmental costs, can be lower than those of both performance-optimized hard drive solutions and emerging hybrid or all-flash solutions. In short, IBM FlashSystem delivers the three key attributes data analytics workloads require: compelling data economics, enterprise resiliency, and easy infrastructure integration, along with high performance.

As proof, IBM cites a German transport services company that deployed FlashSystem storage to support a critical SAP e-business analytics infrastructure and realized a 50% TCO reduction versus competing solutions.

On top of that, IBM reports FlashSystem storage unlocks additional value from many analytics environments by turbo-charging response times with its MicroLatency technology, effectively multiplying the amount of data that can be analyzed. MicroLatency enables a streamlined high performance data path to accelerate critical applications. The resulting faster response times can yield more business agility and quicker time to value from analytics.

In fact, recent IBM research has found that IBM InfoSphere Identity Insight entity analytics processes can be accelerated by over 6x using FlashSystem storage instead of traditional disk. More data analyzed at once means more potential value streams.

Data has long been considered a valuable asset. For some, data has become the most important commodity of all. The infrastructure supporting the analytics environment that converts data as a commodity into valuable business insights must be designed for maximum resiliency. FlashSystem brings a set of data protection features that can help enhance reliability, availability, and serviceability while minimizing the impact of failures and downtime due to maintenance. In short, it protects what for many organizations is their most valuable asset: their data.

DancingDinosaur is looking forward to attending Edge 2014 sessions that will drill down into the specifics of how IBM FlashSystem storage works under the covers. It is being held May 19-23 in Las Vegas, at the Venetian. Register now and get a discount. And as much as DancingDinosaur is eager to delve into the details of FlashSystem storage, the Sheryl Crow concert is very appealing too. When not in sessions or at the concert look for DancingDinosaur in the bloggers lounge. Please join me.

Follow DancingDinosaur on Twitter: @mainframeblog

IBM FlashSystem Remakes Data Center Economics

April 12, 2013

Yesterday IBM announced the IBM FlashSystem to drive flash technology further into the enterprise. The IBM FlashSystem is a line of all-flash storage appliances based on technology IBM acquired from Texas Memory Systems.

Flash can shorten the response of servers and storage systems to data requests from milliseconds to microseconds, a three-orders-of-magnitude improvement. And because it is all electronic, nothing mechanical involved, and delivered cost-efficiently at even petabyte scale, it can remake data center economics, especially for transaction-intensive and IOPS-intensive situations.

For example, the IBM FlashSystem 820 is the size of a pizza box but 20x faster than spinning hard drives and can store up to 24 TB of data.  At the high end, you can assemble a 1 PB FlashSystem that fits in one rack and delivers 22 million IOs per second (IOPS). IBM calculates you would need 630 racks of high capacity hard disk drives or 315 racks of performance optimized disk to generate an equal amount of IOPS.
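Working backwards from IBM’s own rack counts shows what the comparison implies per rack. The drives-per-rack figure below is DancingDinosaur’s assumption for illustration; IBM’s announcement gave only the rack counts.

```python
# What do 630 racks of high-capacity HDD or 315 racks of performance
# disk imply, if together each option must match 22M IOPS of flash?
flash_iops = 22_000_000
assumed_drives_per_rack = 250   # assumption, not an IBM figure

for label, racks in [("high-capacity HDD", 630), ("performance disk", 315)]:
    per_rack = flash_iops / racks
    per_drive = per_rack / assumed_drives_per_rack
    print(f"{label}: ~{per_rack:,.0f} IOPS/rack, ~{per_drive:.0f} IOPS/drive")
# high-capacity HDD: ~34,921 IOPS/rack (~140 IOPS/drive)
# performance disk:  ~69,841 IOPS/rack (~279 IOPS/drive)
```

Those per-drive figures are in the right ballpark for 7.2K RPM and short-stroked 15K RPM drives, which is why the rack counts, startling as they look, are credible.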

Mainframe shops already are familiar with flash, mainly in the form of cache and SSD. The zEnterprise makes extensive use of cache to boost performance and to ensure reliability and availability. The DS8000 storage line has been SSD-capable for several years. The IBM System Storage DS8870, for example, comes equipped with IBM POWER7-based controllers. In a tiered storage environment it can automatically optimize the use of each storage tier, particularly SSD and now flash, through the free IBM Easy Tier capability.

The IBM FlashSystem changes data center economics. One cloud provider reported deploying 5TB in 3.5 inches of rack space, compared to deploying 1300 hard disks to achieve 400K IOPS, and it did so at one-tenth the cost. Overall, Wikibon reports an all-flash approach will lower total system costs by 30%; that’s $4.9 million for all flash compared to $7.1 million for hard disk. Specifically, it reduced software license costs 38%, required 17% fewer servers, and lowered environmental costs by 74% and operational support costs by 35%. At the same time it boosted storage utilization by 50% while reducing maintenance and simplifying management, with corresponding labor savings.
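The headline Wikibon number checks out as simple arithmetic; only the two dollar totals come from the report, and the percentage is derived from them.

```python
# Wikibon's all-flash vs. all-HDD total system cost comparison, restated.
all_flash, all_hdd = 4.9e6, 7.1e6

savings = 1 - all_flash / all_hdd
print(f"${all_flash/1e6:.1f}M vs ${all_hdd/1e6:.1f}M -> {savings:.0%} lower")
# $4.9M vs $7.1M -> 31% lower, in line with the ~30% claim
```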

For data center managers, this runs counter to everything they learned about the cost of storage. Traditional storage economics starts with the cost of hard disk storage being substantially less than the cost of SSD or Flash on a $/GB basis. Organizations could justify SSD, however, by using it in small amounts to tap its sizeable cost/IOPS advantage for IOPS-intensive workloads.

IBM reversed traditional storage economics with the new FlashSystem storage by adopting a different approach to understanding the storage investment. Forget about cost/GB; even forget about cost/IOPS. Instead, focus on a systems perspective by considering all the costs involved in the total solution, from energy consumption to hard disk failure to labor to the cost of server software licensing. Then factor in the economic benefits of handling more transactions faster, more responsive systems, faster analytics, and more.

As reported in PC World, Steve Mills, IBM Senior Vice President, put it this way at the introduction: Right now, generic hard drives cost about $2 per gigabyte. An enterprise hard drive will cost about $4 per gigabyte, and a high-performance hard drive will run about $6 per gigabyte. If an organization stripes its data across more disks for better performance, the cost goes up to about $10 per gigabyte. In some cases, where performance is critical, hard-drive costs can skyrocket to $30 or $50 per gigabyte. A solid state disk from IBM runs about $10 per gigabyte and can be filled to capacity, so SSDs actually are less expensive in many cases.
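Mills’ argument is really about cost per usable gigabyte, not raw $/GB. Here is a simple illustration of the point; the utilization figures are DancingDinosaur’s assumptions, not anything IBM published.

```python
# Cost per *usable* GB: striping for performance leaves HDD capacity
# stranded, while flash can run near full. Utilization is assumed.
options = {
    "striped HDD ($10/GB raw)": (10, 0.30),  # assume 30% of capacity usable
    "IBM SSD ($10/GB raw)":     (10, 0.90),  # assume filled near capacity
}

for name, (cost_per_gb, utilization) in options.items():
    print(f"{name}: ~${cost_per_gb / utilization:.2f} per usable GB")
# striped HDD: ~$33.33 per usable GB; IBM SSD: ~$11.11 per usable GB
```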

And Mills was only talking from the cost/GB perspective; when you take a full systems perspective Flash looks even better. Said Ambuj Goyal, General Manager, Systems Storage, IBM Systems & Technology Group in the announcement: “The economics and performance of Flash are at a point where the technology can have a revolutionary impact on enterprises, especially for transaction-intensive applications.” But this actually goes beyond just transactions. Also look at big data analytics workloads, technical computing, and any other IOPS-intensive work.

As far as z data centers go, the IBM FlashSystem appliances, aimed primarily at open systems, are off in the future. However, mainframe data centers can continue to leverage SSD and flash as they have, and even expand their use, since it is increasingly easier to justify the investment, especially with an IBM enterprise-class SSD running about $10 per gigabyte. IBM further extends the value of SSD/flash through the use of Real-time Compression and thin provisioning, which stretches your bang for the buck. So, using your workloads as the guide, start thinking about cost-effectively working more SSD/flash into your data center to lower costs.

