Posts Tagged ‘storage’

Latest IBM Initiatives Drive Power Advantages over x86

November 20, 2015

This past week IBM announced a multi-year strategic collaboration with Xilinx that aims to enable higher performance and energy-efficient data center applications through Xilinx FPGA-enabled workload acceleration on IBM POWER-based systems. The goal is to deliver open acceleration infrastructures, software, and middleware to address applications like machine learning, network functions virtualization (NFV), genomics, high performance computing (HPC), and big data analytics. In the process, IBM hopes to put x86 systems at an even greater price/performance disadvantage.


Courtesy of IBM

At the same time IBM and several fellow OpenPOWER Foundation members revealed new technologies, collaborations, and developer resources to enable clients to analyze data more deeply and at high speed. The new offerings center on the tight integration of IBM’s open and licensable POWER processors with accelerators and dedicated high performance processors optimized for computationally intensive software code. The accelerated POWER-based offerings come at a time when many companies are seeking the best platform for Internet of Things, machine learning, and other performance-hungry applications.

The combination of collaborations and alliances is clearly aimed at establishing Power as the high performance leader for the new generation of workloads. IBM noted that independent software vendors already are leveraging IBM flash storage attached via CAPI to create very large memory spaces for in-memory processing of analytics, enabling the same query workloads to run with a fraction of the number of servers required by commodity x86 solutions. These breakthroughs enable POWER8-based systems to continue where the promise of Moore’s Law falls short, delivering performance gains through OpenPOWER ecosystem-driven, full stack innovation. DancingDinosaur covered efforts to expand Moore’s Law on the z a few weeks back here.

The new workloads present different performance challenges. To begin, we’re talking about heterogeneous workloads that are becoming increasingly prevalent, forcing data centers to turn to application accelerators just to keep up with the demands for throughput and latency at low power. The Xilinx All Programmable FPGAs promise to deliver the power efficiency that makes accelerators practical to deploy throughout the data center. Just combine IBM’s open and licensable POWER architecture with Xilinx FPGAs to deliver compelling performance, performance/watt, and lower total cost of ownership for this new generation of data center workloads.

As part of the IBM and Xilinx strategic collaboration, IBM Systems Group developers will create solution stacks for POWER-based servers, storage, and middleware systems with Xilinx FPGA accelerators for data center architectures such as OpenStack, Docker, and Spark. IBM will also develop and qualify Xilinx accelerator boards for IBM Power Systems servers. Xilinx is developing and will release POWER-based versions of its leading software defined SDAccel™ Development Environment and libraries for the OpenPOWER developer community.

But there is more than this one deal. IBM is promising new products, collaborations and further investments in accelerator-based solutions on top of the POWER processor architecture.  Most recently announced were:

The coupling of NVIDIA® Tesla® K80 GPUs, the flagship offering of the NVIDIA Tesla Accelerated Computing Platform, with Watson’s POWER-based architecture to accelerate Watson’s Retrieve and Rank API capabilities to 1.7x of its normal speed. This speed-up can further improve the cost-performance of Watson’s cloud-based services.

On the networking front Mellanox announced the world’s first smart network switch, the Switch-IB 2, capable of delivering an estimated 10x system performance improvement. NEC also announced availability of its ExpEther Technology suited for POWER architecture-based systems, along with plans to leverage IBM’s CAPI technology to deliver additional accelerated computing value in 2016.

Finally, two OpenPOWER members, E4 Computer Engineering and Penguin Computing, revealed new systems based on the OpenPOWER design concept and incorporating IBM POWER8 and NVIDIA Tesla GPU accelerators. IBM also reported having ported a series of key IBM Internet of Things, Spark, Big Data, and Cognitive applications to take advantage of the POWER architecture with accelerators.

The announcements include the names of partners and products but product details were in short supply as were cost and specific performance details. DancingDinosaur will continue to chase those down.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing here.

DancingDinosaur will not post the week of Thanksgiving. Have a delicious holiday.

IBM Expands Spectrum Storage in the Cloud with Spectrum Protect

September 18, 2015

IBM is targeting storage for hybrid clouds with Spectrum Protect. Specifically, it brings new cloud backup capabilities and a new management dashboard aimed at helping businesses back up data to on-premises object storage or the cloud without the expense of cloud-gateway appliances. It also enables advanced data placement across all storage types to maximize performance, availability, and cost efficiency. Spectrum Protect represents the latest part of the IBM Spectrum Storage family, which provides advanced software defined storage (SDS) capabilities and flexible storage delivered as software, an appliance, or a cloud service. IBM announced Spectrum Protect at the end of August.


Courtesy IBM: Spectrum Protect dashboard (click to enlarge)

Introduced early this year, IBM Spectrum brings a family of optimized SDS solutions designed to work together. It offers SDS file, object, and block storage with common management and a consistent user and administrator experience. Although it is based on IBM’s existing storage hardware products like XIV, Storwize, IBM FlashSystem, and SVC, you can deploy it as software on some non-IBM hardware too. It also offers support for VMware environments and includes VMware API support for VASA, VAAI, and VMware SRM. With Spectrum, IBM appears to have come up with a winner; over the last six months, IBM reports more than 1,000 new clients have chosen products from the IBM Spectrum Storage portfolio.

Specifically, IBM Spectrum Protect supports IBM Cloud infrastructure today, with plans to expand to other public clouds in the future. IBM Spectrum Accelerate (XIV block storage) also can be accessed as a service by IBM Cloud customers via the SoftLayer cloud infrastructure. There it allows companies to deploy block storage on SoftLayer without having to buy new storage hardware or manage an appliance farm.

In competitive analysis, IBM found that a single IBM Spectrum Protect server performs the work of up to 15 CommVault servers. This means that large enterprises can consolidate backup servers to reduce cost and complexity while managing data growth from mobile, social, and Internet of Things environments.  Furthermore, SMBs can eliminate the need for a slew of infrastructure devices, including additional backup servers, media servers, and deduplication appliances, thereby reducing complexity and cost. Cost analysis with several beta customers, reports IBM, indicates that the enhanced IBM Spectrum Protect software can help clients reduce backup infrastructure costs on average by up to 53 percent.

IBM reports that the Spectrum Storage portfolio can centrally manage more than 300 different storage devices and yottabytes (1 yottabyte = 10^24 bytes) of data. Its device interoperability is the broadest in the industry – incorporating both IBM and non-IBM hardware and tape systems. IBM Spectrum Storage can help reduce storage costs up to 90 percent in certain environments by automatically moving data onto the most economical storage device – either from IBM or non-IBM flash, disk, and tape systems.

IBM Spectrum Storage portfolio packages key storage software from conventional IBM storage products. These include IBM Spectrum Accelerate (IBM XIV), Spectrum Virtualize (IBM SAN Volume Controller along with IBM Storwize), Spectrum Scale (IBM General Parallel File System or GPFS technology, previously referred to as Elastic Storage), Spectrum Control (IBM Virtual Storage Center and IBM Storage Insights), Spectrum Protect (Tivoli Storage Manager family) and Spectrum Archive (various IBM tape backup products).

The portfolio is presented as a software-only product and, presumably, you can run it on IBM and some non-IBM storage hardware if you choose. You will have to compare the cost of the software license with the cost of the IBM and non-IBM hardware to decide which gets you the best deal. Running Spectrum Accelerate (XIV) on low cost, generic disks may turn out to be cheaper than buying a rack of XIV disk to go with it. But keep in mind that the lowest cost generic disk may not meet your performance or reliability specifications.
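If you want to structure that comparison, here is a minimal sketch of the arithmetic; every figure in it is a hypothetical placeholder rather than an IBM or street price, so substitute your own quotes.

```python
# Hypothetical three-year cost comparison: Spectrum Accelerate software on
# generic servers/disk vs. a packaged XIV appliance. Every number below is a
# placeholder, not an IBM list price -- substitute your own quotes.

def total_cost(license_fee, hardware, support_per_year, years=3):
    """Acquisition plus support cost over a simple planning horizon."""
    return license_fee + hardware + support_per_year * years

sds_on_generic = total_cost(license_fee=150_000,      # SDS software license (placeholder)
                            hardware=90_000,          # commodity servers + generic disk (placeholder)
                            support_per_year=25_000)

xiv_appliance = total_cost(license_fee=0,             # software bundled with the appliance (placeholder)
                           hardware=280_000,          # XIV rack (placeholder)
                           support_per_year=30_000)

print(f"SDS on generic hardware: ${sds_on_generic:,}")
print(f"XIV appliance:           ${xiv_appliance:,}")
# The caveat above still applies: the cheapest generic disk may not meet
# your performance or reliability specs, so the lowest number may not win.
```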

IBM reports it also is enhancing the software-only version of IBM Spectrum Accelerate to reduce costs by consolidating storage and compute resources on the same servers. In effect, IBM is making XIV software available with portable licensing across XIV systems, on-premises servers, and cloud environments to offer greater operational flexibility. Bottom line: possibly a good deal, but be prepared to do some detailed comparative cost analysis to identify the best mix of SDS, cloud storage, and hardware at the best price for your particular needs.

In general, however, DancingDinosaur favors almost anything that increases data center configuration and pricing flexibility. With that in mind consider the IBM Spectrum options the next time you plan storage changes. (BTW, DancingDinosaur also does storage and server cost assessments should you want help.)

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing here.



Legacy Storage vs. Software Defined Storage at IBM Edge2015

May 21, 2015

At Edge2015 software defined storage (SDS) primarily meant IBM Spectrum Storage, the new storage software portfolio designed to address data storage inefficiencies by separating storage functionality from the underlying hardware through an intelligent software layer. To see what DancingDinosaur posted on Spectrum Storage in February when it was unveiled click here. Spectrum became the subject of dozens of sessions at the conference. Check out a general sampling of Edge2015 sessions here.

Jon Toigo, a respected storage consultant and infuriating iconoclast to some, jumped into the discussion of legacy storage vs. SDS at a session provocatively titled 50 Shades of Grey. He started by declaring “true SANs never reached the market.” On the other hand, SDS promises the world—storage flexibility, efficiency, avoidance of vendor lock-in, and on and on.


Courtesy Jon Toigo (click to enlarge)

What the industry actually did as far as storage sharing, Toigo explained, was provide serial SCSI over a physical layer fabric and use a physical layer switch to make and break server-storage connections at high speed. Although network-like, there was no management layer (which should be part of any true network model, he believes). Furthermore, the result was limited by the Fibre Channel Protocol and standards designed so that “two vendors could implement switch products that conformed to the letter of the standard…with absolute certainty that they would NOT work together,” said Toigo. iSCSI later enabled storage fabrics using TCP/IP, which made it easier to deploy the fabric since organizations already were deploying TCP/IP networks for other purposes.

Toigo’s key requirement: unified storage management, which means managing the diversity and heterogeneity of the arrays comprising the SAN. The culprits preventing this, as he sees it, are so-called value-add services on array controllers that create islands of storage. You know these services: thin provisioning, on-array tiering, mirroring, replication, dedupe, and more. The same value-add services are the culprits driving the high cost of storage. “Storage hardware components are commoditized, but value-add software sustains pricing.”

Spectrum Storage incorporates more than 700 IBM patents and is designed to help organizations transform to a hybrid cloud business model by managing massive amounts of data where they want it, how they want it, in a fast and easy manner from a single dashboard. The software helps clients move data to the right location, at the right time, to flash storage for fast access or to tape and cloud for the lowest cost.

This apparently works for Toigo, with only a few quibbles: vendors make money by adding more software, and inefficiency is added when they implement non-standard commands. IBM, however, is mostly in agreement with Toigo. According to IBM, a new approach is needed to help organizations address [storage] cost and complexity driven by tremendous data growth.  Traditional storage is inefficient in today’s world. However, Spectrum Storage software, IBM continued, helps organizations to more efficiently leverage their hardware investments to extract the full business value of data. Listen closely and you might even hear Toigo mutter Amen.

SDS may or may not be the solution. Toigo titled this session 50 Shades of Grey because the vendors can’t even agree on a definition of what constitutes SDS. Yet it is being presented as a panacea for everything that is wrong with legacy storage.

The key differentiator for Toigo is where a vendor’s storage intelligence resides: on the array controller, in the server hypervisor, or in the software stack. As it turns out, some solutions are hypervisor dedicated or hypervisor dependent. VMware’s Virtual SAN, for instance, only works with its hypervisor. Microsoft’s Clustered Storage Spaces is proprietary to Microsoft, though it promises to share its storage with VMware – simple as pie, just convert your VMware workload into Microsoft VHD format and import it into Hyper-V so you can share the Microsoft SDS infrastructure.

IBM Spectrum passes Toigo’s 50 Shades test. It promises simple, efficient storage without the cost or complexity of dedicated hardware. IBM managers at Edge2015 confirmed Spectrum could run on generic servers and with generic disk arrays. With SDS you want everything agnostic for maximum flexibility.

Toigo’s preferred approach: virtualized SDS with virtual storage pools and centralized select value-add services that can be readily allocated to any workload regardless of the hypervisor. DancingDinosaur will drill down into other interesting Edge2015 sessions in subsequent posts.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing here.

Storage Looms Large at IBMEdge 2015

April 17, 2015

Been a busy year in storage with software defined storage (SDS), real-time compression, flash, storage virtualization, OpenStack, and more all gaining traction. Similarly, big data, analytics, cloud, and mobile are impacting storage. You can expect to find them and more at IBM Edge2015, coming May 10-15 in Las Vegas.

 But storage continues to make news every week. Recently IBM scientists demonstrated an areal recording density triumph, hitting 123 billion bits of uncompressed data per square inch on low cost, particulate magnetic tape. That translates into the equivalent of a 220 terabyte tape cartridge that could fit in the palm of your hand, or comparable to 1.37 trillion mobile text messages or the text of 220 million books, which would require a 2,200 km bookshelf spanning from Las Vegas to Houston, Texas. (see graphic below)
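For the curious, a rough back-of-the-envelope sketch shows how an areal density figure translates into a cartridge capacity; the half-inch tape width is a standard assumption and the derived length is approximate, not an IBM spec.

```python
# Back-of-the-envelope: relating 123 Gb/in^2 to a ~220 TB cartridge.
# Assumes half-inch-wide tape; the length is derived, not an IBM figure.

areal_density = 123e9        # bits per square inch (IBM demo figure)
cartridge_bytes = 220e12     # 220 TB, decimal terabytes

area_sq_in = cartridge_bytes * 8 / areal_density        # tape area needed, in^2
tape_width_in = 0.5                                      # half-inch tape
length_m = (area_sq_in / tape_width_in) * 0.0254         # inches of tape -> meters

print(f"Tape area needed: {area_sq_in:,.0f} sq in")
print(f"Tape length:      {length_m:,.0f} m of half-inch tape")
# Roughly 14,300 sq in, or on the order of 730 m of tape -- comparable to the
# tape lengths wound into today's enterprise cartridges.
```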

Tape areal density breakthrough

Courtesy of IBM (click to enlarge)

Let’s take a look at some sessions delving into the current hot storage topics at Edge2015, starting with tape, since we’ve been talking about it.

(sSS1335) The Future of Tape; presenter Mark Lantz. He discusses current and future scaling trends of magnetic tape technology—see announcement above—from the perspective of IBM Research. He begins by comparing recent scaling trends of both tape and hard disk drive technology. He then looks at future capacity scaling potential of tape and hard disks. In that context he offers an in-depth look at a new world record tape areal density demonstration of more than 100 Gb/in², performed by IBM Research in collaboration with Fujifilm, using low cost particulate tape media. He also discusses the new hardware and tape media technologies developed for this demonstration as well as key challenges for the continued scaling of tape.

If you are thinking future, check out this session too. (sBA2523) Part III: A Peek into the Future; presenter Bruce Hillsberg. This session looks at novel and innovative technologies to address clients’ most challenging technical and business problems across a wide range of technologies and disciplines. The presentation looks at everything from the most fundamental materials level all the way to working on the world’s largest big data problems. Many of the technologies developed by the Storage Systems research team lead to new IBM products or become new features in existing products. Topics covered in this lecture include atomic scale storage, research into new materials, advances in current storage media, advanced object stores, cloud storage, and more.

Combine big data, flash, and the z13 all here. (sBA1952) How System z13 and IBM DS8870 Flash Technology Enables Your Hadoop Environments; presenter Renan Ugalde. Analyzing large amounts of data introduces challenges that can impact the goals of any organization. Companies require a reliable and high performing infrastructure to extract value from their structured and unstructured data. The unique features offered by the integration of IBM System z13 and DS8870 Flash technology enable a platform to support real-time decisions such as fraud detection. This session explains how integration among System z13, DS8870, and Hadoop maximizes performance by enabling the infrastructure’s unique big data capabilities.

Jon Toigo is an outstanding non-IBM presenter and somewhat of an iconoclast when it comes to storage. This year he is offering a 3-part session on Disaster Recovery Planning in an Era of Mobile Computing and Big Data:

  • (aBA2511) Part I: For all the hype around hypervisor-based computing and new software-defined infrastructure models, the ongoing need for disaster preparedness is often being buried in the discussion. High availability server clustering is increasingly believed to trump disaster recovery preparations, despite the fact that the transition to an agile data center is fraught with disaster potentials. In the first of three sessions, Toigo looks at the trends that are occurring in IT and the potential they present for disruption.
  • (sBA2512) Part II: builds on the previous session by examining the technologies available for data protection and the trend away from backups in favor of real-time mirroring and replication. He notes promising approaches, including storage virtualization and object storage, that can make a meaningful contribution.
  • (sBA2513) Part III: completes his disaster recovery planning series with the use of mobile computing technologies and public clouds as an adjunct to successful business recovery following an unplanned interruption event. Here he discusses techniques and technologies that either show promise as recovery expediters or may place businesses at risk of an epic fail.

Several SDS sessions follow: (sSS0884) Software Defined Storage — Why? What? How? Presenter: Tony Pearson. Here Pearson explains why companies are excited about SDS, what storage products and solutions IBM has to offer, and how they are deployed. This session provides an overview of the new IBM Spectrum Storage family of offerings.

 A second session by Pearson. (sCV3179): IBM Spectrum Storage Integration in IBM Cloud Manager with OpenStack: IBM’s Cloud Storage Options; presenter Tony Pearson. This session will look at the value of IBM storage products in the cloud with a focus on OpenStack. Specifically, it will look at how Spectrum Virtualize can be integrated and used in a complete 3-tier app with OpenStack.

Finally, (sSS2453) Myth Busting Software Defined Storage – Top 7 Misconceptions; presenter Jeffrey Barnett. This session looks at the top misconceptions to cut through the hype and understand the real value potential. DancingDinosaur could only come up with six misconceptions. Will have to check out this session for sure.

Of course, save time for the free evening entertainment. In addition to Penn & Teller, a pair of magicians, and rocker Grace Potter, there also will be a weird but terrific group, 2Cellos. Stick with it to the end (about 3 min.) for the kicker.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing here. And join DancingDinosaur at IBM Edge2015. You will find me hanging out wherever people gather around available power outlets to recharge mobile devices.

IBM Edge 2015 as All Platforms Infrastructure Innovation Conference

April 2, 2015

Please join DancingDinosaur at IBM Edge2015 at the Venetian in Las Vegas, May 10-15. It will consist of an Executive Edge track and a Technical track. The program is crammed with hundreds of sessions.  You can find the Technical track session list here. Dare you to find even 10 sessions that don’t interest you.


Courtesy of IBM: Marie Wieck, General Manager, Middleware (click to enlarge)

This year Edge2015 merges last year’s two events, IBMEdge and Enterprise 2014, into what IBM calls the Infrastructure Innovation Conference. It is IBM’s only US event covering all IBM platforms—System Storage, IBM z Systems, IBM Power Systems, and IBM Middleware at a single venue.  It includes three Technical Universities: System Storage, z Systems, and Power Systems for those working toward certification.

Executive Edge, which this post will look at a little more closely below, offers an exclusive two-day summit for IT executives and leaders, as IBM explains, featuring the latest innovations and technology announcements, client success stories, and insightful presentations from IBM executives and industry thought leaders. Plus, IBM promises top tier, one-on-one executive meetings and exclusive networking opportunities.

The IBM Systems and Technology Group (STG) top brass will be there. This IBM Systems lineup includes: Tom Rosamilia, Senior Vice President; Stephen Leonard, General Manager, Sales; Jamie M. Thomas, General Manager, IBM Storage & Software Defined Systems; Ross Mauri, General Manager, z Systems; Doug Balog, General Manager, Power Systems; and Marie Wieck, General Manager, Middleware.

And then there is the free entertainment IBM provides. The headliner act is Penn & Teller, a pair of magicians. DancingDinosaur’s favorite, however, is Grace Potter, who delivers terrific hard rock and roll. If you skip the casinos you can catch both and avoid losing money in the process.

With the Executive track IBM promises to present its most innovative approaches to using IBM Systems and Middleware as a foundation for challenging new areas of information management including:

  • Cloud Infrastructure, especially hybrid clouds
  • Big Data and Analytics
  • Systems of Record
  • Systems of Engagement
  • Mobile and Security Services
  • Flash and Storage Virtualization
  • Software Defined Infrastructure

Cloud and big data/analytics have become accepted pillars of IT business value. Mobile, flash, and software-defined infrastructure are being widely embraced as the next wave of IT value delivery. And security must be a priority for everything. Also included will be dozens of client case studies.

Throughout both the Executive and Technology tracks there will be numerous sessions citing client cases and use cases. Although not the same, both show how to actually deploy technology for business value.

For example, the session (cCV0821), titled Be Hybrid or Die, revolves around hybrid clouds. The session promises a clear understanding of the term hybrid and why hybrid has become the next step in IT value creation, extraction, and efficiency gains. Citing use cases, the session will show how to map your business needs to the functional business blocks of hybrid clouds and to the IBM product portfolio that addresses those needs. It concludes by examining where IBM is investing, its long term view, and how this will increase your IT flexibility.

Speaking of cases, a session (cIT0514) titled How to Create Rock Solid Business Cases to Get IT Projects Approved looks at the subject of case studies from a standpoint of the IT person charged with answering the unavoidable ROI question. BTW, DancingDinosaur develops business cases based on various looks at cost of ownership should you need help.  They are time consuming but necessary. Management requires an understanding of current IT costs and strengths and the expected payback to better assess new ideas and technologies. This session identifies the key elements of an effective IT cost analysis and explores how to build compelling business cases around those costs and, you hope, quantifiable benefits. Concepts discussed include complexity analysis, platform assessment, Fit for Purpose analysis, and financial case structures. Hmmm, definitely one DancingDinosaur will attend.

Another session looks at the first customer experiences using SAP HANA on Power. SAP HANA is the company’s in-memory, column-oriented RDBMS that handles both high volume transactions and complex analytical query processing on the same platform, and does so very fast since all is in-memory. The session, (lBA0464) titled SAP HANA on Power Systems: First Experiences from Customers in the Test and Evaluation Program, reports on the first experiences gathered from the pilot clients. This presentation begins with a short overview of SAP HANA in general, and then covers specific aspects in the deployment of SAP HANA on IBM Power Systems and IBM storage. You’ll hear about the advantages of SAP HANA on Power Systems (vs. x86) and discover how fast and easy it is to implement in a private cloud with full use of PowerVM capabilities.

In about six weeks DancingDinosaur will be heading to IBM Edge2015. Please join me there. You can find me hanging out wherever people gather around available power outlets to recharge mobile devices. DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing here. And join DancingDinosaur at IBM Edge2015.

IBM Redefines Software Defined Storage

February 25, 2015

On Feb. 17 IBM unveiled IBM Spectrum Storage, a new storage software portfolio designed to address data storage inefficiencies by changing the economics of storage with a layer of intelligent software; in short, a software defined storage (SDS) initiative.  IBM’s new software creates an efficient data footprint that dynamically stores every bit of data at the optimal cost, helping maximize performance and ensuring security, according to the IBM announcement here.

Jared Lazarus/Feature Photo Service for IBM

Courtesy of IBM: IBM Storage GM demonstrates new Spectrum storage management dashboard

To accelerate the development of next-generation storage software, IBM announced plans to invest more than $1 billion in its storage software portfolio over the next five years. The objective: extend its storage technology leadership, having recently been ranked #1 in SDS platforms for the first three quarters of 2014 by leading industry analyst firm IDC. The investment will focus on R&D of new cloud storage software, object storage, and open standard technologies including OpenStack.

“Traditional storage is inefficient in today’s world where the value of each piece of data is changing all the time,” according to Tom Rosamilia, Senior Vice President, IBM Systems, in the announcement. He went on: “IBM is revolutionizing storage with our Spectrum Storage software that helps clients to more efficiently leverage their hardware investments to extract the full business value of data.”

Two days later IBM announced another storage initiative, flash products aimed directly at EMC. The announcement focused on two new all-flash enterprise storage solutions, FlashSystem V9000 and FlashSystem 900. Each promises industry-leading performance and efficiency, along with outstanding reliability to help lower costs and accelerate data-intensive applications. The new solutions can provide real-time analytical insights with up to 50x better performance than traditional enterprise storage, and up to 4x better capacity in less rack space than EMC XtremIO flash technology.

Driving interest in IBM Spectrum storage is research suggesting that less than 50% of storage is effectively utilized. Storage silos continue to be rampant throughout the enterprise as companies recreate islands of Hadoop-based data along with more islands of storage to support ad hoc cloud usage. Developers create yet more data silos for dev, testing, and deployment.

IBM Spectrum Storage addresses these issues and more through an SDS approach that separates storage capabilities and intelligence from the physical devices. The resulting storage is self-tuning and leverages analytics for efficiency, automation, and optimization. By capitalizing on its automatic data placement capabilities, IBM reports it can meet service levels while reducing storage costs by as much as 90%.

Specifically, IBM Spectrum consists of six storage software elements:

  1. IBM Spectrum Control—analytics-driven data management to reduce costs by up to 50%
  2. IBM Spectrum Protect—optimize data protection to reduce backup costs by up to 38%
  3. IBM Spectrum Archive—fast data retention that reduces TCO for archive data by up to 90%
  4. IBM Spectrum Virtualize—virtualization of mixed environment to store up to 5x more data
  5. IBM Spectrum Accelerate—enterprise storage for cloud, which can be deployed in minutes instead of months
  6. IBM Spectrum Scale—high-performance, highly scalable storage for unstructured data

Each of these elements can be mapped back to existing IBM storage solutions. Spectrum Accelerate, for example, uses IBM’s XIV capabilities. Spectrum Virtualize is based on IBM’s SAN Volume Controller (SVC) technology. Spectrum Scale is based on GPFS, now called Elastic Storage, to handle file and object storage at massive scale yet within a single global name space. Spectrum Archive, based on IBM’s LTFS, allows an organization to treat tape as a low cost, fully active tier. In effect, with IBM Spectrum, an organization can go from flash cache to tape, all synced worldwide within a single name space.

A big part of what IBM is doing amounts to repackaging the capabilities it has built into its storage systems and proven in various products like XIV or GPFS or SVC as software components to be used as part of an SDS deployment. This raises some interesting possibilities. For instance, is it cheaper to use Spectrum Accelerate with a commodity storage array or buy the conventional XIV storage product?  The same probably could be asked of Spectrum Virtualize with SVC or Spectrum Archive with LTFS.

DancingDinosaur asked the Spectrum marketing team exactly that question. Their response: with Accelerate you have the flexibility to size the server to the performance needs of the solution; the software cost remains the same regardless of the server you select, while the cost of the server will vary depending on what the client needs. We will make available a sizing guide soon so each client’s situation can be modeled based on the solution requirements. In all cases it really depends on the hardware chosen vs. the (IBM) appliance. If the hardware closely matches the hardware of the appliance then cost differences will be minimal. It all depends on the price the client gets, so yes, in theory, a white box may be lower cost.

With Spectrum Accelerate (XIV), IBM continues, the client can also deploy the software on a cluster of just 3 servers (minimum) and leverage existing Ethernet networking. This minimum configuration will be much lower cost than the minimum XIV system configuration. Spectrum Accelerate can also be licensed on a monthly basis, so clients with variable needs or those deploying to the cloud can deploy and pay for only what they need when they need it.

It is a little different for the other Spectrum offerings. DancingDinosaur will continue chasing down those details. Stay tuned. DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing here.

IBM Creates Comprehensive Cloud Security Portfolio

November 6, 2014

On Wednesday IBM introduced what it describes as the industry’s first intelligent security portfolio for protecting people, data, and applications in the cloud. Not a single product but a set of products that taps a wide range of IBM’s cloud security, analytics, and services offerings.  The portfolio dovetails with IBM’s end-to-end mainframe security solution as described at Enterprise2014 last month.

Cloud security certainly is needed. In a recent IBM CISO survey, 44% of security leaders said they expect a major cloud provider to suffer a significant security breach in the future, one that will drive a high percentage of customers to switch providers, not to mention the risks to their data and applications. Cloud security fears have long been one of the biggest impediments to organizations moving more data, applications, and processes to the cloud. These fears are further complicated by the fact that IT managers feel that much of what their cloud providers do is beyond their control. An SLA only gets you so far.

Courtesy of IBM: 2014 CISO study

The same survey found that 86% of leaders surveyed say their organizations are now moving to the cloud; of those, three-fourths see their cloud security budget increasing over the next 3-5 years.

As is typical of IBM when it identifies an issue and feels it has an edge, the company assembles a structured portfolio of tools, a handful of which were offered Wednesday. The portfolio includes versions of IBM’s own tools optimized for the cloud and tools and technologies IBM has acquired.  Expect more cloud security tools to follow. Together the tools aim to manage access, protect data and applications, and enable visibility in the cloud.

For example, for access management IBM is bringing out Cloud Identity Services, which onboards and manages users through IBM-hosted infrastructure. To safeguard access to cloud-deployed apps it is bringing a Cloud Sign-On service used with Bluemix. Through Cloud Sign-On developers can quickly add single sign-on to web and mobile apps via APIs. Another product, Cloud Access Manager, works with SoftLayer to protect cloud applications with pattern-based security, multi-factor authentication, and context-based access control. IBM even has a tool to handle privileged users like DBAs and cloud admins, the Cloud Privileged Identity Manager.

Here is a run-down of what was announced Wednesday. Expect it to grow.

  • Cloud Identity Services—IBM Cloud Identity Services
  • Cloud Sign-On Service –IBM Single Sign On
  • Cloud Access Manager –IBM Security Access Manager
  • Cloud Privileged Identity Manager—IBM Security Privileged Identity Manager (v2.0)
  • Cloud Data Activity Monitoring—IBM InfoSphere Guardium Data Activity Monitoring
  • Cloud Mobile App Analyzer Service –IBM AppScan Mobile Analyzer
  • Cloud Web App Analyzer Service –IBM AppScan Dynamic Analyzer
  • Cloud Security Intelligence –IBM QRadar Security Intelligence (v7.2.4)
  • Cloud Security Managed Services –IBM Cloud Security Managed Services

Now let’s see how these map to what the z data center already can get with IBM’s End-to-End Security Solution for the Mainframe. For starters, security is built into every level of the System z structure: processor, hypervisor, operating system, communications, and storage.

In terms of security analytics, zSecure, Guardium, AppScan, and QRadar improve your security intelligence. Some of these tools are included in the new cloud security portfolio. Intelligence is collected from z/OS, RACF, CA ACF2, CA Top Secret, CICS, and DB2. The zSecure suite also helps address compliance challenges. In addition, InfoSphere Guardium Real-time Activity Monitoring handles activity monitoring, blocking and masking, and vulnerability assessment.

Of course the z brings its crypto coprocessor, Crypto Express4S, which complements the cryptographic capabilities of CPACF. There also is a new zEC12 coprocessor, the EP11 processor, amounting to a Crypto Express adapter configured with the Enterprise PKCS #11 (EP11) firmware, also called the CEX4P adapter. It provides hardware-accelerated support for crypto operations that are based on RSA’s PKCS #11 Cryptographic Token Interface Standard. Finally, the z supports the necessary industry standards, like FIPS 140-2 Level 4, to ensure multi-tenanted public and private cloud workloads remain securely isolated. So the cloud, at least, is handled to some extent.

The mainframe has long been considered the gold standard for systems security. Now it is being asked to take on cloud-oriented and cloud-based workloads while delivering the same level of unassailable security. Between IBM’s end-to-end mainframe security solution and the new intelligent (analytics-driven) security portfolio for the cloud enterprise shops now have the tools to do the job right.

And you will want all those tools because security presents a complex, multi-dimensional puzzle requiring different layers of integrated defense. It involves not only people, data, applications, and infrastructure but also mobility, on premise and off premise, structured, unstructured, and big data. This used to be called defense in depth, but with the cloud and mobility the industry is moving far beyond that.

DancingDinosaur is Alan Radding, a veteran IT analyst with well over 20 years covering IT and the System z. You can find more of my writing here. Also follow DancingDinosaur on Twitter, @mainframeblog.

Industrial Strength SDS for the Cloud

June 12, 2014

The hottest thing in storage today is software defined storage (SDS). Every storage vendor is jumping on the SDS bandwagon.

The presentation titled Industrial-Strength SDS for the Cloud, by Sven Oehme, IBM Senior Research Scientist, drew a packed audience at Edge 2014 and touched on many of the sexiest acronyms in IBM’s storage portfolio. These included not just GPFS but also GSS (also called GPFS Storage Server), GNR (GPFS Native RAID), LROC (local read-only cache), and even worked in Linear Tape File System (LTFS).

The session promised to outline the customer problems SDS solves and show how to deploy it in large scale OpenStack environments with IBM GPFS.  Industrial strength generally refers to large-scale, highly secure and available multi-platform environments.

The session abstract explained that the session would show how GPFS enables resilient, robust, reliable, storage deployed on low-cost industry standard hardware delivering limitless scalability, high performance, and automatic policy-based storage tiering from flash to disk to tape, further lowering costs. It also promised to provide examples of how GPFS provides a single, unified, scale-out data plane for cloud developers across multiple data centers worldwide. GPFS unifies OpenStack VM images, block devices, objects, and files with support for Nova, Cinder, Swift and Glance (OpenStack components), along with POSIX interfaces for integrating legacy applications. C’mon, if you have even a bit of IT geekiness, doesn’t that sound tantalizing?

One disclaimer before jumping into some of the details: despite having written white papers on SDS and cloud, your blogger can only hope to approximate the rich context provided at the session.

Let’s start with the simple stuff: the expectations and requirements for cloud storage:

  • Elasticity, within and across sites
  • Secure isolation between tenants
  • Non-disruptive operations
  • No degradation by failing parts as components fail at scale
  • Different tiers for different workloads
  • Converged platform to handle boot volumes as well as file/object workload
  • Locality awareness and acceleration for exceptional performance
  • Multiple forms of data protection

Of course, affordable hardware and maintenance is expected as is quota/usage and workload accounting.

Things start getting serious with IBM’s General Parallel File System (GPFS). This is what IBMers really mean when they refer to Elastic Storage: a single name space provided across individual storage resources, platforms, and operating systems. Add in different classes of storage devices (fast or slow disk, SSD, flash, even LTFS tape), storage pools, and policies to control data placement and you’ve got the ability to do storage tiering. You can even geographically distribute the data through IBM’s Active Cloud Engine, initially a SONAS capability sometimes referred to as Active File Manager. Now you have a situation where users can access data by the same name regardless of where it is located. And since the system keeps distributed copies of the latest data it can handle a temporary loss of connectivity between sites.

To protect the data add in declustered software RAID, aka GNR or even GSS (GPFS Storage Server). The beauty of this is it reduces the space overhead of replication through declustered parity (80% vs. 33% utilization) while delivering extremely fast rebuild.  In the process you can remove hardware storage controllers from the picture by doing the migration and RAID management in software on your commodity servers.
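A minimal sketch of the capacity math behind that 80% vs. 33% comparison follows; the 8+2 parity geometry is an illustrative assumption, since the session did not spell out the exact GNR stripe width.

```python
# Usable capacity: 3-way replication vs. an 8+2 declustered parity layout.
# The 8+2 geometry is illustrative; actual GNR/GSS stripe widths differ.

raw_tb = 100.0

replicated_usable = raw_tb / 3        # every block kept three times -> ~33% usable
parity_usable = raw_tb * 8 / 10       # 8 data strips per 10 -> 80% usable

print(f"3-way replication:      {replicated_usable:.1f} TB usable (~33%)")
print(f"8+2 declustered parity: {parity_usable:.1f} TB usable (80%)")

# Rebuild intuition: because parity strips are declustered across every disk in
# the array, all surviving disks share the rebuild work instead of funneling it
# through a single spare -- hence the "extremely fast rebuild" claim.
```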

dino industrial sds 1

In the above graphic, focus on everything below the elongated blue triangle. Since it is being done in software, you can add an Object API for object storage. Throw in encryption software. Want Hadoop? Add that too. The power of SDS. Sweet.

The architecture Oehme lays out utilizes generic servers with direct-attached switched JBOD (SBOD). It also makes ample use of LROC, which provides a large read cache that benefits many workloads, including SPECsfs, VMware, OpenStack, other virtualization, and database workloads.

A key element in Oehme’s SDS for the cloud is OpenStack. From a storage standpoint OpenStack Cinder, which provides access to block storage as if it were local, enables the efficient sharing of data between services. Cinder supports advanced features, such as snapshots, cloning, and backup. On the back end, Cinder supports Linux servers with iSCSI and LVM; storage controllers; shared filesystems like GPFS, NFS, GlusterFS; and more.
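For readers who have not touched Cinder, here is a minimal sketch of what consuming that block storage looks like from Python; the credentials, endpoint, and the ‘gpfs’ volume type name are hypothetical, and the example assumes an operator has already configured a GPFS-backed Cinder backend.

```python
# Minimal sketch of requesting Cinder block storage from Python, assuming an
# operator has already configured a GPFS-backed Cinder backend. The credentials,
# endpoint, and the 'gpfs' volume type name are hypothetical placeholders.
from cinderclient import client   # pip install python-cinderclient

cinder = client.Client('2',                            # Cinder v2 API
                       'demo_user',                    # username (placeholder)
                       'demo_password',                # password (placeholder)
                       'demo_project',                 # tenant/project (placeholder)
                       'http://controller:5000/v2.0')  # Keystone endpoint (placeholder)

# Request a 10 GB volume; snapshots, clones, and backups follow the same pattern.
vol = cinder.volumes.create(size=10, name='gpfs-demo-vol', volume_type='gpfs')
print(vol.id, vol.status)
```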

Since Oehme’s goal is to produce industrial-strength SDS for the cloud, it needs to protect data. Data protection is delivered through backups, snapshots, cloning, replication, file level encryption, and declustered RAID, which spans all disks in the declustered array and results in faster RAID rebuild (because there are more disks available for the rebuild).

The result is highly virtualized, industrial strength SDS for deployment in the cloud. Can you bear one more small image that promises to put this all together? Will try to leave it as big as can fit. Notice it includes a lot of OpenStack components connecting storage elements. Here it is.

dino industrial sds 2

DancingDinosaur is Alan Radding. Follow DancingDinosaur on Twitter @mainframeblog

Learn more about Alan Radding here.

New IBM Flash Storage for the Mainframe

June 2, 2014

IBM is serious about flash storage and is enabling just about everything for flash—the DS8000 family, SAN Volume Controller, EasyTier, Real-time Compression (RtC), and more. Of particular interest to DancingDinosaur readers should be the recently announced DS8870 all flash enclosure.

Storage in general is changing fast. Riding Moore’s Law for the past two decades, storage users could assume annual drops in the cost per gigabyte. It was as predictable as passing go in Monopoly and collecting $200. But with that ride coming to an end companies like IBM are looking elsewhere to engineer the continued improvements everyone assumed and benefited from. For example, IBM is combining SVC, RtC, and flash to get significantly more performance out of less actual storage capacity.

The DS8870 is particularly interesting. In terms of reliability, for instance, it delivers not five-nines (99.999%) availability but six-nines (99.9999%) availability. That works out to be about 30 seconds of downtime each year. It works with all IBM servers, not just the z, and it protects data through full disk encryption and advanced access control. With the new flash enclosure packed with IBM’s enhanced flash the DS8870 delivers 4x faster flash performance in 50% less space. That translates into a 3.2x improvement in database performance.
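That extra nine is easy to translate into downtime; a quick sketch of the arithmetic:

```python
# Downtime per year implied by an availability percentage.
SECONDS_PER_YEAR = 365 * 24 * 3600

def downtime_seconds(availability):
    """Expected unavailable seconds per year at a given availability level."""
    return (1 - availability) * SECONDS_PER_YEAR

print(f"five nines (99.999%):  {downtime_seconds(0.99999):.0f} s/year (~5.3 minutes)")
print(f"six nines (99.9999%):  {downtime_seconds(0.999999):.0f} s/year (~32 seconds)")
```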

Flash is not cheap when viewed through the traditional cost/gigabyte metric, but the above performance data suggests a different way to gauge the cost of flash, which continues to steadily fall in price. The 3.2x improvement in database performance, for example, means you can handle more than three times as many transactions.

Let’s start with the assumption that more transactions ultimately translate into more revenue. The same for that extra 9 in availability. The high-performance all flash DS8870 configuration with the High Performance Flash Enclosure also reduces the footprint by 50% and reduces power consumption by 12%, which means lower space and energy costs. It also enables you to shrink batch times by 10%, according to IBM. DancingDinosaur will be happy to help you pull together a TCO analysis for an all-flash DS8870 investment.

The sheer specs of the new system are impressive. IBM reports that up to 8 PCIe flash enclosures populated with 400 GB flash cards provide 73.6TB of usable capacity. For I/O capacity, the 8 I/O bays installed in the base frame provide up to 128 8Gb FC ports. Depending on the internal server you install in the DS8870 you can also get up to 1TB of cache.

DS8870 all flash rack enclosure

The Flash Enclosure itself is a 1U drawer that can take up to 30 flash cards.  By opting for thirty 400GB flash cards you will end up with 9.2TB Usable (12 TB raw). Since the high-performance all flash DS8870 can take up to 8 Flash Enclosures you can get 96TB raw (73.6TB usable) flash capacity per system.

A hybrid DS8870 system, as opposed to the high-performance all flash version, will allow up to 120 Flash cards in 4 Flash Enclosures for 48TB raw (36.8TB usable), along with 1536 2.5” HDDs/SSDs. Then, connect it all to the DS8870 internal PCIe fabric for impressive performance— 200,000 IOPS (100% read) and 130,000 IOPS (100% write). From there, you can connect it to flash-enabled SVC and Easy Tier.
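Those capacity figures hang together arithmetically; here is a quick sanity check, with the usable-per-enclosure number taken straight from the text:

```python
# Sanity check of the DS8870 flash capacity figures cited above.
cards_per_enclosure = 30
card_gb = 400
usable_per_enclosure_tb = 9.2        # IBM's usable figure per enclosure

raw_per_enclosure_tb = cards_per_enclosure * card_gb / 1000   # 12 TB raw

for label, enclosures in (("All-flash (8 enclosures)", 8), ("Hybrid (4 enclosures)", 4)):
    raw = raw_per_enclosure_tb * enclosures
    usable = usable_per_enclosure_tb * enclosures
    print(f"{label}: {raw:.0f} TB raw, {usable:.1f} TB usable")
# Prints 96 TB raw / 73.6 TB usable and 48 TB raw / 36.8 TB usable,
# matching the figures in the text.
```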

Later this year, reports Clod Barrera, IBM’s storage CTO, you will be able to add 4 more enclosures in hybrid configurations for boosting flash capacity up to 96TB raw.  Together you can combine the DS8870, flash, SVC, RtC, and EasyTier for a lightning fast and efficient storage infrastructure.

Even the most traditional System z shop will soon find itself confronting mixed traditional and non-traditional workloads. You probably already are as mobile devices initiate requests for mainframe data. Pretty soon you will be faced with incorporating traditional and new workloads. When that happens you will want a fast, efficient, flexible infrastructure like the DS8870.

DancingDinosaur is Alan Radding. Follow him on Twitter, @mainframeblog

IBM Edge2014: It’s All About the Storage

May 22, 2014

When your blogger as a newbie programmer published his first desktop application in the pre-historic desktop computing era it had to be distributed on consumer tape cassette. When buyers complained that it didn’t work the problem was quickly traced to imprecise and inconsistent consumer cassette storage. Since the dawn of the computer era, it has always been about storage.

It still is. Kicking off IBM Edge2014 was Tom Rosamilia, Senior Vice President, IBM Systems & Technology Group, who elaborated on IBM’s main theme not just for Edge2014 but for IBM at large: Infrastructure Matters Because Business Outcomes Matter. And by infrastructure IBM mainly is referring to storage. Almost every session, whether on cloud or analytics or mobile, touched on storage in one way or another.

To reinforce his infrastructure matters point Rosamilia cited a recent IBM study showing that 70% of top executives now recognize infrastructure as an enabler. However, just 10% reported their infrastructure was ready for the challenge.  As an interesting aside, the study found 91% of the respondents’ customer facing applications were using the System z, which only emphasizes another theme at IBM Edge2014—that companies need to connect systems of record with systems of engagement if they want to be successful.

In fact, IBM wants to speed up computing overall, starting with flash and storage. A study by the Aberdeen Group found that a 1 sec. delay in page load resulted in a 7% loss in conversions, 11% fewer page views, and a 16% decrease in customer satisfaction. IBM’s conclusion: in dollar terms, this means that if your site typically earns $100,000 a day, this year you could lose $2.5 million in sales. Expect all IBM storage to be enabled for some form of flash going forward.
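A quick sketch of the arithmetic behind that dollar figure, using the study’s 7% conversion loss:

```python
# The arithmetic behind the "$2.5 million a year" example.
daily_revenue = 100_000      # dollars per day, from the example in the text
conversion_loss = 0.07       # 7% loss in conversions per one-second delay

annual_loss = daily_revenue * conversion_loss * 365
print(f"Estimated annual revenue at risk: ${annual_loss:,.0f}")   # ~$2.55 million
```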

First announced at IBM Edge2014 were the FlashSystem 840 and the IBM FlashSystem V840, which includes integrated data virtualization through IBM’s SVC and its various components. It also boasts a more powerful controller capable of rich capabilities like compression, replication, tiering, thin provisioning, and more. Check out the details here.

Also at Edge2014 there was considerable talk about Elastic Storage. This is the storage you have always imagined. You can manage mixed storage pools of any device. Integrate with any OS. Write policies to it. It seems infinitely scalable. Acts as a universal cloud gateway. And even works with tape.

Sounds magical, doesn’t it? According to IBM, Elastic Storage provides automated tiering to move data across different storage media types. Infrequently accessed files can be migrated to tape and automatically recalled back to disk when required—sounds like EasyTier built in. Unlike traditional storage, it allows you to smoothly grow or shrink your storage infrastructure without application disruption or outages. And it can run on a cluster of x86 and POWER-based servers and can be used with internal disk, commodity storage, or advanced storage systems from IBM or other vendors. Half the speakers at the conference glowed about Elastic Storage. Obviously it exists, but it is not yet an actual named product. Watch for it, but it is going to have a different name when finally released, probably later this year. No hint at what that name will be.

IBM, at the conference, identified the enhanced XIV as the ideal cloud infrastructure. XIV eliminates complexity. It enables high levels of resiliency and ensures service levels. As one speaker said: “It populates LUNs and spreads the workload evenly. You don’t even have to load balance it.” Basically, it is grid storage that is ideal for the cloud.

LTFS (Linear Tape File System) was another storage technology that came up surprisingly frequently. Don’t assume that tape has no future, not judging from IBM Edge2014. LTFS provides a GUI that enables you to automatically move infrequently accessed data from disk to tape without the need for proprietary tape applications. Implementing LTFS Enterprise Edition allows you to replace disk with tape for tiered storage and lower your storage TCO by over 50%. Jon Toigo, a leading storage analyst, has some good numbers on tape economics that may surprise you.

Another sometimes overlooked technology is EasyTier, IBM’s storage tiering tool.  EasyTier has evolved into a main way for IBM storage users to capitalize on the benefits of Flash. EasyTier already has emerged as an effective tool for both the DS8000 and the Storwize V7000.  With EasyTier small amounts of Flash can deliver big performance improvements.

In the coming weeks DancingDinosaur will look at other IBM Edge 2014 topics.  It also is time to start thinking about IBM Enterprise 2014, which combines the System z and Power platforms. It will be at the Venetian in Las Vegas, Oct 6-10. IBM Enterprise 2014 is being billed as the premier enterprise infrastructure event.

BTW, we never effectively solved the challenge of distributing desktop programs until the industry came out with 5.25” floppy disks. Years later my children used the unsold floppies as little Frisbees.

Follow Alan Radding and DancingDinosaur on Twitter, @mainframeblog

