Posts Tagged ‘Object Storage’

IBM Changes the Economics of Cloud Storage

March 31, 2017

Storage tiering used to be simple: active data went to your best high-performance storage, inactive data went to low-cost archival storage, and cloud storage filled in wherever else it was needed. Unfortunately, today’s emphasis on continuous data analytics, near real-time predictive analytics, and now cognitive computing has complicated this picture and the corresponding economics of storage.

In response, last week IBM unveiled new additions to the IBM Cloud Object Storage family. The company is offering clients new choices for archival data and a new pricing model that makes it easier to apply analytics and cognitive tools to data with unpredictable access patterns.

Analytics drive new IBM cloud storage pricing

By now, line of business (LOB) managers, having been exhorted to leverage big data and analytics for years, are listening. More recently, the analytics drumbeat has expanded to include not just big data but sexy IoT, predictive analytics, machine learning, and finally cognitive computing. The old idea of keeping data around for a few months, parking it in a long-term archive never to be looked at again, and finally deleting it permanently just isn’t happening as it was supposed to (if it ever did). The failure to permanently remove expired data can become costly from a storage standpoint as well as risky from an e-discovery standpoint.

IBM puts it this way: Businesses typically have to manage across three types of data workloads: “hot” for data that’s frequently accessed and used; “cool” for data that’s infrequently accessed and used; and “cold” for archival data. Cold storage is often defined as cheaper but slower. For example, if a business uses cold storage, it typically has to wait to retrieve and access that data, limiting the ability to rapidly derive analytical or cognitive insights. As a result, there is a tendency to store data in more expensive hot storage.
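
To make the three workload types concrete, here is a minimal sketch of tier classification by access recency. The day thresholds are invented for illustration; neither IBM nor its announcement defines hot, cool, and cold in days.

```python
from datetime import datetime, timedelta

# Illustrative thresholds only; these are not IBM's definitions.
HOT_WINDOW = timedelta(days=30)    # touched within the last month
COOL_WINDOW = timedelta(days=180)  # touched within the last six months

def classify_tier(last_accessed, now=None):
    """Map an object's last-access time to a hot/cool/cold tier."""
    now = now or datetime.utcnow()
    age = now - last_accessed
    if age <= HOT_WINDOW:
        return "hot"   # frequently accessed and used
    if age <= COOL_WINDOW:
        return "cool"  # infrequently accessed and used
    return "cold"      # archival

print(classify_tier(datetime.utcnow() - timedelta(days=400)))  # cold
```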

IBM’s new cloud storage offering, IBM Cloud Object Storage Flex (Flex), uses a “pay as you use” model of storage tiers, potentially lowering the price by 53 percent compared to AWS S3 IA and 75 percent compared to the Azure GRS Cool Tier. (See the footnotes at the bottom of the IBM press release linked to above; note, however, that IBM is not publishing the actual Flex storage prices.) Flex promises simplified pricing for clients whose data usage patterns are difficult to predict, delivering the cost savings of cold storage for rarely accessed data while maintaining high accessibility to all data.
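
Since IBM is not publishing Flex rates, any savings math has to use placeholder numbers. The sketch below only illustrates the shape of the trade-off: a flat hot-tier charge versus a lower at-rest rate plus a charge for the data you actually retrieve. Every rate shown is hypothetical.

```python
# All rates are hypothetical placeholders; IBM has not published
# actual Flex prices ($/GB-month stored, $/GB retrieved).
HOT_RATE = 0.025
FLEX_AT_REST_RATE = 0.010
FLEX_RETRIEVAL_RATE = 0.030

def monthly_cost_hot(stored_gb):
    """Everything parked in hot storage, no retrieval charge."""
    return stored_gb * HOT_RATE

def monthly_cost_flex(stored_gb, retrieved_gb):
    """Pay-as-you-use: cheap at rest plus a charge per GB actually read."""
    return stored_gb * FLEX_AT_REST_RATE + retrieved_gb * FLEX_RETRIEVAL_RATE

# 100 TB stored, only 2 TB of it touched this month.
print(monthly_cost_hot(100_000))          # 2500.0
print(monthly_cost_flex(100_000, 2_000))  # 1060.0
```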

Of course, you could just lower the cost of storage by permanently removing unneeded data. Simply insist that the data owners specify an expiration date when you set up the storage initially. When the date arrives, in 5, 10, or 15 years, automatically delete the data. At least that’s how I was taught eons ago. Of course, storage now costs orders of magnitude less, although storage volumes are orders of magnitude greater and near real-time analytics wasn’t in the picture back then.
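
A minimal sketch of that old retention discipline, assuming a simple (hypothetical) catalog in which each object carries the owner-specified expiration date:

```python
from datetime import date

# Hypothetical catalog: each record carries the expiration date the
# data owner specified when the storage was first set up.
catalog = [
    {"object": "claims-2002.tar", "expires": date(2017, 1, 1)},
    {"object": "sensor-feed.raw", "expires": date(2027, 6, 30)},
]

def purge_expired(records, today=None):
    """Permanently drop records whose owner-set expiration date has passed."""
    today = today or date.today()
    kept = [r for r in records if r["expires"] > today]
    purged = [r for r in records if r["expires"] <= today]
    return kept, purged

catalog, deleted = purge_expired(catalog)
print([r["object"] for r in deleted])  # ['claims-2002.tar'], as of 2017
```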

Without the actual rates for the different storage tiers you cannot determine how much Flex may save you. What it will do, however, is make it more convenient to perform analytics on archived data you might otherwise not bother with. Expect this issue to come up increasingly as IoT ramps up and you are handling more data that doesn’t need hot storage beyond the first few minutes after its arrival.

Finally, the IBM Cloud Object Storage Cold Vault (Cold Vault) service gives clients access to cold storage data on the IBM Cloud and is intended to lead the category in cold data recovery times among the major competitors. Cold Vault joins the existing Standard and Vault tiers to complete a range of IBM cloud storage tiers that are available, with expanded expertise and methods, via Bluemix and through the IBM Bluemix Garages.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

Spectrum Suite Returns IBM to the Storage Game

January 29, 2016

The past four quarters haven’t been kind to IBM storage, as the storage group racked up four consecutive quarters of revenue declines. The Spectrum Suite V1.0 is IBM’s latest software defined storage (SDS) initiative, one of the hottest trends in storage. The product release promises to start turning things around for IBM storage.

Jamie Thomas, GM Storage (Jared Lazarus/Feature Photo Service for IBM)

Driving interest in SDS is the continuing rapid adoption of new workloads, new applications, and new ways of storing and consuming data. The best thing about the Spectrum Suite is the way IBM is now delivering it: as a broad set of storage software capabilities that touch every type of storage operation, regardless of which workloads or applications are driving it or what kind of storage you need. Seventy percent of clients report deploying object storage, and 60 percent already are committed to SDS. Over three-quarters of software defined infrastructure (SDI) adopters also indicated a strong preference for single-vendor storage solutions. This all bodes well for IBM’s Spectrum Suite.

Also working in IBM’s favor is the way storage has traditionally been delivered. Even within one enterprise there can be multiple point solutions from different vendors, or even incompatible solutions from the same vendor. Companies need to transition among storage software offerings as business needs change, which entails adding and removing software licenses. This is always complex and can even lead to dramatic cost gyrations due to different licensing metrics and different vendor policies. On top of that, procurement may not play along quickly, leaving the organization with a gap in functionality. Then there are the typically inconsistent user interfaces among offerings, which invariably reduce productivity and may increase errors.

Add to that the usual hassles of learning different products with different interfaces and different ways to run new storage processes. As a result, a switch to SDS may not be as smooth or efficient as you hoped, and it probably won’t be cheap.

IBM is counting on these storage complications, outlined above, and more to give it a distinct advantage in the SDS market. IBM should know; the company has been one of the offenders, creating similar complications as it cobbled together a wide array of storage products with different interfaces and management processes over the years.

With the new Spectrum Storage Suite IBM finally appears to have gotten it right. IBM is offering a simplified and predictable licensing model for the entire Spectrum Storage family. Pricing is pegged to the capacity being used, regardless of what that capacity is or how it is being used. Block, file, or object, it doesn’t matter; the same per-terabyte pricing applies. IBM estimates that alone can save up to 40 percent compared to licensing different software capabilities separately. Similarly, there are no software licensing hassles when migrating from one form of storage or data type to another. Even the cost won’t change unless you add capacity; then you pay the same per-terabyte cost for the additional capacity.

The Spectrum Suite and its licensing model work for mainframe shops running Linux on z and LinuxONE. Sorry, no z/OS yet.

The new Spectrum Storage approach has advantages when running a storage shop. There are no unexpected charges when using new capabilities, and IBM isn’t charging for non-production uses like dev and test.

Finally, you will find a consistent user interface across all storage components in the Spectrum suite. That was never the case with IBM’s underlying storage hardware products, but Spectrum SDS makes those differences irrelevant. The underlying hardware array doesn’t really matter; admins will rarely ever have to touch it.

The storage capabilities included in IBM Spectrum Storage Suite V1.0 should be very familiar to you from the traditional IBM storage products you probably are currently using. They include:

  • IBM Spectrum Accelerate, Version 11.5.3
  • IBM Spectrum Archive Enterprise Edition, Version 1.2 (Linux edition)
  • IBM Spectrum Control Advanced Edition 5.2
  • IBM Spectrum Protect Suite 7.1
  • IBM Spectrum Scale Advanced and Standard Editions (Protocols) V4.2
  • IBM Spectrum Virtualize Software for SAN Volume Controller, Version 7.6
  • IBM Spectrum Virtualize Software for SAN Volume Controller, Version 7.6 – Real-time Compression
  • IBM Spectrum Virtualize Software for SAN Volume Controller, Version 7.6 – Encryption Software

With Spectrum Storage you can, for example, run SAN storage, storage-rich servers, and a tape library. Add up the storage capacity for each and pay the per-terabyte licensing cost. Re-allocate the existing capacity among the different types of storage and your charges don’t change, as the sketch below illustrates. Pretty nifty, huh? To DancingDinosaur, who has sat through painful discussions of complicated IBM software pricing slopes, this is how you spell relief. Maybe there really is a new IBM coming that actually gets it.
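
Here is a toy illustration of that arithmetic. The per-terabyte rate is a made-up placeholder, not IBM’s price; the point is only that the total capacity, not the mix, drives the charge.

```python
# Made-up per-terabyte rate; IBM's actual Spectrum Suite pricing
# is not quoted in the announcement.
RATE_PER_TB = 100.0

def license_cost(capacity_tb):
    """One per-TB price across block, file, object, and tape alike."""
    return sum(capacity_tb.values()) * RATE_PER_TB

before = {"san": 200, "storage_rich_servers": 150, "tape_library": 650}
after = {"san": 400, "storage_rich_servers": 250, "tape_library": 350}

# Reallocating the same 1,000 TB among storage types leaves the bill unchanged.
print(license_cost(before) == license_cost(after))  # True
```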

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

IBM zSystem Continues Surge in 4Q15

January 22, 2016

DancingDinosaur follows technology, not financial investments, so you’d be an idiot to consider what follows investment advice. It is not. Still, as one who has built a chunk of his career around the mainframe, it is good to see the z System continuing to remain in the black and beating the sexier Power lineup, although I do follow both closely. See the latest IBM financials here.

The IBM z13 System

Specifically, as IBM reported on Tuesday, revenues from z Systems mainframe server products increased 16 percent compared with the year-ago period (up 21 percent adjusting for currency). Total delivery of z Systems computing power, as measured in MIPS (millions of instructions per second), increased 28 percent.

Almost as good, revenues from Power Systems were up 4 percent compared with the 2014 period (up 8 percent adjusting for currency). Power revenues have been up most of the year although they got a little blurry in the accounting.

In the storage market, which is getting battered by software defined storage (SDS) on one hand and cloud-based storage on the other, IBM reported revenues from System Storage decreased 11 percent (down 7 percent adjusting for currency). The storage revenues probably won’t bounce back fast, at least not without IBM bringing out radically new storage products. That storage rival EMC got acquired by Dell should be some kind of signal that the storage market, as the traditional enterprise players knew it, is drastically different. For now, object storage, SDS, and even flash won’t replace the kind of revenue IBM used to see from DS8000 disk systems or TS enterprise tape libraries loaded with mechanical robotics.

Getting more prominence are IBM’s strategic imperatives, a company priority all year: cloud, mobile, analytics, security, IoT, and cognitive computing. Q4 revenues from these strategic imperatives (cloud, analytics, and engagement), as reported by IBM, increased 10 percent year-to-year (up 16 percent adjusting for currency). For the full year, revenues from strategic imperatives increased 17 percent (up 26 percent adjusting for currency and the divested System x business) to $28.9 billion and now represent 35 percent of total IBM consolidated revenue.

For the full year, total cloud revenues (public, private and hybrid) increased 43 percent (up 57 percent adjusting for currency and the divested System x business) to $10.2 billion.  Revenues for cloud delivered as a service — a subset of the total cloud revenue — increased 50 percent to $4.5 billion; and the annual as-a-service run rate increased to $5.3 billion from $3.5 billion in the fourth quarter of 2014.

Meanwhile, revenues from business analytics increased 7 percent (up 16 percent adjusting for currency) to $17.9 billion.  Revenues from mobile more than tripled and from security increased 5 percent (up 12 percent adjusting for currency).

Commenting on IBM’s latest financials was Timothy Prickett Morgan, who frequently writes on IBM’s platforms. Citing statements to analysts by Martin Schroeter, IBM’s chief financial officer, Morgan suggested that low profit margins, which other financial analysts complained about, put pressure on the System z13 product line that launched early in the year. After a fast start, apparently, the z13 is now experiencing a slowdown in the upgrade cycle. It’s at this point that DancingDinosaur usually expects to see a new z, typically a business class version of the latest mainframe, the z13 in this case, but that does not appear to be in the offing. About the closest IBM got to that was the Rockhopper model of LinuxONE, a z optimized only for Linux, cloud, mobile, and analytics.

Morgan also noted that IBM added about 50 new mainframe customers for the year on an installed base of about 6,000 active customers. DancingDinosaur has been tracking that figure for years, and it has not fluctuated much recently. And I am never sure how to count the handful of IT shops that run a z in the IBM cloud. But 5,000 to 6,000 active z shops still sounds about right.

Power Systems has also grown four quarters in a row, up 8 percent at constant currency. This has to be a relief to the company, which has committed over $1 billion to Power. IBM attributes some of this growth to its enthusiastic embrace of Linux on POWER8, but Morgan complains of having no sense of how much of the Power Systems pie is driven by scale-out Linux machines intended to compete against Intel Xeon servers. Power also is starting to get some boost from the OpenPOWER Foundation, whose members started to ship products in the past few months. It’s probably minimal revenue now, but over time it should grow.

For those of us who are counting on z and Power to be around for a while longer, the latest financials should be encouraging.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

IBM Edge2014 Explores Software Defined Everything in Depth

April 18, 2014

IBM Edge2014 is coming up fast, May 19-23 in Las Vegas, at the Venetian. Plus there is the Sheryl Crow concert you don’t want to miss.

IBM Edge2014 is bringing over 400 program sessions across more than a dozen topic tracks; choices for everyone from the geekiest techie to the most buttoned-down business executive. One track that crosses both camps, technical and business, is Software Defined Environments (SDE), which DancingDinosaur thinks of as software-defined everything.

SDE takes the abstraction, or the decoupling, of software and hardware to the next level. It takes virtualization and moves it higher up in the stack. There you can virtualize not only servers but network switches, storage, and more. The benefits: efficiency, speed, and flexibility. You can allocate and deploy system resources quickly and easily through software. And add or move capabilities as needed, again through software.

Through software defined virtualization the IT industry can virtualize nearly every resource and capability. If you can encapsulate a set of capabilities as software, you have created a virtualized component that can run on almost any network-attached device capable of hosting software. In short, you have effectively decoupled those capabilities, usually embedded as firmware, from whatever underlying physical device previously hosted them.
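
As a toy illustration of that decoupling, consider a capability written purely as software and attached to whatever device happens to host it. The class and function names below are invented for this sketch, not drawn from any IBM product.

```python
# Invented names, purely illustrative: a firmware-style capability
# (volume snapshots) expressed as portable software.
class SnapshotService:
    """A storage capability with no ties to any particular device."""
    def snapshot(self, volume):
        return f"snapshot-of-{volume}"

def deploy(capability, host):
    """Attach the capability to any network-attached device able to host software."""
    print(f"{type(capability).__name__} running on {host}")
    return capability

# The same software component runs unchanged wherever it is deployed.
svc = deploy(SnapshotService(), host="x86-node-07")
print(svc.snapshot("vol-123"))
```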

IBM Edge2014 offers numerous software defined sessions. Let’s look at a few:

Software Defined Storage – Storage for Software Defined Environments

Presented by Clodoaldo Barrera

As described in the program, Software Defined Environments (SDE) have become the preferred approach for modern IT operations, combining the values of automation, policy enforcement, and efficiency. Within these SDE operations, storage must be properly configured to support the expected user experience and cost benefits, while still providing performance, availability, and data protection. In this session Barrera will explain how storage is configured and then managed through a stack of virtualization, provisioning, and orchestration software.

IBM Software Defined Storage Vision and Roadmap

Presented by Vincent Hsu, Tom Clark

This session introduces the core technology for IBM Software Defined Storage (SDS) vision and the SDS product roadmap. This includes the control plane technology as well as the data plane technology to address the future software defined data center.

But it is not only about storage.

IBM’s Strategy for Software Defined Networking

Presented by Andy Wright

Software Defined Networking (SDN) is an emerging framework designed for virtual, dynamic, and flexible networking that allows organizations to easily modify, control, and manage physical and virtual networks. IBM already is a leader in this space with its SDN VE offerings, and Wright’s session shows where it is headed, examining IBM’s vision, network virtualization (overlay) capabilities for existing networks, and the capabilities of OpenFlow networks. These technologies promise to improve the working lives of system, virtualization, cloud, and network administrators. If you fill one of these roles, you probably don’t want to miss this.

Continuity Requirements for Software Defined Data Centers

Presented by Jon Toigo

One of the benefits of software defined resources is the ability to spin up additional resources virtually. That leads to the assumption of an agile and dynamic data center that can turn on a dime in response to business requirements. That is true in theory. Rarely discussed, however, are the inherent risks of combining physical and virtual resources, both locally deployed and sourced from remote cloud-based providers. This presentation will identify some disaster recovery and business continuity requirements to keep in mind as you plan your software defined data center and need to rein in management’s wishful thinking.

And on a related topic:

It’s All About the Object! IBM Customer Experiences with Object Storage

Presented by Manuel Avalos, Dan Lucky

Travis County in Austin, TX, is an early adopter of IBM object storage. This case study positions IBM’s Statement of Direction (SOD) about object storage based on OpenStack Swift. Here you can learn from six months of Travis County user experiences, values, and next steps.

And there is another entire session on SDE in the federal government. Overall, IBM Edge2014 is delivering considerable real-world experience direct from users.

Look for DancingDinosaur at IBM Edge2014, Mon.-Wed., at sessions or in the bloggers’ lounge.

And follow DancingDinosaur on Twitter, @mainframeblog.

