Posts Tagged ‘software-defined storage (SDS)’

IBM Insists Storage is Generating Positive Revenue

May 19, 2017

At a recent quarterly briefing on the company’s storage business, IBM managers crowed over its success: 2,000 new Spectrum Storage customers, 1,300 new DS8880 systems shipped, 1,500 PB of capacity shipped, and a 7% revenue gain in 1Q’17. This appeared to contradict yet another consecutive losing quarter, in which only IBM’s Cognitive Solutions segment (which includes Solutions Software and Transaction Processing Software) posted positive revenue.

However, Martin Schroeter, Senior Vice President and Chief Financial Officer (1Q’17 financials here), sounded upbeat about IBM storage in the quarterly statement: “Storage hardware was up seven percent this quarter, led by double-digit growth in our all-flash array offerings. Flash contributed to our Storage revenue growth in both midrange and high-end. In storage, we continue to see the shift in value towards software-defined environments, where we continue to lead the market. We again had double-digit revenue growth in Software-Defined Storage, which is not reported in our Systems segment. Storage software now represents more than 40 percent of our total storage revenue.”

IBM Flash System A9000

Highly parallel all-flash storage for hyperscale and cloud data centers

Schroeter continued: “Storage gross margins are down, as hardware continues to be impacted by price pressure. To summarize Systems, our revenue and gross profit performance were driven by expected cycle declines in z Systems and Power, mitigated by Storage revenue growth. We continue to expand our footprint and add new capabilities, which address changing workloads. While we are facing some shifting market dynamics and ongoing product transitions, our portfolio remains uniquely optimized for cognitive and cloud computing.”

DancingDinosaur hopes he is right.  IBM has been signaling a new z System coming for months, along with enhancements to Power storage. Just two weeks ago IBM reported achievements with Power and Nvidia, as DancingDinosaur covered at that time.

If there was any doubt, all-flash storage is the way IBM and most other storage providers are heading for performance and competitive economics. In January IBM announced three all-flash DS888* products, which DancingDinosaur covered at the time here. Specifically:

  • DS8884 F (the F designates all flash)—described by IBM as performance delivered within a flexible and space-saving package
  • DS8886 F—combines performance, capacity, and cost to support a variety of workloads and applications
  • DS8888 F—promises performance and capacity designed to address the most demanding business workload requirements

The three products are intended to provide the speed and reliability needed for workloads ranging from enterprise resource planning (ERP) and financial transactions to cognitive applications like machine learning and natural language processing. DancingDinosaur doubts that a lot of mainframe data centers are doing much with cognitive systems yet, but that will be coming.

Spectrum Storage also appears to be looming large in IBM’s storage plans. Spectrum Storage is IBM’s software-defined storage (SDS) family of products. DancingDinosaur covered the latest refresh of the suite this past February.

The highlights of the recent announcement included the addition of Cloud Object Storage and a version of Spectrum Virtualize as software only.  Spectrum Control got a slew of enhancements, including new cloud-based storage analytics for Dell EMC VNX, VNXe, and VMAX; extended capacity planning views for external storage, and transparent cloud tiering for IBM Spectrum Scale.  The on-premises editions added consolidated chargeback/showback and support for Dell EMC VNXe file storage. This should make it clear that Spectrum Storage is not only for underlying IBM storage products.

Along the same lines, Spectrum Storage added VMware 6 support and a certified vSphere Web Client. In the area of cloud object storage, IBM added native NFS access, enhanced STaaS multi-tenancy, IPv6 support, and preconfigured bundles.

IBM also previewed enhancements coming in 2Q’17. Of specific interest to DancingDinosaur readers will likely be the updates to the FlashSystem and VersaStack portfolios.

The company is counting on these enhancements and more to help pull IBM out of its tailspin. As Schroeter wrote in the 1Q’17 report: New systems product introductions later in the year will drive improved second half performance as compared to the first. Hope so; already big investors are cashing out. Clients, however, appear to be staying for now.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

Can SDS and Flash Resurrect IBM Storage?

November 4, 2016

As part of IBM’s ongoing string of quarterly losses, storage has consistently contributed to the red ink, but the company is betting on cloud storage, an all-flash strategy, and software-defined storage (SDS) to turn things around. Any turnaround, however, is closely tied to the success of IBM’s strategic imperatives, which have emerged as bright spots amid the continuing quarterly losses, especially cloud, analytics, and cognitive computing.


Climate study needs large amounts of fast data access

As a result, IBM needs to respond to two challenges created by its customers: 1) changes like the increased adoption of cloud, analytics, and most recently cognitive computing and 2) the need by customers to reduce the cost of their IT infrastructure. The problem as IBM sees it is this: how do I simultaneously optimize the traditional application infrastructure and free up money to invest in a new-generation application infrastructure, especially if I expect to move forward into the cognitive era at some point? IBM’s answer is to invest in flash and SDS.

A few years ago DancingDinosaur was skeptical that flash deployment would lower storage costs except in situations where low-cost IOPS was critical. Today, between the falling cost of flash and new ways to deploy it, DancingDinosaur believes flash storage can save IT real money.

According to the Evaluator Group, as cited by IBM, flash and hybrid cloud technologies are dramatically changing the way companies deploy storage and design applications. As new applications are created, often for mobile or distributed access, the ability to store data in the right place, on the right media, and with the right access capability will become even more important.

In response, companies are adding cloud to lower costs, flash to increase performance, and SDS to add flexibility. IBM is integrating these capabilities together with security and data management for faster return on investment. Completing the IBM pitch, the company offers a choice among on-premises storage, SDS, or storage as a cloud service.

In an announcement earlier this week IBM introduced seven products:

  • IBM Spectrum Virtualize 7.8 with transparent cloud tiering
  • IBM Spectrum Scale 4.2.2 with cloud data sharing
  • IBM Spectrum Virtualize family flash enhancements
  • IBM Storwize family upgrades
  • IBM DS8880 High Performance Flash Enclosure Gen2
  • IBM DeepFlash Elastic Storage Server
  • VersaStack—a joint IBM-Cisco initiative

In short, these announcements address hybrid cloud enablement as a standard feature for new and existing users of Spectrum Virtualize, and enable data sharing to the cloud through Spectrum Scale, which can sync file and object data across on-premises and cloud storage to connect cloud-native applications. In addition, the high-density, highly scalable all-flash storage now sports a new high-density expansion enclosure that includes new 7TB and 15TB flash drives.

IBM Storwize, too, is included, now able to grow up to 8x larger than previously without disruption. That means up to 32PB of flash storage in only four racks to meet the needs of fast-growing cloud workloads in space-constrained data centers. Similarly, IBM’s new DeepFlash Elastic Storage Server (ESS) offers up to 8x better performance than HDD-based solutions for big data and analytics workloads. Built with IBM Spectrum Scale, ESS includes virtually unlimited scaling, enterprise security features, and unified file, object, and HDFS support.

The z can play in this party too. IBM’s DS8888 now delivers 2x better performance and 3x more efficient use of rack space for mission-critical applications such as credit card and banking transactions as well as airline reservations running on IBM’s z System or IBM Power Systems. DancingDinosaur first reported on the all flash z, the DS8888, when it was introduced last May.

Finally, IBM Spectrum Virtualize brings hybrid cloud enablement for existing and new on-premises storage, extending hybrid cloud capabilities for block storage to the Storwize family, FlashSystem V9000, SVC, and VersaStack, the IBM-Cisco collaboration.

Behind every SDS deployment lies actual physical storage of some type. Many opt for generic, low-cost white-box storage to save money. As part of IBM’s latest SDS offerings you can choose among nearly 400 storage systems from IBM and others. DancingDinosaur doubts any of those others are white-box products, but at least they give you some non-IBM options to potentially lower your storage costs.


IBM’s Latest Flash Announcements Target Dell/EMC

August 26, 2016

The newest IBM storage, announced here earlier this week, aims to provide small, midsize, and global enterprises with virtualized SDS for primary storage and for cloud or cognitive applications and workloads. Central to the effort is IBM Spectrum Virtualize, which automates Storwize all-flash solutions intended to reduce the cost and complexity of data center and cloud environments. Entry pricing for the new storage starts at $19,000, which IBM describes as cost-conscious.

IBM All-Flash for the midrange

In addition, IBM announced Flash In, a no-cost storage migration program targeting Dell/EMC customers that IBM hopes will bail out of the merged operation.

SDS in the form of IBM Spectrum Virtualize is central to making IBM’s latest all-flash offerings work for the broad set of use cases IBM envisions.  As IBM puts it: organizations today are embracing all-flash storage to deliver speed and response times necessary to support growing data workloads across public, private, and hybrid cloud environments, as well as the emerging demands of cognitive applications and workloads.

IBM Spectrum Virtualize promises to improve storage efficiency through features such as real-time data compression, thin provisioning, and snapshotting across nearly 400 different storage arrays from a multitude of vendors. That means organizations can leverage, even repurpose, physical storage capacity they already have as they scramble to meet the storage needs of new workloads.
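To put rough numbers on those efficiency features, here is a back-of-the-envelope sketch of how compression and thin provisioning multiply the capacity an admin can present to hosts. The 2:1 compression ratio and 1.5x thin overcommit factor are illustrative assumptions, not IBM-published figures:

```python
# Illustrative (not IBM-published) numbers: estimate effective
# capacity when thin provisioning and real-time compression are
# layered on an existing array via storage virtualization.

def effective_capacity_tb(raw_tb, compression_ratio, thin_overcommit):
    """Logical capacity presentable to hosts.

    compression_ratio: e.g. 2.0 means data stores in half the space.
    thin_overcommit: logical capacity provisioned per unit of physical
    capacity (hosts rarely fill everything they are allocated).
    """
    return raw_tb * compression_ratio * thin_overcommit

# A hypothetical 100 TB array, 2:1 compression, 1.5x overcommit:
print(effective_capacity_tb(100, 2.0, 1.5))  # 300.0
```

That multiplier effect, applied to arrays a shop already owns, is the economic argument behind "leverage, even repurpose, physical storage capacity they already have."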

Spectrum Virtualize also addresses data security, reliability, and operational costs. For example, the software automatically tiers and migrates data from one storage array to another, provides secure data-at-rest encryption, and remotely replicates data for disaster recovery and business continuity.

The announcement centers on two products, the enterprise-class IBM Storwize V7000F and the midsize IBM Storwize V5030F, which promise enterprise-class availability and function in mid-range and entry-level all-flash storage arrays. At the same time, both offer greater performance and require less time to provision and optimize. Coincidentally, IBM has just been recognized, for the third year in a row, as a leader for flash storage in the Gartner Magic Quadrant for Solid-State Arrays (SSA).

Specifically, the all-flash IBM Storwize V7000F improves performance by up to 45 percent and supports four times the clustering for scale-out and scale-up growth to help organizations manage rapidly growing datasets. The midrange and entry-level all-flash IBM Storwize V5030F offers high performance and availability at a discounted entry point (noted above) to help clients control costs.

The all-flash Storwize V7000F and Storwize V5030F are also built to manage a variety of primary storage workloads, from database management systems, such as SQL Server and MySQL, to digital media sources that include broadcast, real-time streaming, and video surveillance. The new technology can also handle huge data volumes, such as IoT data.

Given the product line confusion that typically characterizes big technology platform mergers, IBM is looking to entice some Dell or, more likely, EMC storage customers to the new Storwize offerings. To that end, IBM is offering what it describes as a no-cost migration initiative for organizations that are not current IBM customers and seeking a smooth transition path from their EMC or Dell storage to the IBM family of all-flash arrays. BTW: EMC is a leading provider of z System storage.

While it is too early to spot any Dell or EMC customer response, one longtime IBM customer, Royal Caribbean Cruises Ltd., has joined the flash storage party. “With ever increasing volumes of customer and operational information, flexible and secure data storage is crucial to keeping our operation afloat (hope the pun was intended) as our company expands to hundreds of destinations worldwide,” said Leonardo Irastorza, Technology Revitalization & Global Shared Services Manager. The cruise line is counting on IBM flash storage to play a critical role, especially when it comes to ensuring exceptional guest experiences across its brands.

And more is coming: IBM released the following statement of direction: IBM intends to enhance IBM Spectrum Virtualize with additional capabilities for flash drive optimization and management. These capabilities are intended to help increase the service life and usability of flash drives, particularly read-intensive flash drives. The planned capabilities will likely include:

  • Data deduplication for workloads and use cases where it complements IBM’s existing industry leading compression technology
  • Improved flash memory management (mainly for garbage collection)
  • Additional flash drive wear management and reporting.

By implementing these capabilities in IBM Spectrum Virtualize, IBM makes them available for the IBM Storwize family, FlashSystem V9000, and SAN Volume Controller offerings, as well as VersaStack (the IBM/Cisco collaboration) and IBM PurePower systems.


IBM’s DeepFlash 150 Completes Its Flash Lineup for Now

July 29, 2016

Two years ago DancingDinosaur wrote about new IBM flash storage for the mainframe. That was about the DS8870, featuring 6-nines (99.9999%) availability and Real-time Compression. Then this past May DancingDinosaur reported on another new IBM all-flash initiative, including the all-flash IBM DS8888 for the z, which also boasts 6-nines availability. Just this week IBM announced it is completing its flash lineup with the IBM DeepFlash 150, intended as a building block for software-defined storage (SDS) infrastructures.

IBM DeepFlash 150, courtesy of IBM

As IBM reports, the DeepFlash 150 does not use conventional solid-state drives (SSD). Instead, it relies on a systems-level approach that enables organizations to manage much larger data sets without having to manage individual SSD or disk drives. DeepFlash 150 comes complete with all the hardware necessary for enterprise and hyper-scale storage, including up to 64 purpose-engineered flash cards in a 3U chassis and 12-Gbps SAS connectors for up to eight host servers. The wide range of IBM Spectrum Storage and other SDS solutions available for DeepFlash 150 provides flash-optimized scale out and management along with large capacity for block, file and object storage.

The complication for z System shops is that you access the DeepFlash 150 through IBM Spectrum Scale. Apparently you can’t just plug the DeepFlash 150 into the z the way you would plug in the all flash DS8888. IBM Spectrum Scale works with Linux on z Systems servers or IBM LinuxONE systems running RHEL or SLES. Check out the documentation here.

As IBM explains in the Red Book titled IBM Spectrum Scale (GPFS) for Linux on z Systems: IBM Spectrum Scale provides a highly available clustering solution that augments the strengths of Linux on z by helping the z data center control costs and achieve higher levels of quality of service. Spectrum Scale, based on IBM General Parallel File System (GPFS) technology, is a high performance shared-disk file management solution that provides fast, reliable access to data from multiple nodes in a cluster environment. Spectrum Scale also allows data sharing in a mixed platform environment, which can provide benefits in cloud or analytics environments by eliminating the need of transferring data across platforms. When it comes to the DeepFlash 150 IBM is thinking about hyperscale data centers.

Hyperscale data centers can’t absorb the costs of constructing, managing, maintaining, and cooling massive hyperscale environments that use conventional mechanical storage, according to IBM. Those costs are driving the search for storage with a smaller physical footprint, lower costs, greater density, and, of course, much higher performance.

Enter DeepFlash 150, which introduces what IBM considers breakthrough economics for active data sets. The basic DeepFlash 150 hardware platform is priced under $1/GB. For big data deployments IBM recommends IBM Spectrum Scale with DeepFlash 150, providing customers with the overlying storage services and functionality critical for optimization of their big data workloads.

But even at $1/GB DeepFlash 150 isn’t going to come cheap. For starters consider how many gigabytes are in the terabytes or petabytes you will want to install. You can do the math. Even at $1/GB this is going to cost. Then you will need IBM Spectrum Scale. With DeepFlash 150 IBM did achieve extreme density of up to 170TB per rack unit, which adds up to a maximum 7PB of flash in a single rack enclosure.
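Since IBM invites you to do the math, a quick sketch using the two figures cited in this post, the $1/GB base hardware price and the 170TB-per-rack-unit density (Spectrum Scale licensing comes on top and is not modeled here):

```python
# Rough DeepFlash 150 arithmetic from the figures IBM cites:
# $1/GB base hardware price and 170 TB per rack unit of density.
# Spectrum Scale licensing would be an additional, unmodeled cost.

PRICE_PER_GB = 1.00
TB_PER_RACK_UNIT = 170

def hardware_cost_usd(capacity_tb):
    """Base hardware cost at $1/GB (decimal: 1 TB = 1,000 GB)."""
    return capacity_tb * 1_000 * PRICE_PER_GB

def rack_units_needed(capacity_tb):
    """Ceiling division: rack units to hold the requested capacity."""
    return -(-capacity_tb // TB_PER_RACK_UNIT)

print(hardware_cost_usd(1_000))   # 1 PB -> 1000000.0, about $1M
print(rack_units_needed(1_000))   # 6 rack units
print(rack_units_needed(7_000))   # 42, the ~7 PB single-rack figure
```

A petabyte at a million dollars before software is exactly why "even at $1/GB this is going to cost."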

IBM Spectrum Scale and the DeepFlash 150 are intended to support a wide range of file, object and Hadoop Distributed File System (HDFS) analytics workloads. According to IBM, as a true SDS solution IBM Spectrum Scale can utilize any appropriate hardware and is designed specifically to maximize the benefits of hyperscale storage systems like DeepFlash 150. Using a scale-out architecture, IBM Spectrum Scale can add servers or multiple storage types and incorporate them automatically into a single managed resource to maximize performance, efficiency, and data protection.

Although DeepFlash 150 can be used with a private cloud IBM seems to be thinking more in terms of hybrid clouds. To address today’s need for seamlessly integrating high-performance enterprise storage such as DeepFlash 150 with the nearly unlimited resources and capabilities of the cloud, IBM Spectrum Scale offers transparent cloud tiering to place data on cloud-based object storage or in a public cloud service. As IBM explains, the transparent cloud tiering feature of IBM Spectrum Scale can connect on-premises storage such as DeepFlash 150 directly to object storage or a commodity-priced cloud service. This allows enterprises to simultaneously leverage the economic, collaboration, and scale benefits of both on-premises and cloud storage while providing a single, powerful view of all data assets.

A Tech Target report on enterprise flash storage profiled 15 flash storage product lines. In general, the products claim maximum read IOPS ranging from 200,000 to 9 million, peak read throughput from 2.4 GBps to 46 GBps, and read latencies from 50 microseconds to 1 millisecond. The guide comes packed with a ton of caveats. And that’s why DancingDinosaur doesn’t think the DeepFlash 150 is the end of IBM’s flash efforts. Stay tuned.


Spectrum Suite Returns IBM to the Storage Game

January 29, 2016

The past four quarters haven’t been kind to IBM storage as the storage group racked up consecutive quarterly revenue losses. Spectrum Suite V1.0 is IBM’s latest software-defined storage (SDS) initiative, one of the hottest trends in storage. The product release promises to start turning things around for IBM storage.

IBM Mobile Storage, Jamie Thomas, GM Storage (Jared Lazarus/Feature Photo Service for IBM)

Driving interest in SDS is the continuing rapid adoption of new workloads, new applications, and new ways of storing and consuming data. The best thing about the Spectrum Suite is the way IBM is now delivering it: as a broad set of storage software capabilities that touch every type of storage operation. It doesn’t much matter which workloads or applications are driving it or what kind of storage you need. Seventy percent of clients report deploying object storage, and 60% already are committed to SDS. Over three-quarters of software-defined infrastructure (SDI) adopters also indicated a strong preference for single-vendor storage solutions. This all bodes well for IBM’s Spectrum Suite.

Also working in IBM’s favor is the way storage has traditionally been delivered. Even within one enterprise there can be multiple point solutions from different vendors, or even incompatible solutions from the same vendor. Companies need to transition among storage software offerings as business needs change, which entails adding and removing software licenses. This is always complex and may even lead to dramatic cost gyrations due to different licensing metrics and different vendor policies. On top of that, procurement may not play along so quickly, leaving the organization with a gap in functionality. Then there are the typical inconsistent user interfaces among offerings, which invariably reduce productivity and may increase errors.

Add to that the usual hassles of learning different products with different interfaces and different ways to run new storage processes. As a result, a switch to SDS may not be as smooth or efficient as you hoped, and it probably won’t be cheap.

IBM is counting on these storage complications, outlined above, and more to give it a distinct advantage in the SDS market. IBM should know; the company has been one of the offenders, creating similar complications as it cobbled together a wide array of storage products with different interfaces and management processes over the years.

With the new Spectrum Storage Suite IBM finally appears to have gotten it right. IBM is offering a simplified and predictable licensing model for the entire Spectrum Storage family. Pricing is pegged to the capacity being used, regardless of what that capacity is and how it is being used. Block, file, object: it doesn’t matter; the same per-terabyte pricing applies. IBM estimates that alone can save up to 40% compared to licensing different software capabilities separately. Similarly, there are no software licensing hassles when migrating from one form of storage or data type to another. Even the cost won’t change unless you add capacity. Then, you pay the same per-terabyte cost for the additional capacity.

The Spectrum Suite and its licensing model work for mainframe shops running Linux on z and LinuxONE. Sorry, no z/OS yet.

The new Spectrum Storage approach has advantages when running a storage shop. There are no unexpected charges when using new capabilities and IBM isn’t charging for non-production uses like dev and test.

Finally, you will find a consistent user interface across all storage components in the Spectrum suite. That was never the case with IBM’s underlying storage hardware products, but Spectrum SDS makes those differences irrelevant. The underlying hardware array doesn’t really matter; admins will rarely ever have to touch it.

The storage capabilities included in IBM Spectrum Storage Suite V1.0 should be very familiar to you from the traditional IBM storage products you probably are currently using. They include:

  • IBM Spectrum Accelerate, Version 11.5.3
  • IBM Spectrum Archive Enterprise Edition, Version 1.2 (Linux edition)
  • IBM Spectrum Control Advanced Edition 5.2
  • IBM Spectrum Protect Suite 7.1
  • IBM Spectrum Scale Advanced and Standard Editions (Protocols) V4.2
  • IBM Spectrum Virtualize Software for SAN Volume Controller, Version 7.6
  • IBM Spectrum Virtualize Software for SAN Volume Controller, Version 7.6 – Real-time Compression
  • IBM Spectrum Virtualize Software for SAN Volume Controller, Version 7.6 – Encryption Software

With Spectrum Storage you can, for example, run SAN storage, storage rich servers, and a tape library. Add up the storage capacity for each and pay the per-terabyte licensing cost. Re-allocate the existing capacity between the different types of storage and your charges don’t change. Pretty nifty, huh? To DancingDinosaur, who has sat through painful discussions of complicated IBM software pricing slopes, this is how you spell relief. Maybe there really is a new IBM coming that actually gets it.
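The SAN/server/tape example above can be sketched in a few lines; the $100/TB rate is a made-up placeholder purely to show that re-allocating capacity among storage types leaves the charge unchanged:

```python
# Flat per-terabyte licensing: the charge depends only on total
# capacity, not on how it is split among storage types.
# The $100/TB rate is a hypothetical placeholder, not IBM pricing.

RATE_PER_TB = 100

def spectrum_charge(capacities_tb):
    """License charge for a dict of {storage type: capacity in TB}."""
    return sum(capacities_tb.values()) * RATE_PER_TB

before = {"SAN": 200, "storage-rich servers": 100, "tape library": 300}
after  = {"SAN": 100, "storage-rich servers": 250, "tape library": 250}

print(spectrum_charge(before))  # 60000
print(spectrum_charge(after))   # 60000 -- re-allocated, same charge
```

Compare that one-line pricing function with a per-product, per-metric license table and the appeal to anyone who has negotiated storage software renewals is obvious.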


Industrial Strength SDS for the Cloud

June 12, 2014

The hottest thing in storage today is software defined storage (SDS). Every storage vendor is jumping on the SDS bandwagon.

The presentation titled Industrial-Strength SDS for the Cloud, by Sven Oehme, IBM Senior Research Scientist, drew a packed audience at Edge 2014 and touched on many of the sexiest acronyms in IBM’s storage portfolio. These included not just GPFS but also GSS (also called GPFS Storage Server), GNR (GPFS Native RAID), LROC (local read-only cache), and even worked in the Linear Tape File System (LTFS).

The session promised to outline the customer problems SDS solves and show how to deploy it in large scale OpenStack environments with IBM GPFS.  Industrial strength generally refers to large-scale, highly secure and available multi-platform environments.

The session abstract explained that the session would show how GPFS enables resilient, robust, reliable, storage deployed on low-cost industry standard hardware delivering limitless scalability, high performance, and automatic policy-based storage tiering from flash to disk to tape, further lowering costs. It also promised to provide examples of how GPFS provides a single, unified, scale-out data plane for cloud developers across multiple data centers worldwide. GPFS unifies OpenStack VM images, block devices, objects, and files with support for Nova, Cinder, Swift and Glance (OpenStack components), along with POSIX interfaces for integrating legacy applications. C’mon, if you have even a bit of IT geekiness, doesn’t that sound tantalizing?

One disclaimer before jumping into some of the details; despite having written white papers on SDS and cloud your blogger can only hope to approximate the rich context provided at the session.

Let’s start with the simple stuff: the expectations and requirements for cloud storage:

  • Elasticity, within and across sites
  • Secure isolation between tenants
  • Non-disruptive operations
  • No degradation by failing parts as components fail at scale
  • Different tiers for different workloads
  • Converged platform to handle boot volumes as well as file/object workload
  • Locality awareness and acceleration for exceptional performance
  • Multiple forms of data protection

Of course, affordable hardware and maintenance is expected as is quota/usage and workload accounting.

Things start getting serious with IBM’s General Parallel File System (GPFS). This is what IBMers really mean when they refer to Elastic Storage: a single namespace provided across individual storage resources, platforms, and operating systems. Add in different classes of storage devices (fast or slow disk, SSD, flash, even LTFS tape), storage pools, and policies to control data placement and you’ve got the ability to do storage tiering. You can even geographically distribute the data through IBM’s Active Cloud Engine, initially a SONAS capability sometimes referred to as Active File Manager. Now users can access data by the same name regardless of where it is located. And since the system keeps distributed copies of the latest data, it can handle a temporary loss of connectivity between sites.
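Policy-based placement and tiering in GPFS/Spectrum Scale is expressed in a small SQL-like rule language. Here is a minimal sketch; the pool, rule, and file-name patterns are hypothetical examples, not from the session (in practice such rules are installed with mmchpolicy and run with mmapplypolicy):

```
/* Hypothetical placement rules: pool and rule names are examples. */
RULE 'dbToFlash' SET POOL 'flashpool' WHERE UPPER(NAME) LIKE '%.DB'
RULE 'default'   SET POOL 'diskpool'

/* Migrate off the hot pool when it passes 80% full, stop at 60%. */
RULE 'spill' MIGRATE FROM POOL 'flashpool'
             THRESHOLD(80,60) TO POOL 'diskpool'
```

The point is that tiering policy lives in software, not in any one array's firmware, which is what makes the "automatic policy-based storage tiering from flash to disk to tape" claim practical.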

To protect the data, add in declustered software RAID, aka GNR, or even GSS (GPFS Storage Server). The beauty of this is that it reduces the space overhead of replication through declustered parity (80% vs. 33% utilization) while delivering extremely fast rebuilds. In the process you can remove hardware storage controllers from the picture by doing the migration and RAID management in software on your commodity servers.
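The 80% vs. 33% figures fall out of simple arithmetic: three-way replication keeps one usable copy out of three, while a declustered 8+2 parity stripe keeps eight data strips out of ten. (The 8+2 geometry is an assumed example; other data/parity widths give other utilization figures.)

```python
# Space efficiency of 3-way replication vs. 8+2 declustered parity.
# The 8+2 layout is an illustrative assumption of GNR-style erasure
# coding, not a figure taken from the Edge session itself.

def usable_fraction_replication(copies):
    """Fraction of raw capacity usable when keeping N full copies."""
    return 1 / copies

def usable_fraction_parity(data_strips, parity_strips):
    """Fraction of raw capacity usable in a data+parity stripe."""
    return data_strips / (data_strips + parity_strips)

print(round(usable_fraction_replication(3) * 100))  # 33 (percent)
print(round(usable_fraction_parity(8, 2) * 100))    # 80 (percent)
```

The fast-rebuild claim follows from the same declustering: because the stripe is spread across all disks in the array, every disk participates in a rebuild instead of a single hot spare.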

dino industrial sds 1

In the above graphic, focus on everything below the elongated blue triangle. Since it is all done in software, you can add an object API for object storage. Throw in encryption software. Want Hadoop? Add that too. The power of SDS. Sweet.

The architecture Oehme lays out utilizes generic servers with direct-attached switched JBOD (SBOD). It also makes ample use of LROC, which provides a large read cache that benefits many workloads, including SPECsfs, VMware, OpenStack, other virtualization, and database workloads.

A key element in Oehme’s SDS for the cloud is OpenStack. From a storage standpoint OpenStack Cinder, which provides access to block storage as if it were local, enables the efficient sharing of data between services. Cinder supports advanced features, such as snapshots, cloning, and backup. On the back end, Cinder supports Linux servers with iSCSI and LVM; storage controllers; shared filesystems like GPFS, NFS, GlusterFS; and more.

Since Oehme’s goal is to produce industrial-strength SDS for the cloud, it needs to protect data. Data protection is delivered through backups, snapshots, cloning, replication, file-level encryption, and declustered RAID, which spans all disks in the declustered array and results in faster RAID rebuilds (because more disks are available to participate in the rebuild).

The result is highly virtualized, industrial-strength SDS for deployment in the cloud. Can you bear one more small image that promises to put this all together? Will try to leave it as big as will fit. Notice it includes a lot of OpenStack components connecting storage elements. Here it is.

dino industrial sds 2

DancingDinosaur is Alan Radding. Follow DancingDinosaur on Twitter @mainframeblog

Learn more about Alan Radding at technologywriter.com

