Posts Tagged ‘IBM Spectrum Storage’

IBM Brings NVMe to Revamped Storage

February 23, 2018

The past year has been good for IBM storage, and not only because the company rang up four consecutive quarters of positive storage revenue. Over that period, and starting somewhat earlier, the company embarked on a thorough revamping of its storage lineup, adding all the hot goodies from flash to software-defined storage (Spectrum) to NVMe (Non-Volatile Memory Express) in 2018. NVMe represents a culmination of sorts, allowing the revamped storage products to actually deliver on the low-latency and parallelism promises of the latest technology.

Hyper-Scale Manager for IBM FlashSystem (Jared Lazarus/Feature Photo Service for IBM)

The revamp follows changes in the way organizations deploy technology. They are wrestling with exponential data growth and the need to quickly modernize traditional IT infrastructures to take advantage of multi-cloud, analytics, and cognitive/AI workloads.

This is not just a revamp of existing products. IBM has added innovations and enhancements across the storage portfolio to expand the range of data types supported, deliver new function, and enable new technology deployment.

This week, IBM Storage, the #2 storage software vendor by revenue market share according to IDC, announced a wide-ranging set of innovations to its software-defined storage (SDS), data protection, and storage systems portfolio. Continuing IBM's investment in its SDS (Spectrum), data protection, and storage systems capabilities, the announcements position IBM storage solutions as the foundation for multi-cloud and cognitive/AI applications and workloads.

With these enhancements, IBM aims to transform on-premises infrastructure to meet these new business imperatives. For example, IBM Spectrum NAS delivers enterprise capabilities and SDS simplicity with cost benefits for common file workloads, including support for Microsoft environments. And IBM Spectrum Protect still addresses data security concerns but has added support for General Data Protection Regulation (GDPR) compliance along with automated detection of and alerting on ransomware.

Along the same lines, IBM Spectrum Storage Suite offers a complete solution for software-defined storage needs and gains expanded range and value through the inclusion of IBM Spectrum Protect Plus at no additional charge. Similarly, IBM Spectrum Virtualize promises lower data storage costs through new, better-performing data reduction technologies for the IBM Storwize family, IBM SVC, and IBM FlashSystem V9000, as well as for more than 440 storage systems from other vendors.

Finally, IBM Spectrum Connect simplifies management of complex server environments by providing a consistent experience when provisioning, monitoring, automating, and orchestrating IBM storage in container, VMware, and Microsoft PowerShell environments. Orchestration is critical in increasingly complex container environments.

The newest part of the IBM storage announcements is NVM Express (NVMe), an open logical device interface specification for accessing non-volatile storage media attached via the PCI Express (PCIe) bus. The non-volatile memory in question is flash memory, typically in the form of solid-state drives (SSDs). NVMe provides a logical device interface designed from the ground up to capitalize on the low latency and internal parallelism of flash-based storage devices, essentially mirroring the parallelism of modern CPUs, platforms, and applications.

By design, NVMe allows host hardware and software to fully exploit the levels of parallelism possible in modern SSDs. As a result, NVMe reduces I/O overhead and brings various performance improvements relative to previous logical device interfaces, including multiple long command queues and reduced latency. (The previous interface protocols were developed for far slower hard disk drives (HDDs), where a lengthy delay between a request and the corresponding data transfer was simply expected.)
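To make the parallelism point concrete, here is a minimal Python sketch of the I/O pattern NVMe is built for: many independent reads kept in flight at once. The device path is hypothetical, and the real submission and completion queues live in the kernel's NVMe driver, not in application code; the application simply benefits from them.

```python
# A minimal sketch of the parallel I/O pattern NVMe is designed for.
# The device path is hypothetical; the kernel's NVMe driver manages the
# actual submission/completion queues -- applications just benefit from them.
import os
from concurrent.futures import ThreadPoolExecutor

DEVICE = "/dev/nvme0n1"   # hypothetical NVMe namespace (block device)
BLOCK = 4096              # one 4 KiB read per request
REQUESTS = 1024           # number of independent reads to keep in flight

def read_block(fd: int, index: int) -> bytes:
    # os.pread carries its own offset, so many threads can issue reads
    # on the same descriptor without coordinating a seek.
    return os.pread(fd, BLOCK, index * BLOCK)

def main() -> None:
    fd = os.open(DEVICE, os.O_RDONLY)
    try:
        # With an HDD these reads would serialize behind one slow actuator;
        # an NVMe SSD services them in parallel across its deep queues.
        with ThreadPoolExecutor(max_workers=64) as pool:
            results = list(pool.map(lambda i: read_block(fd, i), range(REQUESTS)))
        print(f"read {sum(len(r) for r in results)} bytes in {REQUESTS} parallel requests")
    finally:
        os.close(fd)

if __name__ == "__main__":
    main()
```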

NVMe devices exist both as standard PCIe expansion cards and as 2.5-inch form-factor devices that provide a four-lane PCIe interface through the U.2 connector (formerly known as SFF-8639). SATA Express storage devices and the M.2 specification for internally mounted expansion cards also support NVMe as the logical device interface.

NVMe may sound like overkill now, but it won't be the next time you upgrade your IT infrastructure. Don't plan on buying more HDDs or falling back on yesterday's interface protocols. With IoT, cognitive computing, blockchain, and more, your users will have no tolerance for a slow infrastructure.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.

IBM’s DeepFlash 150 Completes Its Flash Lineup for Now

July 29, 2016

Two years ago DancingDinosaur wrote about new IBM flash storage for the mainframe. That piece covered the DS8870, featuring six-nines (99.9999%) availability and Real-time Compression. This past May, DancingDinosaur reported on another IBM all-flash initiative, including the all-flash IBM DS8888 for the z, which also boasts six-nines availability. Just this week IBM announced it is completing its flash lineup with the IBM DeepFlash 150, intended as a building block for SDS (software-defined storage) infrastructures.

IBM DeepFlash 150, courtesy of IBM

As IBM reports, the DeepFlash 150 does not use conventional solid-state drives (SSDs). Instead, it relies on a systems-level approach that enables organizations to manage much larger data sets without having to manage individual SSDs or disk drives. DeepFlash 150 comes complete with all the hardware necessary for enterprise and hyperscale storage, including up to 64 purpose-engineered flash cards in a 3U chassis and 12-Gbps SAS connectors for up to eight host servers. The wide range of IBM Spectrum Storage and other SDS solutions available for DeepFlash 150 provides flash-optimized scale-out and management along with large capacity for block, file, and object storage.

The complication for z System shops is that you access the DeepFlash 150 through IBM Spectrum Scale. Apparently you can't just plug the DeepFlash 150 into the z the way you would plug in the all-flash DS8888. IBM Spectrum Scale works with Linux on z Systems servers or IBM LinuxONE systems running RHEL or SLES. Check out the documentation here.

As IBM explains in the Redbook titled IBM Spectrum Scale (GPFS) for Linux on z Systems, IBM Spectrum Scale provides a highly available clustering solution that augments the strengths of Linux on z by helping the z data center control costs and achieve higher levels of quality of service. Spectrum Scale, based on IBM General Parallel File System (GPFS) technology, is a high-performance shared-disk file management solution that provides fast, reliable access to data from multiple nodes in a cluster environment. Spectrum Scale also allows data sharing in a mixed-platform environment, which can benefit cloud or analytics environments by eliminating the need to transfer data across platforms. When it comes to the DeepFlash 150, IBM is thinking about hyperscale data centers.
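Because Spectrum Scale presents a POSIX file system to every node in the cluster, ordinary file I/O works unchanged. Here is a minimal sketch, assuming a hypothetical mount point /gpfs/fs1, that coordinates concurrent writers with standard POSIX byte-range locks, which GPFS enforces cluster-wide:

```python
# A minimal sketch of shared-file access on a Spectrum Scale (GPFS) mount.
# The mount point /gpfs/fs1 is hypothetical; GPFS honors POSIX file locks
# across every node in the cluster, so the same code coordinates writers
# whether they run on one machine or many.
import fcntl
import socket
import time

SHARED_FILE = "/gpfs/fs1/shared/app.log"  # hypothetical path on the cluster file system

def append_record(message: str) -> None:
    with open(SHARED_FILE, "a") as f:
        # Lock the file; on GPFS this lock is enforced cluster-wide,
        # so writers on other nodes block until we release it.
        fcntl.lockf(f, fcntl.LOCK_EX)
        try:
            f.write(f"{time.time():.3f} {socket.gethostname()} {message}\n")
            f.flush()
        finally:
            fcntl.lockf(f, fcntl.LOCK_UN)

if __name__ == "__main__":
    append_record("node checked in")
```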

Hyperscale data centers can't absorb the costs of constructing, managing, maintaining, and cooling massive hyperscale environments that use conventional mechanical storage, according to IBM. Those costs are driving the search for storage with a smaller physical footprint, lower costs, greater density, and, of course, much higher performance.

Enter DeepFlash 150, which introduces what IBM considers breakthrough economics for active data sets. The basic DeepFlash 150 hardware platform is priced under $1/GB. For big data deployments, IBM recommends IBM Spectrum Scale with DeepFlash 150, providing customers with the overlying storage services and functionality critical for optimizing their big data workloads.

But even at $1/GB, DeepFlash 150 isn't going to come cheap. For starters, consider how many gigabytes are in the terabytes or petabytes you will want to install; you can do the math. Then you will need IBM Spectrum Scale on top. With DeepFlash 150, IBM did achieve extreme density of up to 170TB per rack unit, which adds up to a maximum of 7PB of flash in a single rack enclosure.
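The back-of-the-envelope math is easy enough to run yourself. The sketch below uses the announced figures plus two assumptions: decimal units (1PB = 1,000,000GB) and a standard 42U rack.

```python
# Back-of-the-envelope math on the announced DeepFlash 150 figures.
# Decimal units assumed (1 PB = 1,000,000 GB); the 42U rack is an assumption.
PRICE_PER_GB = 1.00          # USD, basic hardware platform as announced
GB_PER_TB = 1_000
GB_PER_PB = 1_000_000
DENSITY_TB_PER_U = 170       # announced density per rack unit
RACK_UNITS = 42              # a standard full-height rack (assumption)

capacity_pb = DENSITY_TB_PER_U * RACK_UNITS * GB_PER_TB / GB_PER_PB
print(f"full rack: {capacity_pb:.1f} PB")  # ~7.1 PB, matching the 7PB claim
print(f"1 PB of flash: ${GB_PER_PB * PRICE_PER_GB:,.0f}")  # $1,000,000
print(f"full rack of flash: ${capacity_pb * GB_PER_PB * PRICE_PER_GB:,.0f}")
```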

IBM Spectrum Scale and the DeepFlash 150 are intended to support a wide range of file, object, and Hadoop Distributed File System (HDFS) analytics workloads. According to IBM, as a true SDS solution IBM Spectrum Scale can utilize any appropriate hardware and is designed specifically to maximize the benefits of hyperscale storage systems like DeepFlash 150. Using a scale-out architecture, IBM Spectrum Scale can add servers or multiple storage types and incorporate them automatically into a single managed resource to maximize performance, efficiency, and data protection.

Although DeepFlash 150 can be used with a private cloud, IBM seems to be thinking more in terms of hybrid clouds. To integrate high-performance enterprise storage such as DeepFlash 150 with the nearly unlimited resources and capabilities of the cloud, IBM Spectrum Scale offers transparent cloud tiering to place data on cloud-based object storage or in a public cloud service. As IBM explains, the transparent cloud tiering feature of IBM Spectrum Scale can connect on-premises storage such as DeepFlash 150 directly to object storage or a commodity-priced cloud service. This allows enterprises to simultaneously leverage the economic, collaboration, and scale benefits of both on-premises and cloud storage while providing a single, powerful view of all data assets.
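The idea behind transparent cloud tiering is simple even if the machinery isn't. The toy sketch below illustrates age-based tiering in Python; it is emphatically not Spectrum Scale's actual policy engine or API, and the paths, age threshold, and upload function are all hypothetical.

```python
# A toy sketch of the age-based tiering idea behind transparent cloud
# tiering. This is NOT Spectrum Scale's actual policy engine or API;
# paths, the age threshold, and upload_to_object_store are hypothetical.
import time
from pathlib import Path

HOT_TIER = Path("/deepflash/data")  # hypothetical on-premises flash mount
COLD_AGE_DAYS = 90                  # hypothetical policy: untouched for 90 days

def upload_to_object_store(path: Path) -> None:
    # Placeholder for a real object-storage PUT (S3, Swift, etc.).
    print(f"would migrate {path} to cloud object storage")

def tier_cold_files(root: Path, max_age_days: int) -> None:
    cutoff = time.time() - max_age_days * 86400
    for path in root.rglob("*"):
        # Migrate files whose last access predates the cutoff.
        if path.is_file() and path.stat().st_atime < cutoff:
            upload_to_object_store(path)
            # A real tiering engine would replace the file with a stub
            # and recall it transparently on access; we only report.

if __name__ == "__main__":
    tier_cold_files(HOT_TIER, COLD_AGE_DAYS)
```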

A TechTarget report on enterprise flash storage profiled 15 flash storage product lines. In general, the products claim maximum read IOPS ranging from 200,000 to 9 million, peak read throughput from 2.4 GBps to 46 GBps, and read latencies from 50 microseconds to 1 millisecond. The guide comes packed with a ton of caveats, and that's why DancingDinosaur doesn't think the DeepFlash 150 is the end of IBM's flash efforts. Stay tuned.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

Legacy Storage vs. Software Defined Storage at IBM Edge2015

May 21, 2015

At Edge2015, software-defined storage (SDS) primarily meant IBM Spectrum Storage, the new storage software portfolio designed to address data storage inefficiencies by separating storage functionality from the underlying hardware through an intelligent software layer. To see what DancingDinosaur posted on Spectrum Storage in February when it was unveiled, click here. Spectrum became the subject of dozens of sessions at the conference. Check out a general sampling of Edge2015 sessions here.

Jon Toigo, a respected storage consultant and, to some, an infuriating iconoclast, jumped into the legacy storage vs. SDS discussion in a session provocatively titled 50 Shades of Grey. He started by declaring that “true SANs never reached the market.” SDS, on the other hand, promises the world: storage flexibility, efficiency, avoidance of vendor lock-in, and on and on.

Slide courtesy of Jon Toigo

What the industry actually delivered for storage sharing, Toigo explained, was serial SCSI over a physical-layer fabric, with a physical-layer switch making and breaking server-storage connections at high speed. Although network-like, it had no management layer (which should be part of any true network model, he believes). Furthermore, the result was limited by the Fibre Channel Protocol and standards designed so that “two vendors could implement switch products that conformed to the letter of the standard…with absolute certainty that they would NOT work together,” said Toigo. iSCSI later enabled storage fabrics over TCP/IP, which made fabrics easier to deploy since organizations were already running TCP/IP networks for other purposes.

Toigo’s key requirement is unified storage management, which means managing the diversity and heterogeneity of the arrays comprising the SAN. The culprits preventing this, as he sees it, are so-called value-add services on array controllers that create islands of storage. You know these services: thin provisioning, on-array tiering, mirroring, replication, dedupe, and more. The same value-add services also drive the high cost of storage. “Storage hardware components are commoditized, but value-add software sustains pricing.”

Spectrum Storage incorporates more than 700 IBM patents and is designed to help organizations transform to a hybrid cloud business model by managing massive amounts of data where and how they want it, quickly and easily, from a single dashboard. The software helps clients move data to the right location at the right time: to flash storage for fast access, or to tape and cloud for the lowest cost.

This apparently works for Toigo, with only a few quibbles: vendors make money by adding more software, and inefficiency creeps in when they implement non-standard commands. IBM, however, is mostly in agreement with Toigo. According to IBM, a new approach is needed to help organizations address storage cost and complexity driven by tremendous data growth; traditional storage is inefficient in today’s world. Spectrum Storage software, IBM continues, helps organizations more efficiently leverage their hardware investments to extract the full business value of data. Listen closely and you might even hear Toigo mutter amen.

SDS may or may not be the solution. Toigo titled his session 50 Shades of Grey because the vendors can’t even agree on a definition of what constitutes SDS. Yet it is being presented as a panacea for everything that is wrong with legacy storage.

The key differentiator for Toigo is where a vendor’s storage intelligence resides: on the array controller, in the server hypervisor, or in the software stack. As it turns out, some solutions are hypervisor-dedicated or hypervisor-dependent. VMware’s Virtual SAN, for instance, only works with VMware’s hypervisor. Microsoft’s Clustered Storage Spaces is proprietary to Microsoft, though it promises to share its storage with VMware: simple as pie, just convert your VMware workload into Microsoft VHD format and import it into Hyper-V so you can share the Microsoft SDS infrastructure.

IBM Spectrum Storage passes Toigo’s 50 Shades test. It promises simple, efficient storage without the cost or complexity of dedicated hardware. IBM managers at Edge2015 confirmed Spectrum can run on generic servers and with generic disk arrays. With SDS you want everything agnostic for maximum flexibility.

Toigo’s preferred approach: virtualized SDS with virtual storage pools and select centralized value-add services that can be readily allocated to any workload regardless of the hypervisor. DancingDinosaur will drill down into other interesting Edge2015 sessions in subsequent posts.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing on Technologywriter.com and here.

