SHARE Puts Spotlight on Mainframe Storage, Compression, and More

The mainframe crowd gathered this week at SHARE Boston for a deep dive into the intricacies of the platform. Storage, especially through the MVS Storage Management Project, grabbed a lot of attention. One of the project's primary ongoing activities, identifying new requirements and communicating their priority to IBM, clearly bore fruit in Barbara McDonald's DFSMS Latest and Greatest session.

McDonald started out by noting that many of the enhancements, maybe all, in one way or another address the explosive growth of customer data and the challenge of managing it, driven largely by the surge of interest in big data. To start, DFSMS exploits zEnterprise Data Compression (zEDC), low-cost data compression for z/OS.

Specifically, zEDC, combined with a new hardware feature of the zEC12/zBC12 called zEDC Express, offers compression acceleration designed for high-performance, low-latency compression with little additional overhead. According to IBM, it is optimized for use with large sequential files and uses zlib, an industry-standard compression library. zEDC is complementary to existing System z compression technology: smaller records and files should still use the z/OS hardware compression instructions to get the best benefit, while larger files will be directed to zEDC.
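Since zEDC speaks the same zlib/DEFLATE format as the ubiquitous software library, a minimal sketch in Python can illustrate the kind of space savings involved; the difference on z/OS is that the zEDC Express adapter does this work in hardware instead of burning CPU. The sample log record here is made up for illustration.

```python
import zlib

# zEDC implements the same DEFLATE format as zlib, so software compression
# illustrates the space savings; on z/OS the work is offloaded to the
# zEDC Express adapter rather than consuming general-purpose CPU.
record = b"2013-08-12 12:00:01 TXN OK account=000123 amount=00042.00\n"
data = record * 10_000  # large sequential data is zEDC's sweet spot

compressed = zlib.compress(data, level=6)
ratio = len(data) / len(compressed)
print(f"{len(data)} -> {len(compressed)} bytes ({ratio:.1f}x)")

# Round-trip check: decompression recovers the original bytes exactly.
assert zlib.decompress(compressed) == data
```

Highly repetitive sequential data like this compresses dramatically; real-world ratios depend on the data, which is why IBM steers small records to the existing hardware compression instructions instead.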

Support for additional access methods, specifically BSAM and QSAM, is planned for the end of the first quarter of 2014. This should save disk space and improve effective channel and network bandwidth without incurring significant CPU overhead.

The latest DFSMS enhancements will also, among other things, increase data storage capacity and scalability, handle cross-platform data and storage, and ensure data availability at all levels of the storage hierarchy. They also enhance business resiliency through improved point-in-time copy, fast replication, and continuous data mirroring functions.

Although DFSMS proved interesting at SHARE and included dozens of enhancements, many of them suggested at previous SHARE conferences, there was much more going on. For example, a session on Real-time Compression (RtC), another innovative IBM compression technology, drew a good crowd.

The RtC session was run by Steve Kenniston, IBM's business line executive for data protection. RtC differs from zEDC, starting with the compression algorithm: RtC uses LZ where zEDC uses zlib. RtC also leverages more than 35 IBM patents to compress active, production data in real time with no impact on either application or storage performance. Kenniston also noted that RtC, which he called the industry's only heterogeneous storage compression technology, now handles both block and file data.

Kenniston drilled into the how and why of RtC and its key features, such as compressing active data on the fly. Some of it comes down to the difference between fixed and variable compression. With fixed compression, which most competitors use, changes to the data have to be appended at the end, which leads EMC and NetApp to quip that with RtC you could end up with a compressed file bigger than the original because of those appends.

Not so. RtC sits in the data stream in front of the storage and sees all the data. Like data is compressed as it is written together, and the result is placed into known fixed-size segments. It amounts to a journaled file system inside the compressed file: RtC tracks where the compressed segments are put. When compressed data is deleted, RtC knows where it sat in the compressed file and can put new like data into the now-empty space. At worst, the RtC compressed data temporarily becomes a single I/O larger than it started.
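The journal-plus-fixed-segments idea can be sketched in a few lines of Python. This is an illustrative model only, not RtC's proprietary implementation: the `SegmentStore` class, the 4KB segment size, and the use of zlib in place of RtC's LZ algorithm are all assumptions made for the sketch. It shows the mechanism described above: compressed chunks land in fixed-size slots, a journal records where each one went, and slots freed by deletes are reused rather than forcing the file to grow.

```python
import zlib

SEGMENT_SIZE = 4096  # hypothetical fixed segment size, for illustration only

class SegmentStore:
    """Toy model of journaled, fixed-segment compressed placement."""

    def __init__(self):
        self.segments = []    # fixed-size slots holding compressed chunks
        self.journal = {}     # key -> slot index: "where did that chunk go?"
        self.free_slots = []  # slots emptied by deletes, ready for reuse

    def write(self, key, data):
        chunk = zlib.compress(data)  # stand-in for RtC's LZ compression
        assert len(chunk) <= SEGMENT_SIZE, "chunk must fit one segment"
        if self.free_slots:              # reuse space freed by a delete...
            slot = self.free_slots.pop()
            self.segments[slot] = chunk
        else:                            # ...else grow by exactly one segment
            slot = len(self.segments)
            self.segments.append(chunk)
        self.journal[key] = slot

    def delete(self, key):
        slot = self.journal.pop(key)
        self.segments[slot] = None       # slot is now empty...
        self.free_slots.append(slot)     # ...and recorded for reuse

    def read(self, key):
        return zlib.decompress(self.segments[self.journal[key]])
```

Deleting one item and writing another leaves the store the same size, which is the point: because the journal knows where the hole is, new data fills it instead of being appended at the end.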

RtC also uses time-based optimization rather than the location-based optimization IBM's competitors use. Since it sits in the data stream and understands how the data was created, RtC can achieve greater optimization. And since it doesn't have to read before writing, it can get better compression ratios. In one example, Kenniston showed RtC delivering up to 5x compression while maintaining or improving database transaction response time and overall business throughput.

In general, companies can expect a 30%-40% reduction in $/GB for common configurations. In one case, 168 900GB SAS drives representing 100TB listed for $757,880. Using RtC, you needed only 67 SAS drives to get equivalent capacity. The cost, including the RtC software license and three years of maintenance, was $460,851. The roughly $297,000 difference amounted to a 40% savings.

There was more interesting stuff at SHARE, including how the rise of BYOD smartphones and tablets could return the mainframe data center to a new version of 3270 computing, an interesting back-to-the-future scenario. I'll take that up and more in another posting.
