Posts Tagged ‘storage’

Industrial Strength SDS for the Cloud

June 12, 2014

The hottest thing in storage today is software defined storage (SDS). Every storage vendor is jumping on the SDS bandwagon.

The presentation titled Industrial-Strength SDS for the Cloud, by Sven Oehme, IBM Senior Research Scientist, drew a packed audience at Edge 2014 and touched on many of the sexiest acronyms in IBM’s storage portfolio. These included not just GPFS but also GSS (also called GPFS Storage Server), GNR, LROC (local read-only cache), and even Linear Tape File System (LTFS).

The session promised to outline the customer problems SDS solves and show how to deploy it in large scale OpenStack environments with IBM GPFS.  Industrial strength generally refers to large-scale, highly secure and available multi-platform environments.

The session abstract explained that the session would show how GPFS enables resilient, robust, reliable storage deployed on low-cost industry standard hardware delivering limitless scalability, high performance, and automatic policy-based storage tiering from flash to disk to tape, further lowering costs. It also promised to provide examples of how GPFS provides a single, unified, scale-out data plane for cloud developers across multiple data centers worldwide. GPFS unifies OpenStack VM images, block devices, objects, and files with support for Nova, Cinder, Swift and Glance (OpenStack components), along with POSIX interfaces for integrating legacy applications. C’mon, if you have even a bit of IT geekiness, doesn’t that sound tantalizing?

One disclaimer before jumping into some of the details: despite having written white papers on SDS and cloud, your blogger can only hope to approximate the rich context provided at the session.

Let’s start with the simple stuff: the expectations and requirements for cloud storage:

  • Elasticity, within and across sites
  • Secure isolation between tenants
  • Non-disruptive operations
  • No degradation as components fail at scale
  • Different tiers for different workloads
  • Converged platform to handle boot volumes as well as file/object workload
  • Locality awareness and acceleration for exceptional performance
  • Multiple forms of data protection

Of course, affordable hardware and maintenance are expected, as are quota/usage and workload accounting.

Things start getting serious with IBM’s General Parallel File System (GPFS). This is what IBMers really mean when they refer to Elastic Storage: a single name space provided across individual storage resources, platforms, and operating systems. Add in different classes of storage devices (fast or slow disk, SSD, Flash, even LTFS tape), storage pools, and policies to control data placement and you’ve got the ability to do storage tiering. You can even geographically distribute the data through IBM’s Active Cloud Engine, initially a SONAS capability sometimes referred to as Active File Manager. Now you have a situation where users can access data by the same name regardless of where it is located. And since the system keeps distributed copies of the latest data it can handle a temporary loss of connectivity between sites.
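To make the tiering idea concrete, here is a minimal Python sketch of the kind of placement decision a policy engine makes. It is a conceptual illustration only, not GPFS’s actual policy language, and the pool names and thresholds are hypothetical.

```python
from datetime import datetime, timedelta

# Hypothetical storage pools, ordered fastest to slowest.
POOLS = ["flash", "fast_disk", "nearline", "ltfs_tape"]

def choose_pool(last_access: datetime, size_bytes: int) -> str:
    """Pick a target pool for a file based on age and size.

    A conceptual stand-in for a policy-driven tiering engine like the
    one GPFS provides; real GPFS policies are declarative rules, and
    these thresholds are made up for illustration.
    """
    age = datetime.now() - last_access
    if age < timedelta(days=7):
        return "flash" if size_bytes < 1 << 30 else "fast_disk"
    if age < timedelta(days=90):
        return "nearline"
    return "ltfs_tape"  # cold data can migrate to tape and be recalled on demand

if __name__ == "__main__":
    # A 5 GB file untouched for 200 days lands on tape.
    print(choose_pool(datetime.now() - timedelta(days=200), 5 << 30))
```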

To protect the data add in declustered software RAID, aka GNR or even GSS (GPFS Storage Server). The beauty of this is it reduces the space overhead of replication through declustered parity (80% vs. 33% utilization) while delivering extremely fast rebuild.  In the process you can remove hardware storage controllers from the picture by doing the migration and RAID management in software on your commodity servers.
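The utilization numbers quoted above fall straight out of the arithmetic: three-way replication keeps one usable copy out of three written, while a declustered 8+2 parity layout keeps eight usable strips out of ten. A quick back-of-the-envelope check, with the 8+2 geometry and the array size assumed purely for illustration:

```python
def usable_fraction_replication(copies: int) -> float:
    """Usable capacity fraction when every block is stored `copies` times."""
    return 1.0 / copies

def usable_fraction_parity(data_strips: int, parity_strips: int) -> float:
    """Usable capacity fraction for an N+P parity/erasure layout."""
    return data_strips / (data_strips + parity_strips)

print(f"3-way replication:      {usable_fraction_replication(3):.0%}")   # ~33%
print(f"8+2 declustered parity: {usable_fraction_parity(8, 2):.0%}")      # 80%

# Rebuild intuition: in a declustered array the rebuild work is spread
# across all surviving disks instead of hammering a single spare.
disks = 58  # hypothetical declustered array size
print(f"Each surviving disk handles roughly {1 / (disks - 1):.1%} of the rebuild")
```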

[Image: dino industrial sds 1]

In the above graphic, focus on everything below the elongated blue triangle. Since it is all being done in software, you can add an Object API for object storage. Throw in encryption software. Want Hadoop? Add that too. The power of SDS. Sweet.

The architecture Oehme lays out utilizes generic servers with direct-attached switched JBOD (SBOD). It also makes ample use of LROC, which provides a large read cache that benefits many workloads, including SPECsfs, VMware, OpenStack, other virtualization, and database workloads.

A key element in Oehme’s SDS for the cloud is OpenStack. From a storage standpoint OpenStack Cinder, which provides access to block storage as if it were local, enables the efficient sharing of data between services. Cinder supports advanced features, such as snapshots, cloning, and backup. On the back end, Cinder supports Linux servers with iSCSI and LVM; storage controllers; shared filesystems like GPFS, NFS, GlusterFS; and more.
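For readers who want to see what that looks like from the consumer side, here is a minimal sketch using the openstacksdk Python client. It assumes a cloud entry named "mycloud" exists in clouds.yaml and that an operator has already configured a Cinder backend (such as the GPFS driver); it is illustrative, not a recipe from the session.

```python
import openstack  # pip install openstacksdk

# Assumes a cloud named "mycloud" is defined in clouds.yaml and that the
# Cinder backend (e.g., the GPFS driver) is already configured.
conn = openstack.connect(cloud="mycloud")

# Provision a 10 GB block volume; Cinder carves it out of whatever
# backend the requested volume type maps to.
vol = conn.block_storage.create_volume(size=10, name="demo-volume")
vol = conn.block_storage.wait_for_status(vol, status="available")

# Point-in-time snapshot, one of the advanced features mentioned above.
snap = conn.block_storage.create_snapshot(volume_id=vol.id, name="demo-snap")
print(vol.id, snap.id)
```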

Since Oehme’s goal is to produce industrial-strength SDS for the cloud, it needs to protect data. Data protection is delivered through backups, snapshots, cloning, replication, file-level encryption, and declustered RAID, which spans all disks in the declustered array and results in faster RAID rebuild (because more disks are available to participate in the rebuild).

The result is highly virtualized, industrial-strength SDS for deployment in the cloud. Can you bear one more small image that promises to put this all together? I’ll try to leave it as big as will fit. Notice it includes a lot of OpenStack components connecting storage elements. Here it is.

[Image: dino industrial sds 2]

DancingDinosaur is Alan Radding. Follow DancingDinosaur on Twitter @mainframeblog

Learn more about Alan Radding at technologywriter.com

New IBM Flash Storage for the Mainframe

June 2, 2014

IBM is serious about flash storage and is enabling just about everything for flash—the DS8000 family, SAN Volume Controller, EasyTier, Real-time Compression (RtC), and more. Of particular interest to DancingDinosaur readers should be the recently announced DS8870 all-flash enclosure.

Storage in general is changing fast. Riding Moore’s Law for the past two decades, storage users could assume annual drops in the cost per gigabyte. It was as predictable as passing go in Monopoly and collecting $200. But with that ride coming to an end companies like IBM are looking elsewhere to engineer the continued improvements everyone assumed and benefited from. For example, IBM is combining SVC, RtC, and flash to get significantly more performance out of less actual storage capacity.

The DS8870 is particularly interesting. In terms of reliability, for instance, it delivers not five-nines (99.999%) availability but six-nines (99.9999%) availability. That works out to be about 30 seconds of downtime each year. It works with all IBM servers, not just the z, and it protects data through full disk encryption and advanced access control. With the new flash enclosure packed with IBM’s enhanced flash the DS8870 delivers 4x faster flash performance in 50% less space. That translates into a 3.2x improvement in database performance.
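The difference between five nines and six nines is easy to quantify; a quick sketch of the arithmetic behind the "about 30 seconds" figure:

```python
SECONDS_PER_YEAR = 365 * 24 * 60 * 60

def downtime_seconds(availability: float) -> float:
    """Expected downtime per year at a given availability level."""
    return (1 - availability) * SECONDS_PER_YEAR

print(f"Five nines: {downtime_seconds(0.99999):.0f} s/yr (~5.3 minutes)")
print(f"Six nines:  {downtime_seconds(0.999999):.0f} s/yr (~32 seconds)")
```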

Flash is not cheap when viewed through the traditional cost/gigabyte metric, but the above performance data suggests a different way to gauge the cost of flash, which continues to steadily fall in price. The 3.2x increase in database performance, for example, means you can handle over 300% more transactions.

Let’s start with the assumption that more transactions ultimately translate into more revenue. The same for that extra 9 in availability. The high-performance all flash DS8870 configuration with the High Performance Flash Enclosure also reduces the footprint by 50% and reduces power consumption by 12%, which means lower space and energy costs. It also enables you to shrink batch times by 10%, according to IBM. DancingDinosaur will be happy to help you pull together a TCO analysis for an all-flash DS8870 investment.

The sheer specs of the new system are impressive. IBM reports that up to 8 PCIe flash enclosures populated with 400 GB flash cards provide 73.6TB of usable capacity. For I/O capacity, the 8 I/O bays installed in the base frame provide up to 128 8Gb FC ports. Depending on the internal server you install in the DS8870 you can also get up to 1TB of cache.

[Image: all flash rack enclosure]

[Image: ds8870 rack]

The Flash Enclosure itself is a 1U drawer that can take up to 30 flash cards. Opting for thirty 400GB flash cards yields 9.2TB usable (12TB raw). Since the high-performance all-flash DS8870 can take up to 8 Flash Enclosures, you can get 96TB raw (73.6TB usable) flash capacity per system.
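The capacity figures follow directly from the card and enclosure counts; a quick check of the math:

```python
CARD_GB = 400            # raw capacity per flash card
CARDS_PER_ENCLOSURE = 30
ENCLOSURES = 8           # maximum in the all-flash DS8870

raw_per_enclosure_tb = CARD_GB * CARDS_PER_ENCLOSURE / 1000   # 12 TB raw
raw_system_tb = raw_per_enclosure_tb * ENCLOSURES             # 96 TB raw

# Usable capacity after RAID and spare overhead, per IBM's figures.
usable_per_enclosure_tb = 9.2
usable_system_tb = usable_per_enclosure_tb * ENCLOSURES       # 73.6 TB usable

print(f"{raw_per_enclosure_tb:.1f} TB raw per enclosure, {raw_system_tb:.0f} TB raw per system")
print(f"{usable_per_enclosure_tb:.1f} TB usable per enclosure, {usable_system_tb:.1f} TB usable per system")
```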

A hybrid DS8870 system, as opposed to the high-performance all-flash version, will allow up to 120 flash cards in 4 Flash Enclosures for 48TB raw (36.8TB usable), along with 1536 2.5” HDDs/SSDs. Then, connect it all to the DS8870 internal PCIe fabric for impressive performance: 200,000 IOPS (100% read) and 130,000 IOPS (100% write). From there, you can connect it to flash-enabled SVC and EasyTier.

Later this year, reports Clod Barrera, IBM’s storage CTO, you will be able to add 4 more enclosures in hybrid configurations for boosting flash capacity up to 96TB raw.  Together you can combine the DS8870, flash, SVC, RtC, and EasyTier for a lightning fast and efficient storage infrastructure.

Even the most traditional System z shop will soon find itself confronting mixed workloads consisting of traditional and non-traditional work. You probably already are, as mobile devices initiate requests for mainframe data, and pretty soon you will be incorporating both. When that happens you will want a fast, efficient, flexible infrastructure like the DS8870.

DancingDinosaur is Alan Radding. Follow him on Twitter, @mainframeblog

IBM Edge2014: It’s All About the Storage

May 22, 2014

When your blogger, as a newbie programmer, published his first desktop application in the prehistoric desktop computing era, it had to be distributed on consumer tape cassette. When buyers complained that it didn’t work, the problem was quickly traced to imprecise and inconsistent consumer cassette storage. Since the dawn of the computer era, it has always been about storage.

It still is. Almost every session at IBM Edge2014, whether on cloud or analytics or mobile, seemed to touch on storage in one way or another. Kicking it all off was Tom Rosamilia, Senior Vice President, IBM Systems & Technology Group, who elaborated on IBM’s main theme not just for Edge2014 but for IBM at large: Infrastructure Matters Because Business Outcomes Matter. And by infrastructure IBM mainly is referring to storage.

To reinforce his infrastructure-matters point, Rosamilia cited a recent IBM study showing that 70% of top executives now recognize infrastructure as an enabler. However, just 10% reported their infrastructure was ready for the challenge. As an interesting aside, the study found 91% of the respondents’ customer-facing applications were using the System z, which only emphasizes another theme at IBM Edge2014—that companies need to connect systems of record with systems of engagement if they want to be successful.

In fact, IBM wants to speed up computing overall, starting with flash and storage. A study by the Aberdeen Group found that a 1 sec. delay in page load resulted in a 7% loss in conversions, 11% fewer page views, and a 16% decrease in customer satisfaction. IBM’s conclusion: in dollar terms, this means that if your site typically earns $100,000 a day, this year you could lose $2.5 million in sales. Expect all IBM storage to be enabled for some form of flash going forward.
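The dollar figure is simple arithmetic on the conversion-loss number, using the example site from IBM’s conclusion:

```python
daily_revenue = 100_000    # example site from IBM's conclusion
conversion_loss = 0.07     # Aberdeen's figure for a 1-second delay

annual_loss = daily_revenue * conversion_loss * 365
print(f"${annual_loss:,.0f} per year")   # ~$2.6 million, i.e., roughly $2.5M
```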

First announced at IBM Edge2014 were the FlashSystem 840 and the IBM FlashSystem V840, which includes integrated data virtualization through IBM’s SVC and its various components. It also boasts a more powerful controller capable of rich capabilities like compression, replication, tiering, thin provisioning, and more. Check out the details here.

Also at Edge2014 there was considerable talk about Elastic Storage. This is the storage you have always imagined. You can manage mixed storage pools of any device. Integrate with any OS. Write policies to it. It seems infinitely scalable. Acts as a universal cloud gateway. And even works with tape.

Sounds magical, doesn’t it? According to IBM, Elastic Storage provides automated tiering to move data across different storage media types. Infrequently accessed files can be migrated to tape and automatically recalled back to disk when required—sounds like EasyTier built in. Unlike traditional storage, it allows you to smoothly grow or shrink your storage infrastructure without application disruption or outages. And it can run on a cluster of x86 and POWER-based servers and can be used with internal disk, commodity storage, or advanced storage systems from IBM or other vendors. Half the speakers at the conference glowed about Elastic Storage. Obviously it exists, but it is not an actual named product yet. Watch for it; it is going to have a different name when finally released, probably later this year. No hint at what that name will be.

IBM, at the conference, identified the enhanced XIV as the ideal cloud infrastructure. XIV eliminates complexity. It enables high levels of resiliency and ensures service levels. As one speaker said: “It populates LUNs and spreads the workload evenly. You don’t even have to load balance it.” Basically, it is grid storage that is ideal for the cloud.

LTFS (Linear Tape File System) was another storage technology that came up surprisingly frequently. Don’t assume that tape has no future, not judging from IBM Edge2014. LTFS provides a GUI that enables you to automatically move infrequently accessed data from disk to tape without the need for proprietary tape applications. Implementing LTFS Enterprise Edition allows you to replace disk with tape for tiered storage and lower your storage TCO by over 50%. Jon Toigo, a leading storage analyst, has some good numbers on tape economics that may surprise you.

Another sometimes overlooked technology is EasyTier, IBM’s storage tiering tool.  EasyTier has evolved into a main way for IBM storage users to capitalize on the benefits of Flash. EasyTier already has emerged as an effective tool for both the DS8000 and the Storwize V7000.  With EasyTier small amounts of Flash can deliver big performance improvements.

In the coming weeks DancingDinosaur will look at other IBM Edge 2014 topics.  It also is time to start thinking about IBM Enterprise 2014, which combines the System z and Power platforms. It will be at the Venetian in Las Vegas, Oct 6-10. IBM Enterprise 2014 is being billed as the premier enterprise infrastructure event.

BTW, we never effectively solved the challenge of distributing desktop programs until the industry came out with 5.25” floppy disks. Years later my children used the unsold floppies as little Frisbees.

Follow Alan Radding and DancingDinosaur on Twitter, @mainframeblog

IBM Edge2014 Explores Software Defined Everything in Depth

April 18, 2014

IBM Edge2014 is coming up fast, May 19-23 in Las Vegas, at the Venetian. Plus there is the Sheryl Crow concert you don’t want to miss.

IBM Edge2014 is bringing over 400 program sessions across more than a dozen topic tracks; choices for everyone from the geekiest techie to the most buttoned down business executive. One that crosses both camps—technical and business—is Software Defined Environments (SDE), which DancingDinosaur thinks of as software-defined everything.

SDE takes the abstraction, or the decoupling, of software and hardware to the next level. It takes virtualization and moves it higher up in the stack.  There you can virtualize not only servers but network switches, storage, and more. The benefit: efficiency, speed, and flexibility. You can allocate and deploy system resources quickly and easily through software. And add or move capabilities as needed, again through software.

Through software defined virtualization the IT industry can virtualize nearly every resource and capability. If you can encapsulate a set of capabilities as software you have created a virtualized component that can run on almost any network-attached device capable of hosting software.  In short, you have effectively decoupled those capabilities usually embedded as firmware from whatever underlying physical device previously hosted them.

IBM Edge2014 offers numerous software defined sessions. Let’s look at a few:

Software Defined Storage – Storage for Software Defined Environments

Presented by Clodoaldo Barrera

As described in the program, Software Defined Environments (SDE) have become the preferred approach for modern IT operations, combining the values of automation, policy enforcement, and efficiency. Within these SDE operations, storage must be properly configured to support the expected user experience and cost benefits, while still providing performance, availability, and data protection. In this session Barrera will explain how storage is configured and then managed through a stack of virtualization, provisioning, and orchestration software.

IBM Software Defined Storage Vision and Roadmap

Presented by Vincent Hsu, Tom Clark

This session introduces the core technology for IBM Software Defined Storage (SDS) vision and the SDS product roadmap. This includes the control plane technology as well as the data plane technology to address the future software defined data center.

But it is not only about storage. IBM’s Strategy for Software Defined Networking

Presented by Andy Wright

Software Defined Networking (SDN) is an emerging framework designed for virtual, dynamic, and flexible networking that allows organizations to easily modify, control, and manage physical and virtual networks. IBM already is a leader in this space with its SDN VE offerings, and its roadmap tells you where it is headed. Wright’s session examines IBM’s vision, network virtualization (overlay) capabilities for existing networks, and the capabilities of OpenFlow networks. These technologies promise to improve the working lives of system, virtualization, cloud, and network administrators. If you fill one of these roles, you probably don’t want to miss this.

Continuity Requirements for Software Defined Data Centers

Presented by Jon Toigo

One of the benefits of software defined resources is the ability to spin up additional resources virtually. That leads to the assumption of an agile and dynamic data center that can turn on a dime in response to business requirements. That is true in theory. Rarely discussed, however, are the inherent risks of combining physical and virtual resources, both locally deployed and sourced from remote cloud-based providers. This presentation will identify some disaster recovery and business continuity requirements to keep in mind as you plan your software defined data center and need to rein in management’s wishful thinking.

And a related topic, It’s All About the Object! IBM Customer Experiences with Object Storage

Presented by Manuel Avalos, Dan Lucky

Travis County in Austin, TX is an early adopter of IBM object storage. This case study positions IBM’s Statement Of Direction (SOD) about object storage based on OpenStack Swift. Here you can learn from six months of Travis County user experiences, values, and next steps.

And there is another entire session on SDE in the federal government. Overall, IBM Edge2014 is delivering considerable real-world experience direct from users.

Look for DancingDinosaur at IBM Edge2014, Mon-Wed. at sessions or in the bloggers lounge.

And follow DancingDinosaur on Twitter, @mainframeblog

Expect Flash to be Big at Edge 2014

March 26, 2014

You can almost hear the tom-toms thumping as IBM picks up the beat for flash storage and its FlashSystem, and for good reason. Almost everything companies want to do these days requires fast, efficient storage. Everything waits for data—applications, servers, algorithms, virtually any IT resource. And fast data, of course, depends on the responsiveness of the storage. Flash’s time has arrived.

To get the responsiveness they needed, companies previously loaded up with conventional DASD: spinning disks that top out at 15K RPM, or cheaper DASD at 5400 RPM. To coax sufficient I/O per second (IOPS) out of them, they ganged together massive amounts of DASD just to get more parallel spindles to compensate for the low per-spindle IOPS. Sure, the disks were cheap, but the cost per IOPS was sky high, especially considering all the overhead and inefficiency they had to absorb.

But in this era of big data analytics, where an organization’s very competitiveness depends on absorbing massive amounts of data fast, that old approach doesn’t work anymore. You can’t aggregate enough spindles to handle the huge amounts of machine-generated, sensor, or meter data, not to mention data created by millions, possibly even billions, of people on Facebook, Twitter, and everywhere else, and still keep up with the data flow. You can’t possibly come up with meaningful results fast enough to be effective. Opportunities will fly past you.

Furthermore, traditional high performance storage comes at a high price, not just in the acquisition cost of large volumes of spinning disk but also in the inefficiency of its deployment. Sure, the cost per gigabyte may be low, but aggregating spindles by the ton while leaving large chunks of capacity unused quickly offsets any gains from a low cost per gigabyte. In short, traditional storage, especially high performance storage, imposes economic limits on the usefulness and scalability of many analytics environments.
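To see why cost per gigabyte is the wrong lens, compare the number of devices needed to hit an IOPS target. The prices and per-device IOPS figures below are round, hypothetical numbers chosen purely for illustration:

```python
import math

TARGET_IOPS = 200_000

# Hypothetical, round numbers purely for illustration.
HDD_IOPS, HDD_PRICE = 180, 400            # 15K RPM spindle
FLASH_IOPS, FLASH_PRICE = 100_000, 8_000  # flash module

hdd_count = math.ceil(TARGET_IOPS / HDD_IOPS)
flash_count = math.ceil(TARGET_IOPS / FLASH_IOPS)

print(f"HDD:   {hdd_count} spindles, ${hdd_count * HDD_PRICE:,}, "
      f"${hdd_count * HDD_PRICE / TARGET_IOPS:.2f}/IOPS")
print(f"Flash: {flash_count} modules,  ${flash_count * FLASH_PRICE:,}, "
      f"${flash_count * FLASH_PRICE / TARGET_IOPS:.2f}/IOPS")
```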

Since data access depends on the response of storage, flash has emerged as the way to achieve high IOPS at a low cost, and with the cost of flash storage dropping steadily it will only become a better deal going forward. Expect to hear a lot about IBM FlashSystem storage at Edge 2014. As IBM points out, it can eliminate wait times and accelerate critical applications for faster decision making, which translates into faster time to results.

Specifically, IBM reports its FlashSystem delivers:

  • 45x performance improvement with 10x more durability
  • 115x better energy efficiency with 315x superior density
  • 19x more efficient $/IOPS.

Here’s how: according to IBM, both the initial acquisition costs and the ongoing operational costs of FlashSystem storage, such as staffing and environmental costs, can be lower than those of performance-optimized hard drive storage solutions and of emerging hybrid or all-flash solutions. In short, IBM FlashSystem delivers the three key attributes data analytics workloads require: compelling data economics, enterprise resiliency, and easy infrastructure integration, along with high performance.

As proof, IBM cites a German transport services company that deployed FlashSystem storage to support a critical SAP e-business analytics infrastructure and realized a 50% TCO reduction versus competing solutions.

On top of that, IBM reports FlashSystem storage unlocks additional value from many analytics environments by both turbo-charging response times with its use of MicroLatency technology, effectively multiplying the amount of data that can be analyzed. MicroLatency enables a streamlined high performance data path to accelerate critical applications. The resulting faster response times can yield more business agility and quicker time to value from analytics.

In fact, recent IBM research has found that IBM InfoSphere Identity Insight entity analytics processes can be accelerated by over 6x using FlashSystem storage instead of traditional disk. More data analyzed at once means more potential value streams.

Data has long been considered a valuable asset. For some, data has become the most important commodity of all. The infrastructure supporting the analytics environment that converts data as a commodity into valuable business insights must be designed for maximum resiliency. FlashSystem brings a set of data protection features that can help enhance reliability, availability, and serviceability while minimizing the impact of failures and downtime due to maintenance. In short, it protects what for many organizations is their most valuable asset: their data.

DancingDinosaur is looking forward to attending Edge 2014 sessions that will drill down into the specifics of how IBM FlashSystem storage works under the covers. It is being held May 19-23 in Las Vegas, at the Venetian. Register now and get a discount. And as much as DancingDinosaur is eager to delve into the details of FlashSystem storage, the Sheryl Crow concert is very appealing too. When not in sessions or at the concert look for DancingDinosaur in the bloggers lounge. Please join me.

Follow DancingDinosaur on Twitter: @mainframeblog

Edge 2014 Technical Track Hits Right Hot Buttons

March 21, 2014

The organizers of the Edge 2014 conference, May 19-23, have finally started to roll out some specifics of the program, although session details are still missing. First released is the program for Technical Edge, the technical track. There also will be an executive track and another for partners and ISVs. The overall theme of Edge 2014 is Infrastructure Innovation.

Technical Edge will consist of over 450 sessions spread across 14 topic areas. The topic areas can be found here, at the Storage Community website. The technical sessions will hit all the hot topics the industry has been buzzing about for the past year or more. You have much to choose from:

  • Application Infrastructure – ISV
  • Big Data & Analytics
  • Business Continuity
  • Data Efficiency Solutions
  • Dynamic Cloud (Hybrid)
  • Expert Integrated Systems
  • Flash Solutions
  • IT Storage Solutions (Tivoli Storage Manager)
  • Networking Solutions
  • Software Defined Environments (Systems Management)
  • Software Defined Environments (Virtualization)
  • System x and Flex System
  • Technology Partners
  • Technology Update

DancingDinosaur already is planning to attend sessions on Big Data, Flash, and software defined anything. As the organizers post details on the individual sessions, DancingDinosaur will call out some of the most interesting ones in the weeks ahead.

And IBM is bringing its heavy hitters, starting with Tom Rosamilia, Arvind Krishna, and more. But maybe the most exciting event will be a special concert by Sheryl Crow. Look for DancingDinosaur as close to the front as a middle-aged dad can get.

[Image: dino edge 2014 crow image]

Otherwise, when not attending sessions look for me in the bloggers lounge. Plan to register soon; you can catch discounts until April 20.

On a related topic, the standings are starting to come in for the global Master the Mainframe contest. See the results here. Looks like a Canadian is holding first for now, followed by a Ukrainian and a Colombian.

BTW, the author of a recent article in Information Week headlined Mainframe Brain Drain Raises Serious Concerns obviously hadn’t been aware of what has been going on with IBM’s Academic Initiative for the past half dozen years or the Master the Mainframe competition worldwide. DancingDinosaur has covered this for years, most recently a few weeks back here. The writer must have seen the light, however, because he just published another piece this week with the opposite message, here. Go figure. Hope to see you at Edge 2014.

Technology Change is Coming for the zBX

November 1, 2013

The zBX hasn’t been subject to much in the way of big new announcements this year.  Maybe the most obvious was a quiet announcement that the zBX would connect to the zBC12, the newest System z machine announced early in the summer. Buried deeply in that July announcement was that starting in Sept. 2013 you could attach the IBM zBX Model 003 to the new machine. Machines older than the zEC12 would need the zBX Model 002.

At Enterprise 2013, however, the zBX managed to grab a little of the spotlight in a session by Harv Emery titled IBM zEnterprise BladeCenter Extension Model 3 and Model 2 Deep Dive Update. OK, it’s not exactly a riveting title, but Emery’s 60 slides were packed with far more detail than can possibly fit here.

To summarize: a slew of software and firmware updates will be coming through the end of this year and into 2014. Similarly, starting next year and beyond, IBM will begin withdrawing older zBX hardware from marketing and eventually from support. This is standard IBM practice; what makes it surprising is the realization that the zBX no longer is the new kid on the scene. PureSystems in their various iterations are the sexy newcomer. As of the end of last year somewhat over 200 z hybrid units (zBX cabinets) had been sold, along with considerably more blades. Again, PureSystems are IBM’s other hybrid platform.

Still, as Emery pointed out, new zBX functionality continues to roll out. This includes:

  • CPU management for x86 blades
  • Support for Windows Server 2012 and current LDP OS releases
  • GDPS automated site recovery for zBX
  • Ensemble Availability Manager for improved monitoring and reporting
  • Support for Layer 2 communications
  • An IBM statement of direction (SOD) on support for next generation DataPower Virtual Appliance XI52
  • Support for next generation hardware technologies in the zBX
  • zBX firmware currency
  • A stand-alone zBX node to preserve the investment
  • Bolstered networking including a new BNT Virtual Fabric 10 GbE Switch
  • zBX integrated hypervisor for IBM System x blades and running KVM

Emery also did a little crystal balling about future capabilities, relying partly on recent IBM SODs. These include:

  • Support of zBX with the next generation server
  • New technology configuration extensions in the zBX
  • CEC and zBX continued investment in the virtualization and management capabilities for hybrid computing environment
  • Enablement of Infrastructure as a Service (IAAS) for Cloud
  • Unified Resource Manager improvements and extensions for guest mobility
  • More monitoring instrumentation
  • Autonomic management functions
  • Integration with the STG Portfolio
  • Continued efforts by zEnterprise and STG to leverage the Tivoli portfolio to deliver enterprise-wide management capabilities across all STG systems

DancingDinosaur periodically has been asked questions about how to handle storage for the zBX and the blades it contains. Emery tried to address some of those. Certain blades, DataPower for example, now come with their own storage and don’t need any outside storage on the host z. Through the top-of-rack switch in the zBX you can connect to a distributed SAN.

Emery also noted the latest supported storage devices. Supported IBM storage products as of Sept. 2013 include: DS3400, 3500, 3950, 4100, 4200, 4700, 4800, 5020, 5100, 5300, 6000, 8100, 8300, 8700, 8800, SVC 2145, XIV, 2105, 2107, and Storwize V7000. Non-IBM storage is possible, but you or your OEM storage vendor will have to figure it out.

Finally, Emery made numerous references to Unified Resource Manager (or zManager, although it manages more than z) for the zBX and Flex System Manager for PureSystems.  Right now IBM tries to bridge the two systems with higher level management from Tivoli.  Another possibility, Emery hinted, is OpenStack to unify hybrid management. Sounds very intriguing, especially given IBM’s announced intention to make extensive use of OpenStack. Is there an interoperable OpenStack version of Unified Resource Manager and Flex System Manager in the works?

Follow DancingDinosaur on Twitter, @mainframeblog.

IBM Technical Edge 2013 Tackles Flash – Big Data – Cloud & More

June 3, 2013

IBM Edge 2013 kicks off in just one week, 6/10 and runs through 6/14. Still time to register.  This blogger will be there through 6/13.  You can follow me on Twitter for conference updates @Writer1225.  I’ll be using hashtag #IBMEdge to post live Twitter comments from the conference. As noted here previously I’ll buy a drink for the first two people who come up to me and say they read DancingDinosaur.  How’s that for motivation!

The previous post looked at the Executive track. Now let’s take a glimpse at the technical track, which ranges considerably wider, beyond the System z to IBM’s other platforms, flash, big data, cloud, virtualization, and more.

Here’s a sample of the flash sessions:

Assessing the World of Flash looks at the key competitors, chief innovators, followers, and leaders. You’ll quickly find that not all flash solutions are the same and why IBM’s flash strategy stands at the forefront of this new and strategic technology.

There are many ways to deploy flash. This session examines Where to Put Flash in the Data Center.  It will focus particularly on the new IBM FlashSystem products and other technologies from IBM’s Texas Memory Systems acquisition. However, both storage-based and server-based flash technologies will be covered with an eye toward determining what works best for client performance needs.

The session on IBM’s Flash Storage Future will take a look at how IBM is leveraging its Texas Memory Systems acquisition and other IBM technologies to deliver a flash portfolio that will play a major role across not only IBM’s storage products but its overall solution portfolio and its roadmap moving forward.

The flash sessions also will look at how Banco Azteca, Thomson Reuters, and Sprint are deploying and benefiting from flash.

In the big data track, the Future of Analytics Infrastructure looks interesting. Although most organizations understand the value of business analytics many don’t understand how the infrastructure choices they make will impact the success or failure of their analytics projects.  The session will identify the key requirements of any analytical environment: agility, scalability, multipurpose, compliance, cost-effective, and partner-ready; and how they can be met within a single, future-ready analytics infrastructure to meet the needs of current and future analytics strategies.

Big data looms large at the conference. A session titled Hadoop…It’s Not Just about Internal Storage explores how the Hadoop MapReduce approach is evolving from server internal disks to external storage. Initially, Hadoop provided massively scalable, distributed file storage and analytic capabilities. New thinking, however, has emerged that looks at a tiered approach for implementing the Hadoop framework with external storage. Understanding the workload architectural considerations is important as companies begin to integrate analytic workloads to drive higher business value. The session will review the workload considerations to show why an architectural approach makes sense and offer tips and techniques, and share information about IBM’s latest offerings in this space.

An Overview of IBM’s Big Data Strategy details the company’s industrial-strength big data platform to address the full spectrum of big data business opportunities. This session is ideal for those who are just getting started with big data.

And no conference today can skip the cloud. IBM Edge 2013 offers a rich cloud track. For instance, Building the Cloud Enabled Data Center explains how to get maximum value out of an existing virtualized environment through self-service delivery and optimization along with virtualization optimization capabilities. It also describes how to enable business and infrastructure agility with workload optimized clouds that provide orchestration across the entire data center and accelerate application updates to respond faster to stakeholder demands and competitive threats. Finally it looks at how an open and extensible cloud delivery platform can fully automate application deployment and lifecycle management by integrating compute, network, storage, and server automation.

A pair of sessions focus on IBM Cloud Storage Architectures and Understanding IBM’s Cloud Options. The first session looks at several cloud use cases, such as storage and systems management. The other session looks at IBM SmartCloud Entry, SmartCloud Provisioning, and Service Delivery Manager. The session promises to be an excellent introduction for the cloud technical expert who desires a quick overview of what IBM has to offer in cloud software and the specific value propositions for its various offerings, along with their architectural features and technical requirements.

A particularly interesting session will examine Desktop Cloud through Virtual Desktop Infrastructure and Mobile Computing. The corporate desktop has long been a costly and frustrating challenge complicated even more by mobile access. The combination of the cloud and Virtual Desktop Infrastructure (VDI) provides a way for companies to connect end users to a virtual server environment that can grow as needed while mitigating the issues that have frustrated desktop computing, such as software upgrades and patching.

There is much more in the technical track. All the main IBM platforms are featured, including PureFlex Systems, the IBM BladeCenter, IBM’s Enterprise X-Architecture, the IBM XIV storage system, and, for DancingDinosaur readers, sessions on the DS8000.

Have you registered for IBM Edge 2013 yet?  There still is time. As noted above, find me in the Social Media Lounge at the conference and in the sessions.  You can follow me on Twitter for conference updates @Writer1225.  I’ll be using hashtag #IBMEdge to post live Twitter comments from the conference. I’ll buy a drink for the first two people who come up to me and say they read DancingDinosaur.  How much more motivation do you need?

SmartCloud Storage Access Simplifies Private Cloud Storage

February 15, 2013

IBM has long been offering storage as part of its SmartCloud family of products. In early February it introduced SmartCloud Storage Access, a storage software appliance that looks to be a game changer, at least for IBM storage shops. It offers easy private cloud storage-as-a-service through a self-service portal for storage provisioning, monitoring, and reporting. DancingDinosaur, however, also finds the software appliance concept intriguing, especially as it can be applied to simplify the zEnterprise.

Two issues are driving interest in SmartCloud Storage Access. The first is a report that labor costs will consume 70% of IT spending this year. The second: 90% of organizations expect to adopt or deploy a cloud model in the next three years.  It’s no surprise that IBM thinks the time is right to introduce SmartCloud Storage Access as a way to facilitate storage in the cloud while lowering the IT labor costs associated with storage.

IBM SmartCloud Storage Access enables organizations to implement a private cloud storage service through which users can create an account, provision storage, and upload files over the Internet—with only a few clicks and without involving IT labor if done through the automated self-service GUI portal. Not only will SmartCloud Storage Access reduce IT labor involvement but it should speed the delivery of storage resources, which in turn boosts user productivity.  No longer do they have to wait for a storage admin to provision storage for them.

Unlike several IBM appliances, SmartCloud Storage Access is a software-only appliance; no hardware ships with it. Of course, if you are using it with a private cloud, you still need to populate your private cloud with physical storage. For that you can use most of the IBM storage products. The SmartCloud Storage Access appliance itself installs on an Intel server running VMware. Basically, it is a VMware image loaded on a VMware virtual machine.

As an appliance, all the technical complexity of the storage provisioning process is hidden; this abstraction to the storage-as-a-service level relieves the user of dealing with conventional storage provisioning. It simplifies and standardizes monitoring, reporting, and control to reduce operational complexity. It also does away with the typical collection of point tools required to address various storage functions.

It allows admins to quickly and easily set up a private cloud storage service complete with elastic capacity, automatic or routed approval flows, multiple service classes with different QoS. Admins can simply define the service without concern about the underlying technical details. The SmartCloud Storage Access appliance hides all the complexity. It also simplifies monitoring and reporting.

The bottom line: it delivers storage provisioning on demand in seconds and with minimal involvement of IT. The result is increased productivity for both storage users and IT; fast, consistent high quality service; and high operational efficiency. Reduced IT involvement translates into better TCO while automated self-service leads to faster deployment and higher end user satisfaction.

And it can play with most of IBM’s storage lineup: XIV, Storwize V7000 and V7000 Unified, and SONAS. The DS8000 and other block storage systems are not expected to play much of a role in SmartCloud Storage Access; clouds today primarily focus on file storage.

In recent years the idea of a private storage cloud has emerged as the Holy Grail of storage; something that would finally put an end to the difficulty of responding to rapidly escalating storage demands for different types of storage. But it proved difficult to implement.

As a software appliance, however, SmartCloud Storage Access already is proving effective, as early adopters like ETH Zurich University and the Tallink Group attest.  If the software appliance concept can simplify private storage clouds what other aspects of enterprise computing can it help?  One IBM researcher already thinks that because it can hide complicated platform specifics, the concept is a particularly good fit for System z.  More on this to follow.

EMC Introduces New Mainframe VTL

August 16, 2012

EMC introduced the high-end DLm8000, the latest in its family of VTL products. This one is aimed at large enterprise mainframe environments and promises to ensure consistency of data at both production and recovery sites and provide the shortest possible RPO and RTO for critical recovery operations.

It is built around EMC VMAX enterprise storage and its SRDF replication and relies on synchronous replication to ensure immediate data consistency between the primary and target storage by writing the data simultaneously to each. Synchronous replication addresses the potential latency mismatch that occurs with the usual asynchronous replication, where a lag between writes to the primary and to the backup target storage can result in inconsistent data.

Usually this mismatch exists for a brief period. EMC suggests the issue, especially for large banks and financial firms—its key set of mainframe target customers—is much more serious. Large financial organizations with high transaction volume, EMC notes, have historically faced recovery challenges because their mainframe tape and DASD data at production and secondary sites were never fully in synch.  As such, recovery procedures often slowed until the differences between the two data sets were resolved, which slowed the resulting failover.  This indeed may be a real issue but for only a small number of companies, specifically those that need an RTO and RPO of just about zero.
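A toy sketch of the difference: with synchronous replication, the write does not complete until both sites have it, so the recovery point is effectively zero; with asynchronous replication, whatever has been acknowledged locally but not yet shipped is at risk. The names and structure here are hypothetical, purely to illustrate the RPO argument, not how VMAX SRDF or the TS7700 is implemented.

```python
class Site:
    def __init__(self, name: str):
        self.name = name
        self.blocks: list[bytes] = []

    def write(self, block: bytes) -> None:
        self.blocks.append(block)

def sync_write(primary: Site, target: Site, block: bytes) -> None:
    """Synchronous: acknowledge only after both sites have the block (RPO ~ 0)."""
    primary.write(block)
    target.write(block)   # completes before the host sees the write as done

def async_write(primary: Site, target: Site, pending: list[bytes], block: bytes) -> None:
    """Asynchronous: acknowledge after the primary write; replicate later."""
    primary.write(block)
    pending.append(block)  # anything still here when disaster strikes is lost

# RPO illustration: data written locally but not yet replicated.
prod, dr = Site("production"), Site("recovery")
backlog: list[bytes] = []
for i in range(5):
    async_write(prod, dr, backlog, f"tx-{i}".encode())
print(f"Unreplicated writes at failure time: {len(backlog)}")  # 5 -> nonzero RPO
```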

EMC used the introduction of the DLm8000 to beat up tape backup in general. Physical tape transportation by third-party records management companies, EMC notes, hinders recovery efforts by reducing what it refers to as the granularity of RPOs while dramatically increasing the RTO. In addition, periodic lack of tape drive availability for batch processing and for archive and backup applications can impair SLAs, further increasing the risks and business impact associated with unplanned service interruptions. That has long been recognized, but remember, EMC is a company that sells disk, not tape storage, and ran a Tape Sucks campaign after its purchase of Data Domain. What would you expect them to say?

The DLm8000 delivers throughput of up to 2.7 GB/s, which it claims is 2.5x the performance of its nearest competitor. DancingDinosaur can’t validate that claim, but EMC does have a novel approach to generating the throughput. The DLm8000 is packed with eight Bus-Tech engines (acquired in its acquisition of Bus-Tech in Nov. 2010) and it assigns two FICON connections to each engine for a total of 16 FICON ports cranking up the throughput. No surprise they can aggregate that level of throughput.
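The port math behind that throughput claim is straightforward, treating the 2.7 GB/s figure as an aggregate across all FICON ports:

```python
engines = 8
ficon_per_engine = 2
ports = engines * ficon_per_engine          # 16 FICON ports

aggregate_gbps = 2.7                        # GB/s, per EMC's claim
per_port_mbps = aggregate_gbps / ports * 1000
print(f"{ports} ports, ~{per_port_mbps:.0f} MB/s per port on average")
```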

EMC has not announced pricing for the DLm8000. The device, however, is the top of its VTL lineup and VMAX enterprise storage tops its storage line. With high throughput and synchronous replication, this product isn’t going to be cheap. However, if you need near zero RPO and RTO then you have only a few choices.

Foremost among those choices should be the IBM TS7700 family, particularly the 7740 and the 7720. Both of these systems provide VTL connectivity. The TS7700 avoids the latency mismatch issue by using a buffer to get the most optimal write performance and then periodically synch primary and target data. “Synchronous as EMC does it for VTL is overkill,” says an IBM tape manager. The EMC approach essentially ignores the way mainframe tape has been optimized.

Among the other choices are the Oracle Virtual Storage Manager and Virtual Library Extension. Oracle uses StorageTek tape systems. The Oracle approach promises to improve tape drive operating efficiencies and lower TCO by optimizing tape drive and library resources through a disk-based virtual tape architecture. HDS also has a mainframe tape backup and VTL product that uses Luminex technology.

EMC is a disk storage company and its DLm8000 demonstrates that. When it comes to backup, however, mainframe shops are not completely averse to tape. Disk-oriented VTL has some advantages but don’t expect mainframe shops to completely abandon tape.

In breaking storage news, IBM today announced acquiring Texas Memory Systems (TMS), a long-established (1978) Texas company that provides solid state memory to deliver significantly faster storage throughput and data access while consuming less power. TMS offers its memory as solid state disk (SSD) through its RamSan family of shared rackmount systems and Peripheral Component Interconnect Express (PCIe) cards. SSD may be expensive on a cost per gigabyte basis but it blows away spinning hard disk on a cost per IOPS. Expect to see IBM use TMS’s SSD across its storage products as one of its key future storage initiatives, as described by Jai Menon, CTO and VP, Technical Strategy for IBM Systems and Technology Group (STG), at last June’s Storage Edge 2012 conference. BottomlineIT, DancingDinosaur’s sister blog, covered it here back in June. BTW, Edge 2013 already is scheduled for June 10-14 in Las Vegas.

