Posts Tagged ‘storage’

IBM Edge2014 Explores Software Defined Everything in Depth

April 18, 2014

IBM Edge2014 is coming up fast, May 19-23 in Las Vegas, at the Venetian. Plus there is the Sheryl Crow concert you don’t want to miss.

IBM Edge2014 is bringing over 400 program sessions across more than a dozen topic tracks; choices for everyone from the geekiest techie to the most buttoned-down business executive. One that crosses both camps—technical and business—is Software Defined Environments (SDE), which DancingDinosaur thinks of as software-defined everything.

SDE takes the abstraction, or the decoupling, of software and hardware to the next level. It takes virtualization and moves it higher up in the stack.  There you can virtualize not only servers but network switches, storage, and more. The benefit: efficiency, speed, and flexibility. You can allocate and deploy system resources quickly and easily through software. And add or move capabilities as needed, again through software.

Through software defined virtualization the IT industry can virtualize nearly every resource and capability. If you can encapsulate a set of capabilities as software you have created a virtualized component that can run on almost any network-attached device capable of hosting software.  In short, you have effectively decoupled those capabilities usually embedded as firmware from whatever underlying physical device previously hosted them.

IBM Edge2014 offers numerous software defined sessions. Let’s look at a few:

Software Defined Storage – Storage for Software Defined Environments

Presented by Clodoaldo Barrera

As described in the program, Software Defined Environments (SDE) have become the preferred approach for modern IT operations, combining the values of automation, policy enforcement, and efficiency. Within these SDE operations, storage must be properly configured to support the expected user experience and cost benefits, while still providing performance, availability, and data protection. In this session Barrera will explain how storage is configured and then managed through a stack of virtualization, provisioning, and orchestration software.

IBM Software Defined Storage Vision and Roadmap

Presented by Vincent Hsu, Tom Clark

This session introduces the core technology for IBM Software Defined Storage (SDS) vision and the SDS product roadmap. This includes the control plane technology as well as the data plane technology to address the future software defined data center.

But it is not only about storage. IBM’s Strategy for Software Defined Networking

Presented by Andy Wright

Software Defined Networking (SDN) is an emerging framework designed for virtual, dynamic, and flexible networking that allows organizations to easily modify, control, and manage physical and virtual networks. IBM already is a leader in this space with its SDN VE offerings, and the roadmap above tells you where it is headed. Wright's session examines IBM's vision, Network Virtualization (Overlay) capabilities for existing networks, and the capabilities of OpenFlow networks. These technologies promise to improve the working lives of system, virtualization, cloud, and network administrators. If you fill one of these roles, you probably don't want to miss this.

Continuity Requirements for Software Defined Data Centers

Presented by Jon Toigo

One of the benefits of software defined resources is the ability to spin up additional resources virtually. That leads to the assumption of an agile and dynamic data center that can turn on a dime in response to business requirements. That is true in theory. Rarely discussed, however, are the inherent risks of combining physical and virtual resources, both locally deployed and sourced from remote cloud-based providers. This presentation will identify some disaster recovery and business continuity requirements to keep in mind as you plan your software defined data center and need to rein in management's wishful thinking.

And a related topic, It’s All About the Object! IBM Customer Experiences with Object Storage

Presented by Manuel Avalos, Dan Lucky

Travis County in Austin, TX is an early adopter of IBM object storage. This case study positions IBM’s Statement Of Direction (SOD) about object storage based on OpenStack Swift. Here you can learn from six months of Travis County user experiences, values, and next steps.

And there is another entire session on SDE in the federal government. Overall, IBM Edge2014 is delivering considerable real-world experience direct from users.

Look for DancingDinosaur at IBM Edge2014, Mon.-Wed., at sessions or in the bloggers lounge.

And follow DancingDinosaur on Twitter, @mainframeblog

Expect Flash to be Big at Edge 2014

March 26, 2014

You can almost hear the tom-toms thumping as IBM picks up the beat for flash storage and its FlashSystem, and for good reason. Almost everything companies want to do these days requires fast, efficient storage. Everything waits for data—applications, servers, algorithms, virtually any IT resource. And fast data, of course, depends on the responsiveness of the storage. Flash’s time has arrived.

To get the responsiveness they need, companies previously loaded up with conventional DASD: spinning disks that top out at 15K RPM, or cheaper DASD at 5400 RPM. To coax sufficient I/O operations per second (IOPS) out of them, they ganged together massive amounts of DASD just to get more parallel spindles to compensate for the low IOPS per drive. Sure, the disks were cheap, but the cost per IOPS was still sky high, especially considering all the overhead and inefficiency they had to absorb.
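To see why cost per IOPS, not cost per gigabyte, is the number that matters here, consider a back-of-envelope calculation. The sketch below uses purely hypothetical prices and per-device IOPS figures (they are not IBM's numbers or this blog's), but it shows how quickly the spindle count, and therefore the bill, balloons when spinning disk has to meet a demanding IOPS target.

```python
# Back-of-envelope cost-per-IOPS comparison. All prices and per-device IOPS
# figures below are hypothetical, chosen only to illustrate the argument.

def devices_and_cost(device_price, device_iops, target_iops):
    """How many devices are needed to hit target_iops, and at what cost per IOPS."""
    devices = -(-target_iops // device_iops)          # ceiling division
    total_cost = devices * device_price
    return devices, total_cost, total_cost / target_iops

TARGET_IOPS = 100_000  # a hypothetical analytics workload

# Assumed figures: ~180 IOPS from a 15K RPM drive, ~50,000 IOPS from a flash module.
for name, price, iops in [("15K RPM HDD", 300, 180), ("flash module", 3_000, 50_000)]:
    n, total, per_iops = devices_and_cost(price, iops, TARGET_IOPS)
    print(f"{name}: {n} devices, ${total:,} total, ${per_iops:.2f} per IOPS")
```

The absolute numbers don't matter; the point is that once IOPS is the constraint, the spindle count rather than the capacity requirement drives the bill, which is exactly the economics flash reverses.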

But in this era of big data analytics, where an organization's very competitiveness depends on absorbing massive amounts of data fast, that old approach doesn't work anymore. You can't aggregate enough spindles to keep up with the flow of machine-generated, sensor, and meter data, not to mention the data created by millions, possibly even billions, of people on Facebook, Twitter, and everywhere else. You can't possibly come up with meaningful results fast enough to be effective. Opportunities will fly past you.

Furthermore, traditional high performance storage comes at a high price, not just in the acquisition cost of large volumes of spinning disk but also in the inefficiency of its deployment. Sure, the cost per gigabyte may be low, but aggregating spindles by the ton while leaving large chunks of the resulting capacity unused quickly offsets any gains from a low cost per gigabyte. In short, traditional storage, especially high performance storage, imposes economic limits on the usefulness and scalability of many analytics environments.

Since data access depends on the response of storage, flash has emerged as the way to achieve high IOPS at a low cost, and with the cost of flash storage dropping steadily it will only become a better deal going forward. Expect to hear a lot about IBM FlashSystem storage at Edge 2014. As IBM points out, it can eliminate wait times and accelerate critical applications for faster decision making, which translates into faster time to results.

Specifically, IBM reports its FlashSystem delivers:

  • 45x performance improvement with 10x more durability
  • 115x better energy efficiency with 315x superior density
  • 19x more efficient $/IOPS.

Here's how: according to IBM, both the initial acquisition costs and the ongoing operational costs of FlashSystem storage, such as staffing and environmental costs, can be lower than those of performance-optimized hard drive storage solutions and of emerging hybrid or all-flash solutions. In short, IBM FlashSystem delivers the three key attributes data analytics workloads require: compelling data economics, enterprise resiliency, and easy infrastructure integration, along with high performance.

As proof, IBM cites a German transport services company that deployed FlashSystem storage to support a critical SAP e-business analytics infrastructure and realized a 50% TCO reduction versus competing solutions.

On top of that, IBM reports FlashSystem storage unlocks additional value from many analytics environments by turbo-charging response times with its MicroLatency technology, effectively multiplying the amount of data that can be analyzed. MicroLatency enables a streamlined, high performance data path to accelerate critical applications. The resulting faster response times can yield more business agility and quicker time to value from analytics.

In fact, recent IBM research has found that IBM InfoSphere Identity Insight entity analytics processes can be accelerated by over 6x using FlashSystem storage instead of traditional disk. More data analyzed at once means more potential value streams.

Data has long been considered a valuable asset. For some, data has become the most important commodity of all. The infrastructure supporting the analytics environment that converts data as a commodity into valuable business insights must be designed for maximum resiliency. FlashSystem brings a set of data protection features that can help enhance reliability, availability, and serviceability while minimizing the impact of failures and of downtime due to maintenance. In short, it protects the organization's data, which for many is its most valuable asset.

DancingDinosaur is looking forward to attending Edge 2014 sessions that will drill down into the specifics of how IBM FlashSystem storage works under the covers. It is being held May 19-23 in Las Vegas, at the Venetian. Register now and get a discount. And as much as DancingDinosaur is eager to delve into the details of FlashSystem storage, the Sheryl Crow concert is very appealing too. When not in sessions or at the concert, look for DancingDinosaur in the bloggers lounge. Please join me.

Follow DancingDinosaur on Twitter: @mainframeblog

Edge 2014 Technical Track Hits Right Hot Buttons

March 21, 2014

The organizers of the Edge 2014 conference, May 19-23, have finally started to roll out some specifics of the program, although session details are still missing. First released is the program for Technical Edge, the technical track. There also will be an executive track and another for partners and ISVs. The overall theme of Edge 2014 is Infrastructure Innovation.

Technical Edge will consist of over 450 sessions spread across 14 topic areas. The topic areas can be found here, at the Storage Community website. The technical sessions will hit all the hot topics the industry has been buzzing about for the past year or more. You have much to choose from:

  • Application Infrastructure – ISV
  • Big Data & Analytics
  • Business Continuity
  • Data Efficiency Solutions
  • Dynamic Cloud (Hybrid)
  • Expert Integrated Systems
  • Flash Solutions
  • IT Storage Solutions (Tivoli Storage Manager)
  • Networking Solutions
  • Software Defined Environments (Systems Management)
  • Software Defined Environments (Virtualization)
  • System x and Flex System
  • Technology Partners
  • Technology Update

DancingDinosaur already is planning to attend sessions on Big Data, Flash, and software defined anything. As the organizers post details on the individual sessions, DancingDinosaur will call out some of the most interesting ones in the weeks ahead.

And IBM is bringing its heavy hitters, starting with Tom Rosamilia, Arvind Krishna, and more. But maybe the most exciting event will be a special concert by Sheryl Crow. Look for DancingDinosaur as close to the front as a middle-aged dad can get.

Otherwise, when not attending sessions look for me in the bloggers lounge. Plan to register soon; you can catch discounts until April 20.

On a related topic, the standings are starting to come in for the global Master the Mainframe contest. See the results here. Looks like a Canadian is holding first for now, followed by a Ukrainian and a Colombian.

BTW, the author of a recent article in Information Week headlined Mainframe Brain Drain Raises Serious Concerns obviously hadn't been aware of what was going on with IBM's Academic Initiative for the past half dozen years or the Master the Mainframe competition worldwide. DancingDinosaur has covered this for years, most recently a few weeks back here. The writer must have seen the light, however, because he just published another piece this week with the opposite message, here. Go figure. Hope to see you at Edge 2014.

Technology Change is Coming for the zBX

November 1, 2013

The zBX hasn’t been subject to much in the way of big new announcements this year.  Maybe the most obvious was a quiet announcement that the zBX would connect to the zBC12, the newest System z machine announced early in the summer. Buried deeply in that July announcement was that starting in Sept. 2013 you could attach the IBM zBX Model 003 to the new machine. Machines older than the zEC12 would need the zBX Model 002.

At Enterprise 2013, however, the zBX managed to grab a little of the spotlight in a session by Harv Emery titled IBM zEnterprise BladeCenter Extension Model 3 and Model 2 Deep Dive Update. OK, it’s not exactly a riveting title, but Emery’s 60 slides were packed with far more detail than can possibly fit here.

To summarize: a slew of software and firmware updates will be coming through the end of this year and into 2014. Starting next year, IBM will begin withdrawing older zBX hardware from marketing and will eventually stop supporting the older stuff. This is standard IBM practice; what makes it surprising is the realization that the zBX no longer is the new kid on the scene. PureSystems in their various iterations are the sexy newcomer. As of the end of last year somewhat over 200 z hybrid units (zBX cabinets) had been sold, along with considerably more blades. Again, PureSystems are IBM's other hybrid platform.

Still, as Emery pointed out, new zBX functionality continues to roll out. This includes:

  • CPU management for x86 blades
  • Support for Windows Server 2012 and current LDP OS releases
  • GDPS automated site recovery for zBX
  • Ensemble Availability Manager for improved monitoring and reporting
  • Support for Layer 2 communications
  • An IBM statement of direction (SOD) on support for next generation DataPower Virtual Appliance XI52
  • Support for next generation hardware technologies in the zBX
  • zBX firmware currency
  • A stand-alone zBX node to preserve the investment
  • Bolstered networking including a new BNT Virtual Fabric 10 GbE Switch
  • zBX integrated hypervisor for IBM System x blades and running KVM

Emery also did a little crystal balling about future capabilities, relying partly on recent IBM SODs. These include:

  • Support of zBX with the next generation server
  • New technology configuration extensions in the zBX
  • CEC and zBX continued investment in the virtualization and management capabilities for hybrid computing environment
  • Enablement of Infrastructure as a Service (IAAS) for Cloud
  • Unified Resource Manager improvements and extensions for guest mobility
  • More monitoring instrumentation
  • Autonomic management functions
  • Integration with the STG Portfolio
  • Continued efforts by zEnterprise and STG to leverage the Tivoli portfolio to deliver enterprise-wide management capabilities across all STG systems

DancingDinosaur periodically has been asked questions about how to handle storage for the zBX and the blades it contains. Emery tried to address some of those. Certain blades, DataPower for example, now come with their own storage and don't need any outside storage on the host z. Through the top-of-rack switch in the zBX you can connect to a distributed SAN.

Emery also noted the latest supported storage devices. Supported IBM storage products as of Sept. 2013 include: DS3400, 3500, 3950, 4100, 4200, 4700, 4800, 5020, 5100, 5300, 6000, 8100, 8300, 8700, 8800, SVC 2145, XIV, 2105, 2107, and Storwize V7000. Non-IBM storage is possible, but you or the OEM storage vendor will have to figure it out.

Finally, Emery made numerous references to Unified Resource Manager (or zManager, although it manages more than z) for the zBX and Flex System Manager for PureSystems.  Right now IBM tries to bridge the two systems with higher level management from Tivoli.  Another possibility, Emery hinted, is OpenStack to unify hybrid management. Sounds very intriguing, especially given IBM’s announced intention to make extensive use of OpenStack. Is there an interoperable OpenStack version of Unified Resource Manager and Flex System Manager in the works?

Follow DancingDinosaur on Twitter, @mainframeblog.

IBM Technical Edge 2013 Tackles Flash – Big Data – Cloud & More

June 3, 2013

IBM Edge 2013 kicks off in just one week, on 6/10, and runs through 6/14. Still time to register. This blogger will be there through 6/13. You can follow me on Twitter for conference updates @Writer1225. I'll be using hashtag #IBMEdge to post live Twitter comments from the conference. As noted here previously, I'll buy a drink for the first two people who come up to me and say they read DancingDinosaur. How's that for motivation!

The previous post looked at the Executive track. Now let's take a glimpse at the technical track, which ranges considerably wider, beyond the System z to IBM's other platforms, flash, big data, cloud, virtualization, and more.

Here’s a sample of the flash sessions:

Assessing the World of Flash looks at the key competitors, chief innovators, followers, and leaders. You’ll quickly find that not all flash solutions are the same and why IBM’s flash strategy stands at the forefront of this new and strategic technology.

There are many ways to deploy flash. This session examines Where to Put Flash in the Data Center.  It will focus particularly on the new IBM FlashSystem products and other technologies from IBM’s Texas Memory Systems acquisition. However, both storage-based and server-based flash technologies will be covered with an eye toward determining what works best for client performance needs.

The session on IBM’s Flash Storage Future will take a look at how IBM is leveraging its Texas Memory Systems acquisition and other IBM technologies to deliver a flash portfolio that will play a major role across not only IBM’s storage products but its overall solution portfolio and its roadmap moving forward.

The flash sessions also will look at how Banco Azteca, Thomson Reuters, and Sprint are deploying and benefiting from flash.

In the big data track, the Future of Analytics Infrastructure looks interesting. Although most organizations understand the value of business analytics, many don't understand how the infrastructure choices they make will impact the success or failure of their analytics projects. The session will identify the key requirements of any analytical environment: agility, scalability, multipurpose use, compliance, cost-effectiveness, and partner readiness, and how they can be met within a single, future-ready analytics infrastructure to meet the needs of current and future analytics strategies.

Big data looms large at the conference. A session titled Hadoop…It’s Not Just about Internal Storage explores how the Hadoop MapReduce approach is evolving from server internal disks to external storage. Initially, Hadoop provided massively scalable, distributed file storage and analytic capabilities. New thinking, however, has emerged that looks at a tiered approach for implementing the Hadoop framework with external storage. Understanding the workload architectural considerations is important as companies begin to integrate analytic workloads to drive higher business value. The session will review the workload considerations to show why an architectural approach makes sense and offer tips and techniques, and share information about IBM’s latest offerings in this space.

An Overview of IBM’s Big Data Strategy details the company’s industrial-strength big data platform to address the full spectrum of big data business opportunities. This session is ideal for those who are just getting started with big data.

And no conference today can skip the cloud. IBM Edge 2013 offers a rich cloud track. For instance, Building the Cloud Enabled Data Center explains how to get maximum value out of an existing virtualized environment through self-service delivery and optimization along with virtualization optimization capabilities. It also describes how to enable business and infrastructure agility with workload optimized clouds that provide orchestration across the entire data center and accelerate application updates to respond faster to stakeholder demands and competitive threats. Finally it looks at how an open and extensible cloud delivery platform can fully automate application deployment and lifecycle management by integrating compute, network, storage, and server automation.

A pair of sessions focus on IBM Cloud Storage Architectures and Understanding IBM's Cloud Options. The first session looks at several cloud use cases, such as storage and systems management. The other session looks at IBM SmartCloud Entry, SmartCloud Provisioning, and Service Delivery Manager. The session promises to be an excellent introduction for the cloud technical expert who desires a quick overview of what IBM has to offer in cloud software and the specific value propositions for its various offerings, along with their architectural features and technical requirements.

A particularly interesting session will examine Desktop Cloud through Virtual Desktop Infrastructure and Mobile Computing. The corporate desktop has long been a costly and frustrating challenge complicated even more by mobile access. The combination of the cloud and Virtual Desktop Infrastructure (VDI) provides a way for companies to connect end users to a virtual server environment that can grow as needed while mitigating the issues that have frustrated desktop computing, such as software upgrades and patching.

There is much more in the technical track. All the main IBM platforms are featured, including PureFlex Systems, the IBM BladeCenter, IBM’s Enterprise X-Architecture, the IBM XIV storage system, and, for DancingDinosaur readers, sessions on the DS8000.

Have you registered for IBM Edge 2013 yet?  There still is time. As noted above, find me in the Social Media Lounge at the conference and in the sessions.  You can follow me on Twitter for conference updates @Writer1225.  I’ll be using hashtag #IBMEdge to post live Twitter comments from the conference. I’ll buy a drink for the first two people who come up to me and say they read DancingDinosaur.  How much more motivation do you need?

SmartCloud Storage Access Simplifies Private Cloud Storage

February 15, 2013

IBM has long been offering storage as part of its SmartCloud family of products. In early February it introduced SmartCloud Storage Access, a storage software appliance that looks to be a game changer, at least for IBM storage shops. It offers easy private cloud storage-as-a-service through a self-service portal for storage provisioning, monitoring, and reporting. DancingDinosaur, however, also finds the software appliance concept intriguing, especially as it can be applied to simplify the zEnterprise.

Two issues are driving interest in SmartCloud Storage Access. The first is a report that labor costs will consume 70% of IT spending this year. The second: 90% of organizations expect to adopt or deploy a cloud model in the next three years.  It’s no surprise that IBM thinks the time is right to introduce SmartCloud Storage Access as a way to facilitate storage in the cloud while lowering the IT labor costs associated with storage.

IBM SmartCloud Storage Access enables organizations to implement a private cloud storage service through which users can create an account, provision storage, and upload files over the Internet—with only a few clicks and without involving IT labor if done through the automated self-service GUI portal. Not only will SmartCloud Storage Access reduce IT labor involvement, but it should also speed the delivery of storage resources, which in turn boosts user productivity. No longer do users have to wait for a storage admin to provision storage for them.

Unlike several IBM appliances, SmartCloud Storage Access is a software-only appliance; no hardware ships with it. Of course, if you are using it with a private cloud, you still need to populate your private cloud with physical storage. For that you can use most of the IBM storage products. The SmartCloud Storage Access appliance itself installs on an Intel server running VMware. Basically, it is a VMware image loaded on a VMware virtual machine.

As an appliance, all the technical complexity of the storage provisioning process is hidden; this abstraction to the storage-as-a-service level relieves the user of dealing with conventional storage provisioning. It simplifies and standardizes monitoring, reporting, and control to reduce operational complexity. It also does away with the typical collection of point tools required to address various storage functions.

It allows admins to quickly and easily set up a private cloud storage service complete with elastic capacity, automatic or routed approval flows, and multiple service classes with different QoS levels. Admins can simply define the service without concern about the underlying technical details. The SmartCloud Storage Access appliance hides all the complexity. It also simplifies monitoring and reporting.

The bottom line: it delivers storage provisioning on demand in seconds and with minimal involvement of IT. The result is increased productivity for both storage users and IT; fast, consistent high quality service; and high operational efficiency. Reduced IT involvement translates into better TCO while automated self-service leads to faster deployment and higher end user satisfaction.

And it can play with most of IBM's storage lineup: XIV, Storwize V7000 and V7000 Unified, and SONAS. The DS8000 and other block storage systems are not expected to play much of a role in SmartCloud Storage Access; clouds today primarily focus on file storage.
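To make the service-class idea concrete, here is a minimal sketch of how a self-service request might be checked against policy-defined classes. This is not SmartCloud Storage Access's actual interface; the class names, limits, and backend mappings below are invented for illustration, borrowing only the storage lineup named above.

```python
# A generic sketch of self-service provisioning against policy-defined service
# classes. NOT the SmartCloud Storage Access API; class names, limits, and
# backend mappings are hypothetical.

SERVICE_CLASSES = {
    "gold":   {"backend": "XIV",            "max_gb": 5000, "auto_approve": True},
    "silver": {"backend": "Storwize V7000", "max_gb": 2000, "auto_approve": True},
    "bronze": {"backend": "SONAS",          "max_gb": 500,  "auto_approve": False},
}

def request_storage(user, service_class, size_gb):
    """Check a self-service request against its class policy and decide routing."""
    policy = SERVICE_CLASSES[service_class]
    if size_gb > policy["max_gb"]:
        return f"rejected: {size_gb} GB exceeds the {service_class} limit of {policy['max_gb']} GB"
    if policy["auto_approve"]:
        return f"provisioned {size_gb} GB for {user} on {policy['backend']} with no admin involvement"
    return f"queued for admin approval: {size_gb} GB for {user} on {policy['backend']}"

print(request_storage("jsmith", "gold", 250))    # auto-approved, provisioned immediately
print(request_storage("jsmith", "bronze", 400))  # routed through an approval flow
```

The point of the abstraction is that the admin defines the classes once; after that, requests either provision themselves or route for approval without a storage admin touching each one.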

In recent years the idea of a private storage cloud has emerged as the Holy Grail of storage; something that would finally put an end to the difficulty of responding to rapidly escalating storage demands for different types of storage. But it proved difficult to implement.

As a software appliance, however, SmartCloud Storage Access already is proving effective, as early adopters like ETH Zurich University and the Tallink Group attest.  If the software appliance concept can simplify private storage clouds what other aspects of enterprise computing can it help?  One IBM researcher already thinks that because it can hide complicated platform specifics, the concept is a particularly good fit for System z.  More on this to follow.

EMC Introduces New Mainframe VTL

August 16, 2012

EMC introduced the high-end DLm8000, the latest in its family of VTL products. This one is aimed at large enterprise mainframe environments and promises to ensure consistency of data at both production and recovery sites and provide the shortest possible RPO and RTO for critical recovery operations.

It is built around EMC VMAX enterprise storage and its SRDF replication and relies on synchronous replication to ensure immediate data consistency between the primary and target storage by writing the data simultaneously at each. Synchronous replication addresses the potential latency mismatch problem that occurs with the usual asynchronous replication, where a lag between writes to the primary and to the backup target storage can result in inconsistent data.

Usually this mismatch exists for only a brief period. EMC suggests the issue, especially for large banks and financial firms—its key set of mainframe target customers—is much more serious. Large financial organizations with high transaction volume, EMC notes, have historically faced recovery challenges because their mainframe tape and DASD data at production and secondary sites were never fully in synch. As such, recovery procedures often stalled until the differences between the two data sets were resolved, which slowed the resulting failover. This indeed may be a real issue, but only for a small number of companies, specifically those that need an RTO and RPO of just about zero.
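For readers who want the distinction spelled out, here is a minimal sketch of the two write paths. It illustrates only the concept; it is not EMC's SRDF or IBM's TS7700 implementation, and the names are invented.

```python
# Minimal sketch contrasting synchronous and asynchronous replication.
# Illustrative only; not any vendor's actual implementation.

class ReplicatedVolume:
    def __init__(self):
        self.primary = []
        self.replica = []
        self.pending = []   # acknowledged writes not yet copied to the replica

    def write_sync(self, record):
        # Synchronous: the write lands at both sites before the host sees the
        # acknowledgment, so the replica can never lag the primary (RPO ~ 0).
        self.primary.append(record)
        self.replica.append(record)
        return "ack"

    def write_async(self, record):
        # Asynchronous: acknowledge as soon as the primary has the data; a
        # failure before the next replication cycle loses whatever is pending.
        self.primary.append(record)
        self.pending.append(record)
        return "ack"

    def replicate(self):
        # The periodic catch-up cycle for the asynchronous path.
        self.replica.extend(self.pending)
        self.pending.clear()

vol = ReplicatedVolume()
vol.write_async("txn-1")
vol.write_async("txn-2")
print("exposed to loss before the cycle:", vol.pending)  # ['txn-1', 'txn-2']
vol.replicate()
print("exposed to loss after the cycle:", vol.pending)   # []
```

The trade-off is the classic one: the synchronous path buys a zero-loss recovery point at the price of making every write wait on the remote site, which is why it appeals mainly to shops that truly need RPO and RTO near zero.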

EMC used the introduction of the DLm8000 to beat up tape backup in general. Physical tape transportation by third party records management companies, EMC notes, hinders recovery efforts by reducing what it refers to as the granularity of RPOs while dramatically increasing the RTO. In addition, periodic lack of tape drive availability for batch processing and for archive and backup applications can impair SLAs, further increasing the risks and business impact associated with unplanned service interruptions. That has long been recognized, but remember, EMC is a company that sells disk, not tape, and ran a Tape Sucks campaign after its purchase of Data Domain. What would you expect them to say?

The DLm8000 delivers throughput of up to 2.7 GB/s, which EMC claims is 2.5x the performance of its nearest competitor. DancingDinosaur can't validate that claim, but EMC does have a novel approach to generating the throughput. The DLm8000 is packed with eight Bus-Tech engines (acquired with Bus-Tech in Nov. 2010), and it assigns two FICON connections to each engine for a total of 16 FICON ports cranking up the throughput. No surprise it can aggregate that level of throughput.

EMC has not announced pricing for the DLm8000. The device, however, is the top of its VTL lineup and VMAX enterprise storage tops its storage line. With high throughput and synchronous replication, this product isn’t going to be cheap. However, if you need near zero RPO and RTO then you have only a few choices.

Foremost among those choices should be the IBM TS7700 family, particularly the 7740 and the 7720. Both of these systems provide VTL connectivity. The TS7700 avoids the latency mismatch issue by using a buffer to get optimal write performance and then periodically synching primary and target data. "Synchronous as EMC does it for VTL is overkill," says an IBM tape manager. The EMC approach essentially ignores the way mainframe tape has been optimized.

Among the other choices are the Oracle Virtual Storage Manager and Virtual Library Extension. Oracle uses StorageTek tape systems. The Oracle approach promises to improve tape drive operating efficiencies and lower TCO by optimizing tape drive and library resources through a disk-based virtual tape architecture. HDS also has a mainframe tape backup and VTL product that uses Luminex technology.

EMC is a disk storage company and its DLm8000 demonstrates that. When it comes to backup, however, mainframe shops are not completely averse to tape. Disk-oriented VTL has some advantages but don’t expect mainframe shops to completely abandon tape.

In breaking storage news, IBM today announced acquiring Texas Memory Systems (TMS), a long-established (1978) Texas company that provides solid state memory to deliver significantly faster storage throughput and data access while consuming less power. TMS offers its memory as solid state disk (SSD) through its RamSan family of shared rackmount systems and Peripheral Component Interconnect Express (PCIe) cards. SSD may be expensive on a cost per gigabyte basis, but it blows away spinning hard disk on a cost per IOPS basis. Expect to see IBM use TMS's SSD across its storage products as one of its key future storage initiatives, as described by Jai Menon, CTO and VP, Technical Strategy for IBM Systems and Technology Group (STG), at last June's Storage Edge 2012 conference. BottomlineIT, DancingDinosaur's sister blog, covered it here back in June. BTW, Edge 2013 already is scheduled for June 10-14 in Las Vegas.

Deep Mainframe Storage Dive at Edge Conference

April 9, 2012

The spring SNW conference held in early April offered 19 program tracks covering everything from cloud backup to VDI.  A new storage conference, IBM Edge 2012  (Orlando, FL, June 4-8), covers a similar breadth of storage topics but includes about 15 sessions focused specifically on various aspects of mainframe storage. You won’t find this depth of mainframe storage coverage anywhere else except possibly at SHARE, which is a larger conference.

With the advent of zEnterprise-zBX hybrid computing it is important to have mainframe-specific information amid the exploding amount of detail around every aspect of storage. Today, a mainframe shop can find itself trying to deploy distributed systems storage alongside System z and zEnterprise storage. With many more choices, including cloud storage, it is not always clear which option is best and how to make the various pieces work together in an optimal way.

The upcoming IBM Edge conference promises to address this need.  At the same time IBM promises to use the event to make major market-shaking announcements, introduce new storage offerings, and showcase the real business outcomes its customers are achieving.

The conference will have two main tracks, both mainframe-oriented. Executive Edge, for business and IT executives and leaders, will focus on how leading companies are transforming their storage infrastructures to address challenges such as big data and achieve superior business outcomes. Technical Edge, for IT professionals and practitioners, will feature cutting-edge education, hands-on labs, and on-site certification geared for all levels and taught by IBM distinguished engineers, leading product experts, and clients and partners. A link to the conference sessions is here.

Here is a sample of what you will find:

Delivering High Availability and Disaster Recovery Using IBM GDPS—DancingDinosaur recently addressed this here, but not in nearly the depth. GDPS is IBM’s premier high availability and recovery solution. This session will go over the different GDPS solutions and explore the underlying technology that is exploited to deliver the different solutions.

z FICON and zHPF Operation and Usage—a more technical discussion, this session examines Fibre Connection (FICON) and high performance FICON (zHPF) usage on System z servers. It will describe FICON I/O operation and usage by the System z channel architecture for a FICON channel command word (CCW) channel program, the newer zHPF channel program, and more.

Using Unified Resource Manager to Provision Storage Resources—a hands-on lab demonstrating how a zEnterprise System ensemble running the Unified Resource Manager can provision resources to the ensemble, hypervisor, and virtual servers. Ensembles and ensemble resources such as virtual networking, storage and virtual servers are created and defined using various HMC tasks to identify and define storage resources to the blade hypervisor and virtual servers.

The Top 10 System z Storage Management Problems and How to Address Them—the session looks at the 10 biggest System z Storage Management issues found in most z shops. It will help you understand what can be done in z Storage Management to address these problems, manage your environment for cost savings and increased productivity, and maximize efficiency and effectiveness of both your z resources and personnel.

IBM hasn’t specified its top 10 mainframe storage management challenges, but since this is a Tivoli session you can bet the problems will include storage optimization, data integrity, disaster recovery and reporting capabilities, as well as management of disk and tape storage devices; the stuff Tivoli does well. As for me, I’d like to see it address public/private cloud storage, mixed platform (hybrid) storage management, storing Big Data for real-time analytics, and the perennial challenges the mainframe data center faces: data protection, archiving, cost containment, and the need for skilled mainframe storage staff.

DancingDinosaur will attend the conference. For a chance to win free admission to the conference, watch for upcoming posts here. And now for the legal stuff:  this post is sponsored, meaning I am being compensated, by the Storage Community for covering IBM’s Edge Conference.  However, the opinions and writing here are my own.

 Hope to see you in Orlando.

IBM STG Cashes in on HP-Oracle Woes

August 28, 2011

The System z and the entire IBM Systems and Technology Group (STG) under Rod Adkins are sweeping the field while HP and Oracle bicker over the fate of Itanium (doomed now, no matter what HP and Intel say officially) and HP flounders in search of a meaningful business direction. The latest financials and IDC market sizing trends don't paint a pretty picture for either company.

Meanwhile IBM continues to ride the zEnterprise 196 and now the z114 to some of the best numbers it has seen in years. And when IBM finally gets its act together around the zBX it could soar to even greater heights. (For that it has to position and market the zBX for more than the 50-100 very largest z shops.)

Still, nobody can quibble if Adkins did some boasting around the latest numbers: z revenue grew 61% in 2Q11 while MIPS grew 86%, which amounted to a 7 point share gain. System z gained 14 new customers in that time, 68 since the z196 launch a year earlier.

The Power Systems numbers also looked good as revenue grew 12% and the group gained 3 points of market share. It had run up over 250 competitive displacements, 2/3 at the expense of Oracle and 1/3 at the expense of HP.

IDC added to Adkins' triumph by noting that IBM had finally pulled into a virtual tie with HP for the worldwide server market lead, with 30.5% (IBM) and 29.8% (HP) of factory revenue share respectively for 2Q11. IDC declared it a statistical tie. IBM experienced 24.5% year-over-year growth in factory revenue, gaining 1.6 points of share in the quarter on the performance of System x, Power Systems, and System z, a hat trick for Adkins with all three of STG's server groups contributing.

Behind IBM and HP was Dell, which maintained third place with 13.8% factory revenue market share in 2Q11. Dell’s factory revenue increased 5.1% compared to 2Q10, driven in part by strong demand from SMB customers. Oracle, which talked brashly of taking on the System z following its Sun acquisition, ended up with the number 4 position jointly with Fujitsu. Oracle’s 2Q11 factory revenue increased 4.2% compared to 2Q10, driven in part by improved demand for x86-based Exadata systems, according to IDC.

While some questioned whether STG’s successes would tail off now that the big new products have been successfully introduced and each product group has been refreshed, Adkins clearly doesn’t agree. Just last month the System z group rolled out the z114, giving the hybrid zEnterprise line a fully capable entry-level machine. When packaged as a Solution Edition deal for enterprise Linux or SAP combining hardware, software, middleware, and maintenance for three years at a deeply discounted price it should be very effective in new competitive wins, especially Linux consolidation wins against HP and SAP wins against Oracle.

STG also handles storage, and there too it has racked up wins and introduced a flurry of products. Revenue in 2Q11 grew 10%. Maybe more importantly, it handled 4,500 XIV installations with 1,100 new customers. And just last month STG announced XIV Gen 3 scalable storage that includes a host of advanced features at no additional cost. In fact, IBM reports it has reduced the total cost of ownership by 60% when compared to its biggest storage competitor, EMC. As important, IBM made it fully autonomic, meaning it can pretty much run itself, allowing for even more cost savings.

In the latest IDC study of worldwide storage disk systems EMC maintained its lead in the external disk market with 26.0% revenue share in the fourth quarter, followed by IBM in second and HP in third with 16.3% and 11.6% market share respectively. Oracle, which acquired a storage business when it acquired Sun, didn’t even register.

While IBM and STG are on a roll, it is increasingly clear that Oracle and HP are sputtering for now. Oracle can’t seem to gain any leverage from its Sun acquisition. HP just recently realized that PCs, tablets, and smartphones aren’t going to bail it out. Printers anyone?

