Posts Tagged ‘IBM XIV’

IBM Expands Spectrum Storage in the Cloud with Spectrum Protect

September 18, 2015

IBM is targeting storage for hybrid clouds with Spectrum Protect. Specifically, it brings new cloud backup capabilities and a new management dashboard aimed at helping businesses back up data to on-premises object storage or the cloud without the expense of cloud-gateway appliances. It also enables advanced data placement across all storage types to maximize performance, availability, and cost efficiency. Spectrum Protect is the latest addition to the IBM Spectrum storage family, which provides advanced software defined storage (SDS) capabilities delivered as software, an appliance, or a cloud service. IBM announced Spectrum Protect at the end of August.


Courtesy IBM: Spectrum Protect dashboard (click to enlarge)

Introduced earlier this year, IBM Spectrum is a family of optimized SDS solutions designed to work together. It offers SDS file, object, and block storage with common management and a consistent user and administrator experience. Although it is based on IBM’s existing storage hardware products like XIV, Storwize, IBM FlashSystem, and SVC, you can deploy it as software on some non-IBM hardware too. It also offers support for VMware environments, including the VASA and VAAI APIs and VMware SRM. With Spectrum, IBM appears to have come up with a winner; over the last six months, IBM reports, more than 1,000 new clients have chosen products from the IBM Spectrum Storage portfolio.

Specifically, IBM Spectrum Protect supports IBM Cloud infrastructure today, with plans to expand to other public clouds in the future. IBM Spectrum Accelerate (XIV block storage) also can be accessed as a service by IBM Cloud customers via the SoftLayer cloud infrastructure. There it allows companies to deploy block storage on SoftLayer without having to buy new storage hardware or manage an appliance farm.

In competitive analysis, IBM found that a single IBM Spectrum Protect server performs the work of up to 15 CommVault servers. This means that large enterprises can consolidate backup servers to reduce cost and complexity while managing data growth from mobile, social, and Internet of Things environments.  Furthermore, SMBs can eliminate the need for a slew of infrastructure devices, including additional backup servers, media servers, and deduplication appliances, thereby reducing complexity and cost. Cost analysis with several beta customers, reports IBM, indicates that the enhanced IBM Spectrum Protect software can help clients reduce backup infrastructure costs on average by up to 53 percent.

IBM reports that the Spectrum Storage portfolio can centrally manage more than 300 different storage devices and yottabytes (1 yottabyte = 10^24 bytes) of data. Its device interoperability is the broadest in the industry, incorporating both IBM and non-IBM hardware and tape systems. IBM Spectrum Storage can help reduce storage costs up to 90 percent in certain environments by automatically moving data onto the most economical storage device, whether IBM or non-IBM flash, disk, and tape systems.

IBM Spectrum Storage portfolio packages key storage software from conventional IBM storage products. These include IBM Spectrum Accelerate (IBM XIV), Spectrum Virtualize (IBM SAN Volume Controller along with IBM Storwize), Spectrum Scale (IBM General Parallel File System or GPFS technology, previously referred to as Elastic Storage), Spectrum Control (IBM Virtual Storage Center and IBM Storage Insights), Spectrum Protect (Tivoli Storage Manager family) and Spectrum Archive (various IBM tape backup products).

The portfolio is presented as a software-only product and, presumably, you can run it on IBM and some non-IBM storage hardware if you choose. You will have to compare the cost of the software license with the cost of the IBM and non-IBM hardware to decide which gets you the best deal. It may turn out that running Spectrum Accelerate (XIV) on low cost, generic disks is cheaper than buying a rack of XIV disk to go with it. But keep in mind that the lowest cost generic disk may not meet your performance or reliability specifications.

IBM reports it also is enhancing the software-only version of IBM Spectrum Accelerate to reduce costs by consolidating storage and compute resources on the same servers. In effect, IBM is making XIV software available with portable licensing across XIV systems, on-premises servers, and cloud environments to offer greater operational flexibility. Bottom line: possibly a good deal, but be prepared to do some detailed comparative cost analysis to identify the best mix of SDS, cloud storage, and hardware at the best price for your particular needs.

In general, however, DancingDinosaur favors almost anything that increases data center configuration and pricing flexibility. With that in mind consider the IBM Spectrum options the next time you plan storage changes. (BTW, DancingDinosaur also does storage and server cost assessments should you want help.)

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

IBM Redefines Software Defined Storage

February 25, 2015

On Feb. 17 IBM unveiled IBM Spectrum Storage, a new storage software portfolio designed to address data storage inefficiencies by changing the economics of storage with a layer of intelligent software; in short, a software defined storage (SDS) initiative.  IBM’s new software creates an efficient data footprint that dynamically stores every bit of data at the optimal cost, helping maximize performance and ensuring security, according to the IBM announcement here.

Jared Lazarus/Feature Photo Service for IBM

Courtesy of IBM: IBM Storage GM demonstrates new Spectrum storage management dashboard

To accelerate the development of next-generation storage software, IBM also announced plans to invest more than $1 billion in its storage software portfolio over the next five years. The objective: extend its storage technology leadership, having recently been ranked #1 in SDS platforms for the first three quarters of 2014 by industry analyst firm IDC. The investment will focus on R&D for new cloud storage software, object storage, and open standard technologies, including OpenStack.

“Traditional storage is inefficient in today’s world where the value of each piece of data is changing all the time,” according to Tom Rosamilia, Senior Vice President, IBM Systems, in the announcement. He went on: “IBM is revolutionizing storage with our Spectrum Storage software that helps clients to more efficiently leverage their hardware investments to extract the full business value of data.”

Two days later IBM announced another storage initiative, flash products aimed directly at EMC. The announcement focused on two new all-flash enterprise storage solutions, FlashSystem V9000 and FlashSystem 900. Each promises industry-leading performance and efficiency, along with outstanding reliability, to help lower costs and accelerate data-intensive applications. The new solutions can provide real-time analytical insights with up to 50x better performance than traditional enterprise storage, and up to 4x better capacity in less rack space than EMC XtremIO flash technology.

Driving interest in IBM Spectrum storage is research suggesting that less than 50% of storage is effectively utilized. Storage silos continue to be rampant throughout the enterprise as companies recreate islands of Hadoop-based data along with more islands of storage to support ad hoc cloud usage. Developers create yet more data silos for dev, testing, and deployment.

IBM Spectrum Storage addresses these issues and more through an SDS approach that separates storage capabilities and intelligence from the physical devices. The resulting storage is self-tuning and leverages analytics for efficiency, automation, and optimization. By capitalizing on its automatic data placement capabilities, IBM reports it can meet service levels while reducing storage costs by as much as 90%.

Specifically, IBM Spectrum consists of six storage software elements:

  1. IBM Spectrum Control—analytics-driven data management to reduce costs by up to 50%
  2. IBM Spectrum Protect—optimize data protection to reduce backup costs by up to 38%
  3. IBM Spectrum Archive—fast data retention that reduces TCO for archive data by up to 90%
  4. IBM Spectrum Virtualize—virtualization of mixed environment to store up to 5x more data
  5. IBM Spectrum Accelerate—enterprise storage for cloud, which can be deployed in minutes instead of months
  6. IBM Spectrum Scale—high-performance, highly scalable storage for unstructured data

Each of these elements can be mapped back to existing IBM storage solutions. Spectrum Accelerate, for example, uses IBM’s XIV capabilities. Spectrum Virtualize is based on IBM’s SAN Volume Controller (SVC) technology. Spectrum Scale is based on GPFS, now called Elastic Storage, to handle file and object storage at massive scale within a single global namespace. Spectrum Archive, based on IBM’s LTFS, allows an organization to treat tape as a low cost, fully active tier. In effect, with IBM Spectrum, an organization can go from flash cache to tape, all synced worldwide within a single namespace.

A big part of what IBM is doing amounts to repackaging the capabilities it has built into its storage systems and proven in various products like XIV or GPFS or SVC as software components to be used as part of an SDS deployment. This raises some interesting possibilities. For instance, is it cheaper to use Spectrum Accelerate with a commodity storage array or buy the conventional XIV storage product?  The same probably could be asked of Spectrum Virtualize with SVC or Spectrum Archive with LTFS.

DancingDinosaur asked the Spectrum marketing team exactly that question. Their response: with Accelerate you have the flexibility to size the server to the performance needs of the solution; the software cost remains the same regardless of the server you select, while the cost of the server will vary depending on what the client needs. IBM will make a sizing guide available soon so each client’s situation can be modeled based on the solution requirements. In all cases it really depends on the hardware chosen vs. the (IBM) appliance. If the hardware closely matches the hardware of the appliance, the cost differences will be minimal. It all depends on the price the client gets, so yes, in theory, a white box may be lower cost.

With Spectrum Accelerate (XIV), IBM continues, the client can also deploy the software on a cluster of just three servers (minimum) and leverage existing Ethernet networking. This minimum configuration will cost much less than the minimum XIV system configuration. Spectrum Accelerate can also be licensed on a monthly basis, so clients with variable needs, or those deploying to the cloud, can deploy and pay for only what they need when they need it.
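Since Spectrum Accelerate can run on a minimum three-server cluster and can be licensed monthly, the comparison IBM describes lends itself to a rough cost model. The sketch below is purely illustrative: every dollar figure, and the assumption that the monthly license is priced per node, is a hypothetical placeholder to be replaced with real quotes, not IBM pricing.

# Back-of-the-envelope comparison: Spectrum Accelerate software on commodity
# servers vs. a conventional XIV appliance over the same evaluation period.
# ALL figures are hypothetical placeholders; substitute actual vendor quotes.

def sds_on_commodity(nodes: int, server_cost: float,
                     monthly_license_per_node: float, months: int) -> float:
    """Total cost of the software-defined route: servers plus monthly licenses."""
    hardware = nodes * server_cost
    software = nodes * monthly_license_per_node * months
    return hardware + software

def appliance(purchase_price: float, annual_maintenance: float, years: int) -> float:
    """Total cost of buying the packaged appliance plus maintenance."""
    return purchase_price + annual_maintenance * years

# Hypothetical example: the minimum three-node cluster over a three-year horizon.
commodity_total = sds_on_commodity(nodes=3, server_cost=12_000,
                                   monthly_license_per_node=800, months=36)
appliance_total = appliance(purchase_price=150_000,
                            annual_maintenance=15_000, years=3)

print(f"SDS on commodity servers: ${commodity_total:,.0f}")
print(f"XIV appliance:            ${appliance_total:,.0f}")

With these made-up numbers the commodity route comes in around $122,400 versus $195,000 for the appliance, but the point of the exercise is the structure of the comparison, not the totals; as noted above, the cheapest hardware may not meet your performance or reliability requirements.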

It is a little different for the other Spectrum offerings. DancingDinosaur will continue chasing down those details. Stay tuned. DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Follow DancingDinosaur on Twitter, @mainframeblog. Follow more of his IT writing on Technologywriter.com and here.

IBM Edge2014: It’s All About the Storage

May 22, 2014

When your blogger, as a newbie programmer, published his first desktop application in the pre-historic desktop computing era, it had to be distributed on consumer tape cassette. When buyers complained that it didn’t work, the problem was quickly traced to imprecise and inconsistent consumer cassette storage. Since the dawn of the computer era, it has always been about storage.

It still is. Almost every session at IBM Edge2014, whether on cloud or analytics or mobile, seemed to touch on storage in one way or another. Kicking it all off was Tom Rosamilia, Senior Vice President, IBM Systems & Technology Group, who elaborated on IBM’s main theme not just for Edge2014 but for IBM at large: Infrastructure Matters Because Business Outcomes Matter. And by infrastructure IBM mainly is referring to storage.

To reinforce his infrastructure matters point, Rosamilia cited a recent IBM study showing that 70% of top executives now recognize infrastructure as an enabler. However, just 10% reported their infrastructure was ready for the challenge. As an interesting aside, the study found 91% of the respondents’ customer-facing applications were using the System z, which only emphasizes another theme at IBM Edge2014—that companies need to connect systems of record with systems of engagement if they want to be successful.

In fact, IBM wants to speed up computing overall, starting with flash and storage. A study by the Aberdeen Group found that a 1-second delay in page load resulted in a 7% loss in conversions, 11% fewer page views, and a 16% decrease in customer satisfaction. IBM’s conclusion: in dollar terms, this means that if your site typically earns $100,000 a day, this year you could lose $2.5 million in sales. Expect all IBM storage to be enabled for some form of flash going forward.
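The $2.5 million figure is consistent with straightforward arithmetic on the conversion-loss number, assuming the 7% loss applies to every day’s revenue for a full year:

\[ \$100{,}000/\text{day} \times 0.07 \times 365\ \text{days} \approx \$2.55\ \text{million per year} \]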

First announced at IBM Edge2014 were the FlashSystem 840 and the IBM FlashSystem V840, which includes integrated data virtualization through IBM’s SVC and its various components. It also boasts a more powerful controller capable of rich capabilities like compression, replication, tiering, thin provisioning, and more. Check out the details here.

Also at Edge2014 there was considerable talk about Elastic Storage. This is the storage you have always imagined. You can manage mixed storage pools of any device. Integrate with any OS. Write policies to it. It seems infinitely scalable. Acts as a universal cloud gateway. And even works with tape.

Sounds magical, doesn’t it? According to IBM, Elastic Storage provides automated tiering to move data across different storage media types. Infrequently accessed files can be migrated to tape and automatically recalled back to disk when required—sounds like EasyTier built in. Unlike traditional storage, it allows you to smoothly grow or shrink your storage infrastructure without application disruption or outages. And it can run on a cluster of x86 and POWER-based servers and can be used with internal disk, commodity storage, or advanced storage systems from IBM or other vendors. Half the speakers at the conference glowed about Elastic Storage. Obviously it exists, but it is not yet an actual, named product. Watch for it; it is going to have a different name when finally released, probably later this year. No hint at what that name will be.

IBM, at the conference, identified the enhanced XIV as the ideal cloud infrastructure. XIV eliminates complexity. It enables high levels of resiliency and ensures service levels. As one speaker said: “It populates LUNs and spreads the workload evenly. You don’t even have to load balance it.” Basically, it is grid storage that is ideal for the cloud.

LTFS (Linear Tape File System) was another storage technology that came up surprisingly frequently. Don’t assume that tape has no future, not judging from IBM Edge2014. LTFS provides a GUI that enables you to automatically move infrequently accessed data from disk to tape without the need for proprietary tape applications. Implementing LTFS Enterprise Edition allows you to replace disk with tape for tiered storage and lower your storage TCO by over 50%. Jon Toigo, a leading storage analyst, has some good numbers on tape economics that may surprise you.

Another sometimes overlooked technology is EasyTier, IBM’s storage tiering tool.  EasyTier has evolved into a main way for IBM storage users to capitalize on the benefits of Flash. EasyTier already has emerged as an effective tool for both the DS8000 and the Storwize V7000.  With EasyTier small amounts of Flash can deliver big performance improvements.
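The reason a small amount of flash goes so far is access skew: a small slice of the data typically absorbs most of the I/O, and EasyTier’s job is to find that slice and keep it on flash. As a purely illustrative calculation (the 90/10 skew and the 0.2 ms flash / 8 ms disk read latencies are assumptions, not IBM figures), the average read latency of a tiered pool would be

\[ \bar{t} \approx 0.90 \times 0.2\,\text{ms} + 0.10 \times 8\,\text{ms} \approx 1\,\text{ms}, \]

roughly an eight-fold improvement over the 8 ms of an all-disk pool, even though flash holds only a tenth of the capacity.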

In the coming weeks DancingDinosaur will look at other IBM Edge 2014 topics.  It also is time to start thinking about IBM Enterprise 2014, which combines the System z and Power platforms. It will be at the Venetian in Las Vegas, Oct 6-10. IBM Enterprise 2014 is being billed as the premier enterprise infrastructure event.

BTW, we never effectively solved the challenge of distributing desktop programs until the industry came out with 5.25-inch floppy disks. Years later my children used the unsold floppies as little Frisbees.

Follow Alan Radding and DancingDinosaur on Twitter, @mainframeblog

IBM Technical Edge 2013 Tackles Flash – Big Data – Cloud & More

June 3, 2013

IBM Edge 2013 kicks off in just one week, 6/10 and runs through 6/14. Still time to register.  This blogger will be there through 6/13.  You can follow me on Twitter for conference updates @Writer1225.  I’ll be using hashtag #IBMEdge to post live Twitter comments from the conference. As noted here previously I’ll buy a drink for the first two people who come up to me and say they read DancingDinosaur.  How’s that for motivation!

The previous post looked at the Executive track. Now let’s take a glimpse at the technical track, which ranges considerably wider, beyond the System z to IBM’s other platforms, flash, big data, cloud, virtualization, and more.

Here’s a sample of the flash sessions:

Assessing the World of Flash looks at the key competitors, chief innovators, followers, and leaders. You’ll quickly find that not all flash solutions are the same and why IBM’s flash strategy stands at the forefront of this new and strategic technology.

There are many ways to deploy flash. This session examines Where to Put Flash in the Data Center.  It will focus particularly on the new IBM FlashSystem products and other technologies from IBM’s Texas Memory Systems acquisition. However, both storage-based and server-based flash technologies will be covered with an eye toward determining what works best for client performance needs.

The session on IBM’s Flash Storage Future will take a look at how IBM is leveraging its Texas Memory Systems acquisition and other IBM technologies to deliver a flash portfolio that will play a major role across not only IBM’s storage products but its overall solution portfolio and its roadmap moving forward.

The flash sessions also will look at how Banco Azteca, Thomson Reuters, and Sprint are deploying and benefiting from flash.

In the big data track, the Future of Analytics Infrastructure looks interesting. Although most organizations understand the value of business analytics, many don’t understand how the infrastructure choices they make will impact the success or failure of their analytics projects. The session will identify the key requirements of any analytical environment (agility, scalability, multipurpose use, compliance, cost-effectiveness, and partner readiness) and how they can be met within a single, future-ready analytics infrastructure that serves current and future analytics strategies.

Big data looms large at the conference. A session titled Hadoop…It’s Not Just about Internal Storage explores how the Hadoop MapReduce approach is evolving from server internal disks to external storage. Initially, Hadoop provided massively scalable, distributed file storage and analytic capabilities. New thinking, however, has emerged that looks at a tiered approach for implementing the Hadoop framework with external storage. Understanding the workload architectural considerations is important as companies begin to integrate analytic workloads to drive higher business value. The session will review the workload considerations to show why an architectural approach makes sense and offer tips and techniques, and share information about IBM’s latest offerings in this space.

An Overview of IBM’s Big Data Strategy details the company’s industrial-strength big data platform to address the full spectrum of big data business opportunities. This session is ideal for those who are just getting started with big data.

And no conference today can skip the cloud. IBM Edge 2013 offers a rich cloud track. For instance, Building the Cloud Enabled Data Center explains how to get maximum value out of an existing virtualized environment through self-service delivery and optimization along with virtualization optimization capabilities. It also describes how to enable business and infrastructure agility with workload optimized clouds that provide orchestration across the entire data center and accelerate application updates to respond faster to stakeholder demands and competitive threats. Finally it looks at how an open and extensible cloud delivery platform can fully automate application deployment and lifecycle management by integrating compute, network, storage, and server automation.

A pair of sessions focus on IBM Cloud Storage Architectures and Understanding IBM’s Cloud Options. The first session looks at several cloud use cases, such as storage and systems management. The other session looks at IBM SmartCloud Entry, SmartCloud Provisioning, and IBM Service Delivery Manager. The session promises to be an excellent introduction for the cloud technical expert who wants a quick overview of what IBM has to offer in cloud software and the specific value propositions for its various offerings, along with their architectural features and technical requirements.

A particularly interesting session will examine Desktop Cloud through Virtual Desktop Infrastructure and Mobile Computing. The corporate desktop has long been a costly and frustrating challenge complicated even more by mobile access. The combination of the cloud and Virtual Desktop Infrastructure (VDI) provides a way for companies to connect end users to a virtual server environment that can grow as needed while mitigating the issues that have frustrated desktop computing, such as software upgrades and patching.

There is much more in the technical track. All the main IBM platforms are featured, including PureFlex Systems, the IBM BladeCenter, IBM’s Enterprise X-Architecture, the IBM XIV storage system, and, for DancingDinosaur readers, sessions on the DS8000.

Have you registered for IBM Edge 2013 yet?  There still is time. As noted above, find me in the Social Media Lounge at the conference and in the sessions.  You can follow me on Twitter for conference updates @Writer1225.  I’ll be using hashtag #IBMEdge to post live Twitter comments from the conference. I’ll buy a drink for the first two people who come up to me and say they read DancingDinosaur.  How much more motivation do you need?

Oracle’s Tough 3Q and New SPARC Chip

March 29, 2013

Almost like a good news/bad news joke, Oracle announced dismal financials last week along with the next rev of its SPARC processor. The company clearly is hoping that the new processor will revive its rapidly fading hardware business and pose some sort of challenge to IBM’s zEnterprise and Power Systems.

Hardware systems product revenue was $671 million. That sounds good for a quarter until you realize it was down 23% from the previous year. Ouch. Hardware systems support didn’t do much better, falling to $570 million even as Oracle’s hardware maintenance prices continued to climb, noted Timothy Sipples, who writes a blog called Mainframe. Hardware platforms go through refresh cycles, as DancingDinosaur readers know, but Oracle has been struggling at this with Sun for three years.

Note that these figures include what Oracle calls its engineered systems like Exadata and Exalogic. These types of systems combine Oracle’s Sun hardware with its software in an optimized product. Such systems were expected to provide the synergies necessary to justify the initial Sun acquisition. And maybe they will someday, but Oracle stockholders have to be getting impatient. Along with the engineered systems was Oracle’s SPARC SuperCluster.  During that time IBM has been delivering its own highly optimized systems, hybrid systems, a new generation of  HPC systems, and expert-integrated systems.

Oracle’s 3Q report didn’t even mention its storage business, which consists mainly of StorageTek tape products and Oracle’s Sun ZFS Storage Appliance family.  By comparison, IBM has been advancing its storage offerings with products like Storwize, XIV, Real-time Compression, SSD, and more.

About the only bright spot Oracle could point to was its cloud effort. In the 3Q report it declared: “The Oracle Cloud is the most robust and comprehensive cloud platform available with services at the infrastructure (IaaS), platform (PaaS) and application (SaaS) level. In Q3, our SaaS revenue alone grew well over 100% as lots of new customers adopted our Sales, Service, Marketing and Human Capital Management applications in the Cloud,” according to Oracle President, Mark Hurd. And even here IBM has been busily building out its SmartCloud as-a-service offerings and putting them into a slew of SmarterPlanet initiatives.

From the standpoint of DancingDinosaur readers, who tend to focus on the System z, zEnterprise, and Power Systems, the most interesting part of Oracle’s recent activity is the new SPARC processor, the T5. New T5 servers can have up to eight microprocessors while Oracle’s new M5 system can be configured with up to thirty-two microprocessors. The M5 runs the Oracle database 10 times faster than the M9000 it replaces, according to Oracle. For the record, the top end zEC12 includes 101 cores. The zEC12 chip runs at 5.5 GHz.

Elizabeth Stahl, IBM’s chief technical strategist and benchmark guru, wrote this on her blog about Oracle’s T5 claims: Many of the claims are Oracle’s own benchmarks that are not published and audited. For price claims, Oracle, as they’ve done in the past, only factors in the price of the pizza box – make sure you add in the all-important software and storage. Stahl goes on to directly address Oracle’s benchmark claims here.

DancingDinosaur has been waiting for a rebound of the SPARC platform in the hopes that it might revive the Solaris on z initiative led by David Boyes and others. They actually had it working and at least one serious bank was piloting it. Lack of support from Oracle/Sun and IBM killed it. Solaris on z could have attracted Sun customers to the zEnterprise, mainly those in banking and financial services where Solaris and Sun were strong.  In case you are interested, Oracle still offers Solaris, now Oracle Solaris 11, and touts it as the first cloud OS.

