IBM Edge2014 Executive Track Hits Critical Issues

May 14, 2014

Until now this blog has looked at the technology issues being addressed at the upcoming IBM Edge2014 conference, which starts in Las Vegas on Monday. There is, however, also a business focus through the Edge2014 Executive Track. You can find Executive Track session details here.

DancingDinosaur, a mediocre programmer in a previous life, still gravitates toward technical sessions in the hopes of understanding this amazing new technology. But for the sake of some balance your blogger will hit a few Executive Track sessions. The following look particularly interesting:

Infrastructure Matters Because Business Outcomes Matter—to be given by Tom Rosamilia, Senior Vice President, IBM Systems & Technology Group and IBM Integrated Supply Chain. Your blogger has heard Rosamilia speak on this topic before, and he is spot on. He explains why the right IT infrastructure choices are critical for business success and how the demands created by cloud, big data and analytics, mobile, and social are fueling an explosion of data and spawning new workloads and business models. This has drastically changed expectations for IT infrastructure, which is now expected to be software defined, open, and more secure than ever before. The result is what IBM calls Composable Business, which optimizes technology by building a robust, highly agile infrastructure that can adjust quickly and efficiently to change for better business outcomes.

Along the same lines is Rethinking Your Business Model with Cloud. Here Robert LeBlanc, Senior Vice President, Software and Cloud Solutions, IBM Software Group, describes how new disruptive technologies are pushing companies to rethink how they do business. As a result, a wide range of companies, both new and well established, ranging from Tangerine (formerly ING Direct), OnFarm (farm management), and Kiwi (wearable technology) to Pitney Bowes (reinventing the postage meter business), are redefining themselves and their industries. Not surprisingly, business leaders, IT executives, and developers are embracing cloud technology across their organizations to promote innovation, drive business growth, and gain a competitive advantage. Enterprises are tapping into IBM cloud technologies, such as the IBM Cloud Marketplace and BlueMix, to quickly build applications that transform their businesses.

Improved Economics with Data Virtualization—Jeff Barber, Vice President, Mid-Range/Low End Disk Business Line Executive, IBM Systems & Technology Group discusses how data is the next natural resource, the new currency of business. In today’s mobile and social environment, users expect data to be available anytime, anywhere, and on any device. This has placed great financial pressure on enterprise data fabrics. With budgets under constant scrutiny, companies need to manage storage costs while delivering faster business insights. IBM Storage provides data virtualization that helps enable business agility while reducing both infrastructure and operational costs. Data virtualization also provides the foundation for software defined storage and cloud storage.

Big Data Insights in a Flash–Michael Kuhn, Vice President and Business Line Executive, Flash Systems, IBM Systems & Technology Group explains how the massive amounts of data generated each day make it difficult to capture insights from that data. Amid this data explosion, businesses can gain a competitive edge by making data-driven decisions in real time. However, the velocity at which data is growing has made it increasingly difficult for businesses to manage and extract value from their data. Kuhn shows how adding flash storage to a data fabric offers a fast, low cost way to create business value and extract greater insights from data. Flash also enables data analytics in near real time so workers can take appropriate actions even as customers are making buying decisions.

Much of IBM Edge2014 will touch on storage in various ways, which makes sense since data lies at the heart of technology’s value, and storage is how organizations capture, preserve, protect, and leverage data for its maximum impact.

Flexible Storage for Data in the Cloud by Vincent Hsu, IBM Fellow and Chief Technical Officer, Storage Systems and Sidney Chow, Vice President and Business Line Executive, High End Disk, IBM Systems & Technology Group. Hsu and Chow start with the unpredictability of workloads in cloud environments, which is driving an ever increasing need for a flexible and scalable infrastructure. Your cloud services, whether provided on-premises or off-site, are only as good as the elasticity and control they can provide. The right data infrastructure matters when creating cloud environments that can optimize data for faster business performance and lower total costs. Software Defined Storage takes elasticity to the next level by simplifying interfaces and automating tasks.

Not sure which of these sessions your blogger will attend—all look good. In fact, there is so much at IBM Edge2014 that this blogger would need another week to catch everything he likes. And he still needs to make time for the Sheryl Crow concert. Decisions, decisions, decisions…

Look for this blogger 5/19 through 5/22 at Edge2014. You’ll find me in the social media lounge when not attending a session. And follow me on Twitter, @mainframeblog or @IBMEdge.

IBM Edge2014 as Coming out Party for OpenStack

May 7, 2014

IBM didn’t invent OpenStack (Rackspace and NASA did), but IBM’s embrace of OpenStack in March 2013 as its standard for cloud computing made it a legit standard for enterprise computing. Since then IBM has made clear its intention to enable its product line, from the System z on down, for the OpenStack set of open source technologies. Judging from the number of sessions at IBM Edge2014 (Las Vegas, May 19-23 at the Venetian) that address one or another aspect of OpenStack, you might think of IBM Edge2014 almost as a coming out celebration for OpenStack and enterprise cloud computing.

OpenStack is a collection of open source technologies, the goal of which is to provide a scalable computing infrastructure for both public and private clouds. As such it has become the foundation of IBM’s cloud strategy, which is another way of saying it has become what IBM sees as its future. An excellent mini-tutorial on OpenStack, IBM, and the System z can be found at mainframe-watch-Belgium here.

At IBM Edge2014 OpenStack is frequently included in sessions on storage, cloud, and storage management.  Let’s take a closer look at a few of those sessions.

IBM Storage and Cloud Technologies

Presenter Christopher Vollmar offers an overview of the IBM storage platforms that contain cloud technologies or provide a foundation for creating a private storage cloud for block and file workloads. This overview includes IBM’s SmartCloud Virtual Storage Center, SmartCloud Storage Access, Active Cloud Engine, and XIV’s Hyper-Scale as well as IBM storage products’ integration with OpenStack.

OpenStack and IBM Storage

Presenters Michael Factor and Funda Eceral explain how OpenStack is rapidly emerging as the de facto platform for Infrastructure as a Service. IBM is working fast to pin down the integration of its storage products with OpenStack. This talk presents a high level overview of OpenStack, with a focus on Cinder, the OpenStack block storage manager. They also explain how IBM is leading the evolution of Cinder by improving the common base with features such as volume migration and the ability to change the SLAs associated with a volume in the OpenStack cloud. Already IBM storage products—Storwize, XIV, DS8000, GPFS, and TSM—are integrated with OpenStack, enabling self-provisioning access to features such as EasyTier or Real-time Compression via standard OpenStack interfaces. Eventually, you should expect virtually all IBM products, capabilities, and services to work with and through OpenStack.
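To make the two Cinder features mentioned above concrete, here is a toy sketch of a block storage manager offering volume migration and an SLA change (what Cinder calls a retype). All names are illustrative; the real OpenStack Cinder API is considerably richer and differs from this.

```python
# Toy model of two Cinder-style operations: migrating a volume between
# storage backends and changing the SLA associated with a volume.
# Class and method names are hypothetical, not the real Cinder API.
class Volume:
    def __init__(self, name, backend, sla="bronze"):
        self.name = name
        self.backend = backend   # e.g. "storwize", "xiv", "ds8000"
        self.sla = sla

class BlockStorageManager:
    def migrate(self, vol, new_backend):
        # Move the volume to a different backend while it keeps its identity.
        vol.backend = new_backend
        return vol

    def retype(self, vol, new_sla):
        # Change the service level associated with an existing volume.
        vol.sla = new_sla
        return vol

mgr = BlockStorageManager()
vol = Volume("db-data", backend="storwize")
mgr.migrate(vol, "xiv")
mgr.retype(vol, "gold")
print(vol.backend, vol.sla)  # xiv gold
```

The point of the sketch is simply that the volume's identity survives both operations; only its placement and service level change.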

IBM XIV and VMware: Best Practices for Your Cloud

Presenters Peter Kisich and Carlos Lizarralde argue that IBM Storage continues to lead in OpenStack integration and development. They then introduce the core services of OpenStack while focusing on how IBM storage provides open source integration with Cinder drivers for Storwize, DS8000, and XIV. They also include key examples and a demonstration of the automation and management IBM Storage offers through the OpenStack cloud platform.

IBM OpenStack Hybrid Cloud on IBM PureFlex and SoftLayer

Presenter Eric Kern explains how IBM’s latest version of OpenStack is used to showcase a hybrid cloud environment. A pair of SoftLayer servers running in IBM’s public cloud are matched with a PureFlex environment locally hosting the OpenStack controller. He covers the architecture used to set up this environment before diving into the details around deploying workloads.

Even if you never get to IBM Edge2014, it should be increasingly clear that OpenStack is quickly gaining traction and destined to become central to enterprise IT, every style of cloud computing, and IBM. OpenStack will be essential for private, public, and hybrid cloud deployments alike. Come to Edge2014 and get up to speed fast on OpenStack.

Alan Radding/DancingDinosaur will be there. Look for me in the bloggers lounge between and after sessions. Also watch for upcoming posts on DancingDinosaur about OpenStack and the System z and on OpenStack on Power Systems.

Please follow DancingDinosaur on Twitter, @mainframeblog.

Best System z TCO in Cloud and Virtualization

May 1, 2014

IBM recently analyzed various likely customer workload scenarios and found that the System z as an enterprise Linux server could consistently beat x86 machines in terms of TCO.  The analysis, which DancingDinosaur will dig into below, was reasonably evenhanded although, like automobile mileage ratings, your actual results may vary.

DancingDinosaur has long contended that the z Enterprise Linux Server acquired under the deeply discounted IBM System z Solution Edition program could beat comparable x86 based systems not only in terms of TCO but even TCA. Algar, a Brazilian telecom, acquired its initial z Enterprise Linux server to consolidate a slew of x86 systems and lay a foundation for scalable growth. It reports cutting data center costs by 70%. Nationwide Insurance, no newcomer to mainframe computing, used the zEnterprise to consolidate Linux servers, achieving $46 million in savings.

The point: the latest IBM TCO analyses confirm what IBM and the few IT analysts who talk to z customers have been saying for some time. The TCO advantage, IBM found, switches to the z Enterprise Linux Server at around 200 virtual machines compared to the public cloud, and at slightly more VMs compared to x86 machines.

IBM further advanced its cause in the TCO/TCA battle with the recent introduction of the IBM Enterprise Cloud System. This is a factory-built and integrated system—processor, memory, network, IFLs, virtualization management, cloud management, hypervisor, disk orchestration, Linux OS—priced (discounted) as a single solution. IBM promises to deliver it in 45 days and have it production ready within hours of hitting the organization’s loading dock. Of course, it comes with the scalability, availability, security, manageability, etc. long associated with the z, and IBM reports it can scale to 6000 VMs. Not sure how this compares in price to a Solution Edition Enterprise Linux Server.

The IBM TCO analysis compared the public cloud, an x86 cloud, and the Enterprise Cloud System in terms of power and space, labor, software/middleware, and hardware costs when running 48 diverse (a range of low, medium, and high I/O) workloads. In general it found an advantage for the z Enterprise Cloud System of 34-73%. The z cost considerably more in terms of hardware but more than made up for it in software, labor, and power. Overall, the analysis examined more than 30 cost variables, ranging from blade/IFL/memory/storage amounts to hypervisor/cloud management/middleware maintenance. View the IBM z TCO presentation here.

In terms of hardware, the z included the Enterprise Linux Server, storage, z/VM, and IBM Wave for z/VM. Software included WebSphere Application Server middleware, Cloud Management Suite for z, and Tivoli for z/VM. The x86 cloud included HP hardware with a hypervisor, WebSphere Application Server, SmartCloud Orchestrator, SmartCloud Monitoring, and Tivoli Storage Manager EE. Both analyses included labor to manage both hardware and VMs, power and space costs, and SUSE Linux.

The public cloud assumptions were a little different. Each workload was deployed as a separate instance. The pricing model was for AWS reserved instances. Hardware costs were based on instances in the US East region with SUSE, EBS volumes, data in/out, enterprise support, and free and reserved tier discounts applied. Software costs included WebSphere Application Server ND (middleware) costs for the instances. A labor cost was included for managing the instances.

When IBM applied its analysis to 398 I/O-diverse workloads the results were similar: 49-75% lower cost with the Cloud System on z. Again, z hardware was considerably more costly than either x86 or the public cloud, but z software and labor were far less than the others. In terms of 3-year TCO, the public cloud was the highest at $37M, x86 came in at $18.3M, and the Cloud on z cost $9.4M. With 48 workloads, the z again came in with the lowest TCO, at $1M compared to $1.6M for x86 systems and $3.9M for the public cloud.
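The quoted 49-75% range follows directly from the 398-workload dollar figures; a quick check:

```python
# Sanity-check the 398-workload 3-year TCO figures quoted above ($M).
costs = {"public cloud": 37.0, "x86": 18.3, "cloud on z": 9.4}
z = costs["cloud on z"]
for platform in ("x86", "public cloud"):
    saving = (costs[platform] - z) / costs[platform] * 100
    print(f"z vs {platform}: {saving:.0f}% lower")
# z vs x86: 49% lower
# z vs public cloud: 75% lower
```

The 48-workload figures ($1M z, $1.6M x86, $3.9M public cloud) work out the same way to savings in the neighborhood of the 34-73% range IBM cites, with rounding in the published totals accounting for the small differences.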

IBM kept the assumptions equivalent across the platforms. If you make different software and middleware choices or a different mix of high-mid-low I/O workloads your results will be different but the overall comparative rankings probably won’t change all that much.

Still time to register for IBM Edge2014 in Las Vegas, May 19-23. This blogger will be there hanging around the bloggers lounge when not attending sessions. Please join me there.

Follow Alan Radding/DancingDinosaur on Twitter: @mainframeblog

3 Big Takeaways at IBM POWER8 Intro

April 24, 2014

POWER8 did not disappoint. IBM unveiled its latest generation of systems built on its new POWER8 technology on Wednesday, April 23.

DancingDinosaur sees three important takeaways from this announcement:

First, the OpenPOWER Foundation. It was introduced months ago and almost immediately forgotten. DancingDinosaur covered it at the time here. It had a handful of small partners. Only one, Google, was significant, and it was hard to imagine Google bringing out open source POWER servers. Now the Foundation has several dozen members, and while it still is not clear what Google is doing there, the Foundation clearly is gaining traction. You can expect more companies to join the Foundation in the coming weeks and months.

With the Foundation, IBM swears it is committed to a true open ecosystem, one where even competitors can license the technology and bring out their own systems. At some point don’t be surprised to see white box Power systems priced below IBM’s. More likely in the short term are specialized Power appliances. What you get as a Foundation member is the Power SOC design, bus specifications, reference designs, and open source firmware, OS, and hypervisor. Membership also includes access to Little Endian Linux, which will ease the migration of software to POWER. BTW, Google is listed as a member focusing on open source firmware and on the cloud and high performance computing.

Second, the POWER8 processor itself and the new family of systems. The processor, designed for big data, will run more concurrent queries, up to 50x faster than x86, with 4x more threads per core than x86. Its I/O bandwidth is 5x that of POWER7. It can handle 1TB of memory with 4-6x more memory bandwidth and more than 3x more on-chip cache than an x86. The processor itself uses 22nm circuits and runs at 2.5 to 5 GHz.

POWER8 sports an eight-threaded processor core. That means each of the 12 cores in the CPU can coordinate the processing of eight sets of instructions at a time, for a total of 96 threads. Each thread consists of a set of related instructions making up a discrete task within a program. By designating sections of an application that can run as separate threads and coordinating the results, a chip can accomplish more work than a single-threaded chip, IBM explains. By comparison, IBM reports Intel’s Ivy Bridge E5 Xeon CPUs have double-threaded cores, with up to eight cores, handling 16 threads at a time (compared to 96 with POWER8). Yes, some coordination overhead is incurred as more threads are added. Still, the POWER8 chip should attract interest among white box manufacturers and users of large numbers of servers processing big data.
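The thread arithmetic behind that comparison is simple enough to spell out:

```python
# Hardware-thread counts behind the POWER8 vs. Ivy Bridge comparison above.
power8_threads = 12 * 8   # 12 cores, 8 hardware threads each (SMT8)
xeon_threads = 8 * 2      # up to 8 cores, 2 threads each (Ivy Bridge E5)
print(power8_threads, xeon_threads)  # 96 16
```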

Third is CAPI, your newest acronym. If something in this announcement is going to be a game-changer, this is it. The key is to watch for adoption. The Coherent Accelerator Processor Interface (CAPI) sits directly on the POWER8 and works with the same memory addresses that the processor uses; pointers are dereferenced just as they are in the host application. CAPI, in effect, removes OS and device driver overhead by presenting an efficient, robust, durable interface. In the process, it offloads complexity.

CAPI can reduce the typical seven-step I/O model flow to three steps (shared memory/notify accelerator, acceleration, and shared memory completion). The advantages revolve around virtual addressing and data caching through shared memory and reduced latency for highly referenced data. [see accompanying graphic] It also enables an easier, natural programming model with traditional thread level programming and eliminates the need to restructure the application to accommodate long latency I/O.  Finally it enables apps otherwise not possible, such as those requiring pointer chasing.
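As a rough sketch of that reduction: the three CAPI steps come from the session material above, while the seven conventional steps below are a typical rendering of a driver-mediated I/O path, not an IBM-published list.

```python
# Illustrative only: a typical driver-mediated I/O flow (seven steps,
# our own rendering) vs. the three-step CAPI flow named above.
conventional = [
    "application calls device driver",
    "driver pins and maps source data",
    "driver notifies device via MMIO",
    "device performs the operation",
    "device signals completion (interrupt)",
    "driver unpins and copies results",
    "driver returns to application",
]
capi = [
    "shared memory / notify accelerator",
    "acceleration",
    "shared memory completion",
]
print(len(conventional), "->", len(capi))  # 7 -> 3
```

Fewer round trips through the OS and driver is where the latency reduction for highly referenced data comes from.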

 CAPI Picture

It’s too early to determine if CAPI is a game changer, but IBM has already started to benchmark some uses. For example, it ran NoSQL on POWER8 with CAPI and achieved a 5x cost reduction. When combined with IBM’s TMI flash, IBM found it could:

  • Attack problem sets otherwise too big for the memory footprint
  • Deliver fast access to small chunks of data
  • Achieve high throughput for data or simplify object addressing through memory semantics.

CAPI brings programming efficiency and simplicity. It uses the PCIe physical interface for the easiest programming and fastest, most direct I/O performance. It enables better virtual addressing and data caching. Although it was intended for acceleration it works well for I/O caching. And it has been shown to deliver a 5x cost reduction with equivalent performance when attaching to flash.  In summary, CAPI enables you to regain infrastructure control and rein in costs to deliver services otherwise not feasible.

It will take time for CAPI to catch on. Developers will need to figure out where and how best to use it. But with CAPI part of the OpenPOWER Foundation, expect to see work taking off in a variety of directions. At a pre-briefing a few weeks ago, DancingDinosaur was able to walk through some very interesting CAPI demos.

As for the new POWER8 systems lineup, IBM introduced six one- or two-socket systems, some for Linux only, others for all supported operating systems. The systems reportedly will start below $8,000.

You can follow Alan Radding/DancingDinosaur on Twitter: @mainframeblog. Also, please join me at IBM Edge2014, this May 19-23 at the Venetian in Las Vegas.  Find me in the bloggers lounge.

IBM Edge2014 Explores Software Defined Everything in Depth

April 18, 2014

IBM Edge2014 is coming up fast, May 19-23 in Las Vegas, at the Venetian. Plus there is the Sheryl Crow concert you don’t want to miss.

IBM Edge2014 is bringing over 400 program sessions across more than a dozen topic tracks; choices for everyone from the geekiest techie to the most buttoned-down business executive. One track that crosses both camps—technical and business—is Software Defined Environments (SDE), which DancingDinosaur thinks of as software-defined everything.

SDE takes the abstraction, or the decoupling, of software and hardware to the next level. It takes virtualization and moves it higher up in the stack.  There you can virtualize not only servers but network switches, storage, and more. The benefit: efficiency, speed, and flexibility. You can allocate and deploy system resources quickly and easily through software. And add or move capabilities as needed, again through software.

Through software defined virtualization the IT industry can virtualize nearly every resource and capability. If you can encapsulate a set of capabilities as software you have created a virtualized component that can run on almost any network-attached device capable of hosting software.  In short, you have effectively decoupled those capabilities usually embedded as firmware from whatever underlying physical device previously hosted them.

IBM Edge2014 offers numerous software defined sessions. Let’s look at a few:

Software Defined Storage – Storage for Software Defined Environments

Presented by Clodoaldo Barrera

As described in the program, Software Defined Environments (SDE) have become the preferred approach for modern IT operations, combining the values of automation, policy enforcement, and efficiency. Within these SDE operations, storage must be properly configured to support the expected user experience and cost benefits, while still providing performance, availability, and data protection. In this session Barrera will explain how storage is configured and then managed through a stack of virtualization, provisioning, and orchestration software.

IBM Software Defined Storage Vision and Roadmap

Presented by Vincent Hsu, Tom Clark

This session introduces the core technology for IBM Software Defined Storage (SDS) vision and the SDS product roadmap. This includes the control plane technology as well as the data plane technology to address the future software defined data center.

But it is not only about storage.

IBM’s Strategy for Software Defined Networking

Presented by Andy Wright

Software Defined Networking (SDN) is an emerging framework designed for virtual, dynamic, and flexible networking that allows organizations to easily modify, control, and manage physical and virtual networks. IBM is already a leader in this space with its SDN VE offerings. Wright’s session examines IBM’s vision, network virtualization (overlay) capabilities for existing networks, and the capabilities of OpenFlow networks, along with where the roadmap is headed. These technologies promise to improve the working lives of system, virtualization, cloud, and network administrators. If you fill one of these roles, you probably don’t want to miss this.

Continuity Requirements for Software Defined Data Centers

Presented by Jon Toigo

One of the benefits of software defined resources is the ability to spin up additional resources virtually. That leads to the assumption of an agile and dynamic data center that can turn on a dime in response to business requirements. That is true in theory. Rarely discussed, however, are the inherent risks of combining physical and virtual resources, both locally deployed and sourced from remote cloud-based providers. This presentation will identify some disaster recovery and business continuity requirements to keep in mind as you plan your software defined data center and need to rein in management’s wishful thinking.

And a related topic, It’s All About the Object! IBM Customer Experiences with Object Storage

Presented by Manuel Avalos, Dan Lucky

Travis County in Austin, TX is an early adopter of IBM object storage. This case study positions IBM’s Statement Of Direction (SOD) about object storage based on OpenStack Swift. Here you can learn from six months of Travis County user experiences, values, and next steps.

And there is another entire session on SDE in the federal government. Overall, IBM Edge2014 is delivering considerable real world experience directly from users.

Look for DancingDinosaur at IBM Edge2014, Mon-Wed. at sessions or in the bloggers lounge.

And follow DancingDinosaur on Twitter, @mainframeblog

Happy 50th System z

April 11, 2014

IBM threw a delightful anniversary party for the mainframe in NYC last Tuesday, April 8. You can watch video from the event here.

About 500 people showed up to meet the next generation of mainframers, the top winners of the global Master of the Mainframe competition. First place went to Yong-Sian Shih, Taiwan; followed by Rijnard van Tonder, South Africa; and Philipp Egli, United Kingdom. Wouldn’t be surprised if these and the other finalists had job offers before they walked out of the room.

The System z may be built on 50-year-old technology, but IBM is rapidly driving the mainframe forward into the future. It had a slew of new announcements ready to go at the anniversary event itself, and more will be rolling out in the coming months. Check out all the doings around the Mainframe50 anniversary here.

IBM started the new announcements almost immediately with Hadoop on the System z. Called zDoop, the industry’s first commercial Hadoop for Linux on System z puts MapReduce big data analytics directly on the z. IBM also announced flash for the mainframe, consisting of the latest generation of flash storage on the IBM DS8870, which promises to speed time to insight with up to 30x the performance of HDD. Put the two together and the System z should become a potent big data analytics workhorse.

But there was even more. Mobile is hot and the mainframe is ready to play in the mobile arena too. Here the problem z shops experience is cost containment. Mainframe shops are seeing a concurrent rise in their costs related to integrating new mobile applications. The problem revolves around the fact that many mobile activities use mainframe resources but don’t generate immediate income.

The IBM System z Solution for Mobile Computing addresses this with new pricing for mobile workloads on z/OS, reducing the cost of growing mobile transaction volumes that can cause a spike in software charges. The new pricing provides up to a 60% reduction on the processor capacity reported for mobile activity, which can help normalize the rate of transaction growth that generates software charges. The upshot: much mobile traffic won’t increase your software overhead.
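To see how such a discount changes the math, here is a hypothetical illustration (the numbers and the formula are invented for the example; IBM's actual mobile pricing terms are more involved):

```python
# Hypothetical illustration of a 60% reduction on the processor capacity
# reported for mobile activity. Invented numbers, not IBM's actual terms.
def billable_msus(general_msus, mobile_msus, mobile_discount=0.60):
    # Only 40% of mobile-driven capacity counts toward software charges.
    return general_msus + mobile_msus * (1 - mobile_discount)

# 1000 MSUs of general work plus 500 MSUs driven by mobile transactions:
print(billable_msus(1000, 500))  # 1200.0, instead of 1500 undiscounted
```

The effect is that mobile transaction growth adds to the bill at well under half the rate of other workload growth.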

And IBM kept rolling out the new announcements:

  • Continuous Integration for System z – Compresses the application delivery cycle from months to weeks or days. Beyond this, IBM suggested upcoming initiatives to deliver full DevOps capabilities for the z
  • New version of IBM CICS Transaction Server – Delivers enhanced mobile and cloud support for CICS, able to handle more than 1 billion transactions per day
  • IBM WebSphere Liberty z/OS Connect—Rapid and secure enablement of web, cloud, and mobile access to z/OS assets
  • IBM Security zSecure SSE – Helps prevent malicious computer attacks with enhanced security intelligence and compliance reporting that delivers security events to QRadar SIEM for integrated enterprise-wide security intelligence dashboarding

Jeff Frey, an IBM Fellow and the former CTO of System z, observed that “this architecture was invented 50 years ago, but it is not an old platform.” It has evolved over those decades and continues to evolve. For example, Frey expects the z to accommodate 22nm chips and a significant increase in the number of cores per chip. He also expects vector technology, double precision floating point and integer capabilities, and FPGAs to be built in. In addition, he expects the z to include next generation virtualization technology for the cloud to support software defined environments.

“This is a modern platform,” Frey emphasized. Other IBMers hinted at even more to come, including ongoing research to move beyond silicon to maintain the steady price/performance gains the computing industry has enjoyed the past number of decades.

Finally, IBM used the anniversary event to introduce a number of what it calls first-in-the-enterprise z customers. (DancingDinosaur thinks of them as mainframe virgins.) One is Steel ORCA, a managed service provider putting together what it calls the first full service digital utility center. Based in Princeton, NJ, Phase 1 will offer connections of less than a millisecond to/from New York and Philadelphia. The base design is 300 watts per square foot and can handle ultra-high density configurations. Behind the operation is a zEC12. Originally the company planned to use an x86 system but the costs were too high. “We could cut those costs in half with the z,” said Dave Crocker, Steel ORCA chairman.

Although the Mainframe50 anniversary event has passed, there will be Mainframe50 events and announcements throughout the rest of the year.  Again, you can follow the action here.

Coming up next for DancingDinosaur is Edge2014, a big infrastructure innovation conference. Next week DancingDinosaur will look at a few more of the most interesting sessions, and there are plenty. There still is time to register. Please come—you’ll find DancingDinosaur in the bloggers lounge, at program sessions, and at the Sheryl Crow concert.

Follow DancingDinosaur on Twitter, @mainframeblog

 

Flash Economics and Implementation Take Front Stage at Edge2014

April 4, 2014

The Edge2014 Guide to the Technical Sessions is now online and accessible to the public, here. There are more must-see sessions than any human can attend in the few days of the conference and still have time for the Sheryl Crow concert. If you haven’t registered for Edge2014 in Las Vegas, May 19-23 at the Venetian, just do it here.

So what’s in the guide? Descriptions of 450+ technical sessions, according to IBM. Over the next few weeks DancingDinosaur will look at a few of the session tracks. Let’s start this week with flash, a technology that keeps getting better, cheaper, and more useful in more and more ways.

Begin with the economics of flash. Initially flash was considered very expensive, and it was, if you considered it only on a cost-per-gigabyte basis compared to hard disk drives. Since then, flash costs have dropped, but, more importantly, organizations are using it in ways where cost per gigabyte isn’t the relevant metric. Instead, there are new ways to understand flash. Let’s look at five flash sessions coming to Edge2014.

The New Cost Metrics of Implementing Flash to Save Money

Presenter: Matt Key—Flash storage can be cheaper to implement than disk storage. This session explores the reasons and cost justification for implementing flash vs. disk without the focus on low cost/IOPS, which was the initial justification for so-called costly flash. The session also examines the boundaries where other technologies such as RAM, disk, and tape are still a better fit.

After you have learned the metrics to justify an investment in flash here are a couple of sessions that will show you how to best take advantage of it.

Where to Use Flash in the Data Center

Presenters: Woody Hutsell and Chris Breaux—they will review data center economics and then explore the main reasons to use flash in the data center. For example, flash is best used to accelerate applications and infrastructure, reduce cost through less space, meet power and cooling requirements, and create new business opportunities, mainly through its speed and efficiency. Any workload that can benefit from cheap IOPS is another place to use flash.

Common IBM FlashSystem Implementation Strategies

Presenter: Erik Eyberg—covers similar ground but focuses on the variety of ways flash is being deployed: primary data storage, tiering, mirroring, and many others. Specifically, the session will cover three common FlashSystem deployment strategies for both tactical and strategic flash deployments, plus a few customer stories illustrating their effectiveness.

The next sessions described below defy easy categorization, but they are intriguing nonetheless.

A Business Overview of Software Defined Flash

Presenter: David Gimpl—takes on this newly emerging flash topic, software defined storage (SDS) as applied to all flash storage arrays. In these cases, flash creates a new entity Gimpl refers to as software defined flash. Here he’ll describe the properties of the low latency, high IOPS flash medium coupled with the feature-rich advanced capabilities that provide Tier 1 storage for your business. This session should be cutting edge.

DancingDinosaur has long been a fan of VDI, but except for a handful of specialized use cases it hasn’t gained widespread adoption. Something was missing. The System z should be especially good at VDI workloads, given its ability to support tens of thousands of virtual desktops. Maybe flash will provide the missing ingredient.

Simplifying the desktop virtualization data problem with IBM FlashSystem

Presenter: Rawley Burbridge—IBM offers a wide range of complete solutions for deploying desktop virtualization environments, but data storage is still often a costly and complex component to configure and deploy. The macro-efficient, high performance data storage offered by the IBM FlashSystem storage portfolio helps simplify the often complex storage requirements of VDI environments and reduce data costs to less than those of a physical PC. This session will explore the methods and benefits of utilizing IBM FlashSystem for your desktop virtualization deployments.

So here are five interesting sessions from the more than 30 in the flash category alone. Plan to register for Edge2014. You will learn things that should more than pay for your trip, and have a good time in the process. And don’t forget the Sheryl Crow concert.

Next week is the kickoff of Mainframe50, the start of the 50th anniversary celebration of the mainframe. The event itself is sold out but you needn’t be left out; it is being streamed live on Livestream, so you can attend from wherever you are.

Follow DancingDinosaur on Twitter, @mainframeblog. DancingDinosaur will be tweeting from the Mainframe50 event and others.

One week to Mainframe50—Be There Virtually

April 1, 2014

Back in February, DancingDinosaur started writing about the upcoming Mainframe50 celebration. Now we’re just one week away from what will be a nearly year-long celebration, introductions of new mainframe advances, and more. It all starts on Tues., April 8 in New York City.

You can join the event and news briefing through Livestream. Just click here and join in virtually from wherever you are.

Or you can register to attend the event by clicking here. DancingDinosaur will be there and plans to file a report later that day on this blog and also be tweeting throughout all the Mainframe50 events. Follow it all on Twitter, @mainframeblog.

Later this week, DancingDinosaur will be posting the latest in a series of reports from Edge 2014, being held in Las Vegas, May 19-23. There is still time to register and get a discount. You can find DancingDinosaur there in the Bloggers Lounge after sessions, keynotes, and the Sheryl Crow concert.

And please follow DancingDinosaur on Twitter, @mainframeblog


Expect Flash to be Big at Edge 2014

March 26, 2014

You can almost hear the tom-toms thumping as IBM picks up the beat for flash storage and its FlashSystem, and for good reason. Almost everything companies want to do these days requires fast, efficient storage. Everything waits for data—applications, servers, algorithms, virtually any IT resource. And fast data, of course, depends on the responsiveness of the storage. Flash’s time has arrived.

To get the responsiveness they need, companies previously loaded up with conventional DASD: spinning disks that top out at 15K RPM, or cheaper DASD at 5400 RPM. To coax sufficient I/O per second (IOPS) out of them, they ganged together massive amounts of DASD just to get more parallel spindles to compensate for the low IOPS per disk. Sure, the disks were cheap, but the cost per IOPS was still sky high, especially considering all the overhead and inefficiency they had to absorb.
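The spindle-ganging economics are easy to sketch in a few lines. The prices and IOPS figures below are illustrative assumptions for a generic enterprise HDD and flash drive, not IBM-quoted numbers; the point is only how the two cost metrics flip:

```python
# Back-of-the-envelope comparison of cost/GB vs. cost/IOPS for
# spinning disk and flash. All prices and performance figures are
# illustrative assumptions, not vendor-quoted numbers.

def storage_economics(price_usd, capacity_gb, iops):
    """Return (cost per GB, cost per IOPS) for a single device."""
    return price_usd / capacity_gb, price_usd / iops

# Hypothetical 15K RPM enterprise HDD: cheap per GB, ~180 IOPS.
hdd_gb, hdd_iops = storage_economics(price_usd=250, capacity_gb=600, iops=180)

# Hypothetical enterprise flash drive: pricier per GB, ~50,000 IOPS.
ssd_gb, ssd_iops = storage_economics(price_usd=1200, capacity_gb=800, iops=50_000)

print(f"HDD:   ${hdd_gb:.2f}/GB, ${hdd_iops:.2f}/IOPS")
print(f"Flash: ${ssd_gb:.2f}/GB, ${ssd_iops:.4f}/IOPS")

# How many HDD spindles would have to be ganged together just to
# match one flash drive's IOPS, regardless of capacity needed?
spindles = 50_000 / 180
print(f"Spindles needed to match flash IOPS: {spindles:.0f}")
```

With these assumed numbers the HDD wins decisively on cost per gigabyte while flash wins by orders of magnitude on cost per IOPS, and matching one flash drive’s IOPS would take hundreds of spindles, which is exactly the overhead the old approach absorbed.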

But in this era of big data analytics, where an organization’s very competitiveness depends on absorbing massive amounts of data fast, that old approach no longer works. You can’t aggregate enough spindles to keep up with the huge volumes of machine-, sensor-, or meter-generated data, not to mention the data created by millions, possibly even billions, of people on Facebook, Twitter, and everywhere else. You can’t possibly come up with meaningful results fast enough to be effective. Opportunities will fly past you.

Furthermore, traditional high performance storage comes at a high price, not just in the acquisition cost of large volumes of spinning disk but also in the inefficiency of its deployment. Sure, the cost per gigabyte may be low, but aggregating spindles by the ton while leaving large chunks of the resulting capacity unused quickly offsets any gains from that low cost per gigabyte. In short, traditional storage, especially high performance storage, imposes economic limits on the usefulness and scalability of many analytics environments.

Since data access depends on the response of storage, flash has emerged as the way to achieve high IOPS at a low cost, and with the cost of flash storage dropping steadily it will only become a better deal going forward. Expect to hear a lot about IBM FlashSystem storage at Edge 2014. As IBM points out, it can eliminate wait times and accelerate critical applications for faster decision making, which translates into faster time to results.

Specifically, IBM reports its FlashSystem delivers:

  • 45x performance improvement with 10x more durability
  • 115x better energy efficiency with 315x superior density
  • 19x more efficient $/IOPS.

Here’s how: according to IBM, both the initial acquisition costs and the ongoing operational costs of FlashSystem storage, such as staffing and environmental costs, can be lower than those of performance-optimized hard drive solutions and of emerging hybrid or all-flash solutions. In short, IBM FlashSystem delivers the three key attributes data analytics workloads require: compelling data economics, enterprise resiliency, and easy infrastructure integration, along with high performance.

As proof, IBM cites a German transport services company that deployed FlashSystem storage to support a critical SAP e-business analytics infrastructure and realized a 50% TCO reduction versus competing solutions.

On top of that, IBM reports FlashSystem storage unlocks additional value from many analytics environments by turbo-charging response times with its MicroLatency technology, effectively multiplying the amount of data that can be analyzed. MicroLatency enables a streamlined, high performance data path to accelerate critical applications. The resulting faster response times can yield more business agility and quicker time to value from analytics.

In fact, recent IBM research has found that IBM InfoSphere Identity Insight entity analytics processes can be accelerated by over 6x using FlashSystem storage instead of traditional disk. More data analyzed at once means more potential value streams.

Data has long been considered a valuable asset. For some, data has become the most important commodity of all. The infrastructure supporting the analytics environment that converts data as a commodity into valuable business insights must be designed for maximum resiliency. FlashSystem brings a set of data protection features that can help enhance reliability, availability, and serviceability while minimizing the impact of failures and of downtime due to maintenance. In short, it protects the organization’s data, for many its most valuable asset.

DancingDinosaur is looking forward to attending Edge 2014 sessions that will drill down into the specifics of how IBM FlashSystem storage works under the covers. It is being held May 19-23 in Las Vegas, at the Venetian. Register now and get a discount. And as much as DancingDinosaur is eager to delve into the details of FlashSystem storage, the Sheryl Crow concert is very appealing too. When not in sessions or at the concert, look for DancingDinosaur in the bloggers lounge. Please join me.

Follow DancingDinosaur on Twitter: @mainframeblog

Edge 2014 Technical Track Hits Right Hot Buttons

March 21, 2014

The organizers of the Edge 2014 conference, May 19-23, have finally started to roll out some specifics of the program, although session details are still missing. First released is the program for Technical Edge, the technical track. There will also be an executive track and another for partners and ISVs. The overall theme of Edge 2014 is Infrastructure Innovation.

Technical Edge will consist of over 450 sessions spread across 14 topic areas. The topic areas can be found here, at the Storage Community website. The technical sessions will hit all the hot topics the industry has been buzzing about for the past year or more. You have much to choose from:

  • Application Infrastructure – ISV
  • Big Data & Analytics
  • Business Continuity
  • Data Efficiency Solutions
  • Dynamic Cloud (Hybrid)
  • Expert Integrated Systems
  • Flash Solutions
  • IT Storage Solutions (Tivoli Storage Manager)
  • Networking Solutions
  • Software Defined Environments (Systems Management)
  • Software Defined Environments (Virtualization)
  • System x and Flex System
  • Technology Partners
  • Technology Update

DancingDinosaur already is planning to attend sessions on Big Data, Flash, and software defined anything. As the organizers post details on the individual sessions, DancingDinosaur will call out some of the most interesting ones in the weeks ahead.

And IBM is bringing its heavy hitters, starting with Tom Rosamilia, Arvind Krishna, and more. But maybe the most exciting event will be a special concert by Sheryl Crow. Look for DancingDinosaur as close to the front as a middle-aged dad can get.


Otherwise, when not attending sessions look for me in the bloggers lounge. Plan to register soon; you can catch discounts until April 20.

On a related topic, the standings are starting to come in for the global Master the Mainframe contest. See the results here. Looks like a Canadian is holding first for now, followed by a Ukrainian and a Colombian.

BTW, the author of a recent Information Week article headlined Mainframe Brain Drain Raises Serious Concerns obviously wasn’t aware of what has been going on with IBM’s Academic Initiative for the past half dozen years or with the worldwide Master the Mainframe competition. DancingDinosaur has covered this for years, most recently a few weeks back here. The writer must have seen the light, however, because he just published another piece this week with the opposite message, here. Go figure. Hope to see you at Edge 2014.

