Expanding Mainframe Linux and Cloud Computing

June 9, 2014

In case you wondered whether IBM is seriously committed to both mainframe Linux and cloud computing on the System z platform, you need only look at the June 2 announcement that the company is opening the first dedicated System z Linux and cloud computing competency center, in Beijing. According to the announcement, the new center is specifically intended to help organizations there take advantage of Linux and cloud computing solutions on the mainframe and to accelerate adoption of Linux on System z in China.

This is just the most recent of a number of developments boosting the System z profile. Even the recent IBM Edge2014 conference, which was not about the System z at all (a System z and Power conference, Enterprise 2014, is coming up in October), managed to slip in some System z sessions and content, including one on protecting DB2 data on z/OS using tape and others that brought the System z and Power enterprise servers into discussions of cloud computing and the use of flash.

Following the Mainframe50 announcement earlier in the spring, IBM introduced more System z enhancements, including the IBM Enterprise Cloud System, an OpenStack-based converged offering that includes compute, storage, software, and services and is built around the zBC12; IBM Wave for z/VM, which simplifies z/VM virtualization management and expedites an organization's path to the cloud; and a new IBM Cloud Management Suite for System z, which handles dynamic provisioning and performance monitoring.

An interesting aspect of this announcement is IBM's focus on Linux. It has taken a decade for Linux to gain traction in System z data centers, but the patience is finally paying off. Linux has proven instrumental in bringing new mainframe users to the platform (DancingDinosaur previously reported on Algar, a Brazilian telco); according to IBM, more than 50% of all new mainframe accounts since 2010 run Linux. To that end, DancingDinosaur has long recommended the Enterprise Linux Server Solution Edition program, a deeply discounted package of hardware, middleware, and software. It represents the best, and maybe the only, bargain IBM regularly offers.

Linux itself has proven remarkably robust and has achieved widespread acceptance among enterprises running a variety of platforms. According to IDC, Linux server demand is rising on the strength of cloud infrastructure deployments, and the researcher expects that demand to continue. In the first quarter of 2014, Linux server revenue accounted for 30 percent of overall server revenue, an increase of 15.4 percent.

Along with cloud computing, collaborative development appears to be contributing to the continued growth and adoption of Linux. According to the Linux Foundation, a new business model has emerged in which companies are joining together across industries to share development resources and build common open source code bases on which they can differentiate their own products and services. This collaborative approach promises to transform a number of industries, especially those involved with cloud computing, social and mobile. Apparently it provides a fast way to create the next generation of technology products.

In its latest survey, the Linux Foundation identified three drivers of the recent Linux growth:

  1. Collaborative software development—ninety-one percent of the business managers and executives surveyed rated collaborative software development as somewhat to very important to their business, while nearly 80 percent said collaborative development practices have become more strategic to their organization over the past three years.
  2. Growing investments in collaborative software development—44 percent of business managers said they would increase their investments in collaborative software development in the next six months.
  3. The benefits of collaboration—more than 77 percent of managers said collaborative development practices have benefited their organizations through a shorter product development cycle and faster time to market.

The bulk of the world's critical transaction processing and production data, around 70 percent according to IBM, continues to reside on the mainframe. Similarly, 71% of all Fortune 500 companies run their core businesses on a mainframe. This has remained remarkably steady over the past decade despite the rise of cloud computing. Of course, all these organizations have extensive multi-platform data centers and are adding growing numbers of on-premises and increasingly hybrid cloud systems.

Far from relying on its core production processing to carry the mainframe forever, the new Beijing mainframe Linux-cloud center demonstrates IBM's intent to advance the mainframe platform in new markets. IBM is opening the mainframe up in a variety of ways: from z/OS in the cloud to Hadoop for z to new cloud-like pay-for-use pricing models. Watch DancingDinosaur for an upcoming post on the new pricing discounts for mobile transactions on z/OS.

DancingDinosaur is Alan Radding and can be followed on Twitter, @mainframeblog

New IBM Flash Storage for the Mainframe

June 2, 2014

IBM is serious about flash storage and is enabling just about everything for flash—the DS8000 family, SAN Volume Controller (SVC), EasyTier, Real-time Compression (RtC), and more. Of particular interest to DancingDinosaur readers should be the recently announced DS8870 all-flash enclosure.

Storage in general is changing fast. Riding Moore's Law for the past two decades, storage users could count on annual drops in the cost per gigabyte; it was as predictable as passing Go in Monopoly and collecting $200. But with that ride coming to an end, companies like IBM are looking elsewhere to engineer the continued improvements everyone has assumed and benefited from. For example, IBM is combining SVC, RtC, and flash to get significantly more performance out of less actual storage capacity.

The DS8870 is particularly interesting. In terms of reliability, for instance, it delivers not five-nines (99.999%) availability but six-nines (99.9999%) availability, which works out to about 30 seconds of downtime each year. It works with all IBM servers, not just the z, and it protects data through full disk encryption and advanced access control. With the new flash enclosure packed with IBM's enhanced flash, the DS8870 delivers 4x faster flash performance in 50% less space, which translates into a 3.2x improvement in database performance.
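
For the curious, the downtime arithmetic behind those nines is easy to check; a quick sketch (simple math, not IBM figures):

    # Downtime per year implied by an availability percentage.
    SECONDS_PER_YEAR = 365 * 24 * 60 * 60  # 31,536,000

    def downtime_seconds(availability_pct):
        return (1 - availability_pct / 100.0) * SECONDS_PER_YEAR

    print(downtime_seconds(99.999))   # five nines: ~315 sec (~5.3 min) a year
    print(downtime_seconds(99.9999))  # six nines: ~32 sec a year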

Flash is not cheap when viewed through the traditional cost/gigabyte metric, but the performance data above suggests a different way to gauge the cost of flash, which continues to fall steadily in price. The 3.2x increase in database performance, for example, means you can handle more than three times as many transactions.

Let's start with the assumption that more transactions ultimately translate into more revenue. The same goes for that extra 9 of availability. The high-performance all-flash DS8870 configuration with the High Performance Flash Enclosure also reduces the footprint by 50% and power consumption by 12%, which means lower space and energy costs. It also enables you to shrink batch times by 10%, according to IBM. DancingDinosaur will be happy to help you pull together a TCO analysis for an all-flash DS8870 investment.

The sheer specs of the new system are impressive. IBM reports that up to 8 PCIe flash enclosures populated with 400GB flash cards provide 73.6TB of usable capacity. For I/O capacity, the 8 I/O bays installed in the base frame provide up to 128 8Gb FC ports. Depending on the internal server you install in the DS8870, you can also get up to 1TB of cache.

DS8870 all-flash rack enclosure

The Flash Enclosure itself is a 1U drawer that takes up to 30 flash cards. Opting for thirty 400GB flash cards yields 9.2TB usable (12TB raw). Since the high-performance all-flash DS8870 can take up to 8 Flash Enclosures, you can get 96TB raw (73.6TB usable) of flash capacity per system.

A hybrid DS8870 system, as opposed to the high-performance all-flash version, allows up to 120 flash cards in 4 Flash Enclosures for 48TB raw (36.8TB usable), along with up to 1,536 2.5” HDDs/SSDs. It all connects to the DS8870 internal PCIe fabric for impressive performance: 200,000 IOPS (100% read) and 130,000 IOPS (100% write). From there, you can connect it to flash-enabled SVC and EasyTier.
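
The capacity math follows directly from the figures above; a quick sketch that reproduces the raw and usable numbers for both configurations:

    # DS8870 flash capacity, from the figures quoted above.
    CARD_TB = 0.4                  # 400GB flash card
    CARDS_PER_ENCLOSURE = 30
    USABLE_TB_PER_ENCLOSURE = 9.2  # usable TB per fully populated enclosure

    def capacity(enclosures):
        raw = enclosures * CARDS_PER_ENCLOSURE * CARD_TB
        usable = enclosures * USABLE_TB_PER_ENCLOSURE
        return raw, usable

    print(capacity(8))  # all-flash: (96.0, 73.6) TB
    print(capacity(4))  # hybrid:    (48.0, 36.8) TB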

Later this year, reports Clod Barrera, IBM's storage CTO, you will be able to add 4 more enclosures to hybrid configurations, boosting flash capacity up to 96TB raw. Together, the DS8870, flash, SVC, RtC, and EasyTier make for a lightning-fast and efficient storage infrastructure.

Even the most traditional System z shop will soon find itself confronting mixed workloads consisting of traditional and non-traditional work; you probably already are, as mobile devices initiate requests for mainframe data. When that happens you will want a fast, efficient, flexible infrastructure like the DS8870.

DancingDinosaur is Alan Radding. Follow him on Twitter, @mainframeblog

IBM Edge2014: It’s All About the Storage

May 22, 2014

When your blogger, as a newbie programmer, published his first desktop application in the prehistoric desktop computing era, it had to be distributed on consumer tape cassette. When buyers complained that it didn't work, the problem was quickly traced to imprecise and inconsistent consumer cassette storage. Since the dawn of the computer era, it has always been about storage.

It still is. Almost every session at IBM Edge2014, whether on cloud or analytics or mobile, seemed to touch on storage in one way or another. Kicking it all off was Tom Rosamilia, Senior Vice President, IBM Systems & Technology Group, who elaborated on IBM's main theme not just for Edge2014 but for IBM at large: Infrastructure Matters Because Business Outcomes Matter. And by infrastructure, IBM mainly is referring to storage.

To reinforce his infrastructure-matters point, Rosamilia cited a recent IBM study showing that 70% of top executives now recognize infrastructure as an enabler. However, just 10% reported their infrastructure was ready for the challenge. As an interesting aside, the study found 91% of the respondents' customer-facing applications were using the System z, which only emphasizes another theme at IBM Edge2014—that companies need to connect systems of record with systems of engagement if they want to be successful.

In fact, IBM wants to speed up computing overall, starting with flash storage. A study by the Aberdeen Group found that a 1-second delay in page load resulted in a 7% loss in conversions, 11% fewer page views, and a 16% decrease in customer satisfaction. IBM's conclusion: in dollar terms, if your site typically earns $100,000 a day, this year you could lose $2.5 million in sales. Expect all IBM storage to be enabled for some form of flash going forward.
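
IBM's dollar figure follows directly from the Aberdeen conversion number; here is the arithmetic behind it (the $100,000-a-day site is IBM's own example):

    # Annual revenue lost to a 1-second page-load delay,
    # using Aberdeen's 7% conversion-loss figure.
    daily_revenue = 100000
    conversion_loss = 0.07
    annual_loss = daily_revenue * conversion_loss * 365
    print(annual_loss)  # 2555000.0 -- roughly the $2.5 million IBM cites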

First announced at IBM Edge2014 were the FlashSystem 840 and the IBM FlashSystem V840, which includes integrated data virtualization through IBM's SVC and its various components. The V840 also boasts a more powerful controller capable of rich functions like compression, replication, tiering, thin provisioning, and more. Check out the details here.

Also at Edge2014 there was considerable talk about Elastic Storage. This is the storage you have always imagined. You can manage mixed storage pools of any device. Integrate with any OS. Write policies to it. It seems infinitely scalable. Acts as a universal cloud gateway. And even works with tape.

Sounds magical, doesn't it? According to IBM, Elastic Storage provides automated tiering to move data across different storage media types. Infrequently accessed files can be migrated to tape and automatically recalled to disk when required—it sounds like EasyTier built in. Unlike traditional storage, it allows you to smoothly grow or shrink your storage infrastructure without application disruption or outages. And it can run on a cluster of x86 and POWER-based servers and be used with internal disk, commodity storage, or advanced storage systems from IBM or other vendors. Half the speakers at the conference glowed about Elastic Storage. It clearly exists, but it is not a named product yet. Watch for it; it is going to have a different name when finally released, probably later this year. No hint yet at what that name will be.

At the conference, IBM identified the enhanced XIV as the ideal cloud infrastructure. XIV eliminates complexity, enables high levels of resiliency, and ensures service levels. As one speaker said: “It populates LUNs and spreads the workload evenly. You don’t even have to load balance it.” Basically, it is grid storage that is ideal for the cloud.

LTFS (Linear Tape File System) was another storage technology that came up surprisingly frequently. Don't assume that tape has no future, not judging from IBM Edge2014. LTFS provides a GUI that enables you to automatically move infrequently accessed data from disk to tape without the need for proprietary tape applications. Implementing LTFS Enterprise Edition allows you to replace disk with tape for tiered storage and lower your storage TCO by over 50%. Jon Toigo, a leading storage analyst, has some good numbers on tape economics that may surprise you.

Another sometimes overlooked technology is EasyTier, IBM's storage tiering tool. EasyTier has evolved into a main way for IBM storage users to capitalize on the benefits of flash, and it already has emerged as an effective tool for both the DS8000 and the Storwize V7000. With EasyTier, small amounts of flash can deliver big performance improvements.

In the coming weeks DancingDinosaur will look at other IBM Edge2014 topics. It also is time to start thinking about IBM Enterprise 2014, which combines the System z and Power platforms. It will be at the Venetian in Las Vegas, Oct. 6-10, and is being billed as the premier enterprise infrastructure event.

BTW, we never effectively solved the challenge of distributing desktop programs until the industry came out with 5.25-inch floppy disks. Years later my children used the unsold floppies as little Frisbees.

Follow Alan Radding and DancingDinosaur on Twitter, @mainframeblog

IBM Edge2014 Executive Track Hits Critical Issues

May 14, 2014

Until now this blog has looked at the technology issues being addressed at the upcoming IBM Edge2014 conference, starting in Las Vegas on Monday. There is, however, also a business focus through the Edge2014 Executive Track. You can find Executive Track session details here.

DancingDinosaur, a mediocre programmer in a previous life, still gravitates toward technical sessions in the hope of understanding this amazing new technology. But for the sake of balance, your blogger will hit a few Executive Track sessions. The following look particularly interesting:

Infrastructure Matters Because Business Outcomes Matter—to be given by Tom Rosamilia, Senior Vice President, IBM Systems & Technology Group and IBM Integrated Supply Chain. Your blogger has heard Rosamilia talk on this topic before, and he is spot on. He explains why the right IT infrastructure choices are critical for business success and how the demands created by cloud, big data and analytics, mobile, and social are fueling an explosion of data and spawning new workloads and business models. This has drastically changed expectations for IT infrastructure, which now is expected to be software defined, open, and more secure than ever before. The result is what IBM calls Composable Business, which optimizes technology by building a robust, highly agile infrastructure that can adjust quickly and efficiently to change for better business outcomes.

Along the same lines is Rethinking Your Business Model with Cloud. Here Robert LeBlanc, Senior Vice President, Software and Cloud Solutions, IBM Software Group, describes how new disruptive technologies are pushing companies to rethink how they do business. As a result, a wide range of companies, both new and well established, ranging from Tangerine (formerly ING Direct), OnFarm (farm management), and Kiwi (wearable technology) to Pitney Bowes (reinventing the postage meter business), are redefining themselves and their industries. Not surprisingly, business leaders, IT executives, and developers are embracing cloud technology across their organizations to promote innovation, drive business growth, and gain a competitive advantage. Enterprises are tapping into IBM cloud technologies, such as the IBM Cloud Marketplace and BlueMix, to quickly build applications that transform their businesses.

Improved Economics with Data Virtualization—Jeff Barber, Vice President, Mid-Range/Low End Disk Business Line Executive, IBM Systems & Technology Group, discusses how data is the next natural resource, the new currency of business. In today's mobile and social environment, users expect data to be available anytime, anywhere, and on any device. This has placed great financial pressure on enterprise data fabrics. With budgets under constant scrutiny, companies need to manage storage costs while delivering faster business insights. IBM Storage provides data virtualization that helps enable business agility while reducing both infrastructure and operational costs. Data virtualization also provides the foundation for software defined storage and cloud storage.

Big Data Insights in a Flash—Michael Kuhn, Vice President and Business Line Executive, Flash Systems, IBM Systems & Technology Group, explains how the massive amounts of data being generated each day make it difficult to capture insights from the data. Amid this data explosion, businesses can gain a competitive edge by making data-driven decisions in real time. However, the velocity at which data is growing has made it increasingly difficult for businesses to manage and extract value from their data. Kuhn shows how adding flash storage to a data fabric offers a fast, low cost way to create business value and extract greater insights from data. Flash also enables data analytics in near real time so workers can take appropriate actions even as customers are making buying decisions.

Much of IBM Edge2014 will touch on storage in various ways, which makes sense since data lies at the heart of technology’s value, and storage is how organizations capture, preserve, protect, and leverage data for its maximum impact.

Flexible Storage for Data in the Cloud by Vincent Hsu, IBM Fellow and Chief Technical Officer, Storage Systems, and Sidney Chow, Vice President and Business Line Executive, High End Disk, IBM Systems & Technology Group. Hsu and Chow start with the unpredictability of workloads in cloud environments, which is driving an ever increasing need for a flexible and scalable infrastructure. Your cloud services, whether provided on-premises or off-site, are only as good as the elasticity and control they can provide. The right data infrastructure matters when creating cloud environments that can optimize data for faster business performance and lower total costs. Software Defined Storage takes elasticity to the next level by simplifying interfaces and automating tasks.

Not sure which of these sessions your blogger will attend—all look good. In fact, there is so much at IBM Edge2014 that this blogger would need another week to catch everything he likes. And he still needs to make time for the Sheryl Crow concert. Decisions, decisions, decisions…

Look for this blogger 5/19 through 5/22 at Edge2014. You’ll find me in the social media lounge when not attending a session. And follow me on Twitter, @mainframeblog or @IBMEdge.

IBM Edge2014 as Coming out Party for OpenStack

May 7, 2014

IBM didn't invent OpenStack (Rackspace and NASA did), but IBM's embrace of OpenStack in March 2013 as its standard for cloud computing made it a legitimate standard for enterprise computing. Since then IBM has made clear its intention to enable its product line, from the System z on down, for the OpenStack set of open source technologies. Judging from the number of sessions at IBM Edge2014 (Las Vegas, May 19-23 at the Venetian) that address one or another aspect of OpenStack, you might think of IBM Edge2014 almost as a coming out celebration for OpenStack and enterprise cloud computing.

OpenStack is a collection of open source technologies, the goal of which is to provide a scalable computing infrastructure for both public and private clouds. As such it has become the foundation of IBM's cloud strategy, which is another way of saying it has become what IBM sees as its future. An excellent mini-tutorial on OpenStack, IBM, and the System z can be found at mainframe-watch-Belgium here.

At IBM Edge2014 OpenStack is frequently included in sessions on storage, cloud, and storage management.  Let’s take a closer look at a few of those sessions.

IBM Storage and Cloud Technologies

Presenter Christopher Vollmar offers an overview of the IBM storage platforms that contain cloud technologies or provide a foundation for creating a private storage cloud for block and file workloads. This overview includes IBM’s SmartCloud Virtual Storage Center, SmartCloud Storage Access, Active Cloud Engine, and XIV’s Hyper-Scale as well as IBM storage products’ integration with OpenStack.

OpenStack and IBM Storage

Presenters Michael Factor and Funda Eceral explain how OpenStack is rapidly emerging as the de facto platform for Infrastructure as a Service. IBM is working fast to pin down the integration of its storage products with OpenStack. This talk presents a high-level overview of OpenStack, with a focus on Cinder, the OpenStack block storage manager. They also explain how IBM is leading the evolution of Cinder by improving the common base with features such as volume migration and the ability to change the SLAs associated with a volume in the OpenStack cloud. Already IBM storage products—Storwize, XIV, DS8000, GPFS, and TSM—are integrated with OpenStack, enabling self-provisioned access to features such as EasyTier or Real-time Compression via standard OpenStack interfaces. Eventually, you should expect virtually all IBM products, capabilities, and services to work with and through OpenStack.
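
To make "self-provisioning via standard OpenStack interfaces" concrete, here is a minimal sketch using the python-cinderclient library of that era; the credentials, endpoint, and volume-type names ('gold', 'silver') are hypothetical, and retype() is one way to change a volume's SLA as described above:

    from cinderclient import client

    # Hypothetical credentials and endpoint; substitute your cloud's values.
    cinder = client.Client('2', 'demo', 'secret', 'demo',
                           'http://controller:5000/v2.0')

    # Create a 10GB volume; the Cinder scheduler dispatches it to whichever
    # backend (Storwize, XIV, DS8000, GPFS...) backs the requested type.
    vol = cinder.volumes.create(size=10, name='db-vol', volume_type='gold')

    # Change the volume's SLA by retyping it; 'on-demand' lets Cinder
    # migrate the data if the new type lives on a different backend.
    cinder.volumes.retype(vol, 'silver', 'on-demand')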

IBM XIV and VMware: Best Practices for Your Cloud

Presenters Peter Kisich and Carlos Lizarralde argue that IBM Storage continues to lead in OpenStack integration and development. They introduce the core services of OpenStack while focusing on how IBM storage provides open source integration with Cinder drivers for Storwize, DS8000, and XIV. They also include key examples and a demonstration of the automation and management IBM Storage offers through the OpenStack cloud platform.

IBM OpenStack Hybrid Cloud on IBM PureFlex and SoftLayer

Presenter Eric Kern explains how IBM's latest version of OpenStack is used to showcase a hybrid cloud environment. A pair of SoftLayer servers running in IBM's public cloud is matched with a PureFlex environment locally hosting the OpenStack controller. He covers the architecture used to set up this environment before diving into the details of deploying workloads.

Even if you never get to IBM Edge2014, it should be increasingly clear that OpenStack is quickly gaining traction and is destined to become central to enterprise IT, every style of cloud computing, and IBM itself. OpenStack will be essential for private, public, and hybrid cloud deployments alike. Come to Edge2014 and get up to speed fast on OpenStack.

Alan Radding/DancingDinosaur will be there. Look for me in the bloggers lounge between and after sessions. Also watch for upcoming posts on DancingDinosaur about OpenStack and the System z and on OpenStack on Power Systems.

Please follow DancingDinosaur on Twitter, @mainframeblog.

Best System z TCO in Cloud and Virtualization

May 1, 2014

IBM recently analyzed various likely customer workload scenarios and found that the System z as an enterprise Linux server could consistently beat x86 machines in terms of TCO. The analysis, which DancingDinosaur will dig into below, was reasonably evenhanded, although, like automobile mileage ratings, your actual results may vary.

DancingDinosaur has long contended that the z Enterprise Linux Server acquired under the deeply discounted IBM System z Solution Edition program could beat comparable x86-based systems not only in terms of TCO but even TCA. Algar, a Brazilian telecom, acquired its initial z Enterprise Linux server to consolidate a slew of x86 systems and lay a foundation for scalable growth; it reports cutting data center costs by 70%. Nationwide Insurance, no newcomer to mainframe computing, used the zEnterprise to consolidate Linux servers, achieving $46 million in savings.

The point: the latest IBM TCO analyses confirm what IBM and the few IT analysts who talk to z customers have been saying for some time. The TCO advantage, IBM found, switches to the z Enterprise Linux Server at around 200 virtual machines compared to the public cloud, and at a somewhat higher VM count compared to x86 machines.

IBM further advanced its cause in the TCO/TCA battle with the recent introduction of the IBM Enterprise Cloud System. This is a factory-built and integrated system—processor, memory, network, IFLs, virtualization management, cloud management, hypervisor, disk orchestration, Linux OS—priced (discounted) as a single solution. IBM promises to deliver it in 45 days and have it production ready within hours of hitting the organization's loading dock. Of course, it comes with the scalability, availability, security, and manageability long associated with the z, and IBM reports it can scale to 6,000 VMs. Not sure how this compares in price to a Solution Edition Enterprise Linux Server.

The IBM TCO analysis compared the public cloud, an x86 cloud, and the Enterprise Cloud System in terms of power and space, labor, software/middleware, and hardware costs when running 48 diverse workloads (a range of low, medium, and high I/O). In general it found an advantage for the z Enterprise Cloud System of 34-73%. The z cost considerably more in terms of hardware, but it more than made up for that in software, labor, and power. Overall, the TCO analysis examined more than 30 cost variables, ranging from blade/IFL/memory/storage amounts to hypervisor/cloud management/middleware maintenance. View the IBM z TCO presentation here.

In terms of hardware, the z configuration included the Enterprise Linux Server, storage, z/VM, and IBM Wave for z/VM. Software included WebSphere Application Server middleware, Cloud Management Suite for z, and Tivoli for z/VM. The x86 cloud included HP hardware with a hypervisor, WebSphere Application Server, SmartCloud Orchestrator, SmartCloud Monitoring, and Tivoli Storage Manager EE. Both configurations included labor to manage the hardware and VMs, power and space costs, and SUSE Linux.

The public cloud assumptions were a little different. Each workload was deployed as a separate instance, and the pricing model used AWS reserved instances. Hardware costs were based on instances in the US East region with SUSE, EBS volumes, data in/out, and enterprise support, with free and reserved tier discounts applied. Software costs included WebSphere Application Server ND (middleware) for the instances, and a labor cost was included for managing them.

When IBM applied its analysis to 398 I/O-diverse workloads the results were similar: 49-75% lower cost with the Cloud System on z. Again, z hardware was considerably more costly than either x86 or the public cloud, but z software and labor costs were far less. In terms of 3-year TCO for the 398 workloads, the public cloud was highest at $37M, x86 came in at $18.3M, and the Cloud System on z cost $9.4M. With 48 workloads, the z again came in with the lowest TCO, at $1M compared to $1.6M for the x86 systems and $3.9M for the public cloud.
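
A quick check of the percentages against the quoted 3-year dollar figures (the 398-workload case) confirms the 49-75% bracket:

    # 3-year TCO from IBM's 398-workload analysis, in $M.
    tco = {'public cloud': 37.0, 'x86': 18.3, 'cloud on z': 9.4}

    z = tco['cloud on z']
    for platform in ('public cloud', 'x86'):
        saving = (tco[platform] - z) / tco[platform]
        print(platform, round(saving * 100))  # 75 and 49 -- the quoted range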

IBM kept the assumptions equivalent across the platforms. If you make different software and middleware choices, or run a different mix of high-, mid-, and low-I/O workloads, your results will differ, but the overall comparative rankings probably won't change all that much.

Still time to register for IBM Edge2014 in Las Vegas, May 19-23. This blogger will be there hanging around the bloggers lounge when not attending sessions. Please join me there.

Follow Alan Radding/DancingDinosaur on Twitter: @mainframeblog

3 Big Takeaways at IBM POWER8 Intro

April 24, 2014

POWER8 did not disappoint. IBM unveiled its latest generation of systems built on its new POWER8 technology on Wednesday, April 23.

DancingDinosaur sees three important takeaways from this announcement:

First, the OpenPOWER Foundation. It was introduced months ago and almost immediately forgotten. DancingDinosaur covered it at the time here. It had a handful of small partners; only one, Google, was significant, and it was hard to imagine Google bringing out open source POWER servers. Now the Foundation has several dozen members. It still is not clear what Google is doing there, but the Foundation clearly is gaining traction, and you can expect more companies to join in the coming weeks and months.

With the Foundation, IBM swears it is committed to a true open ecosystem, one where even competitors can license the technology and bring out their own systems. At some point don't be surprised to see white box Power systems priced below IBM's; more likely in the short term will be specialized Power appliances. What you get as a Foundation member is the POWER SoC design, bus specifications, reference designs, and open source firmware, OS, and hypervisor code. Membership also includes access to little endian Linux, which will ease the migration of software to POWER. BTW, Google is listed as a member focusing on open source firmware and on cloud and high performance computing.

Second, the POWER8 processor itself and the new family of systems. The processor, designed for big data, will run more concurrent queries, and run them up to 50x faster than x86, with 4x more threads per core than x86. Its I/O bandwidth is 5x that of POWER7. It can handle 1TB of memory with 4-6x more memory bandwidth and more than 3x more on-chip cache than an x86. The chip itself uses 22nm circuits and runs at 2.5-5 GHz.

POWER8 sports an eight-way multithreaded core. That means each of the 12 cores on the chip can coordinate the processing of eight sets of instructions, or threads, at a time, for a total of 96 threads. Each thread consists of a set of related instructions making up a discrete piece of work within a program. By designating sections of an application that can run as separate threads and coordinating the results, a chip can accomplish more work than a single-threaded chip, IBM explains. By comparison, IBM reports Intel's Ivy Bridge E5 Xeon CPUs are double-threaded, with up to eight cores handling 16 threads at a time (compared to 96 with POWER8). Yes, some coordination overhead is incurred as more threads are added. Still, the POWER8 chip should attract interest among white box manufacturers and users of large numbers of servers processing big data.
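
The thread arithmetic is straightforward; a one-line sketch using the configurations quoted above:

    # Concurrent hardware threads = cores per chip * threads per core.
    power8 = 12 * 8      # SMT8: 96 threads
    ivy_bridge = 8 * 2   # 2-way Hyper-Threading: 16 threads
    print(power8, ivy_bridge, power8 // ivy_bridge)  # 96 16 6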

Third is CAPI, your newest acronym. If anything here is going to be a game-changer, this is it; the key is to watch for adoption. The Coherent Accelerator Processor Interface (CAPI) sits directly on the POWER8 and works with the same memory addresses that the processor uses; pointers are dereferenced the same as in the host application. CAPI, in effect, removes OS and device driver overhead by presenting an efficient, robust, durable interface, offloading complexity in the process.

CAPI can reduce the typical seven-step I/O model flow to three steps (shared memory/notify accelerator, acceleration, and shared memory completion). The advantages revolve around virtual addressing and data caching through shared memory and reduced latency for highly referenced data. It also enables an easier, more natural programming model with traditional thread-level programming and eliminates the need to restructure the application to accommodate long-latency I/O. Finally, it enables apps otherwise not possible, such as those requiring pointer chasing.

It's too early to determine whether CAPI is a game changer, but IBM has already started to benchmark some uses. For example, it ran NoSQL on POWER8 with CAPI and achieved a 5x cost reduction. When combined with IBM FlashSystem flash, it found it could:

  • Attack problem sets otherwise too big for the memory footprint
  • Deliver fast access to small chunks of data
  • Achieve high throughput for data or simplify object addressing through memory semantics.

CAPI brings programming efficiency and simplicity. It uses the PCIe physical interface for the easiest programming and the fastest, most direct I/O performance. It enables better virtual addressing and data caching. Although it was intended for acceleration, it works well for I/O caching and has been shown to deliver a 5x cost reduction with equivalent performance when attached to flash. In summary, CAPI enables you to regain infrastructure control and rein in costs to deliver services otherwise not feasible.

It will take time for CAPI to catch on. Developers will need to figure out where and how best to use it. But with CAPI as part of the OpenPOWER Foundation, expect to see work take off in a variety of directions. At a pre-briefing a few weeks ago, DancingDinosaur was able to walk through some very interesting CAPI demos.

As for the new POWER8 systems lineup, IBM introduced six one- or two-socket systems, some for Linux only, others for all supported operating systems. The systems, reportedly, will start below $8,000.

You can follow Alan Radding/DancingDinosaur on Twitter: @mainframeblog. Also, please join me at IBM Edge2014, this May 19-23 at the Venetian in Las Vegas.  Find me in the bloggers lounge.

IBM Edge2014 Explores Software Defined Everything in Depth

April 18, 2014

IBM Edge2014 is coming up fast, May 19-23 in Las Vegas, at the Venetian. Plus there is the Sheryl Crow concert you don’t want to miss.

IBM Edge2014 is bringing over 400 program sessions across more than a dozen topic tracks; choices for everyone from the geekiest techie to the most buttoned-down business executive. One track that crosses both camps—technical and business—is Software Defined Environments (SDE), which DancingDinosaur thinks of as software-defined everything.

SDE takes the abstraction, or decoupling, of software from hardware to the next level. It takes virtualization and moves it higher up the stack; there you can virtualize not only servers but network switches, storage, and more. The benefits: efficiency, speed, and flexibility. You can allocate and deploy system resources quickly and easily through software, and add or move capabilities as needed, again through software.

Through software-defined virtualization the IT industry can virtualize nearly every resource and capability. If you can encapsulate a set of capabilities as software, you have created a virtualized component that can run on almost any network-attached device capable of hosting software. In short, you have effectively decoupled those capabilities, usually embedded as firmware, from whatever underlying physical device previously hosted them.

IBM Edge2014 offers numerous software defined sessions. Let’s look at a few:

Software Defined Storage – Storage for Software Defined Environments

Presented by Clodoaldo Barrera

As described in the program, Software Defined Environments (SDE) have become the preferred approach for modern IT operations, combining the values of automation, policy enforcement, and efficiency. Within these SDE operations, storage must be properly configured to support the expected user experience and cost benefits, while still providing performance, availability, and data protection. In this session Barrera will explain how storage is configured and then managed through a stack of virtualization, provisioning, and orchestration software.

IBM Software Defined Storage Vision and Roadmap

Presented by Vincent Hsu, Tom Clark

This session introduces the core technology for IBM Software Defined Storage (SDS) vision and the SDS product roadmap. This includes the control plane technology as well as the data plane technology to address the future software defined data center.

But it is not only about storage.

IBM's Strategy for Software Defined Networking

Presented by Andy Wright

Software Defined Networking (SDN) is an emerging framework designed for virtual, dynamic, and flexible networking that allows organizations to easily modify, control, and manage physical and virtual networks. IBM already is a leader in this space with its SDN VE offerings, and its roadmap tells you where it is headed. Wright's session examines IBM's vision, network virtualization (overlay) capabilities for existing networks, and the capabilities of OpenFlow networks. These technologies promise to improve the working lives of system, virtualization, cloud, and network administrators. If you fill one of these roles, you probably don't want to miss this.

Continuity Requirements for Software Defined Data Centers

Presented by Jon Toigo

One of the benefits of software defined resources is the ability to spin up additional resources virtually. That leads to the assumption of an agile and dynamic data center that can turn on a dime in response to business requirements. That is true in theory. Rarely discussed, however, are the inherent risks of combining physical and virtual resources, both locally deployed and sourced from remote cloud-based providers. This presentation identifies some disaster recovery and business continuity requirements to keep in mind as you plan your software defined data center and need to rein in management's wishful thinking.

And a related topic, It’s All About the Object! IBM Customer Experiences with Object Storage

Presented by Manuel Avalos, Dan Lucky

Travis County in Austin, TX, is an early adopter of IBM object storage. This case study positions IBM's Statement of Direction (SOD) about object storage based on OpenStack Swift. Here you can learn from six months of Travis County's user experiences, the value realized, and the county's next steps.

And there is an entire session on SDE in the federal government. Overall, IBM Edge2014 is delivering considerable real-world experience direct from users.

Look for DancingDinosaur at IBM Edge2014, Mon-Wed. at sessions or in the bloggers lounge.

And follow DancingDinosaur on Twitter, @mainframeblog

Happy 50th System z

April 11, 2014

IBM threw a delightful anniversary party for the mainframe in NYC last Tuesday, April 8. You can watch video from the event here.

About 500 people showed up to meet the next generation of mainframers, the top winners of the global Master the Mainframe competition. First place went to Yong-Sian Shih, Taiwan; followed by Rijnard van Tonder, South Africa; and Philipp Egli, United Kingdom. Wouldn't be surprised if these and the other finalists at the event had job offers before they walked out of the room.

The System z may be built on 50-year-old technology, but IBM is rapidly driving the mainframe forward into the future. It had a slew of new announcements ready to go at the anniversary event itself, and more will be rolling out in the coming months. Check out all the doings around the Mainframe50 anniversary here.

IBM started the new announcements almost immediately with Hadoop on the System z. zDoop, the industry's first commercial Hadoop for Linux on System z, puts MapReduce big data analytics directly on the z. IBM also announced flash for the mainframe, consisting of the latest generation of flash storage on the IBM DS8870, which promises to speed time to insight with up to 30x the performance of HDD. Put the two together and the System z should become a potent big data analytics workhorse.
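
IBM hasn't released zDoop code samples, but Hadoop is Hadoop: a standard streaming MapReduce job should run on zDoop as it does anywhere else. A minimal word-count sketch (file names and paths are hypothetical):

    # mapper.py -- emit (word, 1) for every word read from stdin.
    import sys
    for line in sys.stdin:
        for word in line.split():
            print('%s\t1' % word)

    # reducer.py -- sum the counts for each word (input arrives sorted by key).
    import sys
    current, count = None, 0
    for line in sys.stdin:
        word, n = line.rsplit('\t', 1)
        if word != current:
            if current is not None:
                print('%s\t%d' % (current, count))
            current, count = word, 0
        count += int(n)
    if current is not None:
        print('%s\t%d' % (current, count))

    # Submitted via Hadoop streaming, e.g.:
    # hadoop jar hadoop-streaming.jar -mapper mapper.py -reducer reducer.py \
    #     -input /data/logs -output /data/wordcounts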

But there was even more. Mobile is hot and the mainframe is ready to play in the mobile arena too. Here the problem z shops experience is cost containment. Mainframe shops are seeing a concurrent rise in their costs related to integrating new mobile applications. The problem revolves around the fact that many mobile activities use mainframe resources but don’t generate immediate income.

The IBM System z Solution for Mobile Computing addresses this with new pricing for mobile workloads on z/OS, reducing the cost of growth in mobile transaction volumes that can cause a spike in software charges. The new pricing provides up to a 60% reduction on the processor capacity reported for mobile activity, which can help normalize the rate of transaction growth that generates software charges. The upshot: much of your mobile traffic volume won't increase your software overhead.
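
IBM hasn't published the exact mechanics yet, but if the discount works the way the announcement reads, i.e., only 40% of mobile-driven capacity counts toward the peak that sets software charges, the effect on a hypothetical shop looks like this:

    # Hypothetical monthly peak, in MSU of reported capacity.
    general_msu = 800   # traditional workload
    mobile_msu = 300    # capacity attributable to mobile transactions

    before = general_msu + mobile_msu              # 1100 MSU reported
    after = general_msu + mobile_msu * (1 - 0.60)  # mobile counted at 40%
    print(before, after)  # 1100 920.0 -- mobile growth no longer hits 1:1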

And IBM kept rolling out the new announcements:

  • Continuous Integration for System z – Compresses the application delivery cycle from months to weeks or days. Beyond this, IBM suggested upcoming initiatives to deliver full DevOps capabilities for the z.
  • New version of IBM CICS Transaction Server – Delivers enhanced mobile and cloud support for CICS, able to handle more than 1 billion transactions per day.
  • IBM WebSphere Liberty z/OS Connect – Rapid and secure enablement of web, cloud, and mobile access to z/OS assets.
  • IBM Security zSecure SSE – Helps prevent malicious computer attacks with enhanced security intelligence and compliance reporting that delivers security events to QRadar SIEM for integrated enterprise-wide security intelligence dashboards.

Jeff Frey, an IBM Fellow and the former CTO of System z, observed that “this architecture was invented 50 years ago, but it is not an old platform.” It has evolved over those decades and continues to evolve. For example, Frey expects the z to accommodate 22nm chips and a significant increase in the number of cores per chip. He also expects vector technology, double precision floating point and integer capabilities, and FPGAs to be built in. In addition, he expects the z to include next generation virtualization technology for the cloud to support software defined environments.

“This is a modern platform,” Frey emphasized. Other IBMers hinted at even more to come, including ongoing research to move beyond silicon to maintain the steady price/performance gains the computing industry has enjoyed the past number of decades.

Finally, IBM used the anniversary event to introduce a number of what it calls first-in-the-enterprise z customers. (DancingDinosaur thinks of them as mainframe virgins.) One is Steel ORCA, a managed service provider putting together what it calls the first full-service digital utility center. Based in Princeton, NJ, Steel ORCA says Phase 1 will offer connections of less than a millisecond to/from New York and Philadelphia. The base design is 300 watts per square foot and can handle ultra-high density configurations. Behind the operation is a zEC12. Originally the company planned to use an x86 system, but the costs were too high. “We could cut those costs in half with the z,” said Dave Crocker, Steel ORCA chairman.

Although the Mainframe50 anniversary event has passed, there will be Mainframe50 events and announcements throughout the rest of the year.  Again, you can follow the action here.

Coming up next for DancingDinosaur is Edge2014, a big infrastructure innovation conference. Next week DancingDinosaur will look at a few more of the most interesting sessions, and there are plenty. There still is time to register. Please come—you’ll find DancingDinosaur in the bloggers lounge, at program sessions, and at the Sheryl Crow concert.

Follow DancingDinosaur on Twitter, @mainframeblog

Flash Economics and Implementation Take Front Stage at Edge2014

April 4, 2014

The Edge2014 Guide to the Technical Sessions is now online and accessible to the public here. There are more must-see sessions than any human can attend in the few days of the conference and still have time for the Sheryl Crow concert. If you haven't registered for Edge2014 in Las Vegas, May 19-23 at the Venetian, just do it here.

So what's in the guide? Descriptions of 450+ technical sessions, according to IBM. Over the next few weeks DancingDinosaur will look at a few of the session tracks. Let's start this week with flash, a technology that keeps getting better, cheaper, and more useful in more and more ways.

Begin with the economics of flash. Initially flash was considered very expensive, and it was if you considered it only on a cost/gigabyte basis and compared it to hard disk drives. Since then, flash costs have dropped but, more importantly, organizations are using flash in ways where cost/gigabyte isn't the relevant metric. Instead, there are new ways to understand the cost of flash. Let's look at five flash sessions coming to Edge2014.

The New Cost Metrics of Implementing Flash to Save Money

Presenter: Matt Key—Flash storage can be cheaper to implement than disk storage. This session explores the reasons and cost justification for implementing flash vs. disk beyond the focus on low cost/IOPS, which was the initial justification for so-called costly flash. The session also examines the boundaries where other technologies, such as RAM, disk, and tape, are still a better fit.
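
The shift Key describes is easy to illustrate with round, hypothetical numbers (actual prices vary widely): flash loses badly on cost/GB but wins decisively on cost/IOPS:

    # Hypothetical device economics: $/GB vs. $/IOPS.
    media = {
        # name: (cost per GB in $, IOPS per device, capacity per device in GB)
        '15K HDD': (0.75, 200, 600),
        'flash':   (10.00, 50000, 400),
    }
    for name, (per_gb, iops, cap_gb) in media.items():
        device_cost = per_gb * cap_gb
        print(name, 'cost/IOPS = $%.4f' % (device_cost / iops))
    # 15K HDD: ~$2.25 per IOPS; flash: ~$0.08 per IOPS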

After you have learned the metrics to justify an investment in flash here are a couple of sessions that will show you how to best take advantage of it.

Where to Use Flash in the Data Center

Presenters: Woody Hutsell, Chris Breaux—they review data center economics and then explore the main reasons to use flash in the data center. For example, flash is best used to accelerate applications and infrastructure, reduce cost through less space and lower power and cooling requirements, and create new business opportunities, mainly through its speed and efficiency. Any workload that can benefit from cheap IOPS is another place to use flash.

Common IBM FlashSystem Implementation Strategies

Presenter: Erik Eyberg—covers similar ground but focuses on the variety of ways flash is being deployed: primary data storage, tiering, mirroring, and many others. Specifically, the session will cover three common FlashSystem deployment strategies for both tactical and strategic flash deployments, plus a few customer stories illustrating their effectiveness.

The next sessions described below don’t fit easy categorization, but they are intriguing nonetheless.

A Business Overview of Software Defined Flash

Presenter: David Gimpl—takes on a newly emerging flash topic: software defined storage (SDS) as applied to all-flash storage arrays. In these cases, flash creates a new entity Gimpl refers to as software defined flash. He'll describe the properties of the low latency, high IOPS flash medium coupled with the feature-rich advanced capabilities that provide Tier 1 storage for your business. This session should be cutting edge.

DancingDinosaur has long been a fan of VDI, but except for a handful of specialized use cases it hasn't gained widespread adoption. Something was missing. The System z should be especially good at VDI workloads, given its ability to support tens of thousands of virtual desktops. Maybe flash will provide the missing ingredient.

Simplifying the desktop virtualization data problem with IBM FlashSystem

Presenter: Rawley Burbridge—IBM offers a wide range of complete solutions for deploying desktop virtualization environments, but data storage is still often a costly and complex component to configure and deploy. The macro-efficient, high performance data storage offered by the IBM FlashSystem portfolio helps simplify the often complex storage requirements of VDI environments and reduce data costs to less than those of a physical PC. This session explores the methods and benefits of utilizing IBM FlashSystem for your desktop virtualization deployments.
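
A rough sizing sketch shows why VDI is such a natural flash workload; the desktop counts and IOPS rules of thumb below are hypothetical:

    # Why boot/login storms crush disk-backed VDI.
    desktops = 1000
    steady_iops = 25        # per desktop at steady state (rule of thumb)
    storm_multiplier = 10   # boot storm vs. steady state (rule of thumb)

    peak_iops = desktops * steady_iops * storm_multiplier  # 250,000 IOPS
    hdd_iops = 200          # a fast 15K HDD
    print(peak_iops // hdd_iops)  # ~1250 disks just to serve the storm;
                                  # well within a single all-flash array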

So there are five interesting sessions from the more than 30 in the flash category alone. Plan to register for Edge2014; you will learn things that should more than pay for your trip, and you'll have a good time in the process. And don't forget the Sheryl Crow concert.

Next week is the kickoff of Mainframe50, the start of the 50th anniversary celebration of the mainframe. The event itself is sold out but you needn’t be left out; it is being streamed live on Livestream, so you can attend from wherever you are.

Follow DancingDinosaur on Twitter, @mainframeblog.  Will be tweeting from the Mainframe50 event and others.

