Posts Tagged ‘Power Systems’

3 Big Takeaways at IBM POWER8 Intro

April 24, 2014

POWER8 did not disappoint. IBM unveiled its latest generation of systems built on its new POWER8 technology on Wednesday, April 23.

DancingDinosaur sees three important takeaways from this announcement:

First, the OpenPOWER Foundation. It was introduced months ago and almost immediately forgotten. DancingDinosaur covered it at the time here. It had a handful of small partners. Only one, Google, was significant, and it was hard to imagine Google bringing out open source POWER servers. Now the Foundation has several dozen members, and while it still is not clear what Google is doing there, the Foundation clearly is gaining traction. You can expect more companies to join the Foundation in the coming weeks and months.

With the Foundation IBM swears it is committed to a true open ecosystem, one where even competitors can license the technology and bring out their own systems. At some point don’t be surprised to see white box Power systems priced below IBM’s. More likely in the short term will be specialized Power appliances. What you get as a Foundation member is the POWER SoC design, bus specifications, reference designs, and open source firmware, OS, and hypervisor. Membership also includes access to little endian Linux, which will ease the migration of software to POWER. BTW, Google is listed as a member focusing on open source firmware and on the cloud and high performance computing.

Second, the POWER8 processor itself and the new family of systems. The processor, designed for big data, will run more concurrent queries and run them up to 50x faster than x86, with 4x more threads per core than x86. Its I/O bandwidth is 5x that of POWER7. It can handle 1TB of memory with 4-6x more memory bandwidth and more than 3x more on-chip cache than an x86. The processor itself utilizes 22nm circuits and runs at 2.5-5 GHz.

POWER8 sports an eight-threaded processor. That means each of the 12 cores in the CPU will coordinate the processing of eight sets of instructions at a time, for a total of 96 processes. Each process consists of a set of related instructions making up a discrete unit of work within a program. By designating sections of an application that can run as processes and coordinating the results, a chip can accomplish more work than a single-threaded chip, IBM explains. By comparison, IBM reports, Intel’s Ivy Bridge E5 Xeon CPUs have double-threaded cores, up to eight per chip, handling 16 processes at a time (compared to 96 with POWER8). Yes, there is some coordination overhead incurred as more threads are added. Still, the POWER8 chip should attract interest among white box manufacturers and users of large numbers of servers processing big data.
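
As a quick back-of-the-envelope check of those numbers (an illustrative sketch of the counts IBM cites, not an IBM benchmark):

```python
# Hardware thread capacity, using the counts IBM cites.
power8_cores, power8_smt = 12, 8   # 12 cores, 8 threads per core (SMT8)
xeon_cores, xeon_smt = 8, 2        # Ivy Bridge E5: up to 8 cores, 2 threads each

power8_threads = power8_cores * power8_smt   # 96 concurrent hardware threads
xeon_threads = xeon_cores * xeon_smt         # 16 concurrent hardware threads

print(f"POWER8: {power8_threads} threads, Xeon E5: {xeon_threads} threads")
print(f"Ratio: {power8_threads / xeon_threads:.0f}x")   # 6x the thread capacity
```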

Third is CAPI, your newest acronym. If something is going to be a game-changer, this will be it; the key is to watch for adoption. The Coherent Accelerator Processor Interface (CAPI) sits directly on the POWER8 and works with the same memory addresses the processor uses; pointers are dereferenced the same as in the host application. CAPI, in effect, removes OS and device driver overhead by presenting an efficient, robust, durable interface. In the process, it offloads complexity.

CAPI can reduce the typical seven-step I/O model flow to three steps (shared memory/notify accelerator, acceleration, and shared memory completion). The advantages revolve around virtual addressing and data caching through shared memory and reduced latency for highly referenced data [see accompanying graphic]. It also enables an easier, natural programming model with traditional thread-level programming and eliminates the need to restructure the application to accommodate long-latency I/O. Finally, it enables apps otherwise not possible, such as those requiring pointer chasing.

[Graphic: the CAPI I/O model flow]
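
To make that three-step flow concrete, here is a toy Python sketch that models it with a worker thread and shared state. Everything here is invented for illustration; none of these names correspond to an actual CAPI API:

```python
# Illustrative model of the three-step CAPI flow: shared memory / notify
# accelerator, acceleration, shared memory completion. Not a real CAPI API.
import threading

shared = {"input": list(range(8)), "output": None}   # stands in for shared memory
work_ready = threading.Event()
work_done = threading.Event()

def accelerator():
    work_ready.wait()                    # step 1: notified via shared state
    # step 2: acceleration, operating directly on the same "addresses"
    # the host uses -- no driver copies, no OS round trips
    shared["output"] = [x * x for x in shared["input"]]
    work_done.set()                      # step 3: completion via shared memory

threading.Thread(target=accelerator, daemon=True).start()
work_ready.set()     # host: place work in shared memory, notify accelerator
work_done.wait()     # host: wait for the completion flag
print(shared["output"])
```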

It’s too early to determine if CAPI is a game changer but IBM has already started to benchmark some uses. For example, it ran NoSQL on POWER8 with CAPI and achieved a 5x cost reduction. When combined with IBM’s TMS flash, it found it could:

  • Attack problem sets otherwise too big for the memory footprint
  • Deliver fast access to small chunks of data
  • Achieve high throughput for data
  • Simplify object addressing through memory semantics

CAPI brings programming efficiency and simplicity. It uses the PCIe physical interface for the easiest programming and the fastest, most direct I/O performance. It enables better virtual addressing and data caching. Although it was intended for acceleration, it works well for I/O caching, and it has been shown to deliver a 5x cost reduction with equivalent performance when attaching to flash. In summary, CAPI enables you to regain infrastructure control and rein in costs to deliver services otherwise not feasible.

It will take time for CAPI to catch on. Developers will need to figure out where and how best to use it. But with CAPI as part of the OpenPOWER Foundation, expect to see work taking off in a variety of directions. At a pre-briefing a few weeks ago, DancingDinosaur was able to walk through some very interesting CAPI demos.

As for the new POWER8 systems lineup, IBM introduced six one- or two-socket systems, some Linux-only, others supporting all Power operating systems. The systems reportedly will start below $8,000.

You can follow Alan Radding/DancingDinosaur on Twitter: @mainframeblog. Also, please join me at IBM Edge2014, this May 19-23 at the Venetian in Las Vegas.  Find me in the bloggers lounge.

The Future of IBM Lies in the Cloud

March 13, 2014

In her annual letter to stockholders, IBM CEO Virginia Rometty made it clear that the world is being forever altered by the explosion of digital data and by the advent of the cloud. So, she intends IBM to “remake the enterprise IT infrastructure for the era of cloud.” This is where she is leading IBM.

DancingDinosaur thinks she has it right. But where does that leave this blog, which was built on the System z, Power Systems, and IBM’s enterprise systems? Hmm.

Rometty has an answer for that buried far down in her letter: “We are accelerating the move of our Systems product portfolio—in particular, Power and storage—to growth opportunities and to Linux, following the lead of our successful mainframe business.”

The rapidly emerging imperatives of big data, cloud computing, and mobile/social require enterprise-scale computing in terms of processing power, capacity, availability, security, and all the other -ilities that have long been the hallmark of the mainframe and IBM’s other enterprise-class systems. She goes so far as to emphasize the point: “Let me be clear—we are not exiting hardware. IBM will remain a leader in high-performance and high-end systems, storage and cognitive computing, and we will continue to invest in R&D for advanced semiconductor technology.”

You can bet that theme will be continued at the upcoming Edge 2014 conference May 19-23 in Las Vegas. The conference will include an executive program, a technical program with 550 expert technical sessions across 14 tracks, and a partner program. It’s being billed as an infrastructure innovation event and promises a big storage component too. Expect to see a lot of FlashSystems and XIV, which has a new pay-as-you-go pricing program that will make it easy to get into XIV and scale it as fast as you need it. You’ll probably also encounter some other new go-to-market strategies for storage.

As far as getting to the cloud, IBM has been dropping billions to build out about as complete a cloud stack as you can get. SoftLayer, the key piece, was just the start. BlueMix, an implementation of IBM’s Open Cloud Architecture, leverages Cloud Foundry to enable developers to rapidly build, deploy, and manage their cloud applications while tapping a growing ecosystem of available services and runtime frameworks, many of which are open source. IBM will provide services and runtimes into the ecosystem based on its already extensive and rapidly expanding software portfolio. BlueMix is the IBM PaaS offering that complements SoftLayer, its IaaS offering. Cloudant, the most recent acquisition, brings database as a service (DBaaS) to the stack. And don’t forget IBM Wave for z/VM, which virtualizes and manages Linux VMs, a critical cloud operation for sure. With this conglomeration of capabilities IBM is poised to offer something cloud-like to just about any organization. Plus, tying WebSphere and its other middleware products to SoftLayer bolsters the cloud stack that much more.

And don’t think IBM is going to stop here. DancingDinosaur expects to see more acquisitions, particularly when it comes to hybrid clouds and what IBM calls systems of engagement. Hybrid clouds, for IBM, link systems of engagement—built on mobile and social technologies where consumers are engaging with organizations—with systems of record, the main workloads of the System z and Power Systems, where data and transactions are processed.

DancingDinosaur intends to be at Edge 2014 where it expects to see IBM detailing a lot of its new infrastructure and demonstrating how to use it. You can register for Edge 2014 here until April 20 and grab a discount.

Follow DancingDinosaur on Twitter: @mainframeblog

Goodbye X6 and IBM System x

January 24, 2014

Seems just last week IBM was touting the new X6-based systems, the latest in its x86 System x server lineup.  Now the X6 and the entire System x line is going to Lenovo, which will acquire IBM’s x86 server business.  Rumors had been circulating about the sale for the last year, so often that you stopped paying attention to them.

The sale includes System x, BladeCenter and Flex System blade servers and switches, x86-based Flex integrated systems, NeXtScale and iDataPlex servers and associated software, and blade networking and maintenance operations. The purchase price is approximately US $2.3 billion, about two billion of which will be paid in cash and the balance in Lenovo stock.

Definitely NOT part of the sale are the System z, Power Systems, Storage Systems, Power-based Flex servers, and PureApplication and PureData appliances.  These are considered part of the IBM Enterprise Systems portfolio.  This commitment to the z and other enterprise systems is encouraging, especially in light of the latest IBM quarterly financial statement in which all the system hardware platforms did poorly, including System x.

DancingDinosaur’s planned follow-up to last week’s X6 column, written in anticipation of a reported February briefing on X6 speeds and feeds, is now unlikely; IBM PR folks said no such briefing is planned.

Most of the System x team appears to be departing with the products. Approximately 7,500 IBM employees around the world, including those based at major locations such as Raleigh, Shanghai, Shenzhen and Taipei, are expected to be offered employment by Lenovo, according to the announcement.

IBM, however, may become more active than ever.  Recently, IBM announced that it will invest more than $1 billion in the new IBM Watson Group, and $1.2 billion to expand its global cloud computing footprint to 40 data centers worldwide in 15 countries across five continents.  It also announced bolstering the SoftLayer operation, sort of a combined IaaS and global content delivery network, plus earlier investments in Linux, OpenStack, and various other initiatives. DancingDinosaur will try to follow it for you along with the System z and other enterprise IBM platforms.

 Please follow DancingDinosaur on Twitter: @mainframeblog

IBM Leverages High End Server Expertise in New X6 Systems

January 17, 2014

If you hadn’t noticed how x86 systems have been maturing over the past decade you might be surprised at the introduction yesterday of IBM’s newest entry in the x86 world, the X6. The X6 is the latest rev of IBM’s eX5. If you didn’t already think the eX5 was enterprise-class, here’s what IBM says of the X6:  support for demanding mission and business critical workloads, better foundation for virtualization of enterprise applications, infrastructure that facilitates a private or hybrid cloud model. Sound familiar? IBM has often said the same things about its Power Systems and, of course, the zEnterprise.

As the sixth generation of IBM’s EXA x86 technology, it promises speed (although the actual speeds and feeds won’t be revealed for another month), 3x the memory, high availability features that increase reliability, use of flash to boost on-board memory, and lower cost. IBM hasn’t actually said anything specific about pricing; published reports put X6 systems starting at $10k.

More specifically, the flash boost consists of integrated eXFlash memory-channel storage that provides up to 12.8 terabytes of DIMM-based, ultrafast flash close to the processor. This should increase application performance by providing the lowest system write latency available, and X6 can enable significantly lower latency for database operations, which can lower licensing costs and reduce storage costs by reducing or eliminating the need for external SAN/NAS storage units. The result should be almost in-memory performance (although, again, we have to wait for the actual speeds, feeds, and benchmarks).

The new X6 also borrows from the System z in its adoption of compute book terminology to describe its packaging, adding a storage book too. The result: a modular, scalable compute book design that supports multiple generations of CPUs and that, IBM promises, can reduce acquisition costs by up to 28% in comparison to one competitive offering. (Finally some details: 28% acquisition cost savings based on pricing of the x3850 X6 at announcement on 2/18 vs. current pricing of a comparable x86-based system that includes 2 x Intel Xeon E7-4820 [v1] processors, 1TB of memory [16GB RDIMMs], 3.6TB of HDD storage, and a dual-port 10GbE SFP+ controller. The x3850 X6 includes 2 compute books, 2 x Intel Xeon E7 processors, 1TB of memory [16GB RDIMMs], 3.6TB of HDD storage, and a dual-port 10GbE SFP+ controller.)

X6 also provides stability and flexibility through forthcoming technology developments, allowing users to scale up now and upgrade efficiently in the future based on the compute/storage book design that makes it easy to snap books into the chassis as you require more resources. Fast set-up and configuration patterns simplify deployment and life-cycle management.

In short, the book design, long a hallmark of the System z, brings a number of advantages.  For starters, you can put multiple generations of technology in the same chassis, no need to rip-and-replace or re-configure. This lets you stretch and amortize costs in a variety of ways.  IBM also adds RAS capabilities, another hallmark of the z. In the case of X6 it includes features like memory page retire; advanced double chip kill; the IBM MEH algorithm; multiple storage controllers; and double, triple, or quadruple memory options.

Server models supported by the X6 architecture currently include the System x3850 X6 four-socket system, the System x3950 X6 eight-socket system, and the IBM Flex System x880 scalable compute nodes. IBM also is introducing the System x3650 M4 BD storage server, a two-socket rack server supporting up to 14 drives and delivering up to 56 terabytes of high-density storage, the largest available in the industry, according to IBM. (More tidbits from the speeds and feeds to come: compared to HP two-socket servers supporting a maximum of 48 TB of storage with 12 x 3.5″ drives, and Dell two-socket servers supporting a maximum of 51.2 TB with 12 x 3.5″ and 2 x 2.5″ drives, X6 delivers 46% greater performance—based on Intel Internal Test Report #1310, using the SPECjbb*2013 benchmark, July 2013.) IBM’s conclusion: X6 is ideally suited for distributed scale-out of big data workloads.

The X6 systems come with a reference architecture that simplifies deployment. To make it even simpler, maybe even bullet-proof, IBM also is introducing the X6 as a set of packaged solutions. These include:

  • IBM System x Solution for SAP HANA on X6
  • IBM System x Solution for SAP Business Suite on X6
  • IBM System x Solution for VMware vCloud Suite on X6
  • IBM System x Solution for Microsoft SQL Data Warehouse on X6
  • IBM System x Solution for Microsoft Hyper-V on X6
  • IBM System x Solution for DB2 with BLU Acceleration on X6

These are optimized and tuned in advance for database, analytics, and cloud workloads.

So, the X6 bottom line according to IBM: more performance at 40%+ lower cost; multiple generations in one chassis; 3x more memory and higher system availability; expanded use of flash and more storage options; integrated solutions for easy and worry-free deployment; and packaged solutions to address data analytics, virtualization, and cloud.

IBM packed a lot of goodies into the X6. DancingDinosaur will take it up again when IBM presents the promised details. Stay tuned.

Follow DancingDinosaur on Twitter: @mainframeblog

IBM Commits $1B to Drive Watson into the Mainstream

January 10, 2014

IBM is ready to propel Watson beyond Jeopardy, its initial proof-of-concept, and into mainstream enterprise computing. To that end, it announced plans to spend more than $1 billion on the recently formed Watson business unit, an amount that includes $100 million in venture investments to build an ecosystem of entrepreneurs developing Watson-powered apps.

In addition, companies won’t need racks of Power servers to run Watson. With a series of announcements yesterday IBM unveiled plans to deliver Watson capabilities as business-ready cloud services. The announcement focused on three Watson services: 1)  Watson Discovery Advisor for research and development projects in industries such as pharmaceutical, publishing and biotechnology; 2) Watson Analytics to deliver visualized big data insights based on questions posed in natural language by any business user; and 3) IBM Watson Explorer to more easily uncover and share data-driven insights across the enterprise.

DancingDinosaur has been following Watson since its Jeopardy days. Having long since gotten over the disappointment that Watson didn’t run on the Power side of a hybrid zEnterprise, it turns out that IBM has managed to shrink Watson considerably. Today Watson runs 24x faster (a 2,400% improvement in performance) and is 90% smaller. IBM has shrunk Watson from the size of a master bedroom to three stacked pizza boxes, and you don’t even need to locate it in your data center; you can run it in the cloud.

Following the introduction of Watson IBM was slow to build on that achievement. It focused on healthcare and financial services, use-cases that appeared to be no-brainers.  Eventually it experienced success, particularly in healthcare, but the initial customers came slowly and the implementations appeared to be cumbersome.

Watson, at least initially, wasn’t going to be a simple deployment. It needed a ton of Power processors. It also needed massive amounts of data; in healthcare IBM collected what amounted to the entire library of the world’s medical research and knowledge. And it needed applications that took advantage of Watson’s formidable yet unusual capabilities.

The recent announcements of delivering Watson via the cloud and committing to underwrite application developers definitely should help. And yesterday’s announcement of what amounts to three packaged Watson services should speed deployment.

For example, Watson Analytics, according to IBM, removes common impediments in the data discovery process, enabling business users to quickly and independently uncover new insights in their data. Using sophisticated analytics and aided by Watson’s natural language interface, Watson Analytics automatically prepares the data, finds the most important relationships, and presents the results in an easy to interpret interactive visual format. As a result, business users are no longer limited to predefined views or static data models. Better yet, they can feel empowered to apply their own knowledge of the business to ask and answer new questions as they emerge. They also will be able to quickly understand and make decisions based on Watson Analytics’ data-driven visualizations.

Behind the new Watson services lies IBM Watson Foundations, described as a comprehensive, integrated set of big data and analytics capabilities that enable enterprises to find and capitalize on actionable insights. Basically, it amounts to a set of user tools and capabilities to tap into all relevant data – regardless of source or type – and run analytics to gain fresh insights in real-time. And it does so securely across any part of an enterprise, including revenue generation, marketing, finance, risk, and operations.  Watson Foundations also includes business analytics with predictive and decision management capabilities, information management with in-memory and stream computing, and enterprise content management packaged into modular offerings. As such it enables organizations of any size to address immediate needs for decision support, gain sustainable value from their initial investments, and grow from there.

This apparently sounded good to Singapore’s DBS Bank, which will deploy Watson cognitive computing capabilities to deliver a next-generation client experience. For starters, DBS intends to apply Watson to its wealth management business to improve the advice and experience delivered to affluent customers. The bank is counting on cloud-based Watson to process enormous amounts of information with the ability to understand and learn from each interaction at unprecedented speed. This should greatly increase the bank’s ability to quickly analyze, understand, and respond to the vast amounts of data it is accumulating.

Specifically, DBS will deploy IBM’s cloud-based Watson Engagement Advisor solution, to be rolled out in the second half of the year. From there the bank reportedly plans to progressively deploy these capabilities to its other businesses over time.

For fans of cognitive computing and Watson, the announcements represent a much-awaited evolution in IBM’s strategy. It promises to make cognitive computing and the natural language power of Watson usable for mainstream enterprises. How excited fans should get, however, depends on the specifics of IBM’s pricing and packaging for these offerings. Still, faced with having to recoup a $1 billion investment, don’t expect loss-leader pricing from IBM.

Follow DancingDinosaur on Twitter: @mainframeblog

Meet the Power 795—the RISC Mainframe

December 16, 2013

The IBM Power 795 could be considered a RISC mainframe. A deep-dive session on the Power 795 at Enterprise 2013 in early October, presented by Patrick O’Rourke, didn’t call the machine a mainframe. But when he walked attendees through the specifications, features, capabilities, architecture, and design of the machine, it certainly looked like one.

Start with the latest enhancements to the POWER7 chip:

  • Eight processor cores, each with:
      – 12 execution units
      – 4-way SMT (up to 4 threads per core, 32 threads per chip)
      – L1: 32 KB I-cache / 32 KB D-cache
      – L2: 256 KB per core
      – L3: 32 MB shared on-chip eDRAM
  • Dual DDR3 memory controllers
      – 100 GB/s memory bandwidth per chip
  • Scalability up to 32 sockets
      – 360 GB/s SMP bandwidth per chip
      – 20,000 coherent operations in flight

Built on POWER7 and slated to be upgraded to POWER8 by the end of 2014, the Power 795 boasts a number of new features:

  • New memory options: 64GB DIMMs enable up to 16TB of memory
  • New hybrid I/O adapters will deliver Gen2 I/O connections
  • No-charge elastic processor and memory days
  • PowerVM will enable up to 20 LPARs per core

And running at 4.2 GHz, the Power 795 clock speed starts to approach the zEC12 at 5.5 GHz while matching the clock speed of the zBC12.

IBM has also built increased flexibility into the Power 795, starting with turbo mode, which allows users to turn cores on and off as they manage power consumption and performance. IBM also has enhanced the concept of Power pools, which lets users group systems into compute clusters by setting up and moving processor and memory activations within a defined pool of systems, at the user’s convenience. With the Power 795, pool activations can be moved at any time by the user without contacting IBM, and the movement of the activations is instant, dynamic, and non-disruptive. Finally, there is no limit to the number of times activations can be moved. Enterprise pools can include the Power 795, 780, and 770, and systems with different clock speeds can coexist in the same pool. The activation assignment and movement is controlled by the HMC, which also determines the maximum number of systems in any given pool.

The Power 795 provides three flavors of capacity on demand (CoD). One flavor is for organizations that know they will need extra capacity that can be turned on through easy activation over time. Another is intended for organizations that know they will need extra capacity at predictable times, such as the end of the quarter, and want to pay for the added capacity on a daily basis. Finally, there is a flavor for organizations that experience unpredictable short bursts of activity and prefer to pay for the additional capacity by the minute. Actually, there are more than these three basic flavors of CoD, but these three will cover the needs of most organizations.
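
A rough break-even sketch shows why the flavors matter; every rate below is hypothetical, invented purely to illustrate the logic, not IBM pricing:

```python
# Hypothetical rates, purely to illustrate the CoD break-even logic.
permanent = 50_000              # one-time core activation (hypothetical)
per_day, per_minute = 150, 0.50 # per core-day / per core-minute (hypothetical)
years = 3

# Scenario 1: predictable quarter-end peaks, 12 full days a year.
daily_cost = per_day * 12 * years                # 5,400
minute_cost = per_minute * 12 * 24 * 60 * years  # 25,920
print(daily_cost, minute_cost, permanent)        # daily CoD wins

# Scenario 2: unpredictable bursts, fifty 30-minute spikes a year.
minute_cost = per_minute * 50 * 30 * years       # 2,250
daily_cost = per_day * 50 * years                # 22,500 (short bursts still bill whole days)
print(minute_cost, daily_cost, permanent)        # per-minute CoD wins
```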

And like a mainframe, the Power 795 comes with extensive hardware redundancy.  OK, the Power 795 isn’t a mainframe. It doesn’t run z/OS and it doesn’t do hybrid computing. But if you don’t run z/OS workloads and you’re not planning on running hybrid workloads yet still want the scalability, flexibility, reliability, and performance of a System z the Power 795 might prove very interesting indeed. And when the POWER8 processor is added to the mix the performance should go off the charts. This is a worthy candidate for enterprise systems consolidation.

The zEnterprise as a Hybrid Data Center

November 21, 2013

There is no doubt that the zEnterprise enables hybrid computing. Just attach a zBX to it and start plugging in Linux and x86 blades; presto, you’ve got hybrid computing.  You can manage this entire hybrid infrastructure via the Unified Resource Manager.

The zEnterprise also has a sister hybrid computing platform, IBM PureSystems. Here, too, you can add in System x and Linux or even Power and System i and do hybrid computing. You can also manage the hybrid environment through a single console, albeit a different console—the Flex System Manager—and manage this second IBM hybrid platform as a unified environment.  DancingDinosaur has noted the irony of IBM having two different, incompatible hybrid systems; IBM has reassured this blogger several times that it is trying to converge the two. Whenever it happens DancingDinosaur will be the first to report it.

The zEnterprise or even PureSystems as a hybrid computing platform, however, is not the same as a hybrid data center.  Apparently there is no definition of a hybrid data center despite all the talk about hybrid computing, hybrid clouds, and hybrid systems.  As best DancingDinosaur can piece it together, the hybrid data center is multiplatform like the zEnterprise, but it also is multi-location, often using co-location facilities or factory-built containerized data centers (IBM calls them Portable Modular Data Centers, PMDC). More often, however, hybrid data centers are associated with cloud computing as the third of the three flavors of cloud (private, public, hybrid).

Gartner recently described some architecture options for a hybrid data center. In one case you could have a zEnterprise acting as, say, a private cloud, using a co-location facility as a DMZ between the private cloud and a public cloud like Amazon. Not sure, however, that you would need the DMZ if your private cloud were running on the highly secure zEnterprise, but Gartner included it. Go figure.

Hybrid showed up in numerous Enterprise 2013 sessions this past October; you can catch some video highlights from the conference here. Some sessions were noted in previous DancingDinosaur posts, such as Exploring the World of zEnterprise Hybrid: How Does It Work and What’s the Point? That session introduced the Unified Resource Manager and described how it would allow an IT shop to manage a collection of one or more zEnterprise nodes, including any optionally attached zBX cabinets, as a single logical virtualized system through a Hardware Management Console (HMC). In short, it was about providing a single point of control through which data center personnel can deploy, configure, monitor, manage, and maintain the integrated System z and zBX blades based on heterogeneous architectures in a unified manner. But it wasn’t talking about the hybrid enterprise data center described in the previous paragraph.

Similarly, Application Performance Management and Capacity Planning for the IBM zEnterprise Hybrid Workload focused on extending the Unified Resource Manager to goal-oriented performance management for both traditional System z and BladeCenter applications. It was about applying WLM, RMF, and Platform Performance Management to cross-platform hybrid applications. Again, this really wasn’t about the hybrid data center described above.

BTW, plans apparently already are underway for Enterprise 2014. Looks like it will be Oct. 6-10 at the Venetian in Las Vegas. It should be quite an event given that IBM will be celebrating the 50th anniversary of the mainframe in 2014.

And there is much more on z hybrid computing and hybrid clouds. The zEnterprise has its own page on cloud computing here, and last month the zEnterprise zBC12 won the CRN Tech Innovator Award for most innovative cloud solution. You can also click here to see how a dozen IBM customers used various IBM platforms to build hybrid clouds.

IBM has already used the zEnterprise to consolidate over 30,000 servers around the world for an 84% improvement in data center efficiency and a 50% reduction in power and cooling. This effectively freed $1 billion to spend on innovative new projects that drive business growth across the company. And IBM is about as hybrid a data center as you can find.

Follow DancingDinosaur on Twitter, @mainframeblog

Open KVM Adds Kimchi to Speed Ramp Up

November 15, 2013

The Linux Foundation, the group trying to drive the growth of Linux and collaborative development, recently brought the Open Virtualization Alliance (OVA) under its umbrella as a Linux Foundation Collaborative Project. The change should help KVM take better advantage of the marketing and administrative capabilities of the Linux Foundation and enable tighter affinity with the Linux community at large.

The immediate upshot of the Oct. 21 announcement was increased exposure for open KVM. Over 150 media stories appeared, Facebook hits jumped 33%, and the OVA website saw a big surge of traffic, 82% of which came from first-time visitors. First up on the agenda should be tapping the expansive ecosystem of the Linux Foundation in service of Kimchi, OVA’s new easy-to-deploy-and-use administrative tool for KVM. Mike Day, an IBM Distinguished Engineer and Chief Virtualization Architect for Open Systems Development, described Kimchi as the “fastest on-ramp to using KVM.”

Kimchi is about as lightweight as a management tool can get. It offers stateless installation (no management server), brings a graphical and mobile interface, and comes bundled with KVM for Power but does not require HMC, IBM’s primary tool for planning, deploying, and managing IBM Power Systems servers. It also is based on open, standard components, including a RESTful API, and it is part of the OpenStack community.

What Kimchi does is to provide a mobile- and Windows-friendly virtualization manager for KVM. It delivers point-to-point management, thereby avoiding the need to invest in yet more management server hardware, training, or installation. Promised to be simple to use, it was designed to appeal to a VMware administrator.

So what can you actually do with Kimchi? At the moment, only the basics. You can use it to manage all KVM guests, although it does have special support for some Linux guests at this point. Also, you can use it without Linux skills.
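
Because the interface is RESTful, those basics should reduce to plain HTTP calls. Here is a hypothetical sketch; the host, port, endpoint paths, and payload fields are assumptions for illustration rather than documented Kimchi API, so check the project’s docs before relying on them:

```python
# Hypothetical sketch of driving a Kimchi-style RESTful API. The base URL,
# endpoint paths, and response fields below are assumptions, not documented API.
import requests

BASE = "https://kimchi-host:8001"   # assumed address of the Kimchi web service
session = requests.Session()
session.verify = False              # demo only: assumes a self-signed certificate

# List defined guests and their states
for vm in session.get(f"{BASE}/vms").json():
    print(vm.get("name"), vm.get("state"))

# Start a guest by name
session.post(f"{BASE}/vms/myguest/start")
```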

To figure out the path going forward, the OVA and Linux Foundation are actively seeking community participation and feedback. Some of the Kimchi options under consideration:

  • Federation versus export to OpenStack
  • Further storage and networking configurations – how advanced does it need to get?
  • Automation and tuning – how far should it go?
  • RESTful API development and usage
  • Adding knobs and dials versus keeping it sparse

Today Kimchi supports most basic networking and configurations. There is as yet no VLAN or clustering support in Kimchi.

Kimchi is poised to fill a central position in the KVM environment, able to speed adoption. What is most needed, however, is an active ecosystem of developers who can build out this sparse but elegant open source tool. To do that, IBM will need to give some attention to Kimchi to make sure it doesn’t get overlooked or lost in the slew of its sister open source initiatives like OpenStack, Linux itself, and even Eclipse. OpenStack, it appears, will be most critical, and it is a good sign that it already is at the top of the Kimchi to-do list.

And speaking of IBM opening up development, in an announcement earlier this week IBM said it will make its IBM Watson technology available as a development platform in the cloud to enable a worldwide community of software application providers who might build a new generation of apps infused with Watson’s cognitive computing intelligence.  Watson badly needed this; until now Watson has been an impressive toy for a very small club.

The move, according to IBM, aims to spur innovation and fuel a new ecosystem of entrepreneurial software application providers – ranging from start-ups and emerging, venture capital backed businesses to established players. To make this work IBM will be launching the IBM Watson Developers Cloud, a cloud-hosted marketplace where application providers of all sizes and industries will be able to tap into resources for developing Watson-powered apps. This will include a developer toolkit, educational materials, and access to Watson’s application programming interface (API). And they should do the same with Kimchi.

Enterprise 2013 Details System z and Power Technology and New Capabilities

October 25, 2013

IBM announced a lot of goodies for z and Power users at Enterprise 2013, wrapping up in Orlando today. There were no blockbuster announcements, like a new z machine—we’re probably 12-18 months away from that, and even then the first will likely focus on POWER8—but it brought a slew of announcements nonetheless. For a full rundown on what was announced click here.

Cloud and analytics—not surprisingly—loom large. For example, Hadoop and a variety of other capabilities have been newly cobbled together, integrated, optimized, and presented as new big data offerings or as new cloud solutions.  This was exemplified by a new Cognos offering for CFOs needing to create, analyze and manage sophisticated financial plans that can provide greater visibility into enterprise profitability or the lack thereof.

Another announcement featured a new IBM Entry Cloud Configuration for SAP on zEnterprise. This is a cloud-enablement offering combining high-performance technology and services to automate, standardize and accelerate day-to-day SAP operations for reduced operational costs and increased ROI. Services also were big at the conference.

Kicking off the event was a dive into data center economics by Steve Mills, Senior Vice President & Group Executive, IBM Software & Systems. Part of the challenge of optimizing IT economics, he noted, was that the IT environment is cumulative. Enterprises keep picking up more systems, hardware and software, as new needs arise but nothing goes away or gets rationalized in any meaningful way.

Between 2000 and 2010, Mills noted, servers had grown at a 6x rate while storage grew at a 69x rate. Virtual machines, meanwhile, were multiplying at the rate of 42% per year. Does anyone see a potential problem here?

Mills’ suggestion: virtualize and consolidate, and specifically, consolidate on large servers. His argument goes like this: most workloads experience variance in demand. But when you consolidate workloads with variance on a virtualized server, the variance of the sum is less, thanks to statistical multiplexing (which fits workloads into the gaps created by the variances). Furthermore, the more workloads you consolidate, the smaller the variance of the sum relative to the total load. His conclusion: bigger servers with capacity to run more workloads can be driven to higher average utilization levels without violating service level agreements, thereby reducing the cost per workload. Finally, the larger the shared processor pool, the more statistical benefit you get.
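
Mills’ statistical multiplexing argument is easy to sanity-check with a toy simulation; the workload numbers below are made up, and the only point is that peak-to-average headroom shrinks as independent variable workloads share a server:

```python
# Toy illustration of statistical multiplexing with made-up workload numbers.
import random

random.seed(42)
SAMPLES = 10_000
MEAN, SPREAD = 10.0, 8.0   # each workload averages 10 units, varying +/- 8

def provisioning_ratio(n):
    """Capacity needed (99.9th percentile of total demand) relative to
    average demand when n independent workloads share one server."""
    totals = sorted(
        sum(random.uniform(MEAN - SPREAD, MEAN + SPREAD) for _ in range(n))
        for _ in range(SAMPLES)
    )
    p999 = totals[int(0.999 * SAMPLES)]
    return p999 / (n * MEAN)

for n in (1, 5, 50):
    print(f"{n:3d} workloads -> provision {provisioning_ratio(n):.2f}x average demand")
# The required headroom per workload shrinks as more workloads share the box,
# so average utilization can rise without breaching service levels.
```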

On the basis of statistical multiplexing, the zEnterprise and the Power 795 are ideal choices for this. Depending on your workloads, just load up the host server, a System z or a big Power box, with as many cores as you can afford and consolidate as many workloads as practical.

Mills’ other cost savings tips: use flash to avoid the cost and complexity of disk storage. Also, eliminate duplicate applications—the fewer you run, the lower the cost. In short, elimination is the clearest path to saving money in the data center.

To illustrate the point, Jim Tussing from Nationwide described how the company virtualized and consolidated 60% of its 10,500 servers onto a few mainframes and saved $46 million over five years. It also allowed the company to delay the need for an additional data center for four years.

See, if DancingDinosaur was an actual data center manager it could have justified attendance at the entire conference based on the economic tips from just one of the opening keynotes and spent the rest of the conference playing golf. Of course, DancingDinosaur doesn’t play golf so it sat in numerous program sessions instead, which you will hear more about in coming weeks.

You can follow DancingDinosaur on twitter, @mainframeblog

Open POWER Consortium Aims to Expand the POWER Ecosystem beyond IBM

August 7, 2013

With IBM’s August 6 announcement of new POWER partners, including Google, IBM is aiming not only to expand the variety of POWER workloads but also to establish an alternative ecosystem to the Intel/x86 platform that continues to dominate general corporate computing. Through the new Open POWER Consortium, IBM will make POWER hardware and software available for open development for the first time, as well as offer open source POWER firmware, the software that controls basic chip functions. By doing this, IBM and the consortium can enable innovative customization in creating new styles of server hardware for a variety of computing workloads.

IBM has a long history of using open consortiums to grab a foothold in different markets, as it did with Eclipse (open software development tools), Linux (open portable operating system), KVM (open hypervisor and virtualization), and OpenStack (open cloud interoperability). In each case, IBM had proprietary technologies but could use the open source consortium strategy to expand market opportunities at the expense of entrenched proprietary competitors like Microsoft or VMware. The Open POWER Consortium opens a new front against Intel, which already is scrambling to fend off ARM-based systems and other lightweight processors.

The establishment of the Open POWER Consortium also reinforces IBM’s commitment to the POWER platform in the face of several poor quarters. The commitment to POWER has never really wavered, insists an IBM manager, despite what financial analysts might hint at. Even stronger evidence of that commitment to POWER is POWER8, which is on track for 2014 if not sooner, and POWER9, which is currently in development, he confirmed.

As part of its initial collaboration within the consortium, IBM reported that it and NVIDIA will integrate NVIDIA’s CUDA GPU technology with POWER. CUDA is a parallel computing platform and programming model that enables dramatic increases in computing performance by harnessing the power of the graphics processing unit (GPU); GPUs increasingly are being used to boost overall system performance, not just graphics performance. The two companies envision powerful computing systems based on NVIDIA GPUs and IBM POWER CPUs, an example of the new kind of systems the open consortium can produce.
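
For a flavor of what CUDA offload looks like from the host side, here is a minimal sketch using the PyCUDA bindings. The choice of PyCUDA is ours purely for illustration (it requires an NVIDIA GPU and the CUDA toolkit) and is not part of the IBM/NVIDIA announcement:

```python
# Minimal GPU-offload sketch via the PyCUDA bindings (illustrative only;
# requires an NVIDIA GPU and the CUDA toolkit installed).
import numpy as np
import pycuda.autoinit                 # initializes a CUDA context
import pycuda.driver as drv
from pycuda.compiler import SourceModule

mod = SourceModule("""
__global__ void scale(float *x, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= factor;   // each GPU thread handles one element
}
""")
scale = mod.get_function("scale")

x = np.arange(1024, dtype=np.float32)
scale(drv.InOut(x), np.float32(2.0), np.int32(x.size),
      block=(256, 1, 1), grid=(4, 1))   # 4 blocks x 256 threads = 1024 threads
print(x[:5])   # [0. 2. 4. 6. 8.]
```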

However, don’t expect immediate results. The IBM manager told DancingDinosaur that the fruits of any collaboration won’t start showing up until sometime next year. Even the Open POWER Consortium website has yet to post anything. The consortium is just forming up; IBM expects the public commitment of Google to attract other players, which IBM describes as the next generation of data-center innovators.

As for POWER users, this can only be a good thing. IBM is not reducing its commitment to the POWER roadmap, plus users will be able to enjoy whatever the new players bring to the POWER party, which could be considerable. In the meantime, the Open POWER Consortium welcomes any firm that wants to innovate on the POWER platform and participate in an open, collaborative effort.

An even more interesting question may be where else will IBM’s interest in open systems and open consortiums take it. IBM remains “very focused on open and it’s a safe bet that IBM will continue to support open technologies and groups that support that,” the IBM manager told DancingDinosaur.  IBM, however, has nothing to announce after the Open POWER Consortium. Hmm, might a z/OS open collaborative consortium someday be in the works?

SHARE will be in Boston next week. DancingDinosaur expects to be there and will report on the goings-on. Hope to see some of you there. There also are plans for a big IBM System z/Power conference, Enterprise Systems 2013, toward the end of October in Florida. Haven’t seen many details yet, but will keep you posted as they come in.

