Posts Tagged ‘Power Systems’

System z Takes Back-Office Role in IBM-Apple Deal

July 21, 2014

DancingDinosaur didn’t have to cut short his vacation and race back last week to cover the IBM-Apple agreement. Yes, it’s a big deal, but as far as System z shops go it won’t have much impact on their data center operations until late this year or 2015 when new mobile enterprise applications apparently will begin to roll out.

The deal, announced last Tuesday, promises “a new class of made-for-business apps targeting specific industry issues or opportunities in retail, healthcare, banking, travel and transportation, telecommunications, and insurance among others,” according to IBM. The mainframe’s role will continue to be what it has been for decades: the back-office processing workhorse. IBM is not porting iOS to the z or Power or i or any enterprise platform.

Rather, the z will handle transaction processing, security, and data management as it always has. With this deal, however, analytics appears to be assuming a larger role. IBM’s big data and analytics capability is one of the jewels it is bringing to the party, to be fused with Apple’s legendary consumer experience. IBM expects this combination—big data analytics and consumer experience—to produce apps that can transform specific aspects of how businesses and employees work using iPhone and iPad devices and ultimately, as IBM puts it, enable companies to achieve new levels of efficiency, effectiveness, and customer satisfaction—faster and easier than ever before.

In case you missed the point, this deal, or alliance as IBM seems to prefer, is about software and services. If any hardware gets sold as a result, it will be iPhones and iPads. Of course, IBM’s MobileFirst constellation of products and services stands to gain. Mainframe shops have been reporting a steady uptick in transactions originating from mobile devices for several years. This deal won’t slow that trend and might even accelerate it. The IBM-Apple alliance also should streamline and simplify working with and managing Apple’s mobile devices on an enterprise-wide basis.

According to IBM, its MobileFirst Platform for iOS will deliver the services required for an end-to-end enterprise capability, from analytics, workflow, and cloud storage to enterprise-scale device management, security, and integration. Enhanced mobile management includes a private app catalog, data and transaction security services, and a productivity suite for all IBM MobileFirst for iOS offerings. In addition to on-premises software solutions, all these services will be available on Bluemix—IBM’s development platform available through the IBM Cloud Marketplace.

One hope from this deal is that IBM will learn from Apple how to design user-friendly software and apply those lessons to the software it subsequently develops for the z and Power Systems. It would be interesting to see what Apple software designers might do to simplify using CICS.

Given the increasing acceptance of BYOD when it comes to mobile, data centers will still have to cope with the proliferation of operating systems and devices in the mobile sphere. Nobody is predicting that Android, Amazon, Google, or Microsoft will be exiting the mobile arena as a result, at least not anytime soon.

Finally, a lot of commentators weighed in on who wins or loses in the mobile market. Among IBM’s primary enterprise IT competitors, Oracle offers the Oracle Mobile Platform, which includes mobile versions of Siebel CRM, JD Edwards, PeopleSoft, and a few more. HP offers mobile app development and testing and a set of mobile application services that include planning, architecture, design, build, integration, and testing.

But if you are thinking in terms of enterprise platform winners and losers, IBM is the clear winner; the relationship with Apple is an IBM-exclusive partnership. No matter how good HP, Oracle, or any of IBM’s other enterprise rivals might be at mobile computing, without the tight Apple connection they are at a distinct disadvantage. And that’s before you even consider Bluemix, SoftLayer, MobileFirst, and IBM’s other mobile assets.

BTW, it’s not too early to start planning for IBM Enterprise 2014. Mark your calendar, Oct 6-10 at the Venetian in Las Vegas. This event should be heavily z and Power.

DancingDinosaur is Alan Radding. Follow him on Twitter @mainframeblog or at Technologywriter.com.

Bringing the System z into the Cloud via OpenStack

July 11, 2014

Last week DancingDinosaur looked at how organizations can extend the System z into the cloud, especially hybrid clouds. One key component, IBM SmartCloud Entry for the z, remained a bit unclear. DancingDinosaur had turned up conflicting reports as to whether the z was supported by SmartCloud Entry or not.

As you read last week: The easiest way to get started should be through IBM’s SmartCloud Entry and Linux on z. Good idea but just one catch: in the spring, IBM SmartCloud Entry for z was still only a statement of direction: “IBM intends to update IBM SmartCloud Entry to support the System z platform…” The product apparently didn’t exist. Or did it? DancingDinosaur found a Starter kit of IBM SmartCloud Entry for IBM System z. Go figure. (Two years ago DancingDinosaur wrote that SmartCloud Entry for z was imminent based on an IBM announcement that was later pulled.)

IBM just got back to DancingDinosaur with a clarification. It turns out IBM rebranded the product. The rebranded product family is now IBM Cloud Manager with OpenStack, announced in May. It provides support for the latest OpenStack release, Icehouse, and full access to the complete core OpenStack API set to help clients ensure application portability and avoid vendor lock-in.

Most importantly to DancingDinosaur readers, it unequivocally extends cloud management support to System z, in addition to Power Systems, PureFlex/Flex Systems, System x, and any other x86 environment. The new solution also supports IBM z/VM on System z, as well as PowerVC for PowerVM on Power Systems, to add more scalability and security to Linux environments. As of this writing, the Starter kit for IBM SmartCloud Entry for IBM System z was still live at the link above but don’t expect it to stay up for long.

IBM goes on to explain that the rebranded product is built on the foundation of IBM SmartCloud Entry. It offers a modular, flexible design that enables rapid innovation, vendor interoperability, and faster time-to-value. According to IBM it is an easy-to-deploy, simple-to-use cloud management offering that can deliver improved private cloud and service provider solutions with features like security, automation, usage metering, and multi-architecture management. You can access the technology through the OpenStack Marketplace here.

Expect to hear more about the z, hybrid clouds, and OpenStack at IBM Enterprise 2014 this coming October in Las Vegas.

DancingDinosaur is Alan Radding. Follow on Twitter, @mainframeblog and at Technologywriter.com.

SoftLayer Direct Link Brings Hybrid Cloud to System z and Power

June 26, 2014

Back in February, IBM announced that SoftLayer was integrating IBM Power Systems into its cloud infrastructure, a move that promised to deliver a level and breadth of services beyond what has traditionally been available over the cloud. Combined with new services and tools announced at the same time, this would help organizations deploy hybrid and private cloud environments.

Back then IBM included the System z in the announcement as well by bolstering its System z cloud portfolio with IBM Wave for z/VM. IBM Wave promises to provide rapid insight into an organization’s virtualized infrastructure with intelligent visualization, simplified monitoring and unified management. Specifically, Wave helps the organization more easily manage large numbers of virtual machines.

Now it is June, the snow has finally melted and IBM’s SoftLayer is introducing Direct Link to the computing public. Direct Link had previously been available to only a select few customers. Direct Link, in effect, is a specialized content delivery network for creating hybrid clouds. Organizations would connect their private IT infrastructure to public cloud resources by going directly to the SoftLayer platform, which streamlines delivery over the network. Direct Link users avoid the need to traverse the public Internet.

The focus here is on hybrid clouds. When an organization with a private cloud, say a mainframe hosting a large amount of IT resources and services behind the firewall, needs resources such as extra capacity or services it doesn’t have, it can turn to the public cloud for those extra resources or services. The combination of the private cloud and tightly connected public cloud resources form a hybrid cloud.  If you’re attending a webinar on hybrid clouds at this point the speaker usually says …and then you just punch out to the public cloud to get x, y, or z resource or service. It always sounds so simple, right?

As far as the System z goes, SoftLayer was not actually integrated with the z in the February announcement, although DancingDinosaur expects it will be eventually if IBM is serious about enterprise cloud computing. For now, the z sits in the on-premise data center, a private cloud so to speak. It runs CICS and DB2 and all the systems it is known for and, especially, security. From there, however, it can connect to an application server, dedicated or virtual, on the SoftLayer Cloud Server to form a Hybrid System z-Enterprise Cloud. As presented at SHARE this past spring, the resulting Hybrid System z-Cloud Enterprise Architecture (slides 46-49) provides the best of both worlds, secure transactions combined with the dynamics of the cloud.

Direct Link itself consists of a physical, dedicated network connection from your data center, on-premise private cloud, office, or co-location facility to SoftLayer’s data centers and private network through one of the company’s 18 network Points of Presence (PoPs) around the world. These PoPs reside within facilities operated by SoftLayer partners including Equinix, Telx, Coresite, Terremark, Pacnet, InterXion and TelecityGroup, which provide access for SoftLayer customers, especially those with infrastructure co-located in the same facilities.

Direct Link, essentially an appliance, eliminates the need to traverse the public Internet to connect to the SoftLayer private network. Direct Link enables organizations to completely control access to their infrastructure and services, the speed of their connection to SoftLayer, and how data is routed. In the process, IBM promises:

  • Higher network performance consistency and predictability
  • Streamlined and accelerated workload and data migration
  • Improved data and operational security

If you are not co-located in any of the above facilities operated by one of SoftLayer’s PoP partners, then it appears you will have to set up an arrangement with one of them. SoftLayer promises to hold your hand and walk you through the setup process.

Once you do have it set up, Direct Link pricing appears quite reasonable. Available immediately, Direct Link pricing starts at $147/month for a 1Gbps network connection and $997/month for a 10Gbps network connection.

According to Trevor Jones, writing for TechTarget, IBM’s pricing undercuts AWS slightly and Microsoft’s by far. Next month Microsoft, at a discounted rate for its comparable ExpressRoute service, will charge $600 per month for 1 Gbps and $10,000 per month for 10 Gbps. Amazon prices its Direct Connect service at $0.30 per hour for 1 Gbps and $2.25 per hour for 10 Gbps.
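A quick sanity check of those numbers makes the comparison concrete. The rates are the ones cited above; normalizing AWS’s hourly billing to a 730-hour average month is my assumption:

```python
# Rough monthly-cost comparison of the dedicated-link offerings cited above.
# SoftLayer and Microsoft quote flat monthly rates; AWS Direct Connect bills
# per port-hour, so convert using ~730 hours in an average month (assumption).
HOURS_PER_MONTH = 730

softlayer = {"1Gbps": 147, "10Gbps": 997}        # $/month, flat
microsoft = {"1Gbps": 600, "10Gbps": 10_000}     # $/month, discounted rate
aws_hourly = {"1Gbps": 0.30, "10Gbps": 2.25}     # $/port-hour
aws = {tier: round(rate * HOURS_PER_MONTH) for tier, rate in aws_hourly.items()}

for tier in ("1Gbps", "10Gbps"):
    print(f"{tier}: SoftLayer ${softlayer[tier]}/mo, "
          f"AWS ~${aws[tier]}/mo, Microsoft ${microsoft[tier]}/mo")
```

On that assumption AWS works out to roughly $219/month at 1 Gbps and about $1,640/month at 10 Gbps, which matches Jones’s conclusion: SoftLayer undercuts AWS slightly and Microsoft by far.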

Your System z or new Power server integrated with SoftLayer can provide a solid foundation for hybrid cloud nirvana. Just add Direct Link and make arrangements with public cloud resources and services. Presto, you have a hybrid cloud.

BTW, IBM Enterprise 2014 is coming in Oct. to Las Vegas. DancingDinosaur expects to hear a lot of the z and Power, SoftLayer, and hybrid clouds there.

DancingDinosaur is Alan Radding. Follow him on Twitter, @mainframeblog and at Technologywriter.com

IBM Edge2014: It’s All About the Storage

May 22, 2014

When your blogger, as a newbie programmer, published his first desktop application back in the prehistoric desktop computing era, it had to be distributed on consumer cassette tape. When buyers complained that it didn’t work, the problem was quickly traced to imprecise and inconsistent consumer cassette storage. Since the dawn of the computer era, it has always been about storage.

It still is. Almost every session at IBM Edge2014, whether on cloud or analytics or mobile, seemed to touch on storage in one way or another. Kicking it all off was Tom Rosamilia, Senior Vice President, IBM Systems & Technology Group, who elaborated on IBM’s main theme not just for Edge2014 but for IBM at large: Infrastructure Matters Because Business Outcomes Matter. And by infrastructure IBM mainly is referring to storage.

To reinforce his infrastructure-matters point Rosamilia cited a recent IBM study showing that 70% of top executives now recognize infrastructure as an enabler. However, just 10% reported their infrastructure was ready for the challenge. As an interesting aside, the study found 91% of the respondents’ customer-facing applications were using the System z, which only emphasizes another theme at IBM Edge2014—that companies need to connect systems of record with systems of engagement if they want to be successful.

In fact, IBM wants to speed up computing overall, starting with flash and storage. A study by the Aberdeen Group found that a one-second delay in page load resulted in a 7% loss in conversions, 11% fewer page views, and a 16% decrease in customer satisfaction. IBM’s conclusion: in dollar terms, this means that if your site typically earns $100,000 a day, this year you could lose $2.5 million in sales. Expect all IBM storage to be enabled for some form of flash going forward.
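The dollar math behind that conclusion is easy to verify; the $2.5 million figure follows directly from a 7% conversion loss (Aberdeen’s widely cited one-second-delay number) on $100,000 a day:

```python
# Back-of-envelope check of the revenue claim above: a 7% conversion loss
# on a site earning $100,000 a day, compounded over a full year.
daily_revenue = 100_000
conversion_loss = 0.07
annual_loss = daily_revenue * conversion_loss * 365
print(f"${annual_loss:,.0f}")  # roughly $2.5 million a year
```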

First announced at IBM Edge2014 were the FlashSystem 840 and the IBM FlashSystem V840, which includes integrated data virtualization through IBM’s SVC and its various components. It also boasts a more powerful controller capable of rich capabilities like compression, replication, tiering, thin provisioning, and more. Check out the details here.

Also at Edge2014 there was considerable talk about Elastic Storage. This is the storage you have always imagined. You can manage mixed storage pools of any device. Integrate with any OS. Write policies to it. It seems infinitely scalable. Acts as a universal cloud gateway. And even works with tape.

Sounds magical, doesn’t it? According to IBM, Elastic Storage provides automated tiering to move data across different storage media types. Infrequently accessed files can be migrated to tape and automatically recalled back to disk when required—sounds like EasyTier built in. Unlike traditional storage, it allows you to smoothly grow or shrink your storage infrastructure without application disruption or outages. And it can run on a cluster of x86 and POWER-based servers and can be used with internal disk, commodity storage, or advanced storage systems from IBM or other vendors. Half the speakers at the conference glowed about Elastic Storage. Obviously it exists, but it is not yet an actual named product. Watch for it; it is going to have a different name when finally released, probably later this year. No hint at what that name will be.
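Elastic Storage grew out of IBM’s GPFS, whose information lifecycle management policy engine already drives exactly this kind of policy-based tiering with SQL-like rules. As a sketch only, with hypothetical pool names, a rule migrating files untouched for 90 days to a tape pool might look something like:

```sql
/* Illustrative GPFS-style ILM rule; the pool names are hypothetical. */
RULE 'cold_to_tape'
  MIGRATE FROM POOL 'system'
  TO POOL 'ltfs_tape'
  WHERE CURRENT_TIMESTAMP - ACCESS_TIME > INTERVAL '90' DAYS
```

A recall back to disk on access, as described above, is handled by the file system rather than by the rule itself.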

IBM, at the conference, identified the enhanced XIV as the ideal cloud infrastructure. XIV eliminates complexity. It enables high levels of resiliency and ensures service levels. As one speaker said: “It populates LUNs and spreads the workload evenly. You don’t even have to load balance it.” Basically, it is grid storage that is ideal for the cloud.

LTFS (Linear Tape File System) was another storage technology that came up surprisingly frequently. Don’t assume that tape has no future, not judging from IBM Edge2014. LTFS provides a GUI that enables you to automatically move infrequently accessed data from disk to tape without the need for proprietary tape applications. Implementing LTFS Enterprise Edition allows you to replace disk with tape for tiered storage and lower your storage TCO by over 50%. Jon Toigo, a leading storage analyst, has some good numbers on tape economics that may surprise you.

Another sometimes overlooked technology is EasyTier, IBM’s storage tiering tool.  EasyTier has evolved into a main way for IBM storage users to capitalize on the benefits of Flash. EasyTier already has emerged as an effective tool for both the DS8000 and the Storwize V7000.  With EasyTier small amounts of Flash can deliver big performance improvements.

In the coming weeks DancingDinosaur will look at other IBM Edge 2014 topics.  It also is time to start thinking about IBM Enterprise 2014, which combines the System z and Power platforms. It will be at the Venetian in Las Vegas, Oct 6-10. IBM Enterprise 2014 is being billed as the premier enterprise infrastructure event.

BTW, we never effectively solved the challenge of distributing desktop programs until the industry came out with 5.25-inch floppy disks. Years later my children used the unsold floppies as little Frisbees.

Follow Alan Radding and DancingDinosaur on Twitter, @mainframeblog

IBM Edge2014 as Coming out Party for OpenStack

May 7, 2014

IBM didn’t invent OpenStack (Rackspace and NASA did), but IBM’s embrace of OpenStack in March 2013 as its standard for cloud computing made it a legit standard for enterprise computing. Since then IBM has made clear its intention to enable its product line, from the System z on down, for the OpenStack set of open source technologies. Judging from the number of sessions at IBM Edge2014 (Las Vegas, May 19-23 at the Venetian) that address one or another aspect of OpenStack, you might think of IBM Edge2014 almost as a coming out celebration for OpenStack and enterprise cloud computing.

OpenStack is a collection of open source technologies, the goal of which is to provide a scalable computing infrastructure for both public and private clouds. As such it has become the foundation of IBM’s cloud strategy, which is another way of saying it has become what IBM sees as its future. An excellent mini-tutorial on OpenStack, IBM, and the System z can be found at mainframe-watch-Belgium here.

At IBM Edge2014 OpenStack is frequently included in sessions on storage, cloud, and storage management.  Let’s take a closer look at a few of those sessions.

IBM Storage and Cloud Technologies

Presenter Christopher Vollmar offers an overview of the IBM storage platforms that contain cloud technologies or provide a foundation for creating a private storage cloud for block and file workloads. This overview includes IBM’s SmartCloud Virtual Storage Center, SmartCloud Storage Access, Active Cloud Engine, and XIV’s Hyper-Scale as well as IBM storage products’ integration with OpenStack.

OpenStack and IBM Storage

Presenters Michael Factor and Funda Eceral explain how OpenStack is rapidly emerging as the de facto platform for Infrastructure as a Service. IBM is working fast to pin down the integration of its storage products with OpenStack. This talk presents a high-level overview of OpenStack, with a focus on Cinder, the OpenStack block storage manager. They also will explain how IBM is leading the evolution of Cinder by improving the common base with features such as volume migration and the ability to change the SLAs associated with a volume in the OpenStack cloud. Already IBM storage products—Storwize, XIV, DS8000, GPFS and TSM—are integrated with OpenStack, enabling self-provisioning access to features such as EasyTier or Real-time Compression via standard OpenStack interfaces. Eventually, you should expect virtually all IBM products, capabilities, and services to work with and through OpenStack.

IBM XIV and VMware: Best Practices for Your Cloud

Presenters Peter Kisich and Carlos Lizarralde argue that IBM Storage continues to lead in OpenStack integration and development. They then introduce the core services of OpenStack while focusing on how IBM storage provides open source integration with Cinder drivers for Storwize, DS8000 and XIV. They also include key examples and a demonstration of the automation and management IBM Storage offers through the OpenStack cloud platform.

IBM OpenStack Hybrid Cloud on IBM PureFlex and SoftLayer

Presenter Eric Kern explains how IBM’s latest version of OpenStack is used to showcase a hybrid cloud environment. A pair of SoftLayer servers running in IBM’s public cloud are matched with a PureFlex environment locally hosting the OpenStack controller. He covers the architecture used to set up this environment before diving into the details around deploying workloads.

Even if you never get to IBM Edge2014 it should be increasingly clear that OpenStack is quickly gaining traction and destined to emerge as central to Enterprise IT, any style of cloud computing, and IBM. OpenStack will be essential for any private, public, and hybrid cloud deployments. Come to Edge2014 and get up to speed fast on OpenStack.

Alan Radding/DancingDinosaur will be there. Look for me in the bloggers lounge between and after sessions. Also watch for upcoming posts on DancingDinosaur about OpenStack and the System z and on OpenStack on Power Systems.

Please follow DancingDinosaur on Twitter, @mainframeblog.

3 Big Takeaways at IBM POWER8 Intro

April 24, 2014

POWER8 did not disappoint. IBM unveiled its latest generation of systems built on its new POWER8 technology on Wednesday, April 23.

DancingDinosaur sees three important takeaways from this announcement:

First, the OpenPOWER Foundation. It was introduced months ago and almost immediately forgotten. DancingDinosaur covered it at the time here. It had a handful of small partners. Only one, Google, was significant, and it was hard to imagine Google bringing out open source POWER servers. Now the Foundation has several dozen members and it still is not clear what Google is doing there, but the Foundation clearly is gaining traction. You can expect more companies to join the Foundation in the coming weeks and months.

With the Foundation IBM swears it is committed to a true open ecosystem, one where even competitors can license the technology and bring out their own systems. At some point don’t be surprised to see white box Power systems priced below IBM’s. More likely in the short term will be specialized Power appliances. What you get as a foundation member is the POWER SoC design, bus specifications, reference designs, and open source firmware, OS, and hypervisor. It also includes access to little endian Linux, which will ease the migration of software to POWER. BTW, Google is listed as a member focusing on open source firmware and on the cloud and high performance computing.

Second, the POWER8 processor itself and the new family of systems. The processor, designed for big data, will run more concurrent queries, and run them up to 50x faster than x86, with 4x more threads per core than x86. Its I/O bandwidth is 5x faster than POWER7. It can handle 1TB of memory with 4-6x more memory bandwidth and more than 3x more on-chip cache than an x86. The processor itself uses 22nm circuits and runs at 2.5 to 5 GHz.

POWER8 sports an eight-threaded processor. That means each of the 12 cores in the CPU can coordinate the processing of eight sets of instructions at a time, for a total of 96 threads. Each thread consists of a set of related instructions making up a discrete task within a program. By designating sections of an application that can run as separate threads and coordinating the results, a chip can accomplish more work than a single-threaded chip, IBM explains. By comparison, IBM reports Intel’s Ivy Bridge E5 Xeon CPUs are dual-threaded, with up to eight cores handling 16 threads at a time (compared to 96 with POWER8). Yes, there is some coordination overhead incurred as more threads are added. Still, the POWER8 chip should attract interest among white box manufacturers and users of large numbers of servers processing big data.
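The thread arithmetic reduces to a single multiplication, using the core and thread counts cited above:

```python
# Concurrent hardware threads per chip, per the figures cited above.
def concurrent_threads(cores: int, threads_per_core: int) -> int:
    return cores * threads_per_core

power8 = concurrent_threads(cores=12, threads_per_core=8)     # SMT8
ivy_bridge = concurrent_threads(cores=8, threads_per_core=2)  # Hyper-Threading
print(power8, ivy_bridge)  # 96 16
```

That sixfold gap in concurrent threads is the headline number IBM is pushing for big data workloads.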

Third is CAPI, your newest acronym. If something is going to be a game-changer, this will be it. The key is to watch for adoption. The Coherent Accelerator Processor Interface (CAPI) sits directly on the POWER8 and works with the same memory addresses that the processor uses. Pointers are dereferenced the same as in the host application. CAPI, in effect, removes OS and device driver overhead by presenting an efficient, robust, durable interface. In the process, it offloads complexity.

CAPI can reduce the typical seven-step I/O model flow to three steps (shared memory/notify accelerator, acceleration, and shared memory completion). The advantages revolve around virtual addressing and data caching through shared memory and reduced latency for highly referenced data. [see accompanying graphic] It also enables an easier, natural programming model with traditional thread level programming and eliminates the need to restructure the application to accommodate long latency I/O.  Finally it enables apps otherwise not possible, such as those requiring pointer chasing.

 CAPI Picture

It’s too early to determine if CAPI is a game changer but IBM has already started to benchmark some uses. For example, it ran NoSQL on POWER8 with CAPI and achieved a 5x cost reduction. When combined with IBM’s TMS flash it found it could:

  • Attack problem sets otherwise too big for the memory footprint
  • Deliver fast access to small chunks of data
  • Achieve high throughput for data or simplify object addressing through memory semantics.

CAPI brings programming efficiency and simplicity. It uses the PCIe physical interface for the easiest programming and fastest, most direct I/O performance. It enables better virtual addressing and data caching. Although it was intended for acceleration it works well for I/O caching. And it has been shown to deliver a 5x cost reduction with equivalent performance when attaching to flash.  In summary, CAPI enables you to regain infrastructure control and rein in costs to deliver services otherwise not feasible.

It will take time for CAPI to catch on. Developers will need to figure out where and how best to use it. But with CAPI as part of the OpenPOWER Foundation expect to see work taking off in a variety of directions. At a pre-briefing a few weeks ago, DancingDinosaur was able to walk through some very interesting CAPI demos.

As for the new POWER8 systems lineup, IBM introduced six one- or two-socket systems, some Linux-only, others supporting all POWER operating systems. The systems, reportedly, will start below $8,000.

You can follow Alan Radding/DancingDinosaur on Twitter: @mainframeblog. Also, please join me at IBM Edge2014, this May 19-23 at the Venetian in Las Vegas.  Find me in the bloggers lounge.

The Future of IBM Lies in the Cloud

March 13, 2014

In her annual letter to stockholders IBM CEO Virginia Rometty made it clear that the world is being forever altered by the explosion of digital data and by the advent of the cloud. So, she intends IBM to “remake the enterprise IT infrastructure for the era of cloud.” This is where she is leading IBM.

DancingDinosaur thinks she has it right. But where does that leave this blog, which was built on the System z, Power Systems, and IBM’s enterprise systems? Hmm.

Rometty has an answer for that buried far down in her letter: “We are accelerating the move of our Systems product portfolio—in particular, Power and storage—to growth opportunities and to Linux, following the lead of our successful mainframe business.”

The rapidly emerging imperatives of big data, cloud computing, and mobile/social require enterprise-scale computing in terms of processing power, capacity, availability, security, and all the other -ilities that have long been the hallmark of the mainframe and IBM’s other enterprise-class systems. She goes so far as to emphasize the point: “Let me be clear—we are not exiting hardware. IBM will remain a leader in high-performance and high-end systems, storage and cognitive computing, and we will continue to invest in R&D for advanced semiconductor technology.”

You can bet that theme will be continued at the upcoming Edge 2014 conference May 19-23 in Las Vegas. The conference will include an Executive program, a Technical program with 550 expert technical sessions across 14 tracks, and a partner program. It’s being billed as an infrastructure innovation event and promises a big storage component too. Expect to see a lot of FlashSystems and XIV, which has a new pay-as-you-go pricing program that will make it easy to get into XIV and scale it fast as you need it. You’ll probably also encounter some other new go-to-market strategies for storage.

As far as getting to the cloud, IBM has been dropping billions to build out about as complete a cloud stack as you can get. SoftLayer, the key piece, was just the start. BlueMix, an implementation of IBM’s Open Cloud Architecture, leverages Cloud Foundry to enable developers to rapidly build, deploy, and manage their cloud applications while tapping a growing ecosystem of available services and runtime frameworks, many of which are open source. IBM will provide services and runtimes into the ecosystem based on its already extensive and rapidly expanding software portfolio. BlueMix is the IBM PaaS offering that complements SoftLayer, its IaaS offering. Cloudant, the most recent acquisition, brings database as a service (DBaaS) to the stack. And don’t forget IBM Wave for z/VM, which virtualizes and manages Linux VMs, a critical cloud operation for sure. With this conglomeration of capabilities IBM is poised to offer something cloud-like to just about any organization. Plus, tying WebSphere and its other middleware products to SoftLayer bolsters the cloud stack that much more.

And don’t think IBM is going to stop here. DancingDinosaur expects to see more acquisitions, particularly when it comes to hybrid clouds and what IBM calls systems of engagement. Hybrid clouds, for IBM, link systems of engagement—built on mobile and social technologies where consumers are engaging with organizations—with systems of record, the main workloads of the System z and Power Systems, where data and transactions are processed.

DancingDinosaur intends to be at Edge 2014 where it expects to see IBM detailing a lot of its new infrastructure and demonstrating how to use it. You can register for Edge 2014 here until April 20 and grab a discount.

Follow DancingDinosaur on Twitter: @mainframeblog

Goodbye X6 and IBM System x

January 24, 2014

Seems just last week IBM was touting the new X6-based systems, the latest in its x86 System x server lineup.  Now the X6 and the entire System x line is going to Lenovo, which will acquire IBM’s x86 server business.  Rumors had been circulating about the sale for the last year, so often that you stopped paying attention to them.

The sale includes System x, BladeCenter and Flex System blade servers and switches, x86-based Flex integrated systems, NeXtScale and iDataPlex servers and associated software, and blade networking and maintenance operations. The purchase price is approximately US $2.3 billion, about two billion of which will be paid in cash and the balance in Lenovo stock.

Definitely NOT part of the sale are the System z, Power Systems, Storage Systems, Power-based Flex servers, and PureApplication and PureData appliances.  These are considered part of the IBM Enterprise Systems portfolio.  This commitment to the z and other enterprise systems is encouraging, especially in light of the latest IBM quarterly financial statement in which all the system hardware platforms did poorly, including System x.

DancingDinosaur’s planned follow-up to last week’s X6 column, in anticipation of a reported upcoming February briefing on X6 speeds and feeds, is now unlikely. IBM pr folks said no such briefing is planned.

Most of the System x team appears to be departing with the products. Approximately 7,500 IBM employees around the world, including those based at major locations such as Raleigh, Shanghai, Shenzhen and Taipei, are expected to be offered employment by Lenovo, according to the announcement.

IBM, however, may be investing more aggressively than ever elsewhere. Recently, IBM announced that it will invest more than $1 billion in the new IBM Watson Group, and $1.2 billion to expand its global cloud computing footprint to 40 data centers worldwide in 15 countries across five continents. It also announced bolstering the SoftLayer operation, sort of a combined IaaS and global content delivery network, plus earlier investments in Linux, OpenStack, and various other initiatives. DancingDinosaur will try to follow it all for you, along with the System z and other enterprise IBM platforms.

Please follow DancingDinosaur on Twitter: @mainframeblog

IBM Leverages High End Server Expertise in New X6 Systems

January 17, 2014

If you hadn’t noticed how x86 systems have matured over the past decade, you might be surprised by yesterday’s introduction of IBM’s newest entry in the x86 world, the X6. The X6 is the latest rev of IBM’s eX5. If you didn’t already consider the eX5 enterprise-class, here’s what IBM says of the X6: support for demanding mission- and business-critical workloads, a better foundation for virtualization of enterprise applications, and infrastructure that facilitates a private or hybrid cloud model. Sound familiar? IBM has often said the same things about its Power Systems and, of course, the zEnterprise.

As the sixth generation of IBM’s EXA x86 technology, it promises speed (although actual speeds and feeds won’t be revealed for another month), 3x the memory of the previous generation, high availability features that increase reliability, flash to boost on-board memory, and lower cost. IBM hasn’t actually said anything specific about pricing; published reports put X6 systems starting at $10k.

More specifically, the flash boost consists of integrated eXFlash memory-channel storage: up to 12.8 terabytes of ultrafast, DIMM-based flash sitting close to the processor. This should increase application performance by providing what IBM claims is the lowest system write latency available. For database operations, the lower latency can cut licensing costs and reduce or eliminate the need for external SAN/NAS storage units. The result should approach in-memory performance (although again, we have to wait for the actual speeds, feeds, and benchmarks).

The new X6 also borrows from the System z in its adoption of compute book terminology to describe its packaging, adding a storage book as well. The result: a modular, scalable compute book design that supports multiple generations of CPUs and, IBM promises, can reduce acquisition costs by up to 28% compared to one competitive offering. (Finally some details: the 28% acquisition cost savings is based on pricing of the x3850 X6 at its 2/18 announcement vs. current pricing of a comparable x86-based system that includes 2 x Intel Xeon E7-4820 [v1] processors, 1TB of memory [16GB RDIMMs], 3.6TB of HDD storage, and a dual-port 10GbE SFP+ controller. The x3850 X6 includes 2 compute books, 2 x Intel Xeon E7 processors, 1TB of memory [16GB RDIMMs], 3.6TB of HDD storage, and a dual-port 10GbE SFP+ controller.)

X6 also provides stability and flexibility across forthcoming technology generations: the compute/storage book design lets users scale up now and upgrade efficiently later, snapping additional books into the chassis as they need more resources. Fast set-up and configuration patterns simplify deployment and life-cycle management.

In short, the book design, long a hallmark of the System z, brings a number of advantages. For starters, you can put multiple generations of technology in the same chassis with no need to rip-and-replace or reconfigure, which lets you stretch and amortize costs in a variety of ways. IBM also adds RAS capabilities, another hallmark of the z; in the X6 these include features like memory page retire, advanced double chip kill, the IBM MEH algorithm, multiple storage controllers, and double, triple, or quadruple memory options.

Server models supported by the X6 architecture currently include the System x3850 X6 four-socket system, the System x3950 X6 eight-socket system, and the IBM Flex System x880 scalable compute nodes. IBM also is introducing the System x3650 M4 BD storage server, a two-socket rack server supporting up to 14 drives and delivering up to 56 terabytes of high-density storage — the largest available in the industry, according to IBM. (More tidbits from the coming speeds and feeds: HP two-socket servers max out at 48 TB with 12 x 3.5″ drives, and Dell two-socket servers at 51.2 TB with 12 x 3.5″ and 2 x 2.5″ drives. Separately, IBM claims X6 delivers 46% greater performance, based on Intel Internal Test Report #1310 using the SPECjbb2013 benchmark, July 2013.) IBM’s conclusion: X6 is ideally suited for distributed scale-out of big data workloads.
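The capacity comparison in that parenthetical is easy to sanity-check with a little arithmetic. The sketch below uses only the figures quoted above (each vendor’s claimed maximum per-server storage) to work out the density edge IBM is touting; note this says nothing about the separate 46% performance claim, which comes from a SPECjbb benchmark, not storage.

```python
# Maximum per-server storage capacities quoted in the announcement (in TB).
max_storage_tb = {
    "IBM x3650 M4 BD": 56.0,   # 14 drives
    "HP two-socket": 48.0,     # 12 x 3.5-inch drives
    "Dell two-socket": 51.2,   # 12 x 3.5-inch plus 2 x 2.5-inch drives
}

# Per-drive capacity implied for the IBM box: 56 TB across 14 drives.
ibm_tb = max_storage_tb["IBM x3650 M4 BD"]
print(f"Implied IBM drive size: {ibm_tb / 14:.1f} TB")

# Raw-capacity edge over each competitor, as a percentage.
for vendor, tb in max_storage_tb.items():
    if not vendor.startswith("IBM"):
        edge = (ibm_tb / tb - 1) * 100
        print(f"IBM vs {vendor}: {edge:.1f}% more raw capacity")
```

Run against the quoted numbers, the implied drives are 4 TB each, and the IBM box offers roughly 16.7% more raw capacity than the HP figure and about 9.4% more than the Dell figure.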

The X6 systems come with a reference architecture that simplifies deployment. To make it even simpler, maybe even bullet-proof, IBM also is introducing the X6 as a set of packaged solutions. These include:

  • IBM System x Solution for SAP HANA on X6
  • IBM System x Solution for SAP Business Suite on X6
  • IBM System x Solution for VMware vCloud Suite on X6
  • IBM System x Solution for Microsoft SQL Data Warehouse on X6
  • IBM System x Solution for Microsoft Hyper-V on X6
  • IBM System x Solution for DB2 with BLU Acceleration on X6

These are optimized and tuned in advance for database, analytics, and cloud workloads.

So, the X6 bottom line according to IBM: more performance at 40%+ lower cost; multiple generations in one chassis; 3x more memory and higher system availability; expanded use of flash and more storage options; and integrated, packaged solutions for easy, worry-free deployment of data analytics, virtualization, and cloud workloads.

IBM packed a lot of goodies into the X6. DancingDinosaur will take it up again when IBM presents the promised details. Stay tuned.

Follow DancingDinosaur on Twitter: @mainframeblog

IBM Commits $1B to Drive Watson into the Mainstream

January 10, 2014

IBM is ready to propel Watson beyond Jeopardy, its initial proof-of-concept, and into mainstream enterprise computing. To that end, it announced plans to spend more than $1 billion on the recently formed Watson business unit, an amount that includes $100 million in venture investments to build an ecosystem of entrepreneurs developing Watson-powered apps.

In addition, companies won’t need racks of Power servers to run Watson. With a series of announcements yesterday IBM unveiled plans to deliver Watson capabilities as business-ready cloud services. The announcement focused on three Watson services: 1) Watson Discovery Advisor for research and development projects in industries such as pharmaceuticals, publishing, and biotechnology; 2) Watson Analytics to deliver visualized big data insights based on questions posed in natural language by any business user; and 3) IBM Watson Explorer to more easily uncover and share data-driven insights across the enterprise.

DancingDinosaur has been following Watson since its Jeopardy days and has long since gotten over the disappointment that Watson didn’t run on the Power side of a hybrid zEnterprise. Meanwhile, IBM has managed to shrink Watson considerably. Today Watson runs 24x faster (a 2,400% performance improvement) and is 90% smaller; IBM has shrunk it from the size of a master bedroom to three stacked pizza boxes. You don’t even need to locate it in your data center; you can run it in the cloud.

Following the introduction of Watson, IBM was slow to build on that achievement. It focused on healthcare and financial services, use cases that appeared to be no-brainers. Eventually it experienced success, particularly in healthcare, but the initial customers came slowly and the implementations appeared cumbersome.

Watson, at least initially, wasn’t going to be a simple deployment. It needed a ton of Power processors. It also needed massive amounts of data; in healthcare IBM collected what amounted to the entire library of the world’s medical research and knowledge. And it needed applications that took advantage of Watson’s formidable yet unusual capabilities.

The recent announcements of delivering Watson via the cloud and committing to underwrite application developers definitely should help. And yesterday’s announcement of what amounts to three packaged Watson services should speed deployment.

For example, Watson Analytics, according to IBM, removes common impediments in the data discovery process, enabling business users to quickly and independently uncover new insights in their data. Using sophisticated analytics aided by Watson’s natural language interface, Watson Analytics automatically prepares the data, finds the most important relationships, and presents the results in an easy-to-interpret, interactive visual format. As a result, business users are no longer limited to predefined views or static data models; they can apply their own knowledge of the business to ask and answer new questions as they emerge, and quickly make decisions based on Watson Analytics’ data-driven visualizations.
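To make the workflow concrete, here is a minimal sketch of how a client might package a business user’s plain-English question for a cloud natural-language analytics service of this kind. To be clear, the function, field names, and dataset identifier below are all invented for illustration; IBM had not published a Watson Analytics API at the time of this announcement.

```python
import json

def build_query(question, dataset_id):
    """Package a plain-English question and a dataset reference as a JSON
    request body (hypothetical shape; not an actual IBM API)."""
    payload = {
        "dataset": dataset_id,
        "question": question,                # natural language, no SQL required
        "response_format": "visualization",  # ask for a chart, not a raw table
    }
    return json.dumps(payload)

# A business user types a question; the client ships it off as-is.
body = build_query(
    "Which products drove the revenue increase last quarter?",
    "sales-2013-q4",  # hypothetical dataset identifier
)
print(body)
```

The point of the sketch is the division of labor the paragraph describes: the user supplies only the question and a pointer to the data, and the service is responsible for preparing the data, finding relationships, and choosing the visualization.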

Behind the new Watson services lies IBM Watson Foundations, described as a comprehensive, integrated set of big data and analytics capabilities that enable enterprises to find and capitalize on actionable insights. Basically, it amounts to a set of user tools and capabilities to tap into all relevant data – regardless of source or type – and run analytics to gain fresh insights in real-time. And it does so securely across any part of an enterprise, including revenue generation, marketing, finance, risk, and operations.  Watson Foundations also includes business analytics with predictive and decision management capabilities, information management with in-memory and stream computing, and enterprise content management packaged into modular offerings. As such it enables organizations of any size to address immediate needs for decision support, gain sustainable value from their initial investments, and grow from there.

This apparently sounded good to Singapore’s DBS Bank, which will deploy Watson cognitive computing capabilities to deliver a next-generation client experience. For starters, DBS intends to apply Watson to its wealth management business to improve the advice and experience delivered to affluent customers. The bank is counting on cloud-based Watson to process enormous amounts of information and to understand and learn from each interaction at unprecedented speed. This should greatly increase the bank’s ability to quickly analyze, understand, and respond to the vast amounts of data it is accumulating.

Specifically, DBS will deploy IBM’s cloud-based Watson Engagement Advisor solution, to be rolled out in the second half of the year. From there the bank reportedly plans to extend these capabilities progressively to its other businesses.

For fans of cognitive computing and Watson, the announcements represent a much-awaited evolution in IBM’s strategy, one that promises to make cognitive computing and the natural language power of Watson usable by mainstream enterprises. How excited fans should get, however, depends on the specifics of IBM’s pricing and packaging for these offerings. And with a $1 billion investment to recoup, don’t expect loss-leader pricing from IBM.

Follow DancingDinosaur on Twitter: @mainframeblog

