Posts Tagged ‘zEnterprise’

The Next Generation of Mainframers

March 6, 2014

With seemingly every young person with any technology inclination aiming to become the next WhatsApp and walk away with some of Facebook’s millions, it is fair to wonder: where is the next generation of mainframers going to come from, and who are they going to be?

The answer: IBM is lining them up now. As the mainframe turns 50 you’ll have a chance to meet some of these up-and-coming mainframers as part of IBM’s 50th Mainframe Anniversary celebration in New York on April 8, when IBM announces the winners of the World Championship round of its popular Master the Mainframe competition.

According to IBM, the Championship is designed to assemble the best university students from around the globe who have demonstrated superior technical skills through participation in their regional IBM Master the Mainframe Contests. Out of the 20,000 students who have engaged in country-level Master the Mainframe Contests over the last three years, the top 44 students from 22 countries have been invited to participate in the inaugural IBM Master the Mainframe World Championship.

These students will spend the month of March working through the Systems of Engagement concept, an expansion of the traditional Systems of Record—core transaction systems—that have been the primary workload of mainframe computing. The students will deploy Systems of Record mainframe business applications written in Java and COBOL using DB2 for z/OS APIs to demonstrate how the Systems of Engagement concept takes full advantage of the mainframe’s advanced capabilities. In short, the mainframe is designed to support tomorrow’s most demanding complex workloads: Big Data, Cloud, and Mobile computing, all with the most effective enterprise-class security. The students will showcase their applications on April 7, 2014 in New York City, where judges will determine which student earns the distinction of “Master the Mainframe World Champion.”
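
For readers curious what that Java-meets-DB2-for-z/OS pattern looks like in practice, here is a minimal sketch: a small system-of-engagement front end reading from a system-of-record table over JDBC. The host, port, database, credentials, and ACCOUNTS table are hypothetical placeholders; only the general shape of a type 4 JDBC connection to DB2 is assumed.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Minimal sketch: a system-of-engagement front end reading from a
// DB2 for z/OS system of record over JDBC (type 4 connectivity).
// URL, credentials, and the ACCOUNTS table are hypothetical.
public class AccountLookup {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:db2://zos.example.com:446/LOCDB01"; // hypothetical host/port/DB
        try (Connection con = DriverManager.getConnection(url, "appuser", "secret");
             PreparedStatement ps = con.prepareStatement(
                     "SELECT ACCT_ID, BALANCE FROM ACCOUNTS WHERE ACCT_ID = ?")) {
            ps.setString(1, args[0]);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.printf("%s: %s%n",
                            rs.getString("ACCT_ID"), rs.getBigDecimal("BALANCE"));
                }
            }
        }
    }
}
```

The point of the exercise is that the COBOL transaction logic and the DB2 data stay where they are; the engagement layer simply reaches them through standard interfaces.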

Representing the United States are Mugdha Kadam from the University of Florida, Elton Cheng from the University of California San Diego, and Rudolfs Dambis from the University of Nevada Las Vegas. You can follow the progress of the competitors here.  After March 17 the site will include a leaderboard so you can follow your favorites. No rumors of betting pools being formed yet but it wouldn’t surprise DancingDinosaur.  Win or not, each competitor should be a prime candidate if your organization needs mainframe talent.

This is part of IBM’s longstanding System z Academic Initiative, which has been expanding worldwide and now encompasses over 64,000 students at more than 1000 schools across 67 countries.  And now high school students are participating in the Master the Mainframe competition. Over 360 companies are actively recruiting from these students, including Baldor, Dillards, JB Hunt, Wal-mart, Cigna, Compuware, EMC, Fidelity, JP Morgan Chase, and more.

Said Jeff Gill of VISA: “Discovering IBM’s Academic Initiative has been a critical success factor in building a lifeline to our future—a new base of Systems Engineers and Applications Developers who will continue to evolve our mainframe applications into flexible open enterprise solutions while maintaining high volume / high availability demands. Without the IBM Academic Initiative, perhaps we could have found students with aptitude – but participation in the Academic Initiative demonstrates a student’s interest in mainframe technology which, to us, translates to a wise long-term investment.” Gill is one of the judges of the Master the Mainframe World Championship.

Added Martin Kennedy of Citigroup: “IBM’s Master the Mainframe Contest offers a great resource to secure candidates and helps the company get critical skills as quickly as possible.”

The Master the Mainframe Championship, and even the entire 50th Anniversary celebration that will continue all year, are not really IBM’s primary mainframe thrust this year. IBM’s real focus is on emphasizing the forward-moving direction of the mainframe. As IBM puts it: “By continually adapting to trends and evolving IT, we’re driving new approaches to cloud, analytics, security and mobile computing to help tackle challenges never before thought possible. The pioneering innovations of the mainframe all serve one mission—deliver game-changing technology that makes the extraordinary possible and improves the way the world works.”

DancingDinosaur covers the mainframe and other enterprise-class technology. Watch this blog for more news on the mainframe and other enterprise systems including Power, enterprise storage, and enterprise-scale cloud computing.

With that noted, please plan to attend Edge 2014, May 19-23 in Las Vegas. Being billed as an infrastructure and storage technology conference, it promises to be an excellent follow-on to last year’s Edge conference.  DancingDinosaur will be there, no doubt hanging out in the blogger’s lounge where everyone is welcome. Watch this blog for upcoming details on the most interesting sessions.

And follow DancingDinosaur on Twitter, @mainframeblog

How the 50 Year-Old Mainframe Remains Relevant

February 25, 2014

The mainframe turns 50 years old this year, and the many pundits and experts who predicted it would be long gone by now must be scratching their heads. Yes, it is still around, and it has acquired over 260 new accounts just since the zEnterprise launch. It also has shipped over 320 hybrid computing units (not to be confused with zBX chassis alone) since the zBX was introduced and kicked off hybrid mainframe computing.

As for MIPS, IBM experienced a MIPS decline last quarter, but that follows the largest MIPS shipment in mainframe history a year earlier, resulting in a two-year CAGR of +11%. (Mainframe sales follow the new product release cycle in a predictable pattern.) IBM brought out the last System z release, the zEC12, faster than the mainframe’s historic release cycle. Let’s hope IBM repeats the quick turnaround with the next release.

Here’s what IBM is doing to keep the mainframe relevant:

  • Delivered steady price/performance improvements with each release. With entry-level BC-class pricing and the System z Solution Edition programs you can end up with a mainframe system that is as competitive as, or better than, x86-based systems while being more secure and more reliable out of the box.
  • Adopted Linux early, before it had gained the widespread acceptance it has today. Last year over three-quarters of the top 100 enterprises had IFLs installed. This year IBM reports a 31% increase in IFL MIPS. In at least two cases where DancingDinosaur recently interviewed IT managers, Linux on z was instrumental in bringing their shops to the mainframe.
  • Support for SOA, Java, web services, and cloud, mobile, and social computing continues to put the System z at the front of the hot trends. It also plays prominently with big data and analytics. Who ever thought the mainframe would be interacting with RESTful APIs? Certainly not DancingDinosaur’s computer teacher back in the dark ages.
  • Continued delivery of unprecedented scalability, reliability, and security at a time when the volumes of transactions, data, workloads, and users are skyrocketing.  (IDC predicts millions of apps, billions of users, and trillions of things connected by 2020.)
  • Built a global System z ecosystem of tools and technologies to support cloud, mobile, big data/analytics, social, and non-traditional mainframe workloads. This includes acquisitions like SoftLayer and CSL Wave, the latter delivering IBM Wave for z/VM, a simplified and cost-effective way to harness the consolidation capabilities of the IBM System z platform along with its ability to host the workloads of tens of thousands of commodity servers. The mainframe today can truly be a fully fledged cloud player.

And that just touches on the mainframe platform advantages. While others boast of virtualization capabilities, the mainframe comes 100% virtualized out of the box with virtualization at every level.  It also comes with a no-fail redundant architecture and built-in networking. 

Hybrid computing is another aspect of the mainframe that organizations are just beginning to tap.  Today’s multi-platform compound workloads are inherently hybrid, and the System z can manage the entire multi-platform workload from a single console.

The mainframe anniversary celebration, called Mainframe50, officially kicks off in April, but interest already is ramping up. A report from Pulse 2014 this week suggests IBM jumped the gun a bit, emphasizing how the z provides new ways to innovate while tackling challenges previously out of reach.

Pulse 2014, it turns out, offered 38 sessions on System z topics, of which 27 featured analysts or IBM clients. These sessions addressed key opportunities and challenges for today’s mainframe environments and the latest technology solutions for meeting them, including OMEGAMON, System Automation, NetView, GDPS, Workload Automation, Tivoli Asset Discovery for z/OS, and cloud.

One session featured analyst Phil Murphy, Vice President and Principal Analyst at Forrester Research, discussing the critical importance of a robust infrastructure in a mixed mainframe/distributed cloud environment—which is probably the future most DancingDinosaur readers face—and how it can help fulfill the promise of cloud value in real time.

Another featured mainframe analyst Dot Alexander from Wintergreen Research who looked at how mainframe shops view executing cloud workloads on System z. The session focused on the opportunities and challenges, private and hybrid cloud workload environments, and the impact of scalability, standards, and security.

But the big celebration is planned for April 8 in NYC. There IBM promises to make new announcements, launch new research projects, and generally focus on the mainframe’s future.  A highlight promises to be Showcase 20, which will focus on 20 breakthrough areas referred to by IBM as engines of progress.  The event promises to be a sellout; you should probably talk to your System z rep if you want to attend. And it won’t stop on April 8. IBM expects to continue the Mainframe50 drumbeat all year with new announcements, deliverables, and initiatives. Already in February alone IBM has made a slew of acquisitions and cloud announcements that will touch every mainframe shop with any cloud interests (which should be every mainframe shop at one point or another).

In coming weeks stay tuned to DancingDinosaur for more on Mainframe50. Also watch this space for details of the upcoming Edge 2014 conference, with an emphasis on infrastructure innovation coming to Las Vegas in May.

Please follow DancingDinosaur on Twitter, @mainframeblog

2014 to be Landmark Year for the Mainframe

February 10, 2014

The official announcement is still a few weeks away and the big event won’t take place until April, but the Internet is full of items about the 50th anniversary of the mainframe. Check some out here, here, and here.

In 1991 InfoWorld editor Stewart Alsop predicted that on March 15, 1996 an InfoWorld reader would unplug the last mainframe. Alsop wrote many brilliant things about computing over the years, but this statement will forever stand out as one of the least informed, as subsequent events amply demonstrated. That statement, however, later became part of the inspiration for the name of this blog, DancingDinosaur. The mainframe did not march inexorably to extinction like the dinosaurs, as many, many pundits predicted.

It might have, but IBM made some smart moves over the years that ensured the mainframe’s continued relevance for years to come.  DancingDinosaur marks 2000 as a key year in the ongoing relevance of the mainframe; that was the year IBM got serious about Linux on the System z. It was not clear then that Linux would become the widely accepted mainstream operating system it is today.  Last year over three-quarters of the top 100 enterprises had IFLs installed.  There is no question that Linux on the System z has become mainstream.

But it wasn’t Linux alone that ensured the mainframe’s continued relevance. Java enables the development of distributed-style workloads on the System z, which is only further advanced by WebSphere on z and SOA on z. Today’s hottest trends—cloud, big data/analytics, mobile, and social—can all be handled on the z: cloud computing on z, big data/analytics/real-time analytics on z, mobile computing on z, and even social on z.

Finally, there is the Internet of things. This is a natural for the System z, especially if you combine it with MQTT, an open source transport protocol that enables lightweight pub/sub messaging across mobile networks. With the z you probably will also want to combine it with the Really Small Message Broker (RSMB). Anyway, this will be the subject of an upcoming DancingDinosaur piece.
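
Part of MQTT’s appeal is how little code a publisher or subscriber needs. Here is a minimal sketch using the open source Eclipse Paho Java client; the broker URL, client ID, and topics are hypothetical stand-ins for whatever broker (RSMB or otherwise) would front the z.

```java
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttMessage;

// Minimal MQTT pub/sub sketch with the Eclipse Paho Java client.
// Broker URL, client ID, and topics are hypothetical placeholders.
public class SensorFeed {
    public static void main(String[] args) throws Exception {
        MqttClient client = new MqttClient("tcp://broker.example.com:1883", "z-feed-1");
        client.connect();

        // Subscribe: the + wildcard matches any single topic level.
        client.subscribe("plant/+/temperature", (topic, msg) ->
                System.out.println(topic + " -> " + new String(msg.getPayload())));

        // Publish one small reading at QoS 1 (delivered at least once).
        MqttMessage reading = new MqttMessage("21.5".getBytes());
        reading.setQos(1);
        client.publish("plant/line7/temperature", reading);
    }
}
```

The tiny header and publish/subscribe model are what make MQTT practical for the flaky, low-bandwidth networks typical of mobile and sensor traffic.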

The net net:  anything you can do on a distributed system you can do on the System z and benefit from better resiliency and security built in. Even when it comes to cost, particularly TCO and cost per workload, between IBM’s deeply discounted System z Solution Editions and the introduction of the zBC12, which delivers twice the entry capacity for the same low cost ($75k) as the previous entry-level machine (z114), the mainframe is competitive.

Also coming up is Edge 2014, which focuses on Infrastructure Innovation this year. Please plan to attend, May 19-23 in Las Vegas.  Previous Edge conferences were worthwhile and this should be equally so. Watch DancingDinosaur for more details on the specific Edge programs.

And follow DancingDinosaur on Twitter: @mainframeblog

A Maturity Model for the New Mainframe Normal

February 3, 2014

Last week Compuware introduced its new mainframe maturity model designed to address what is emerging as the new mainframe normal. DancingDinosaur played a central role in the creation of this model.

A new mainframe maturity model is needed because the world of the mainframe is changing rapidly. Did your data center team ever think they would be processing mainframe transactions from mobile phones? Your development team probably never imagined they would be architecting compound workloads across the mainframe and multiple distributed systems running both Windows and Linux. What about the prospect of your mainframe serving up millions or even billions of customer-facing transactions a day? But that’s the mainframe story today.

Even IBM, the most stalwart of the mainframe vendors, repeats the driving trends—cloud, mobile, social, big data, analytics, Internet of things—like a mantra. As the mainframe celebrates its 50th anniversary year, it is fitting that a new maturity model be introduced because there is, indeed, a new mainframe normal rapidly evolving.

Things certainly are changing in ways most mainframe data center managers wouldn’t have anticipated 10 years ago, probably not even five years ago. Of those, perhaps the most disconcerting change for traditional mainframe shops is the need to accommodate distributed, open systems (systems of engagement) alongside the traditional mainframe environment (systems of record).

Since the rise of distributed systems two decades ago, there has existed both a technical and a cultural gap between the mainframe and distributed teams. The emergence of technologies like hybrid computing, middleware, and the cloud has gone far to alleviate the technical gap. The cultural gap is not so amenable to immediate fixes. Still, navigating that divide is no longer optional; it has become a business imperative. Crossing the gap is what the new maturity model addresses.

Many factors contribute to the gap, the largest of which appears to be that most organizations still approach the mainframe and distributed environments as separate worlds. One large financial company, for example, recently reported that to its distributed developers the mainframe is simply a source of MQ messages.
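
That “just MQ messages” view is easy to understand when you see how narrow the interface can be. A hedged sketch using the WebSphere MQ classes for JMS; the classes and constants are the standard MQ JMS API, while the host, queue manager, channel, and queue names are hypothetical:

```java
import javax.jms.Connection;
import javax.jms.MessageProducer;
import javax.jms.Session;
import com.ibm.mq.jms.MQQueueConnectionFactory;
import com.ibm.msg.client.wmq.WMQConstants;

// Sketch of how a distributed app often "sees" the mainframe: a queue.
// Host, queue manager, channel, and queue names are hypothetical.
public class OrderSender {
    public static void main(String[] args) throws Exception {
        MQQueueConnectionFactory cf = new MQQueueConnectionFactory();
        cf.setHostName("zos.example.com");
        cf.setPort(1414);
        cf.setQueueManager("QM01");
        cf.setChannel("APP.SVRCONN");
        cf.setTransportType(WMQConstants.WMQ_CM_CLIENT); // network client, not bindings

        Connection conn = cf.createConnection();
        Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer =
                session.createProducer(session.createQueue("queue:///ORDER.IN"));
        producer.send(session.createTextMessage("{\"orderId\":42,\"amount\":99.95}"));
        conn.close();
    }
}
```

Everything behind the queue (CICS transactions, COBOL programs, DB2 data) is invisible to this code, which is precisely the disconnect the maturity model addresses.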

The new mainframe maturity model can be used as a guide to bridging both the technical and cultural gaps. Specifically, the new model defines five levels of maturity (put into code in the short sketch after the list). In the process, it incorporates distributed systems alongside the mainframe and recognizes the new workloads, processes, and challenges that will be encountered. The five levels are:

  1. Ad-hoc:  The mainframe runs core systems and applications; these represent the traditional mainframe workloads and the green-screen approach to mainframe computing.
  2. Technology-centric:  An advanced mainframe is focused on ever-increasing volumes, higher capacity, and complex workload and transaction processing while keeping a close watch on MIPS consumption.
  3. Internal services-centric:  The focus shifts to mainframe-based services through a service delivery approach that strives to meet internal service level agreements (SLAs).
  4. External services-centric:  Mainframe and non-mainframe systems interoperate through a services approach that encompasses end-user expectations and tracks external SLAs.
  5. Business revenue-centric:  Business needs and the end-user experience are addressed through interoperability with cloud and mobile systems, services- and API-driven interactions, and real-time analytics to support revenue initiatives revolving around complex, multi-platform workloads.
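
Because the levels are ordered, even a toy model can express the technology-versus-culture disconnect discussed below. A purely illustrative sketch:

```java
// Purely illustrative: the five maturity levels as an ordered type,
// scoring technology and culture separately since shops straddle levels.
public class MaturityCheck {
    enum Level {
        AD_HOC, TECHNOLOGY_CENTRIC, INTERNAL_SERVICES_CENTRIC,
        EXTERNAL_SERVICES_CENTRIC, BUSINESS_REVENUE_CENTRIC
    }

    public static void main(String[] args) {
        Level technology = Level.EXTERNAL_SERVICES_CENTRIC; // level 4
        Level culture = Level.TECHNOLOGY_CENTRIC;           // level 2
        int gap = technology.ordinal() - culture.ordinal();
        System.out.println("Disconnect: " + gap + " maturity levels");
    }
}
```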

Complicating things is the fact that most IT organizations will likely find themselves straddling different maturity levels. For example, although many have achieved levels 4 and 5 when it comes to technology, the IT culture remains at level 1 or 2. Such disconnects mean IT still faces many obstacles preventing it from reaching optimal levels of service delivery and cost management. And this doesn’t just impact IT; there can be ramifications for the business itself, such as decreased customer satisfaction and slower revenue growth.

DancingDinosaur’s hope is that as the technical worlds come closer through technologies like Java, Linux, SOA, REST, hybrid computing, and mobile, organizations will begin to close the cultural gap too.

Follow DancingDinosaur on Twitter: @mainframeblog

SOA Software Enables New Ways to Tap Mainframe Code

January 30, 2014

Is the core enterprise processing role handled by the mainframe enough? Enterprises today often are running different types of workloads built using different app dev styles. These consist of compound applications encompassing the mainframe and a variety of distributed systems (Linux, UNIX, Windows) and different programming models, data schemas, services, and more. Pieces of these workloads may be running on the public cloud, a partner’s private cloud, and a host of other servers, and the pieces are pulled together at runtime to support the particular workload. Mainframe shops should want to play a big role in this game too.

“Mainframe applications still sit at the heart of enterprise operations, but mainframe managers also want to take advantage of these applications in new ways,” says Brent Carlson, SVP at SOA Software. The primary way of doing this is through SOA services, and mainframes have been playing in the SOA arena for years. But it has never been as seamless, easy, and flexible as it should be. And as social, mobile, and other new types of workloads get added to the services mix, the initial mainframe SOA approach has started to show its age. (Over the years, DancingDinosaur has written considerably on mainframe SOA and done numerous SOA studies.)

That’s why DancingDinosaur welcomes SOA Software’s Lifecycle Manager to the mainframe party. It enables what the company calls a “RESTful Mainframe” through governance of REST APIs that front z/OS-based web services. This amounts to a unified platform, from a governance perspective, to manage both APIs and existing SOA assets. As Carlson explained, applying development governance to mainframe assets helps mainframe shops overcome the architectural challenges inherent in bringing legacy systems into the new API economy, where mobile apps need rapid, agile access to backend systems.
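
From the consumer side, the whole point is that a governed mainframe API looks like any other HTTP endpoint. A minimal sketch of that consumer view (the URL and resource path are hypothetical; this is the generic pattern, not SOA Software’s actual product API):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

// The distributed developer's view: a REST API fronting a z/OS web
// service is just another endpoint. URL and path are hypothetical.
public class PolicyClient {
    public static void main(String[] args) throws Exception {
        URL url = new URL("https://api.example.com/mainframe/policies/12345");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestProperty("Accept", "application/json");

        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
            String line;
            while ((line = in.readLine()) != null) {
                // JSON whose fields were mapped from a COBOL copybook
                System.out.println(line);
            }
        }
    }
}
```

Nothing in that code hints at CICS, copybooks, or green screens, which is exactly the seamlessness Carlson is describing.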

The company is aiming to make Lifecycle Manager into the system-of-record for all enterprise assets including mainframe-based SOAP services and RESTful APIs that expose legacy software functionality. The promise: seamless access to service discovery and impact analysis whether on mainframe, distributed systems, or partner systems. Both architects and developers should be able to map dependencies between APIs and mainframe assets at the development stage and manage those APIs across their full lifecycles.

Lifecycle Manager integrates with SOA’s Policy Manager to work either top down or bottom up. The top down approach relies on a service wrapping of existing mainframe programs; think of this as the WSDL-first approach of designing web services and then developing programs on the mainframe to implement them. The bottom up approach starts with the copybook. Either way, it is automated and intended to be seamless. It also promises to guide services developers on best practices like encryption, assign and enforce correct policies, and more.

“Our point: automate whatever we can, and guide developers into good practices,” said Carlson.  In the process, it simplifies the task of exposing mainframe capabilities to a broader set of applications while not interfering with mainframe developers.  To distributed developers the mainframe is just another service endpoint that is accessed as a service or API.  Nobody has to learn new things; it’s just a browser-based IDE using copy books.

For performance, the Lifecycle Manager-based runtime environment is written in assembler, which makes it fast while minimizing MIPS consumption. It also comes with the browser-based IDE, copybook tool, and import mappings.

The initial adopters have come from financial services and the airlines.  The expectation is that usage will expand beyond that as mainframe shops and distributed developers seek to leverage core mainframe code for a growing array of workloads that weren’t on anybody’s radar screen even a few years ago.

There are other ways to do this on the mainframe, starting with basic SOA and web services tools and protocols, like WSDL. Many mainframe SOA efforts leverage CICS, and IBM offers additional tools, most recently SoftLayer, that address the new app dev styles.

This is healthy for mainframe data centers. If nothing else SOA- and API-driven services workloads that include the mainframe help lower the cost per workload of the mainframe. It also puts the mainframe at the center of today’s IT action.

Follow DancingDinosaur on Twitter: @mainframeblog

Goodbye X6 and IBM System x

January 24, 2014

Seems just last week IBM was touting the new X6-based systems, the latest in its x86 System x server lineup.  Now the X6 and the entire System x line is going to Lenovo, which will acquire IBM’s x86 server business.  Rumors had been circulating about the sale for the last year, so often that you stopped paying attention to them.

The sale includes System x, BladeCenter and Flex System blade servers and switches, x86-based Flex integrated systems, NeXtScale and iDataPlex servers and associated software, and blade networking and maintenance operations. The purchase price is approximately US $2.3 billion, about two billion of which will be paid in cash and the balance in Lenovo stock.

Definitely NOT part of the sale are the System z, Power Systems, Storage Systems, Power-based Flex servers, and PureApplication and PureData appliances.  These are considered part of the IBM Enterprise Systems portfolio.  This commitment to the z and other enterprise systems is encouraging, especially in light of the latest IBM quarterly financial statement in which all the system hardware platforms did poorly, including System x.

DancingDinosaur’s planned follow-up to last week’s X6 column, pegged to a reported upcoming February briefing on X6 speeds and feeds, is now unlikely; IBM PR folks said no such briefing is planned.

Most of the System x team appears to be departing with the products. Approximately 7,500 IBM employees around the world, including those based at major locations such as Raleigh, Shanghai, Shenzhen and Taipei, are expected to be offered employment by Lenovo, according to the announcement.

IBM, however, may become more active than ever.  Recently, IBM announced that it will invest more than $1 billion in the new IBM Watson Group, and $1.2 billion to expand its global cloud computing footprint to 40 data centers worldwide in 15 countries across five continents.  It also announced bolstering the SoftLayer operation, sort of a combined IaaS and global content delivery network, plus earlier investments in Linux, OpenStack, and various other initiatives. DancingDinosaur will try to follow it for you along with the System z and other enterprise IBM platforms.

Please follow DancingDinosaur on Twitter: @mainframeblog

IBM Leverages High End Server Expertise in New X6 Systems

January 17, 2014

If you hadn’t noticed how x86 systems have been maturing over the past decade you might be surprised at the introduction yesterday of IBM’s newest entry in the x86 world, the X6. The X6 is the latest rev of IBM’s eX5. If you didn’t already think the eX5 was enterprise-class, here’s what IBM says of the X6:  support for demanding mission and business critical workloads, better foundation for virtualization of enterprise applications, infrastructure that facilitates a private or hybrid cloud model. Sound familiar? IBM has often said the same things about its Power Systems and, of course, the zEnterprise.

As the sixth generation of IBM’s EXA x86 technology it promises speed (although the actual speeds and feeds won’t be revealed for another month), 3x the memory, high availability features that increase reliability, use of flash to boost on-board memory, and lower cost. IBM hasn’t actually said anything specific about pricing; published reports put X6 systems starting at $10k.

More specifically, the flash boost consists of integrated eXFlash memory-channel storage that provides DIMM-based storage up to 12.8 terabytes in the form of ultrafast flash storage close to the processor.  This should increase application performance by providing the lowest system write latency available, and X6 can enable significantly lower latency for database operations, which can lower licensing costs and reduce storage costs by reducing or eliminating the need for external SAN/NAS storage units. This should deliver almost in-memory performance (although again, we have to wait for the actual speeds and feeds and benchmarks).

The new X6 also borrows from the System z in its adoption of compute book terminology to describe its packaging, adding a storage book too. The result: a modular, scalable compute book design that supports multiple generations of CPUs and that, IBM promises, can reduce acquisition costs by up to 28% in comparison to one competitive offering. (Finally some details: 28% acquisition cost savings based on pricing of x3850 X6 at announcement on 2/18 vs. current pricing of a comparable x86 based system that includes 2 x Intel Xeon E7-4820 [v1] processors, 1TB of memory [16GB RDIMMs], 3.6TB of HDD storage, and a Dual Port 10GbE SFP+ controller. x3850 X6 includes 2 Compute Books, 2 x Intel Xeon E7 processors, 1TB of memory [16GB RDIMMs], 3.6TB of HDD storage, and a Dual Port 10GbE SFP+ controller.)

X6 also provides stability and flexibility through forthcoming technology developments, allowing users to scale up now and upgrade efficiently in the future based on the compute/storage book design that makes it easy to snap books into the chassis as you require more resources. Fast set-up and configuration patterns simplify deployment and life-cycle management.

In short, the book design, long a hallmark of the System z, brings a number of advantages.  For starters, you can put multiple generations of technology in the same chassis, no need to rip-and-replace or re-configure. This lets you stretch and amortize costs in a variety of ways.  IBM also adds RAS capabilities, another hallmark of the z. In the case of X6 it includes features like memory page retire; advanced double chip kill; the IBM MEH algorithm; multiple storage controllers; and double, triple, or quadruple memory options.

Server models supported by the X6 architecture currently include the System x3850 X6 four-socket system, System x3950 X6 eight-socket system, and the IBM Flex System x880 scalable compute nodes. IBM also is introducing the System x3650 M4 BD storage server, a two-socket rack server supporting up to 14 drives delivering up to 56 terabytes of high-density storage, the largest available in the industry, according to IBM. (More tidbits from the speeds and feeds to come: compared to HP two-socket servers supporting a maximum of 48 TB storage with 12 x 3.5″ drives, and Dell two-socket servers supporting a maximum of 51.2 TB storage with 12 x 3.5″ and 2 x 2.5″ drives, X6 delivers 46% greater performance, based on Intel Internal Test Report #1310, using the SPECjbb2013 benchmark, July 2013.) IBM’s conclusion: X6 is ideally suited for distributed scale-out of big data workloads.

The X6 systems come with a reference architecture that simplifies deployment. To make it even simpler, maybe even bullet-proof, IBM also is introducing the X6 as a set of packaged solutions. These include:

  • IBM System x Solution for SAP HANA on X6
  • IBM System x Solution for SAP Business Suite on X6
  • IBM System x Solution for VMware vCloud Suite on X6
  • IBM System x Solution for Microsoft SQL Data Warehouse on X6
  • IBM System x Solution for Microsoft Hyper-V on X6
  • IBM System x Solution for DB2 with BLU Acceleration on X6

These are optimized and tuned in advance for database, analytics, and cloud workloads.

So, the X6 bottom line according to IBM: More performance at  40%+ lower cost, multiple generations in one chassis; 3X more memory and higher system availability; expanded use of flash and more storage options; integrated solutions for easy and worry-free deployment; and packaged solutions to address data analytics, virtualization, and cloud.

IBM packed a lot of goodies into the X6. DancingDinosaur will take it up again when IBM presents the promised details. Stay tuned.

Follow DancingDinosaur on Twitter: @mainframeblog

IBM Commits $1B to Drive Watson into the Mainstream

January 10, 2014

IBM is ready to propel Watson beyond Jeopardy, its initial proof-of-concept, and into mainstream enterprise computing. To that end, it announced plans to spend more than $1 billion on the recently formed Watson business unit, an amount that includes $100 million in venture investments to build an ecosystem of entrepreneurs developing Watson-powered apps.

In addition, companies won’t need racks of Power servers to run Watson. With a series of announcements yesterday IBM unveiled plans to deliver Watson capabilities as business-ready cloud services. The announcement focused on three Watson services: 1)  Watson Discovery Advisor for research and development projects in industries such as pharmaceutical, publishing and biotechnology; 2) Watson Analytics to deliver visualized big data insights based on questions posed in natural language by any business user; and 3) IBM Watson Explorer to more easily uncover and share data-driven insights across the enterprise.

DancingDinosaur has been following Watson since its Jeopardy days. Having long since gotten over the disappointment that Watson didn’t run on the Power side of a hybrid zEnterprise, DancingDinosaur notes that IBM has managed to shrink Watson considerably. Today Watson runs 24x faster (a 2,400% improvement in performance, by IBM’s math) and is 90% smaller. IBM has shrunk Watson from the size of a master bedroom to three stacked pizza boxes, and you don’t even need to locate it in your data center; you can run it in the cloud.

Following the introduction of Watson, IBM was slow to build on that achievement. It focused on healthcare and financial services, use cases that appeared to be no-brainers. Eventually it experienced success, particularly in healthcare, but the initial customers came slowly and the implementations appeared cumbersome.

Watson, at least initially, wasn’t going to be a simple deployment. It needed a ton of Power processors. It also needed massive amounts of data; in healthcare IBM collected what amounted to the entire library of the world’s medical research and knowledge. And it needed applications that took advantage of Watson’s formidable yet unusual capabilities.

The recent announcements of delivering Watson via the cloud and committing to underwrite application developers definitely should help. And yesterday’s announcement of what amounts to three packaged Watson services should speed deployment.

For example, Watson Analytics, according to IBM, removes common impediments in the data discovery process, enabling business users to quickly and independently uncover new insights in their data. Using sophisticated analytics and aided by Watson’s natural language interface, Watson Analytics automatically prepares the data, finds the most important relationships, and presents the results in an easy to interpret interactive visual format. As a result, business users are no longer limited to predefined views or static data models. Better yet, they can feel empowered to apply their own knowledge of the business to ask and answer new questions as they emerge. They also will be able to quickly understand and make decisions based on Watson Analytics’ data-driven visualizations.

Behind the new Watson services lies IBM Watson Foundations, described as a comprehensive, integrated set of big data and analytics capabilities that enable enterprises to find and capitalize on actionable insights. Basically, it amounts to a set of user tools and capabilities to tap into all relevant data – regardless of source or type – and run analytics to gain fresh insights in real-time. And it does so securely across any part of an enterprise, including revenue generation, marketing, finance, risk, and operations.  Watson Foundations also includes business analytics with predictive and decision management capabilities, information management with in-memory and stream computing, and enterprise content management packaged into modular offerings. As such it enables organizations of any size to address immediate needs for decision support, gain sustainable value from their initial investments, and grow from there.

This apparently sounded good to Singapore’s DBS Bank, which will deploy Watson cognitive computing capabilities to deliver a next-generation client experience. For starters, DBS intends to apply Watson to its wealth management business to improve the advice and experience delivered to affluent customers. The bank is counting on cloud-based Watson to process enormous amounts of information and to understand and learn from each interaction at unprecedented speed. This should greatly increase the bank’s ability to quickly analyze, understand, and respond to the vast amounts of data it is accumulating.

Specifically, DBS will deploy IBM’s cloud-based Watson Engagement Advisor solution, to be rolled out in the second half of the year. From there the bank reportedly plans to progressively deploy these capabilities to its other businesses over time.

For fans of cognitive computing and Watson, the announcements represent a much awaited evolution in IBM’s strategy. It promises to make cognitive computing and the natural language power of Watson usable for mainstream enterprises. How excited fans should get, however, depends on the specifics of IBM’s pricing and packaging for these offerings.  Still, faced with having to recoup a $1 billion investment, don’t expect loss-leader pricing from IBM.

Follow DancingDinosaur on Twitter: @mainframeblog

Meet the Power 795—the RISC Mainframe

December 16, 2013

The IBM Power 795 could be considered a RISC mainframe. A deep-dive session on the Power 795 at Enterprise 2013 in early October, presented by Patrick O’Rourke, didn’t call the machine a mainframe. But when he walked attendees through the specifications, features, capabilities, architecture, and design of the machine, it certainly looked like one.

Start with the latest enhancements to the POWER7 chip:

  • Eight processor cores, each with 12 execution units and 4-way SMT (up to 4 threads per core, 32 threads per chip); L1: 32 KB instruction cache / 32 KB data cache; L2: 256 KB per core; L3: 32 MB shared on-chip eDRAM
  • Dual DDR3 memory controllers providing 100 GB/s of memory bandwidth per chip
  • Scalability up to 32 sockets, with 360 GB/s of SMP bandwidth per chip and 20,000 coherent operations in flight

Built on POWER7 and slated to be upgraded to POWER8 by the end of 2014, the Power 795 boasts a number of new features:

  • New memory options
  • New 64GB DIMMs enable up to 16TB of memory
  • New hybrid I/O adapters will deliver Gen2 I/O connections
  • No-charge Elastic processor and memory days
  • PowerVM will enable up to 20 LPARs per core

And running at 4.2 GHz, the Power 795’s clock speed starts to approach the zEC12’s 5.5 GHz while matching the clock speed of the zBC12.

IBM has also built increased flexibility into the Power 795, starting with turbo mode, which allows users to turn cores on and off as they manage power consumption and performance. IBM also has enhanced the concept of Power pools, which allows users to group systems into compute clusters by setting up and moving processor and memory activations within a defined pool of systems, at the user’s convenience. With the Power 795, pool activations can be moved at any time by the user without contacting IBM, and the movement of the activations is instant, dynamic, and non-disruptive. Finally, there is no limit to the number of times activations can be moved. Enterprise pools can include the Power 795, 780, and 770, and systems with different clock speeds can coexist in the same pool. The activation assignment and movement is controlled by the HMC, which also determines the maximum number of systems in any given pool.

The Power 795 provides three flavors of capacity on demand (CoD). One flavor is for organizations that know they will need the extra capacity over time and can turn it on through easy activation. Another is intended for organizations that know they will need extra capacity at predictable times, such as the end of the quarter, and want to pay for the added capacity on a daily basis. Finally, there is a flavor for organizations that experience unpredictable short bursts of activity and prefer to pay for the additional capacity by the minute. (There actually are more than these three basic flavors of CoD, but these three will cover the needs of most organizations.) The sketch below turns the three flavors into simple arithmetic.
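
Purely as illustration, here is the arithmetic behind choosing a flavor; every rate below is invented for the example, and real CoD pricing comes from IBM.

```java
// Purely illustrative: the three CoD flavors as cost formulas.
// All rates are invented placeholders, not IBM pricing.
public class CodFlavors {
    public static void main(String[] args) {
        double permanentActivation = 50_000;  // flavor 1: activate capacity permanently
        double perCoreDay = 300;              // flavor 2: predictable peaks, billed daily
        double perCoreMinute = 0.50;          // flavor 3: short bursts, billed by the minute

        // Quarter-end close: 4 extra cores for 5 days, 4 times a year.
        double quarterEnd = 4 * 5 * 4 * perCoreDay;   // 24,000 per year

        // Unpredictable spikes: 4 extra cores for 90 minutes, ~20 times a year.
        double bursts = 4 * 90 * 20 * perCoreMinute;  // 3,600 per year

        System.out.printf("Permanent: %,.0f  Daily: %,.0f  Minute: %,.0f%n",
                permanentActivation, quarterEnd, bursts);
    }
}
```

The shape of the workload, steady growth versus predictable peaks versus random bursts, is what picks the flavor.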

And like a mainframe, the Power 795 comes with extensive hardware redundancy. OK, the Power 795 isn’t a mainframe. It doesn’t run z/OS and it doesn’t do hybrid computing. But if you don’t run z/OS workloads and you’re not planning on running hybrid workloads, yet still want the scalability, flexibility, reliability, and performance of a System z, the Power 795 might prove very interesting indeed. And when the POWER8 processor is added to the mix, the performance should go off the charts. This is a worthy candidate for enterprise systems consolidation.

The zEnterprise as a Hybrid Data Center

November 21, 2013

There is no doubt that the zEnterprise enables hybrid computing. Just attach a zBX to it and start plugging in Linux and x86 blades; presto, you’ve got hybrid computing.  You can manage this entire hybrid infrastructure via the Unified Resource Manager.

The zEnterprise also has a sister hybrid computing platform, IBM PureSystems. Here, too, you can add in System x and Linux or even Power and System i and do hybrid computing. You can also manage the hybrid environment through a single console, albeit a different console—the Flex System Manager—and manage this second IBM hybrid platform as a unified environment.  DancingDinosaur has noted the irony of IBM having two different, incompatible hybrid systems; IBM has reassured this blogger several times that it is trying to converge the two. Whenever it happens DancingDinosaur will be the first to report it.

The zEnterprise or even PureSystems as a hybrid computing platform, however, is not the same as a hybrid data center.  Apparently there is no definition of a hybrid data center despite all the talk about hybrid computing, hybrid clouds, and hybrid systems.  As best DancingDinosaur can piece it together, the hybrid data center is multiplatform like the zEnterprise, but it also is multi-location, often using co-location facilities or factory-built containerized data centers (IBM calls them Portable Modular Data Centers, PMDC). More often, however, hybrid data centers are associated with cloud computing as the third of the three flavors of cloud (private, public, hybrid).

Gartner recently described some architecture options for a hybrid data center. In one case you could have a zEnterprise acting as, say, a private cloud using a co-location facility as a DMZ between the private cloud and a public cloud like Amazon. Not sure, however, you would need the DMZ if your private cloud was running on the highly secure zEnterprise but Gartner included it. Go figure.

Hybrid showed up in numerous Enterprise 2013 sessions this past October. You can catch some video highlights from it here. The conference made frequent mention of hybrid in numerous sessions, some noted in previous DancingDinosaur posts, such as Exploring the World of zEnterprise Hybrid: How Does It Work and What’s the Point? The session introduced the Unified Resource Manager and described how it would allow an IT shop to manage a collection of one or more zEnterprise nodes including any optionally attached zBX cabinets as a single logical virtualized system through a Hardware Management Console (HMC). In short, it was about providing a single point of control through which data center personnel can deploy, configure, monitor, manage and maintain the integrated System z and zBX blades based on heterogeneous architectures in a unified manner. But it wasn’t talking about the hybrid enterprise data center described in the previous paragraph.

Similarly, Application Performance Management and Capacity Planning for the IBM zEnterprise Hybrid Workload focused on extending the Unified Resource Manager to goal-oriented performance management for both traditional System z and BladeCenter applications. It was about applying WLM, RMF, and Platform Performance Management to cross-platform hybrid applications. Again, this really wasn’t about the hybrid data center described above.

BTW, plans apparently already are underway for Enterprise 2014. Looks like it will be Oct. 6-10 at the Venetian in Las Vegas. It should be quite an event given that IBM will be celebrating the 50th anniversary of the mainframe in 2014.

And there is much more on z hybrid computing and hybrid clouds. The zEnterprise has its own page on cloud computing here, and last month the zEnterprise zBC12 won a CRN Tech Innovator Award for the most innovative cloud solution. You can also click here to see how a dozen IBM customers used various IBM platforms to build hybrid clouds.

IBM has already used the zEnterprise to consolidate over 30,000 servers around the world for an 84% improvement in data center efficiency and a 50% reduction in power and cooling. This effectively freed $1 billion to spend on innovative new projects that drive business growth across the company. And IBM is about as hybrid a data center as you can find.

Follow DancingDinosaur on Twitter, @mainframeblog

