Posts Tagged ‘System z’

IBM Edge2014 as Coming out Party for OpenStack

May 7, 2014

IBM didn’t invent OpenStack (Rackspace and NASA did), but IBM’s embrace of OpenStack in March 2013 as its standard for cloud computing made it a legitimate standard for enterprise computing. Since then IBM has made clear its intention to enable its product line, from the System z on down, for the OpenStack set of open source technologies. Judging from the number of sessions at IBM Edge 2014 (Las Vegas, May 19-23 at the Venetian) that address one or another aspect of OpenStack, you might think of IBM Edge2014 almost as a coming out celebration for OpenStack and enterprise cloud computing.

OpenStack is a collection of open source technologies, the goal of which is to provide a scalable computing infrastructure for both public and private clouds. As such, it has become the foundation of IBM’s cloud strategy, which is another way of saying it has become what IBM sees as its future. An excellent mini-tutorial on OpenStack, IBM, and the System z can be found at mainframe-watch-Belgium here.

At IBM Edge2014 OpenStack is frequently included in sessions on storage, cloud, and storage management.  Let’s take a closer look at a few of those sessions.

IBM Storage and Cloud Technologies

Presenter Christopher Vollmar offers an overview of the IBM storage platforms that contain cloud technologies or provide a foundation for creating a private storage cloud for block and file workloads. This overview includes IBM’s SmartCloud Virtual Storage Center, SmartCloud Storage Access, Active Cloud Engine, and XIV’s Hyper-Scale as well as IBM storage products’ integration with OpenStack.

OpenStack and IBM Storage

Presenters Michael Factor and Funda Eceral explain how OpenStack is rapidly emerging as the de facto platform for Infrastructure as a Service. IBM is working fast to pin down the integration of its storage products with OpenStack. This talk presents a high level overview of OpenStack, with a focus on Cinder, the OpenStack block storage manager. They also will explain how IBM is leading the evolution of Cinder by improving the common base with features such as volume migration and the ability to change the SLA associated with a volume in the OpenStack cloud. Already IBM storage products—Storwize, XIV, DS8000, GPFS and TSM—are integrated with OpenStack, enabling self-provisioning access to features such as EasyTier or Real-time Compression via standard OpenStack interfaces. Eventually, you should expect virtually all IBM products, capabilities, and services to work with and through OpenStack.
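To make the self-provisioning point concrete, here is a minimal sketch (not IBM code) of the request body a client would send to Cinder’s v3 volume-create call; whichever backend driver—Storwize, XIV, DS8000—serves the request is hidden behind the same standard interface. The volume type name "xiv-tier1" is a hypothetical example, since real type names are defined by the cloud operator.

```python
import json

# Sketch of a Cinder v3 "create volume" request body, the self-provisioning
# call that IBM's drivers service behind the standard OpenStack interface.
def make_volume_request(name, size_gb, volume_type=None):
    volume = {"name": name, "size": size_gb}
    if volume_type:
        # A volume type can map to backend features such as a storage tier.
        volume["volume_type"] = volume_type
    return {"volume": volume}

body = make_volume_request("app-data", 100, volume_type="xiv-tier1")
print(json.dumps(body, sort_keys=True))
```

In a real deployment this JSON would be POSTed to the Block Storage endpoint (or generated for you by the `openstack volume create` CLI); the point is that the caller never addresses the storage hardware directly.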

IBM XIV and VMware: Best Practices for Your Cloud

Presenters Peter Kisich and Carlos Lizarralde argue that IBM Storage continues to lead in OpenStack integration and development. They then introduce the core services of OpenStack while focusing on how IBM storage provides open source integration with Cinder drivers for Storwize, DS8000 and XIV. They also include key examples and a demonstration of the automation and management IBM Storage offers through the OpenStack cloud platform.

IBM OpenStack Hybrid Cloud on IBM PureFlex and SoftLayer

Presenter Eric Kern explains how IBM’s latest version of OpenStack is used to showcase a hybrid cloud environment. A pair of SoftLayer servers running in IBM’s public cloud are matched with a PureFlex environment locally hosting the OpenStack controller. He covers the architecture used to set up this environment before diving into the details around deploying workloads.

Even if you never get to IBM Edge2014, it should be increasingly clear that OpenStack is quickly gaining traction and destined to become central to enterprise IT, to every style of cloud computing, and to IBM. OpenStack will be essential for any private, public, or hybrid cloud deployment. Come to Edge2014 and get up to speed fast on OpenStack.

Alan Radding/DancingDinosaur will be there. Look for me in the bloggers lounge between sessions and afterward. Also watch for upcoming posts on DancingDinosaur about OpenStack and the System z and on OpenStack on Power Systems.

Please follow DancingDinosaur on Twitter, @mainframeblog.

Best System z TCO in Cloud and Virtualization

May 1, 2014

IBM recently analyzed various likely customer workload scenarios and found that the System z as an enterprise Linux server could consistently beat x86 machines in terms of TCO.  The analysis, which DancingDinosaur will dig into below, was reasonably evenhanded although, like automobile mileage ratings, your actual results may vary.

DancingDinosaur has long contended that the z Enterprise Linux Server acquired under the deeply discounted IBM System z Solution Edition program could beat comparable x86-based systems not only in terms of TCO but even TCA. Algar, a Brazilian telecom, acquired its initial z Enterprise Linux server to consolidate a slew of x86 systems and lay a foundation for scalable growth. It reports cutting data center costs by 70%. Nationwide Insurance, no newcomer to mainframe computing, used the zEnterprise to consolidate Linux servers, achieving $46 million in savings.

The point: the latest IBM TCO analyses confirm what IBM and the few IT analysts who talk to z customers have been saying for some time. The TCO advantage, IBM found, switches to the z Enterprise Linux Server at around 200 virtual machines compared to the public cloud, and at a somewhat higher VM count compared to x86 machines.

IBM further advanced its cause in the TCO/TCA battle with the recent introduction of the IBM Enterprise Cloud System. This is a factory-built and integrated system—processor, memory, network, IFLs, virtualization management, cloud management, hypervisor, disk orchestration, Linux OS—priced (discounted) as a single solution. IBM promises to deliver it in 45 days and have it production ready within hours of hitting the organization’s loading dock. Of course, it comes with the scalability, availability, security, manageability, etc. long associated with the z, and IBM reports it can scale to 6000 VMs. Not sure how this compares in price to a Solution Edition Enterprise Linux Server.

The IBM TCO analysis compared the public cloud, an x86 cloud, and the Enterprise Cloud System in terms of power and space, labor, software/middleware, and hardware costs when running 48 diverse (a range of low, medium, and high I/O) workloads. In general it found an advantage for the z Enterprise Cloud System of 34-73%. The z cost considerably more in terms of hardware but more than made up for it in terms of software, labor, and power. Overall, the TCO analysis examined more than 30 cost variables, ranging from blade/IFL/memory/storage amounts to hypervisor/cloud management/middleware maintenance. View the IBM z TCO presentation here.

In terms of hardware, the z included the Enterprise Linux Server, storage, z/VM, and IBM Wave for z/VM. Software included WebSphere Application Server middleware, Cloud Management Suite for z, and Tivoli for z/VM. The x86 cloud included HP hardware with a hypervisor, WebSphere Application Server, SmartCloud Orchestrator, SmartCloud Monitoring, and Tivoli Storage Manager EE. Both analyses included labor to manage both hardware and VMs, power and space costs, and SUSE Linux.

The public cloud assumptions were a little different. Each workload was deployed as a separate instance. The pricing model was based on AWS reserved instances. Hardware costs were based on instances in the US East region with SUSE Linux, EBS volumes, data in/out, enterprise support, and free and reserved tier discounts applied. Software costs included WebSphere Application Server ND (middleware) for the instances. A labor cost was included for managing the instances.

When IBM applied its analysis to 398 I/O-diverse workloads the results were similar: 49-75% lower cost with the Cloud System on z. Again, z hardware was considerably more costly than either x86 or the public cloud, but z software and labor were far less. In terms of 3-year TCO, the public cloud was the highest at $37M, x86 came in at $18.3M, and the Cloud System on z cost $9.4M. With 48 workloads, the z again came in with the lowest TCO, at $1M, compared to $1.6M for x86 systems and $3.9M for the public cloud.
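As a sanity check, the endpoints of the cited 49-75% range follow directly from those 3-year TCO figures:

```python
# Check the cited savings range against the 3-year TCO figures in the text
# (398-workload scenario): public cloud $37M, x86 $18.3M, Cloud System on z $9.4M.
def savings_pct(alternative_cost, z_cost):
    """Percent by which the z undercuts the alternative platform."""
    return round(100 * (alternative_cost - z_cost) / alternative_cost)

tco_millions = {"public cloud": 37.0, "x86": 18.3}
z_cost = 9.4
for platform, cost in tco_millions.items():
    print(f"z vs {platform}: {savings_pct(cost, z_cost)}% lower")
# Yields 75% vs the public cloud and 49% vs x86 -- the endpoints of IBM's range.
```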

IBM kept the assumptions equivalent across the platforms. If you make different software and middleware choices or run a different mix of high-, mid-, and low-I/O workloads, your results will be different, but the overall comparative rankings probably won’t change all that much.

Still time to register for IBM Edge2014 in Las Vegas, May 19-23. This blogger will be there hanging around the bloggers lounge when not attending sessions. Please join me there.

Follow Alan Radding/DancingDinosaur on Twitter: @mainframeblog

Happy 50th System z

April 11, 2014

IBM threw a delightful anniversary party for the mainframe in NYC last Tuesday, April 8. You can watch video from the event here.

About 500 people showed up to meet the next generation of mainframers, the top winners of the global Master of the Mainframe competition. First place went to Yong-Sian Shih, Taiwan; followed by Rijnard van Tonder, South Africa; and Philipp Egli, United Kingdom. Wouldn’t be surprised if these and the other finalists at the event had job offers before they walked out of the room.

The System z may be built on 50-year old technology but IBM is rapidly driving the mainframe forward into the future. It had a slew of new announcements ready to go at the anniversary event itself and more will be rolling out in the coming months. Check out all the doings around the Mainframe50 anniversary here.

IBM started the new announcements almost immediately with Hadoop on the System z. Called zDoop, the industry’s first commercial Hadoop for Linux on System z puts MapReduce big data analytics directly on the z. IBM also announced flash for the mainframe, consisting of the latest generation of flash storage on the IBM DS8870, which promises to speed time to insight with up to 30x the performance over HDD. Put the two together and the System z should become a potent big data analytics workhorse.

But there was even more. Mobile is hot and the mainframe is ready to play in the mobile arena too. Here the problem z shops experience is cost containment. Mainframe shops are seeing a concurrent rise in their costs related to integrating new mobile applications. The problem revolves around the fact that many mobile activities use mainframe resources but don’t generate immediate income.

The IBM System z Solution for Mobile Computing addresses this with new pricing for mobile workloads on z/OS, reducing the cost of growing mobile transaction volumes that can cause a spike in software charges. This new pricing provides up to a 60% reduction on the processor capacity reported for mobile activity, which can help normalize the rate of transaction growth that generates software charges. The upshot: much of the mobile traffic volume won’t increase your software overhead.
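The arithmetic behind that pricing can be sketched simply. The MSU numbers below are invented for illustration, and the sketch is not IBM’s actual billing algorithm:

```python
# Hypothetical illustration of the mobile pricing adjustment: up to 60% of the
# processor capacity attributed to mobile activity is deducted before the
# capacity used for software charges is reported. MSU figures are invented.
MOBILE_REDUCTION = 0.60

def reported_capacity(total_msus, mobile_msus):
    # Capacity reported for software charges after the mobile discount.
    return total_msus - MOBILE_REDUCTION * mobile_msus

# A 500 MSU peak of which 200 MSUs are driven by mobile transactions:
print(reported_capacity(500, 200))  # 380.0 -- rather than the full 500
```

The effect is exactly what the post describes: mobile transaction growth adds far less to the reported capacity that drives software charges.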

And IBM kept rolling out the new announcements:

  • Continuous Integration for System z – Compresses the application delivery cycle from months to weeks or days. Beyond this, IBM suggested upcoming initiatives to deliver full DevOps capabilities for the z
  • New version of IBM CICS Transaction Server – Delivers enhanced mobile and cloud support for CICS, able to handle more than 1 billion transactions per day
  • IBM WebSphere Liberty z/OS Connect – Rapid and secure enablement of web, cloud, and mobile access to z/OS assets
  • IBM Security zSecure SSE – Helps prevent malicious computer attacks with enhanced security intelligence and compliance reporting that delivers security events to QRadar SIEM for integrated enterprise-wide security intelligence dashboarding

Jeff Frey, an IBM Fellow and the former CTO of System z, observed that “this architecture was invented 50 years ago, but it is not an old platform.” It has evolved over those decades and continues to evolve. For example, Frey expects the z to accommodate 22nm chips and a significant increase in the number of cores per chip. He also expects vector technology, double-precision floating point and integer capabilities, and FPGAs to be built in. In addition, he expects the z to include next generation virtualization technology for the cloud to support software defined environments.

“This is a modern platform,” Frey emphasized. Other IBMers hinted at even more to come, including ongoing research to move beyond silicon to maintain the steady price/performance gains the computing industry has enjoyed the past number of decades.

Finally, IBM took the anniversary event to introduce a number of what IBM calls first-in-the-enterprise z customers. (DancingDinosaur thinks of them as mainframe virgins).  One is Steel ORCA, a managed service provider putting together what it calls the first full service digital utility center.  Based in Princeton, NJ, Phase 1 will offer connections of less than a millisecond to/from New York and Philadelphia. The base design is 300 watts per square foot and can handle ultra-high density configurations. Behind the operation is a zEC12. Originally the company planned to use an x86 system but the costs were too high. “We could cut those costs in half with the z,” said Dave Crocker, Steel ORCA chairman.

Although the Mainframe50 anniversary event has passed, there will be Mainframe50 events and announcements throughout the rest of the year.  Again, you can follow the action here.

Coming up next for DancingDinosaur is Edge2014, a big infrastructure innovation conference. Next week DancingDinosaur will look at a few more of the most interesting sessions, and there are plenty. There still is time to register. Please come—you’ll find DancingDinosaur in the bloggers lounge, at program sessions, and at the Sheryl Crow concert.

Follow DancingDinosaur on Twitter, @mainframeblog


One week to Mainframe50—Be There Virtually

April 1, 2014

Back in February, DancingDinosaur started writing about the upcoming Mainframe50 celebration. Now we’re just one week away from what will be a nearly year-long celebration, introductions of new mainframe advances, and more. It all starts on Tues., April 8 in New York City.

You can join through Livestream for the event and news briefing.  Just click here and join in from wherever you are virtually.

Or you can register to attend the event by clicking here. DancingDinosaur will be there and plans to file a report later that day on this blog and also be tweeting throughout all the Mainframe50 events. Follow it all on Twitter, @mainframeblog.

Later this week, DancingDinosaur will be posting the latest in a series of preview reports on Edge 2014, being held in Las Vegas, May 19-23. There is still time to register and get a discount. You can find DancingDinosaur there in the bloggers lounge after sessions, keynotes, and the Sheryl Crow concert.

And please follow DancingDinosaur on Twitter, @mainframeblog

The Future of IBM Lies in the Cloud

March 13, 2014

In her annual letter to stockholders, IBM CEO Virginia Rometty made it clear that the world is being forever altered by the explosion of digital data and by the advent of the cloud. So she intends IBM to “remake the enterprise IT infrastructure for the era of cloud.” This is where she is leading IBM.

DancingDinosaur thinks she has it right. But where does that leave this blog, which was built on the System z, Power Systems, and IBM’s enterprise systems? Hmm.

Rometty has an answer for that buried far down in her letter: “We are accelerating the move of our Systems product portfolio—in particular, Power and storage—to growth opportunities and to Linux, following the lead of our successful mainframe business.”

The rapidly emerging imperatives of big data, cloud computing, and mobile/social require enterprise-scale computing in terms of processing power, capacity, availability, security, and all the other ‘ities’ that have long been the hallmark of the mainframe and IBM’s other enterprise-class systems. She goes so far as to emphasize that point: “Let me be clear—we are not exiting hardware. IBM will remain a leader in high-performance and high-end systems, storage and cognitive computing, and we will continue to invest in R&D for advanced semiconductor technology.”

You can bet that theme will be continued at the upcoming Edge 2014 conference May 19-23 in Las Vegas. The conference will include an Executive program, a Technical program with 550 expert technical sessions across 14 tracks, and a partner program. It’s being billed as an infrastructure innovation event and promises a big storage component too. Expect to see a lot of FlashSystems and XIV, which has a new pay-as-you-go pricing program that will make it easy to get into XIV and scale it fast as you need it. You’ll probably also encounter some other new go-to-market strategies for storage.

As far as getting to the cloud, IBM has been dropping billions to build out about as complete a cloud stack as you can get.  SoftLayer, the key piece, was just the start. BlueMix, an implementation of IBM’s Open Cloud Architecture, leverages Cloud Foundry to enable developers to rapidly build, deploy, and manage their cloud applications while tapping a growing ecosystem of available services and runtime frameworks, many of which are open source. IBM will provide services and runtimes into the ecosystem based on its already extensive and rapidly expanding software portfolio. BlueMix is the IBM PaaS offering that complements SoftLayer, its IaaS offering. Cloudant, the most recent acquisition, brings database as a service (DBaaS) to the stack. And don’t forget IBM Wave for z/VM, which virtualizes and manages Linux VMs, a critical cloud operation for sure. With this conglomeration of capabilities IBM is poised to offer something cloud-like to just about any organization. Plus, tying WebSphere and its other middleware products to SoftLayer bolsters the cloud stack that much more.

And don’t think IBM is going to stop here. DancingDinosaur expects to see more acquisitions, particularly when it comes to hybrid clouds and what IBM calls systems of engagement. Hybrid clouds, for IBM, link systems of engagement—built on mobile and social technologies where consumers are engaging with organizations—with systems of record, the main workloads of the System z and Power Systems, where data and transactions are processed.

DancingDinosaur intends to be at Edge 2014 where it expects to see IBM detailing a lot of its new infrastructure and demonstrating how to use it. You can register for Edge 2014 here until April 20 and grab a discount.

Follow DancingDinosaur on Twitter: @mainframeblog

The Next Generation of Mainframers

March 6, 2014

With seemingly every young person with any technology inclinations aiming to become the next WhatsApp and walk away with some of Facebook’s millions it is fair to wonder: Where is the next generation of mainframers going to come from and who are they going to be?

The answer: IBM is lining them up now. As the mainframe turns 50 you’ll have a chance to meet some of these up and coming mainframers as part of IBM’s 50th Mainframe Anniversary celebration in New York, April 8, when IBM announces winners of the World Championship round of its popular Master of the Mainframe competition.

According to IBM, the Championship is designed to assemble the best university students from around the globe who have demonstrated superior technical skills through participation in their regional IBM Master the Mainframe Contests. Out of the 20,000 students who have engaged in country-level Master the Mainframe Contests over the last three years, the top 44 students from 22 countries have been invited to participate in the inaugural IBM Master the Mainframe World Championship.

These students will spend the month of March working through the Systems of Engagement concept, an expansion of the traditional Systems of Record—core transaction systems—that have been the primary workload of mainframe computing. The students will deploy Systems of Record mainframe business applications written with Java and COBOL using DB2 for z/OS APIs to demonstrate how the Systems of Engagement concept takes full advantage of the mainframe’s advanced capabilities. In short, the mainframe is designed to support tomorrow’s most demanding complex workloads—big data, cloud, and mobile computing—and to do them all with the most effective enterprise-class security. The students will showcase their applications on April 7, 2014 in New York City, where judges will determine which student earns the distinction of “Master the Mainframe World Champion.”

Representing the United States are Mugdha Kadam from the University of Florida, Elton Cheng from the University of California San Diego, and Rudolfs Dambis from the University of Nevada Las Vegas. You can follow the progress of the competitors here.  After March 17 the site will include a leaderboard so you can follow your favorites. No rumors of betting pools being formed yet but it wouldn’t surprise DancingDinosaur.  Win or not, each competitor should be a prime candidate if your organization needs mainframe talent.

This is part of IBM’s longstanding System z Academic Initiative, which has been expanding worldwide and now encompasses over 64,000 students at more than 1000 schools across 67 countries. And now high school students are participating in the Master the Mainframe competition. Over 360 companies are actively recruiting these students, including Baldor, Dillards, JB Hunt, Wal-mart, Cigna, Compuware, EMC, Fidelity, JP Morgan Chase, and more.

Said Jeff Gill of VISA: “Discovering IBM’s Academic Initiative has been a critical success factor in building a lifeline to our future—a new base of Systems Engineers and Applications Developers who will continue to evolve our mainframe applications into flexible open enterprise solutions while maintaining high volume/high availability demands. Without the IBM Academic Initiative, perhaps we could have found students with aptitude—but participation in the Academic Initiative demonstrates a student’s interest in mainframe technology which, to us, translates to a wise long-term investment.” Gill is one of the judges of the Master the Mainframe World Championship.

Added Martin Kennedy of Citigroup: “IBM’s Master the Mainframe Contest offers a great resource to secure candidates and helps the company get critical skills as quickly as possible.”

The Master of the Mainframe Championship and even the entire 50th Anniversary celebration that will continue all year are not really IBM’s primary mainframe thrust this year. IBM’s real focus is on emphasizing the forward-moving direction of the mainframe. As IBM puts it: “By continually adapting to trends and evolving IT, we’re driving new approaches to cloud, analytics, security and mobile computing to help tackle challenges never before thought possible. The pioneering innovations of the mainframe all serve one mission—deliver game-changing technology that makes the extraordinary possible and improves the way the world works.”

DancingDinosaur covers the mainframe and other enterprise-class technology. Watch this blog for more news on the mainframe and other enterprise systems including Power, enterprise storage, and enterprise-scale cloud computing.

With that noted, please plan to attend Edge 2014, May 19-23 in Las Vegas. Being billed as an infrastructure and storage technology conference, it promises to be an excellent follow-on to last year’s Edge conference. DancingDinosaur will be there, no doubt hanging out in the bloggers lounge where everyone is welcome. Watch this blog for upcoming details on the most interesting sessions.

And follow DancingDinosaur on Twitter, @mainframeblog

How the 50 Year-Old Mainframe Remains Relevant

February 25, 2014

The mainframe turns 50 years old this year, and the many pundits and experts who predicted it would be long gone by now must be scratching their heads. Yes, it is still around and has acquired over 260 new accounts just since the zEnterprise launch. It also has shipped over 320 hybrid computing units (not to be confused with zBX chassis only) since the zBX was introduced and kicked off hybrid mainframe computing.

As for MIPS, although IBM experienced a MIPS decline last quarter, that decline follows the largest MIPS shipment in mainframe history a year ago, resulting in a two-year compound growth rate of +11%. (Mainframe sales follow the new product release cycle in a predictable pattern.) IBM brought out the latest System z release, the zEC12, faster than the mainframe’s historic release cycle. Let’s hope IBM repeats the quick turnaround with the next release.

Here’s what IBM is doing to keep the mainframe relevant:

  • Delivered steady price/performance improvements with each release. And with entry-level BC-class pricing and the System z Solution Edition programs, you can end up with a mainframe system that is as competitive as or better than x86-based systems while being more secure and more reliable out of the box.
  • Adopted Linux early, before it had gained the widespread acceptance it has today. Last year over three-quarters of the top 100 enterprises had IFLs installed. This year IBM reports a 31% increase in IFL MIPS. In at least two cases where DancingDinosaur recently interviewed IT managers, Linux on z was instrumental in bringing their shops to the mainframe.
  • Supported SOA, Java, web services, and cloud, mobile, and social computing, which continues to put the System z at the front of the hot trends. It also prominently plays with big data and analytics. Who ever thought that the mainframe would be interacting with RESTful APIs? Certainly not DancingDinosaur’s computer teacher back in the dark ages.
  • Continued delivery of unprecedented scalability, reliability, and security at a time when the volumes of transactions, data, workloads, and users are skyrocketing.  (IDC predicts millions of apps, billions of users, and trillions of things connected by 2020.)
  • Built a global System z ecosystem of tools and technologies to support cloud, mobile, big data/analytics, social and non-traditional mainframe workloads. This includes acquisitions like SoftLayer and CSL Wave to deliver IBM Wave for z/VM, a simplified and cost effective way to harness the consolidation capabilities of the IBM System z platform along with its ability to host the workloads of tens of thousands of commodity servers. The mainframe today can truly be a fully fledged cloud player.

And that just touches on the mainframe platform advantages. While others boast of virtualization capabilities, the mainframe comes 100% virtualized out of the box with virtualization at every level.  It also comes with a no-fail redundant architecture and built-in networking. 

Hybrid computing is another aspect of the mainframe that organizations are just beginning to tap.  Today’s multi-platform compound workloads are inherently hybrid, and the System z can manage the entire multi-platform workload from a single console.

The mainframe anniversary celebration, called Mainframe50, officially kicks off in April, but a report from Pulse 2014 this week suggests that Mainframe50 interest already is ramping up, with IBM jumping the gun by emphasizing how the z provides new ways never before thought possible to innovate while tackling challenges previously out of reach.

Pulse 2014, it turns out, offered 38 sessions on System z topics, of which 27 featured analysts or IBM clients. These sessions addressed key opportunities and challenges for today’s mainframe environments and the latest technology solutions for meeting them, including OMEGAMON, System Automation, NetView, GDPS, Workload Automation, Tivoli Asset Discovery for z/OS, and Cloud.

One session featured analyst Phil Murphy, Vice President and Principal Analyst at Forrester Research, discussing the critical importance of a robust infrastructure in a mixed mainframe/distributed cloud environment—which is probably the future most DancingDinosaur readers face—and how it can help fulfill the promise of cloud value in real time.

Another featured mainframe analyst Dot Alexander from Wintergreen Research who looked at how mainframe shops view executing cloud workloads on System z. The session focused on the opportunities and challenges, private and hybrid cloud workload environments, and the impact of scalability, standards, and security.

But the big celebration is planned for April 8 in NYC. There IBM promises to make new announcements, launch new research projects, and generally focus on the mainframe’s future.  A highlight promises to be Showcase 20, which will focus on 20 breakthrough areas referred to by IBM as engines of progress.  The event promises to be a sellout; you should probably talk to your System z rep if you want to attend. And it won’t stop on April 8. IBM expects to continue the Mainframe50 drumbeat all year with new announcements, deliverables, and initiatives. Already in February alone IBM has made a slew of acquisitions and cloud announcements that will touch every mainframe shop with any cloud interests (which should be every mainframe shop at one point or another).

In coming weeks stay tuned to DancingDinosaur for more on Mainframe50. Also watch this space for details of the upcoming Edge 2014 conference, with an emphasis on infrastructure innovation coming to Las Vegas in May.

Please follow DancingDinosaur on Twitter, @mainframeblog

2014 to be Landmark Year for the Mainframe

February 10, 2014

The official announcement is still a few weeks away and the big event won’t take place until April, but the Internet is full of items about the 50th anniversary of the mainframe. Check some out here, here, and here.

In 1991 InfoWorld editor Stewart Alsop predicted that on March 15, 1996 an InfoWorld reader would unplug the last mainframe. Alsop wrote many brilliant things about computing over the years, but this statement will forever stand out as one of the least informed, as subsequent events amply demonstrated. That statement, however, later became part of the inspiration for the name of this blog, DancingDinosaur. The mainframe did not march inexorably to extinction like the dinosaur, as many, many pundits predicted.

It might have, but IBM made some smart moves over the years that ensured the mainframe’s continued relevance for years to come.  DancingDinosaur marks 2000 as a key year in the ongoing relevance of the mainframe; that was the year IBM got serious about Linux on the System z. It was not clear then that Linux would become the widely accepted mainstream operating system it is today.  Last year over three-quarters of the top 100 enterprises had IFLs installed.  There is no question that Linux on the System z has become mainstream.

But it wasn’t Linux alone that ensured the mainframe’s continued relevance. Java enabled the development of distributed types of workloads on the System z, further advanced by WebSphere on z and SOA on z. Today’s hottest trends—cloud, big data/analytics, mobile, and social—can be handled on the z too: cloud computing on z, big data/analytics/real-time analytics on z, mobile computing on z, and even social on z.

Finally, there is the Internet of things. This is a natural for the System z, especially if you combine it with MQTT, an open source transport protocol that enables lightweight pub/sub messaging across constrained and mobile networks. With the z you probably will also want to combine it with the Really Small Message Broker (RSMB). Anyway, this will be the subject of an upcoming DancingDinosaur piece.
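To give a flavor of MQTT’s pub/sub model, here is a simplified sketch of its topic matching rules—`+` matches exactly one topic level, `#` matches all remaining levels. It is not tied to any particular broker and skips filter-syntax validation; the topic names are made-up examples.

```python
# Simplified sketch of MQTT pub/sub topic matching, the mechanism a broker
# (such as RSMB) uses to route a published message to subscribers:
# '+' matches one topic level, '#' matches all remaining levels.
def topic_matches(filter_str, topic):
    f_parts = filter_str.split("/")
    t_parts = topic.split("/")
    for i, f in enumerate(f_parts):
        if f == "#":              # multi-level wildcard: matches the rest
            return True
        if i >= len(t_parts):     # filter is longer than the topic
            return False
        if f != "+" and f != t_parts[i]:
            return False
    return len(f_parts) == len(t_parts)

print(topic_matches("sensors/+/temp", "sensors/zec12/temp"))   # True
print(topic_matches("sensors/#", "sensors/zec12/temp/now"))    # True
print(topic_matches("sensors/+/temp", "sensors/zec12/power"))  # False
```

A device publishes once to a topic like `sensors/zec12/temp`, and the broker fans the message out to every subscriber whose filter matches—which is what keeps the protocol lightweight for mobile and IoT networks.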

The net net:  anything you can do on a distributed system you can do on the System z and benefit from better resiliency and security built in. Even when it comes to cost, particularly TCO and cost per workload, between IBM’s deeply discounted System z Solution Editions and the introduction of the zBC12, which delivers twice the entry capacity for the same low cost ($75k) as the previous entry-level machine (z114), the mainframe is competitive.

Also coming up is Edge 2014, which focuses on Infrastructure Innovation this year. Please plan to attend, May 19-23 in Las Vegas.  Previous Edge conferences were worthwhile and this should be equally so. Watch DancingDinosaur for more details on the specific Edge programs.

And follow DancingDinosaur on Twitter: @mainframeblog

A Maturity Model for the New Mainframe Normal

February 3, 2014

Last week Compuware introduced its new mainframe maturity model designed to address what is emerging as the new mainframe normal. DancingDinosaur played a central role in the creation of this model.

A new mainframe maturity model is needed because the world of the mainframe is changing rapidly.  Did your data center team ever think they would be processing mainframe transactions from mobile phones? Did your development team ever imagine they would be architecting compound workloads spanning the mainframe and multiple distributed systems running both Windows and Linux? What about the prospect of your mainframe serving up millions or even billions of customer-facing transactions a day?  But that’s the mainframe story today.

Even IBM, the most stalwart of the mainframe vendors, repeats the driving trends—cloud, mobile, social, big data, analytics, Internet of things—like a mantra. As the mainframe celebrates its 50th anniversary year, it is fitting that a new maturity model be introduced because there is, indeed, a new mainframe normal rapidly evolving.

Things certainly are changing in ways most mainframe data center managers wouldn’t have anticipated 10 years ago, probably not even five years ago. Of those, perhaps the most disconcerting change for traditional mainframe shops is the need to accommodate distributed, open systems (systems of engagement) alongside the traditional mainframe environment (systems of record).

Since the rise of distributed systems two decades ago, there has been both a technical and a cultural gap between the mainframe and distributed teams. The emergence of technologies like hybrid computing, middleware, and the cloud has gone far to alleviate the technical gap. The cultural gap is not so amenable to immediate fixes. Still, navigating that divide is no longer optional; it has become a business imperative.  Crossing the gap is what the new maturity model addresses.

Many factors contribute to the gap, the largest of which appears to be that most organizations still approach the mainframe and distributed environments as separate worlds. At one large financial company, for example, distributed developers reportedly view the mainframe as simply a source of MQ messages.

The new mainframe maturity model can be used as a guide to bridging both the technical and cultural gaps.  Specifically, the new model defines five levels of maturity. In the process, it incorporates distributed systems alongside the mainframe and recognizes the new workloads, processes and challenges that will be encountered. The five levels are:

  1. Ad-hoc:  The mainframe runs core systems and applications; these represent the traditional mainframe workloads and the green-screen approach to mainframe computing.
  2. Technology-centric:  An advanced mainframe is focused on ever-increasing volumes, higher capacity, and complex workload and transaction processing while keeping a close watch on MIPS consumption.
  3. Internal services-centric:  The focus shifts to mainframe-based services through a service delivery approach that strives to meet internal service level agreements (SLAs).
  4. External services-centric:  Mainframe and non-mainframe systems interoperate through a services approach that encompasses end-user expectations and tracks external SLAs.
  5. Business revenue-centric:  Business needs and the end-user experience are addressed through interoperability with cloud and mobile systems, services- and API-driven interactions, and real-time analytics to support revenue initiatives revolving around complex, multi-platform workloads.

Complicating things is the fact that most IT organizations will likely find themselves straddling different maturity levels. For example, although many have achieved levels 4 and 5 when it comes to technology, the IT culture remains at level 1 or 2. Such disconnects mean IT still faces many obstacles preventing it from reaching optimal levels of service delivery and cost management. And this doesn’t just impact IT; there can be ramifications for the business itself, such as decreased customer satisfaction and slower revenue growth.

DancingDinosaur’s hope is that as the technical cultures come closer through technologies like Java, Linux, SOA, REST, hybrid computing, and mobile, organizations will begin to close the cultural gap too.

Follow DancingDinosaur on Twitter: @mainframeblog

SOA Software Enables New Ways to Tap Mainframe Code

January 30, 2014

Is the core enterprise processing role handled by the mainframe enough? Probably not. Enterprises today often run different types of workloads built using different app dev styles: compound applications encompassing the mainframe and a variety of distributed systems (Linux, UNIX, Windows), with different programming models, data schemas, services, and more. Pieces of these workloads may run on a public cloud, a partner’s private cloud, and a host of other servers, pulled together at runtime to support the particular workload.  Mainframe shops should want to play a big role in this game too.

“Mainframe applications still sit at the heart of enterprise operations, but mainframe managers also want to take advantage of these applications in new ways,” says Brent Carlson, SVP at SOA Software. The primary way of doing this is through SOA services, and mainframes have been playing in the SOA arena for years. But it has never been as seamless, easy, and flexible as it should be. And as social, mobile, and other new types of workloads get added to the services mix, the initial mainframe SOA approach has started to show its age. (Over the years, DancingDinosaur has written considerably on mainframe SOA and done numerous SOA studies.)

That’s why DancingDinosaur welcomes SOA Software’s Lifecycle Manager to the mainframe party.  It enables what the company calls a “RESTful mainframe” through governance of the REST APIs that front z/OS-based web services. This amounts to a unified governance platform for managing both APIs and existing SOA assets. As Carlson explained, applying development governance to mainframe assets helps mainframe shops overcome the architectural challenges inherent in bringing legacy systems into the new API economy, where mobile apps need rapid, agile access to backend systems.

The company is aiming to make Lifecycle Manager into the system-of-record for all enterprise assets including mainframe-based SOAP services and RESTful APIs that expose legacy software functionality. The promise: seamless access to service discovery and impact analysis whether on mainframe, distributed systems, or partner systems. Both architects and developers should be able to map dependencies between APIs and mainframe assets at the development stage and manage those APIs across their full lifecycles.
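The impact-analysis promise is easy to picture. Here is a minimal Python sketch, assuming a simple catalog that records which services and APIs depend on which mainframe assets; all names are invented for illustration and are not Lifecycle Manager’s actual data model:

```python
# Hypothetical dependency catalog: each API/service -> the assets it depends on.
DEPENDS_ON = {
    "mobile-balance-api": ["acct-soap-service"],
    "acct-soap-service": ["ACCT-COPYBOOK", "CICS-ACCT-PGM"],
    "branch-report-api": ["ACCT-COPYBOOK"],
}

def impacted_by(asset: str) -> set:
    """Return every API or service transitively affected by changing `asset`."""
    impacted = set()
    changed = True
    while changed:                  # fixed-point: propagate until nothing new
        changed = False
        for svc, deps in DEPENDS_ON.items():
            if svc not in impacted and (asset in deps or impacted & set(deps)):
                impacted.add(svc)
                changed = True
    return impacted

# Changing the copybook ripples up through the SOAP service to the mobile API:
print(sorted(impacted_by("ACCT-COPYBOOK")))
```

The point of registering mainframe assets in a system-of-record is exactly this kind of query: a developer can see, before touching a copybook or program, which downstream APIs will feel the change.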

Lifecycle Manager integrates with SOA Software’s Policy Manager to work either top down or bottom up.  The top-down approach relies on a service wrapping of existing mainframe programs; think of it as the WSDL-first approach of designing the web services first and then developing the mainframe programs that implement them.  The bottom-up approach starts with the copybook.  Either way, the process is automated and intended to be seamless. It also promises to guide services developers on best practices like encryption, assign and enforce correct policies, and more.
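To picture the bottom-up, copybook-first path, here is a minimal Python sketch of the kind of mapping such tooling automates: a fixed-width mainframe record, laid out the way a COBOL copybook would define it, converted into a REST-friendly structure. The copybook, field names, and layout are hypothetical, invented only to illustrate the idea:

```python
# A (hypothetical) COBOL copybook like
#   01 CUSTOMER-REC.
#      05 CUST-ID    PIC 9(6).
#      05 CUST-NAME  PIC X(20).
#      05 BALANCE    PIC 9(7)V99.
# defines fixed-width fields; a copybook-first tool generates the equivalent
# JSON mapping automatically. Layout: (field, offset, length, converter).
LAYOUT = [
    ("cust_id", 0, 6, int),
    ("cust_name", 6, 20, lambda s: s.strip()),
    ("balance", 26, 9, lambda s: int(s) / 100),  # PIC 9(7)V99: implied decimal
]

def record_to_json_dict(record: str) -> dict:
    """Map one fixed-width mainframe record to a REST-friendly dict."""
    return {name: conv(record[off:off + length])
            for name, off, length, conv in LAYOUT}

raw = "000042John Q Public       000123456"
print(record_to_json_dict(raw))
```

The bottom-up tool’s value is doing this translation, and the reverse one, behind a governed API, so a distributed developer never sees the fixed-width record at all.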

“Our point: automate whatever we can, and guide developers into good practices,” said Carlson.  In the process, Lifecycle Manager simplifies the task of exposing mainframe capabilities to a broader set of applications without interfering with mainframe developers.  To distributed developers the mainframe is just another service endpoint accessed as a service or API.  Nobody has to learn new things; it’s just a browser-based IDE using copybooks.

For performance, the Lifecycle Manager-based runtime environment is written in assembler, which makes it fast while minimizing MIPS consumption. It also comes with the browser-based IDE, copybook tool, and import mappings.

The initial adopters have come from financial services and the airlines.  The expectation is that usage will expand beyond that as mainframe shops and distributed developers seek to leverage core mainframe code for a growing array of workloads that weren’t on anybody’s radar screen even a few years ago.

There are other ways to do this on the mainframe, starting with basic SOA and web services tools and protocols like WSDL. Many mainframe SOA efforts leverage CICS, and IBM offers additional tools, most recently SoftLayer, that address the new app dev styles.

This is healthy for mainframe data centers. If nothing else SOA- and API-driven services workloads that include the mainframe help lower the cost per workload of the mainframe. It also puts the mainframe at the center of today’s IT action.

Follow DancingDinosaur on Twitter: @mainframeblog

