Posts Tagged ‘Java’

IBM Continues to Bolster Bluemix PaaS

September 10, 2015

In the last 10 years the industry, led by IBM, has gotten remarkably better at enabling nearly code-free development. This matters given how critical app development has become: today it is impossible to launch any product without sufficient app dev support. At a minimum you need a mobile app and maybe a few microservices. To that end, IBM has spent the summer, starting in May, introducing a series of Bluemix enhancements. Find them here and here and here and here. DancingDinosaur, at best a mediocre programmer, hasn’t written any code in decades, but in this new coding environment he has started to get the urge to participate in a hackathon. Doesn’t that (below) look like fun?


IBM’s Bluemix Garage in Toronto (click to enlarge)

The essential role of software today cannot be overstated. Even companies introducing non-technical products have to support them with apps and digital services that must be continually refreshed. When IoT really starts to ramp up, bits and pieces of code will be needed everywhere to handle the disparate pieces, get everything to interoperate, collect the data, and then use or analyze it and initiate the next action.

Bluemix, a cloud-based PaaS, comes as close to an all-in-one Swiss army knife development and deployment platform for today’s kind of applications as you will find. Having only played around with a demo, DancingDinosaur found it about as intuitive as an enterprise-class product can get.

The most recent of IBM’s summer Bluemix announcements promises more flexibility to integrate Java-based resources into Bluemix, offering a set of services that integrate Java resources into cloud-based applications more seamlessly. For instance, according to IBM, it is now possible to test and run applications in Bluemix with Java 8. Additionally, among other improvements, the jsp-2.3, el-3.0, and jdbc-4.1 Liberty features, previously in beta, are now production-ready. Plus, Eclipse Tools for Bluemix now includes JavaScript debugging, support for Node.js applications, Java 8 for the Liberty for Java runtime, support for the latest Eclipse Mars release, and improved handling of trusted self-signed certificates. Incremental publish support for Java EE applications also has been expanded to handle web fragment projects.
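To make that concrete, here is a minimal sketch of the kind of Java 8 code, lambdas and streams, that can now be tested and run on Bluemix’s Liberty runtime. The service names in the list are hypothetical placeholders, not a statement of what any given catalog entry is called.

    import java.util.Arrays;
    import java.util.List;
    import java.util.stream.Collectors;

    public class Java8Smoke {
        public static void main(String[] args) {
            // Hypothetical service names, for illustration only
            List<String> services = Arrays.asList("sqldb", "cloudant", "mqlight", "watson");
            // Java 8 lambdas and streams, newly runnable on the Liberty runtime
            String bound = services.stream()
                    .filter(s -> s.length() > 5)
                    .map(String::toUpperCase)
                    .collect(Collectors.joining(", "));
            System.out.println("Bound services: " + bound);
        }
    }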

In mid-August IBM announced streaming analytics and data warehouse services on Bluemix. These should enable developers to expand the capabilities of their applications and give users a more robust cloud experience by integrating data analytics and visualization seamlessly into their apps. Specifically, according to IBM, a new streaming analytics capability entered open beta; the service can analyze data instantaneously while scaling to thousands of sources on the cloud. IBM also added MPP (massively parallel processing) capabilities to enable faster query processing and overall scalability. The announcement also introduced built-in Netezza analytics libraries integrated with Watson Analytics, and more.

Earlier in August, IBM announced the opening of the Bluemix Garage in Toronto (pictured above), the latest in a series of coding workspaces IBM intends to open worldwide. Next up appear to be Nice, France and Melbourne, Australia later this year. According to IBM, Bluemix Garages create a bridge between the scale of enterprises and the culture of startups by establishing physical collaboration spaces housed in the heart of thriving entrepreneurial communities around the world. Toronto marks the third Bluemix Garage. It is located at the DMZ at Ryerson University, described as the top-ranked university-based incubator in Canada. Experts there will mentor the growing numbers of developers and startups in the region to create the next generation of cloud apps and services using IBM’s Bluemix.

Members of the Toronto Bluemix Garage include Tangerine, a bank based in Canada that is using Bluemix to implement its mobile strategy. Through the IBM Mobile Quality Assurance for Bluemix service, Tangerine gathers customer feedback and actionable insight on its mobile banking app, effectively streamlining its implementation and development processes.

Finally, back in May IBM introduced new Bluemix Services to help developers create analytics-driven cloud applications. Bluemix, according to IBM, is now the largest Cloud Foundry deployment in the world. And the services the company announced promise to make it easier for developers to create cloud applications for mobile, IoT, supply chain analytics, and intelligent infrastructure solutions. The new capabilities will be added to over 100 services already available in the Bluemix catalog.

At the May announcement, IBM reported bringing more of its own technology into Bluemix, including:

  • Bluemix API Management, which allows developers to rapidly create, deploy, and share large-scale APIs and provides a simple, consumable way of controlling critical APIs that is not possible with simpler connector services
  • New mobile capabilities available on Bluemix for the IBM MobileFirst Platform, which provide the ability to develop location-based mobile apps that connect insights from digital engagement and physical presence

It also announced a handful of ecosystem and third-party services being added to Bluemix, including several that will facilitate working with .NET capabilities. In short, they will enable Bluemix developers to take advantage of Microsoft development approaches, which should make it easier to integrate multiple mixed-platform cloud workloads.

Finally, as a surprise note at the end of the May announcement, IBM added that the company’s total cloud revenue (covering public, private, and hybrid engagements) was $7.7 billion over the 12 months ending in March 2015, having grown more than 60% in the first quarter of 2015. Hope you’ve noticed that IBM is serious about putting its efforts into the cloud and openness. And it’s starting to pay off.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing here.


IBM LinuxONE and Open Mainframe Project Expand the z System

August 20, 2015

Meet the new IBM z System: LinuxONE Emperor (named after the emperor penguin). It is a z13 running only Linux. Check out the full announcement here.


Courtesy of IBM, LinuxONE Emperor, the newest z System

DancingDinosaur is excited by several aspects of this announcement:  IBM is establishing, in conjunction with the Linux Foundation, an Open Mainframe Project; the company is breaking with its traditional mainframe pricing model; it also is putting KVM and Ubuntu on the machine; and it is offering a smorgasbord of app-dev options, including some of the sexiest in the industry today. DancingDinosaur never believed it would refer to a mainframe as sexy (must be time to retire).

Along with LinuxONE Emperor, IBM announced an entry-level dedicated Linux machine, the LinuxONE Rockhopper. (BTW, notice the new playfulness in IBM’s product naming.) Rockhopper appears to be very similar to what IBM used to call a Business Class z, although IBM has stepped away from that designation; the closest you may get to a z13 business class machine may be LinuxONE Rockhopper. Rockhopper, according to IBM, is designed for clients and emerging markets seeking the speed, security, and availability of the mainframe in a smaller package.

The biggest long-term impact of the announcement may come from the Open Mainframe Project. As with many of its community initiatives, IBM is starting by seeding the open community with z code, in effect creating the beginning of an open z System machine. IBM describes this as the largest single contribution of mainframe code from IBM to the open source community. A key part of the contribution is the z’s predictive analytics, which constantly monitor for unusual system behavior and help prevent issues from turning into failures. In effect, IBM is handing zAware over to the open source community. Having already announced intentions early this year to port zAware to Linux on z, IBM might as well make it fully open. The code, notes IBM, can be used by developers to build similar sense-and-respond resiliency capabilities for other systems.

The Open Mainframe Project, being formed with the Linux Foundation, will involve a collaboration of nearly a dozen organizations across academia, government, and corporate sectors to advance development and adoption of Linux on the mainframe. It appears that most of the big mainframe ISVs have already signed on. DancingDinosaur, however, expressed concern that this approach opens the possibility of the underlying functionality branching between z and Linux versions. IBM insists that won’t happen since the innovations would be implemented at the software level, safely insulated from the hardware. And should an innovation emerge that makes sense for the z System, perhaps around the zAware capabilities, the company is prepared to bring it back to the core z.

The newly announced pricing should also present an interesting opportunity for shops running Linux on z. As IBM notes, new financing models for the LinuxONE portfolio provide flexibility in pricing and resources that allow enterprises to pay for what they use and scale up quickly when their business grows. Specifically, for IBM hardware and software, the company is offering a pay-per-use option in the form of a fixed monthly payment with costs scaling up or down based on usage. It also offers per-core pricing with software licenses for designated cores; in that case you can order what you need and decrease licenses or cancel on 30 days’ notice. Or you can rent a LinuxONE machine monthly with no upfront payment. At the end of the 36-month rental (you can return the hardware after one year) you choose to return, buy, or replace. Having spent hours attending mainframe pricing sessions at numerous IBM conferences, DancingDinosaur finds this refreshingly straightforward. IBM has not yet provided any prices to analysts, so whether this actually is a bargain remains to be seen. But at least you have pricing option flexibility you never had before.

The introduction of support for both KVM and Ubuntu on the z platform opens intriguing possibilities. Full disclosure: DancingDinosaur was an early Fedora adopter because he could get it to run on a memory-challenged, antiquated laptop. With the LinuxONE announcement, Ubuntu has been elevated to a fully z-supported Linux distribution. Together IBM and Canonical are bringing a distribution of Linux incorporating Ubuntu’s scale-out and cloud expertise to the IBM z Systems platform, further expanding the reach of both. Ubuntu combined with KVM should make either LinuxONE machine very attractive for OpenStack-based hybrid cloud computing that may involve thousands of VMs. Depending on how IBM ultimately prices things, this could turn into an unexpected bargain for Linux on z data centers that want to save money by consolidating x86 Linux servers, thereby reducing the data center footprint and cutting energy costs. LinuxONE Emperor can handle up to 8,000 virtual servers in a single system, or tens of thousands of containers.

Finally, LinuxONE can run the sexiest app-dev tools using any of the hottest open technologies, specifically:

  • Distributions: Red Hat, SuSE and Ubuntu
  • Hypervisors: PR/SM, z/VM, and KVM
  • Languages: Python, Perl, Ruby, Rails, Erlang, Java, Node.js
  • Management: WAVE, IBM Cloud Manager, UrbanCode, OpenStack, Docker, Chef, Puppet, VMware vRealize Automation
  • Database: Oracle, DB2 LUW, MariaDB, MongoDB, PostgreSQL
  • Analytics: Hadoop, BigInsights, DB2 BLU, and Spark

And run the results however you want: single platform, multi-platform, on-prem and off-prem, or multiple mixed cloud environments with a common toolset. Could a combination of LinuxONE alongside a conventional z13 be the mainframe data center you really want going forward?

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing here.

IBM Continues Open Source Commitment with Apache Spark

June 18, 2015

If anyone believes IBM’s commitment to open source is a passing fad, forget it. IBM has invested billions in Linux, in open Power through the OpenPOWER Foundation, and more. Its latest move is a major commitment to Apache Spark, a fast, general-purpose open source cluster computing system for big data.


Courtesy of IBM: developers work with Spark at Galvanize Hackathon

As IBM sees it, Spark brings essential advances to large-scale data processing. Specifically, it dramatically improves the performance of data-dependent apps and is expected to play a big role in the Internet of Things (IoT). In addition, it radically simplifies the process of developing intelligent apps, which are fueled by data. It does so by providing high-level APIs in Scala, Java, and Python, and an optimized engine that supports general computation graphs for data analysis. It also supports a rich set of higher-level tools, including Spark SQL for SQL and DataFrames, MLlib for machine learning, GraphX for graph processing, and Spark Streaming for stream processing.
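For a feel of those high-level APIs, here is a minimal sketch using Spark’s Java API with Java 8 lambdas; the input path and log format are placeholders for illustration.

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.api.java.JavaSparkContext;

    public class LogScan {
        public static void main(String[] args) {
            SparkConf conf = new SparkConf().setAppName("LogScan").setMaster("local[*]");
            JavaSparkContext sc = new JavaSparkContext(conf);
            // Placeholder input path; point this at real data
            JavaRDD<String> lines = sc.textFile("transactions.log");
            // The lambda expresses the computation; Spark distributes it across the cluster
            long failures = lines.filter(line -> line.contains("FAILED")).count();
            System.out.println("Failed transactions: " + failures);
            sc.stop();
        }
    }

The same few lines scale from a laptop (the local[*] master) to a large cluster just by changing the master setting, which is much of Spark’s appeal.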

IBM is contributing its breakthrough IBM SystemML machine learning technology to the Spark open source ecosystem. Beyond the advances in performance and developer simplicity already noted, maybe Spark’s biggest advantage is that it can handle data coming from multiple, disparate sources.

What IBM likes about Spark is that it is agile, fast, and easy to use. IBM also likes that it is open source, which ensures it is improved continuously by a worldwide community. Those are also among the main reasons mainframe and Power Systems data centers should pay attention to Spark: it will make it easier to connect applications to data residing in your data center. If you haven’t yet noticed an uptick in mobile transactions coming into your data center, they will be coming, and they benefit from Spark. And if you look out just a year or two, expect to see IoT applications adding to and needing to combine all sorts of data, much of it ending up on the mainframe or Power Systems in one form or another. So make sure Spark is on your radar screen.

Over the course of the next few months, IBM scientists and engineers will work with the Apache Spark open community to accelerate access to advanced machine learning capabilities and help drive speed-to-innovation in the development of smart business apps. By contributing SystemML, IBM hopes data scientists will iterate faster to address the changing needs of business and to enable a growing ecosystem of app developers who will apply deep intelligence to everything.

To ensure that happens, IBM will commit more than 3,500 researchers and developers to work on Spark-related projects at more than a dozen labs worldwide, and open a Spark Technology Center in San Francisco for the Data Science and Developer community to foster design-led innovation in intelligent applications. IBM also aims to educate more than 1 million data scientists and data engineers on Spark through extensive partnerships with AMPLab, DataCamp, MetiStream, Galvanize, and Big Data University MOOC (Massive Open Online Course).

Of course, Spark won’t be the last of the tools to expedite the latest app dev. With IoT just beginning to gain widespread interest, expect a flood of tools to expedite developing IoT data-intensive applications and more tools to facilitate connecting all the coming connected devices, estimated to number in the tens of billions within a few years.

DancingDinosaur applauds IBM’s decade-plus commitment to open source and its willingness to put real money and real code behind it. That means the IBM z System mainframe, the POWER platform, Linux, and the rest will be around for some time. That’s good; DancingDinosaur is not quite ready to retire.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing here.

IBM zSystem for Social—Far From Forgotten at Edge2015

May 28, 2015

Dexter Doyle and Chris Gamin (z System Middleware) titled their session at Edge2015 IBM z Systems: The Forgotten Platform in Your Social Business. They were only half joking. As systems of engagement play bigger roles in the enterprise, the z is not quite as forgotten as it may once have been. In fact, the z runs IBM’s own deployment of IBM Connections, the company’s flagship social business product.

Doyle used the graphic below (copyright John Atkinson, Wrong Hands) to make the point that new tools replace familiar conventional tools in a social business world.


 (copyright John Atkinson, Wrong Hands, click to enlarge)

Looks almost familiar, huh? Social business is not so radical; the elements of social business have been with us all along. It’s not exactly a one-to-one mapping, but Twitter and Pinterest replace post-it notes, LinkedIn replaces the Rolodex, Instagram replaces the photos on your desk, and more. Social business done right with the appropriate tools enables efficiency, Doyle observed. You don’t see the z in this picture, but it is there, connecting all the dots in the social sphere.

Many traditional mainframe data centers are struggling to come to grips with social business even as mobile and social workloads increasingly flow through the z. “The biggest thing with social is the change in culture,” said Doyle in his Forgotten Platform session. You end up using different tools to do business in a more social way. Even email appears antiquated in social business.

For data centers still balking at the notion of social business, Doyle noted that by 2016, 50% of large organizations will have internal Facebook-like social networks, a widely reported Gartner finding, and 30% of these will be considered as essential as email and telephones are today. The message: social business is real and z data centers should be a big part of it.

So what parts of social business will engage with the z? Doyle suggested five to start:

  1. Social media analytics
  2. Customer sentiment
  3. Customer and new market opportunity identification
  4. Identification of illegal or suspicious activities
  5. Employee and customer experiences

And the z System’s role? Same as it has always been:

  • Build an agile approach to deliver applications
  • Make every transaction secure
  • Use analytics to improve outcomes at every moment

These are things every z data center should be good at. To get started with social business on z, visit the IBM Connections webpage here. There happens to be an offer for a 60-day free trial (it’s a cloud app) here. Easy and free; at the least it should be worth a try.

IBM Connections delivers a handful of social business capabilities. The main components are home, profiles, communities, and social analytics. Other capabilities include blogs, wikis, bookmarks, and forums for idea generation and sharing. You can use the activities capability to organize your work and that of a team, and another lets you vote on ideas. Finally, it brings a media library, content management capabilities, and file management.

Along with Connections you also might want to deploy WebSphere and Java, if you haven’t already. Then, if you are serious about building out a social business around the z, you’ll want to check out Bluemix and MobileFirst. There is already an IBM Redbook out for mobile app dev on the z13. The idea, of course, is to create engaging mobile and social business apps with the z as the back end.
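As a rough, hedged illustration of that pattern, here is a minimal JAX-RS resource of the kind such a mobile or social app might call. The path, class name, and stubbed JSON are hypothetical; a real version would reach into CICS or DB2 on the z instead of returning a canned response.

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.PathParam;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.MediaType;

    @Path("/accounts")
    public class AccountResource {

        @GET
        @Path("/{id}/balance")
        @Produces(MediaType.APPLICATION_JSON)
        public String balance(@PathParam("id") String id) {
            // Stubbed response; a real implementation would query the z back end
            return "{\"account\":\"" + id + "\",\"balance\":1234.56}";
        }
    }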

The biggest payoff from social business on the z comes when you add analytics, especially real-time analytics. DancingDinosaur attended a session on that topic at Edge2015 and will be taking it up in a coming post.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing here.

Compuware Aims for Mainframe Literacy in CIOs

November 13, 2014

Many IT professionals, especially younger ones, are clueless about the mainframe. Chris O’Malley, president of the mainframe business at Compuware, has met CIOs who are versed in everything about IT and have seemingly done everything there is with computers, but “they are not literate about the mainframe.” That means the mainframe never comes to mind. IBM could give away a zEnterprise for free, which it comes close to doing today through the System z Solution Edition program, and these CIOs would still ignore it. O’Malley wants to address that.


In response, Compuware is following the path of the IBM System z Academic Initiative, though without the extensive global involvement of colleges and universities, with a program called Mainframe Excellence 2025, which it describes as a generational call for strategic platform stewardship. “We’re also trying to debunk a lot of issues around the mainframe,” O’Malley continues.


Chris O’Malley, Pres. Mainframe, Compuware

Compuware refers to Mainframe Excellence 2025 as a manifesto, something of a call to arms for millennials to storm the IT gates and liberate IT management from enslavement to x86 computing. Somehow DancingDinosaur doesn’t see it happening exactly that way; it envisions coexistence and synergy.

Most of the Mainframe Excellence document goes over ground DancingDinosaur and many others have covered before. It is delightful, however, to see others refreshing the arguments. And, the document adds some interesting data. For instance, over 1.15 million CICS transactions are executed on System z every second of every day! That’s more than all Google searches, YouTube views, Facebook likes, and Twitter tweets combined.

It also pays homage to what it refers to as the mainframe’s culture of excellence. It characterizes this culture by rigorous adherence to a standard of excellence demonstrably higher than that associated with other platforms, notably x86. IT organizations actually expect, accept, and plan for problems and patches in other platforms (think Microsoft Patch Tuesday). Mainframe professionals, on the other hand, have zero-tolerance for downtime and system failures and the mainframe generally lives up to those high expectations.

Ironically, the document points out that the culture of excellence has created a certain chasm between mainframe professionals and the rest of IT. In fact, this ingrained zero-failure culture of the mainframe community—including both vendors and enterprise IT staffs—can sometimes put it at odds with the very spirit of innovation that allows the mainframe to deliver the repeated advances in price/performance and new capabilities that consistently produce tremendous value.

Compuware concludes its report with an action checklist:

  • Fully inventory existing mainframe data, applications (including business rules), capacity, utilization/MSUs, and management tools: a veritable trove of value embedded in mainframe code and business rules.
  • Build a fact-based skills plan with a realistic timeline.
  • Ramp up current and road-mapped mainframe capabilities.
  • Rightsize investments in mainframe application stewardship.
  • Institute an immediate moratorium on short-term cost-cutting that carries long-term negative consequences.
  • Combat denial and hype regarding non-mainframe platform capabilities, costs, and risks.

And Compuware’s final thought should give encouragement to all those who must respond to the mainframe-costs-too-much complaint: IT has a long history of underestimating real TCO and marginal costs for new platforms while overestimating their benefits. A more sober assessment of these platforms will make the strategic value and economic advantages of the mainframe much more evident in comparison.

Compuware certainly is on the right track with Mainframe Excellence 2025. DancingDinosaur would like, however, to see the company coordinate its efforts with the System z Academic Initiative, the Master the Mainframe effort, and the like.

DancingDinosaur is Alan Radding, a veteran IT writer/analyst. You can follow DancingDinosaur on Twitter, @mainframeblog. Also check out my other IT writing here.

Mainframe Appeal Continues in 9th BMC Survey

October 30, 2014

With most of the over 1,100 respondents (91%) reporting that the mainframe remains a viable long-term platform for them, and a clear majority (60%) expecting to increase MIPS due to the normal growth of legacy applications and new application workloads, the z remains well entrenched. Check out the results for yourself here.

Maybe even more reassuring: almost half the respondents reported that they expect the mainframe to attract and grow new workloads. Most likely these will be Java and Linux workloads, but one-third of the respondents listed cloud as a priority, jumping it to sixth on the list of mainframe priorities. Mobile was cited as a priority by 27% of the respondents, followed by big data at 26%.


Apparently IBM’s steady promotion of cloud, mobile, and big data for the z over the past year is working. At Enterprise2014 IBM even made big news with real-time analytics and Hadoop on the z, along with a slew of related announcements.

That new workloads like cloud, mobile, and big data made it into the respondents’ top 10 IT priorities for the year didn’t surprise Jonathan Adams, BMC vice president/general manager for z solutions.  The ease of developing in Java and its portability make it a natural for new workloads today, he noted.

In the survey, IT cost reduction/optimization tops the list of IT priorities for 2014 by a large margin (70% of respondents), followed by application availability (52%). Rounding out the top five are application modernization (48%), data privacy (47%), and business/IT alignment (44%). Outsourcing finished out the top 10 priorities at 16%.

When asked to look ahead in terms of MIPS growth, the large majority of respondents expected growth to continue or at least remain steady. Only 9% expected MIPS to decline, and 6% expected to eliminate the mainframe. This number has remained consistent for years, noted Adams. DancingDinosaur periodically checks in with shops that announce plans to eliminate their mainframe and finds that a year later many have barely made any progress.

The top mainframe advantages shouldn’t surprise you: availability (53%), security (51%), centralized data serving (47%), and transaction throughput (42%). More interesting results emerged when the respondents addressed new workloads. The mainframe’s cloud role includes data access (33%), cloud management from Linux on z (22%), and dynamic test environments via self-service (15%). Surprisingly, when it comes to big data analytics, 34% report that the mainframe acts as their analytics engine. This wasn’t supposed to be the case, at least not until BigInsights and Hadoop on z gained more traction.

Meanwhile, 28% say they move data off platform for analytics, and 14% report they federate mainframe data to an off-platform analytics engine. Yet, more than 81% now incorporate the mainframe into their Big Data strategy, up from 70% previously. The non-finance industries are somewhat more likely to use the mainframe as the big data engine, BMC noted. Those concerned with cost should seriously consider doing their analytics on the z, where the data is. It is costly to keep moving data around.

In terms of mobility, making existing applications accessible for mobile ranked as the top issue, followed by developing new mobile applications and securing corporate data on mobile devices. Mobile processing increases for transaction volume came in at the bottom of mobility issues, but that will likely change when mobile transactions start impacting peak workload volumes and trigger increased costs. Again, those concerned about costs should consider IBM’s mobile transaction discount, which DancingDinosaur covered here in the spring.

Since cost reduction is such a big topic again, the survey respondents offered their cost reduction priorities. Reducing resource usage during peak periods led the list. Other cost reduction priorities included consolidating mainframe software vendors, exploiting zIIP and specialty engines (which have distinctly lower cost/MIPS), and moving workloads to Linux on z.

So, judging from the latest BMC survey, the mainframe is far from dead, though at least one IT consultant and commentator, John Appleby, seems to think otherwise. That prediction has proven wrong so often that DancingDinosaur has stopped bothering to refute it.

BTW, change came to BMC last year in the form of an acquisition by a venture capital group. Adams reports that the new owners have already demonstrated a commitment to continued investment in mainframe technology products, and plans already are underway for next year’s survey.

DancingDinosaur is Alan Radding. You can follow him on Twitter, @mainframeblog, or see more of his writing in wide-ranging blogs here.

IBM POWER8 CAPI for Efficient Top Performance

August 21, 2014

IBM’s POWER8 Coherent Accelerator Processor Interface (CAPI) is not for every IT shop running Power Systems. However, for those that aim to attach devices to their POWER8 systems over the PCIe interface and want fast, efficient performance, CAPI will be unbeatable. Steve Fields, IBM Distinguished Engineer and Director of Power Systems Design, introduces it here. Some of it gets pretty geeky, but slides 12-17 make the key points.

DancingDinosaur first covered CAPI here, in April, shortly after its introduction. At that point it looked like CAPI would be a game changer, and nothing since suggests otherwise. As described then, CAPI sits directly on the POWER8 board and works with the same memory addresses that the processor uses; pointers dereference the same as in the host application. CAPI, in effect, removes OS and device driver overhead by presenting an efficient, robust, durable and, most importantly, direct interface. In the process, it offloads complexity.

In short, CAPI provides:

  • The SMP coherence protocol transported over the PCI Express interface
  • Isolation and filtering through a support unit in the processor (the CAPP)
  • Caching and address translation management through the standard POWER Service Layer in the accelerator device
  • The ability for accelerator functional units to operate as part of the application at the user (direct) level, just like a CPU

What you end up with is a coherently attached accelerator for just a fraction of the development effort otherwise required. As such, CAPI enables more efficient accelerator development. It can reduce the typical seven-step I/O model flow (1. device driver call; 2. copy or pin source data; 3. MMIO notify accelerator; 4. acceleration; 5. poll/interrupt completion; 6. copy or unpin result data; 7. return from device driver completion) to just three steps (1. shared memory/notify accelerator; 2. acceleration; 3. shared memory completion). The result is an easier, more natural programming model with traditional thread-level programming and no need to restructure the application to accommodate long-latency I/O. Finally, it enables apps otherwise not possible, such as those requiring pointer chasing (e.g., Java garbage collection).

Other advantages include an open ecosystem for accelerators built using Field Programmable Gate Arrays (FPGA). The number and size of FPGAs can be based on application requirements, and FPGAs can attach to other components, such as private DRAM, flash memory, or a high-speed network.

Driving the need for CAPI is the insatiable demand for performance.  For that, acceleration is required, which is complicated and resource-intensive to build. So IBM created CAPI, not just for pure compute but for any network-attached or storage-attached I/O. In the end it eliminates the overhead of the I/O subsystem, allowing the focus to be on the workload.

In one example, IBM reported it was able to attach an IBM Flash appliance to POWER8 via the CAPI interface. As a result it could generate read/write commands from applications and eliminate 97% of the code path length, a savings of 20-30 cores per 1M IOPS. In another test, IBM reported being able to leverage CAPI to integrate flash into a server; the memory-like semantics allowed the flash to replace DRAM for many in-memory workloads. The result: 5x cost savings plus large density and energy improvements. Furthermore, by eliminating the I/O subsystem overhead from high-IOPS flash access, it freed the CPU to focus on the application workload.

Finally, in a Monte Carlo simulation of 1 million iterations, a POWER8 core with an FPGA and CAPI ran a full execution of the Heston pricing model for a single security 250x faster than the POWER8 core alone. It also proved easier to code, reducing the lines of C code to write by 40x compared to a non-CAPI FPGA.
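For readers who want a feel for this kind of workload, here is a toy Monte Carlo pricer in plain Java. It prices a European call under simple lognormal dynamics rather than the full Heston model, and all parameter values are made up for illustration.

    import java.util.Random;

    public class MonteCarloCall {
        public static void main(String[] args) {
            double s0 = 100.0, strike = 105.0, rate = 0.02, sigma = 0.2, t = 1.0;
            int iterations = 1_000_000;  // same order as the 1 million iterations cited above
            Random rng = new Random(42L);
            double payoffSum = 0.0;
            for (int i = 0; i < iterations; i++) {
                double z = rng.nextGaussian();  // one standard normal draw per simulated path
                double st = s0 * Math.exp((rate - 0.5 * sigma * sigma) * t
                        + sigma * Math.sqrt(t) * z);
                payoffSum += Math.max(st - strike, 0.0);  // European call payoff
            }
            double price = Math.exp(-rate * t) * payoffSum / iterations;
            System.out.printf("Estimated call price: %.4f%n", price);
        }
    }

It is exactly this sort of tight, arithmetic-heavy inner loop that an FPGA behind CAPI can execute in parallel, which is where a 250x speedup on the real Heston model becomes plausible.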

IBM is just getting started with CAPI. Coming up next will be CAPI working with Linux, mainly for use with analytics. Once Linux comes into the picture, expect more PCIe card vendors to deliver products that leverage CAPI. AIX too comes into the picture down the road.

Plan to attend IBM Enterprise2014 in Las Vegas, Oct. 6-10. Here is one intriguing CAPI presentation that will be there: Light up performance of your LAMP apps with a stack optimized for Power, by Alise Spence, Andi Gutmans, and Antonio Rosales. It will discuss how to leverage CAPI with POWER8 to create what they call a “killer stack” that brings together continuous delivery with exceptional performance at a competitive price. Other CAPI sessions also are in the works for Enterprise2014.

DancingDinosaur (Alan Radding) definitely is attending IBM Enterprise2014. You can follow DancingDinosaur on Twitter, @mainframeblog. Upcoming posts will look more closely at Enterprise2014 and explore some session content.

The Next Generation of Mainframers

March 6, 2014

With seemingly every young person with any technology inclination aiming to become the next WhatsApp and walk away with some of Facebook’s millions, it is fair to wonder: where is the next generation of mainframers going to come from, and who are they going to be?

The answer: IBM is lining them up now. As the mainframe turns 50 you’ll have a chance to meet some of these up-and-coming mainframers as part of IBM’s 50th Mainframe Anniversary celebration in New York on April 8, when IBM announces the winners of the World Championship round of its popular Master the Mainframe competition.

According to IBM, the Championship is designed to assemble the best university students from around the globe who have demonstrated superior technical skills through participation in their regional IBM Master the Mainframe Contests. Out of the 20,000 students who have engaged in country-level Master the Mainframe Contests over the last three years, the top 44 students from 22 countries have been invited to participate in the inaugural IBM Master the Mainframe World Championship.

These students will spend the month of March working through the Systems of Engagement concept, an expansion of the traditional Systems of Record (core transaction systems) that have been the primary workload of mainframe computing. The students will deploy Systems of Record mainframe business applications written in Java and COBOL using DB2 for z/OS APIs to demonstrate how the Systems of Engagement concept takes full advantage of the mainframe’s advanced capabilities. In short, the mainframe is designed to support tomorrow’s most demanding complex workloads, namely big data, cloud, and mobile computing, and do them all with the most effective enterprise-class security. The students will showcase their applications on April 7, 2014 in New York City, where judges will determine which student earns the distinction of Master the Mainframe World Champion.
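For a rough idea of what the Java side of such an application involves, here is a minimal JDBC sketch against DB2 for z/OS. The host, port, location, credentials, and table are placeholders, and it assumes IBM’s JCC JDBC driver is on the classpath.

    import java.math.BigDecimal;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class Db2Query {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details for a DB2 for z/OS subsystem
            String url = "jdbc:db2://zhost.example.com:446/DB2LOC";
            try (Connection conn = DriverManager.getConnection(url, "user", "password");
                 PreparedStatement ps = conn.prepareStatement(
                         "SELECT ACCOUNT_ID, BALANCE FROM CORE.ACCOUNTS WHERE BALANCE > ?")) {
                ps.setBigDecimal(1, new BigDecimal("1000.00"));
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getString("ACCOUNT_ID")
                                + "  " + rs.getBigDecimal("BALANCE"));
                    }
                }
            }
        }
    }

The same query could just as easily be surfaced through a REST layer for a mobile front end, which is the Systems of Engagement pattern the contest asks students to demonstrate.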

Representing the United States are Mugdha Kadam from the University of Florida, Elton Cheng from the University of California San Diego, and Rudolfs Dambis from the University of Nevada Las Vegas. You can follow the progress of the competitors here.  After March 17 the site will include a leaderboard so you can follow your favorites. No rumors of betting pools being formed yet but it wouldn’t surprise DancingDinosaur.  Win or not, each competitor should be a prime candidate if your organization needs mainframe talent.

This is part of IBM’s longstanding System z Academic Initiative, which has been expanding worldwide and now encompasses over 64,000 students at more than 1000 schools across 67 countries.  And now high school students are participating in the Master the Mainframe competition. Over 360 companies are actively recruiting from these students, including Baldor, Dillards, JB Hunt, Wal-mart, Cigna, Compuware, EMC, Fidelity, JP Morgan Chase, and more.

Said Jeff Gill, at VISA: “Discovering IBM’s Academic Initiative has been a critical success factor in building a lifeline to our future—a new base of Systems Engineers and Applications Developers who will continue to evolve our mainframe applications into flexible open enterprise solutions while maintaining high volume / high availability demands. Without the IBM Academic Initiative, perhaps we could have found students with aptitude – but participation in the Academic Initiative demonstrates a student’s interest in mainframe technology which, to us, translates to a wise long-term investment.” Gill is one of the judges of the Master the Mainframe World Championship.

Added Martin Kennedy of Citigroup: “IBM’s Master the Mainframe Contest offers a great resource to secure candidates and helps the company get critical skills as quickly as possible.”

The Master the Mainframe Championship, and even the entire 50th anniversary celebration that will continue all year, are not really IBM’s primary mainframe thrust this year. IBM’s real focus is on emphasizing the forward-moving direction of the mainframe. As IBM puts it: “By continually adapting to trends and evolving IT, we’re driving new approaches to cloud, analytics, security and mobile computing to help tackle challenges never before thought possible. The pioneering innovations of the mainframe all serve one mission—deliver game-changing technology that makes the extraordinary possible and improves the way the world works.”

DancingDinosaur covers the mainframe and other enterprise-class technology. Watch this blog for more news on the mainframe and other enterprise systems including Power, enterprise storage, and enterprise-scale cloud computing.

With that noted, please plan to attend Edge 2014, May 19-23 in Las Vegas. Being billed as an infrastructure and storage technology conference, it promises to be an excellent follow-on to last year’s Edge conference.  DancingDinosaur will be there, no doubt hanging out in the blogger’s lounge where everyone is welcome. Watch this blog for upcoming details on the most interesting sessions.

And follow DancingDinosaur on Twitter, @mainframeblog

How the 50 Year-Old Mainframe Remains Relevant

February 25, 2014

The mainframe turns 50 years old this year, and the many pundits and experts who predicted it would be long gone by now must be scratching their heads. Yes, it is still around and has acquired over 260 new accounts just since the zEnterprise launch. It also has shipped over 320 hybrid computing units (not to be confused with zBX chassis alone) since the zBX was introduced and kicked off hybrid mainframe computing.

As for MIPS, although IBM experienced a MIPS decline last quarter, that follows the largest MIPS shipment in mainframe history a year earlier, resulting in a two-year compound growth rate of +11%. (Mainframe sales follow the new product release cycle in a predictable pattern.) IBM brought out the latest System z release, the zEC12, faster than the mainframe’s historic release cycle. Let’s hope IBM repeats the quick turnaround with the next release.

Here’s what IBM is doing to keep the mainframe relevant:

  • Delivered steady price/performance improvements with each release. With entry-level BC-class pricing and the System z Solution Edition programs, you can end up with a mainframe system that is as competitive as or better than x86-based systems while being more secure and more reliable out of the box.
  • Adopted Linux early, before it had gained the widespread acceptance it has today. Last year over three-quarters of the top 100 enterprises had IFLs installed. This year IBM reports a 31% increase in IFL MIPS. In at least two cases where DancingDinosaur recently interviewed IT managers, Linux on z was instrumental in bringing their shops to the mainframe.
  • Supported SOA, Java, web services, and cloud, mobile, and social computing, which continues to put the System z at the front of the hot trends. It also plays prominently with big data and analytics. Who ever thought the mainframe would be interacting with RESTful APIs? Certainly not DancingDinosaur’s computer teacher back in the dark ages.
  • Continued delivery of unprecedented scalability, reliability, and security at a time when the volumes of transactions, data, workloads, and users are skyrocketing.  (IDC predicts millions of apps, billions of users, and trillions of things connected by 2020.)
  • Built a global System z ecosystem of tools and technologies to support cloud, mobile, big data/analytics, social, and non-traditional mainframe workloads. This includes acquisitions like SoftLayer and CSL Wave to deliver IBM Wave for z/VM, a simplified and cost-effective way to harness the consolidation capabilities of the IBM System z platform along with its ability to host the workloads of tens of thousands of commodity servers. The mainframe today can truly be a full-fledged cloud player.

And that just touches on the mainframe platform advantages. While others boast of virtualization capabilities, the mainframe comes 100% virtualized out of the box with virtualization at every level.  It also comes with a no-fail redundant architecture and built-in networking. 

Hybrid computing is another aspect of the mainframe that organizations are just beginning to tap.  Today’s multi-platform compound workloads are inherently hybrid, and the System z can manage the entire multi-platform workload from a single console.

The mainframe anniversary celebration, called Mainframe50, officially kicks off in April, but a report from this week’s Pulse 2014 conference suggests that Mainframe50 interest already is ramping up, with IBM jumping the gun by emphasizing how the z provides ways never before thought possible to innovate while tackling challenges previously out of reach.

Pulse 2014, it turns out, offered 38 sessions on System z topics, of which 27 featured analysts or IBM clients. These sessions promised to address key opportunities and challenges for today’s mainframe environments and the latest technology solutions for meeting them, including OMEGAMON, System Automation, NetView, GDPS, Workload Automation, Tivoli Asset Discovery for z/OS, and cloud.

One session featured analyst Phil Murphy, Vice President and Principal Analyst at Forrester Research, discussing the critical importance of a robust infrastructure in a mixed mainframe/distributed cloud environment (which is probably the future most DancingDinosaur readers face) and how it can help fulfill the promise of cloud value in real time.

Another featured mainframe analyst Dot Alexander from Wintergreen Research who looked at how mainframe shops view executing cloud workloads on System z. The session focused on the opportunities and challenges, private and hybrid cloud workload environments, and the impact of scalability, standards, and security.

But the big celebration is planned for April 8 in NYC. There IBM promises to make new announcements, launch new research projects, and generally focus on the mainframe’s future.  A highlight promises to be Showcase 20, which will focus on 20 breakthrough areas referred to by IBM as engines of progress.  The event promises to be a sellout; you should probably talk to your System z rep if you want to attend. And it won’t stop on April 8. IBM expects to continue the Mainframe50 drumbeat all year with new announcements, deliverables, and initiatives. Already in February alone IBM has made a slew of acquisitions and cloud announcements that will touch every mainframe shop with any cloud interests (which should be every mainframe shop at one point or another).

In coming weeks stay tuned to DancingDinosaur for more on Mainframe50. Also watch this space for details of the upcoming Edge 2014 conference, with an emphasis on infrastructure innovation coming to Las Vegas in May.

Please follow DancingDinosaur on Twitter, @mainframeblog

A Maturity Model for the New Mainframe Normal

February 3, 2014

Last week Compuware introduced its new mainframe maturity model designed to address what is emerging as the new mainframe normal. DancingDinosaur played a central role in the creation of this model.

A new mainframe maturity model is needed because the world of the mainframe is changing rapidly. Did your data center team ever think they would be processing mainframe transactions from mobile phones? Your development team probably never imagined they would be architecting compound workloads across the mainframe and multiple distributed systems running both Windows and Linux. What about the prospect of your mainframe serving up millions or even billions of customer-facing transactions a day? But that’s the mainframe story today.

Even IBM, the most stalwart of the mainframe vendors, repeats the driving trends—cloud, mobile, social, big data, analytics, Internet of things—like a mantra. As the mainframe celebrates its 50th anniversary year, it is fitting that a new maturity model be introduced because there is, indeed, a new mainframe normal rapidly evolving.

Things certainly are changing in ways most mainframe data center managers wouldn’t have anticipated 10 years ago, probably not even five years ago. Of those, perhaps the most disconcerting change for traditional mainframe shops is the need to accommodate distributed, open systems (systems of engagement) alongside the traditional mainframe environment (systems of record).

Since the rise of distributed systems two decades ago, there has existed both a technical and a cultural gap between the mainframe and distributed teams. The emergence of technologies like hybrid computing, middleware, and the cloud has gone far to alleviate the technical gap. The cultural gap is not so amenable to immediate fixes. Still, navigating that divide is no longer optional; it has become a business imperative. Crossing the gap is what the new maturity model addresses.

Many factors contribute to the gap; the largest appears to be that most organizations still approach the mainframe and distributed environments as separate worlds. One large financial company, for example, recently reported that to its distributed developers the mainframe is simply MQ messages.

The new mainframe maturity model can be used as a guide to bridging both the technical and cultural gaps.  Specifically, the new model defines five levels of maturity. In the process, it incorporates distributed systems alongside the mainframe and recognizes the new workloads, processes and challenges that will be encountered. The five levels are:

  1. Ad-hoc:  The mainframe runs core systems and applications; these represent the traditional mainframe workloads and the green-screen approach to mainframe computing.
  2. Technology-centric:  An advanced mainframe is focused on ever-increasing volumes, higher capacity, and complex workload and transaction processing while keeping a close watch on MIPS consumption.
  3. Internal services-centric:  The focus shifts to mainframe-based services through a service delivery approach that strives to meet internal service level agreements (SLAs).
  4. External services-centric:  Mainframe and non-mainframe systems interoperate through a services approach that encompasses end-user expectations and tracks external SLAs.
  5. Business revenue-centric:  Business needs and the end-user experience are addressed through interoperability with cloud and mobile systems, services- and API-driven interactions, and real-time analytics to support revenue initiatives revolving around complex, multi-platform workloads.

Complicating things is the fact that most IT organizations will likely find themselves straddling different maturity levels. For example, although many have achieved levels 4 and 5 when it comes to technology, their IT culture remains at level 1 or 2. Such disconnects mean IT still faces many obstacles to reaching optimal levels of service delivery and cost management. And this doesn’t just impact IT; there can be ramifications for the business itself, such as decreased customer satisfaction and slower revenue growth.

DancingDinosaur’s hope is that as the technical cultures come closer through technologies like Java, Linux, SOA, REST, hybrid computing, and mobile, organizations will begin to close the cultural gap too.

Follow DancingDinosaur on Twitter: @mainframeblog

