Posts Tagged ‘REST’

February 25, 2014

How the 50 Year-Old Mainframe Remains Relevant

The mainframe turns 50 years old this year, and the many pundits and experts who predicted it would be long gone by now must be scratching their heads.  Yes, it is still around and has acquired over 260 new accounts just since the zEnterprise launch. IBM also has shipped over 320 hybrid computing units (not to be confused with zBX chassis-only shipments) since the zBX was introduced and kicked off hybrid mainframe computing.

As for MIPS, IBM experienced a MIPS decline last quarter, but that follows the largest MIPS shipment in mainframe history a year ago, resulting in a 2-year CGR of +11%.  (Mainframe sales follow the new product release cycle in a predictable pattern.) IBM brought out the latest System z release, the zEC12, faster than the mainframe’s historic release cycle. Let’s hope IBM repeats the quick turnaround with the next release.

Here’s what IBM is doing to keep the mainframe relevant:

  • Delivered steady price/performance improvements with each release. And with entry-level BC-class pricing and the System z Solution Edition programs, you can end up with a mainframe system that is as competitive as, or better than, x86-based systems while being more secure and more reliable out of the box.
  • Adopted Linux early, before it had gained the widespread acceptance it has today. Last year over three-quarters of the top 100 enterprises had IFLs installed. This year IBM reports a 31% increase in IFL MIPS. In at least two cases where DancingDinosaur recently interviewed IT managers, Linux on z was instrumental in bringing their shops to the mainframe.
  • Supported SOA, Java, web services, cloud, mobile, and social computing, keeping the System z at the front of the hot trends. It also plays prominently in big data and analytics.  Who ever thought the mainframe would be interacting with RESTful APIs? Certainly not DancingDinosaur’s computer teacher back in the dark ages.
  • Continued delivery of unprecedented scalability, reliability, and security at a time when the volumes of transactions, data, workloads, and users are skyrocketing.  (IDC predicts millions of apps, billions of users, and trillions of things connected by 2020.)
  • Built a global System z ecosystem of tools and technologies to support cloud, mobile, big data/analytics, social, and non-traditional mainframe workloads. This includes acquisitions like SoftLayer and CSL Wave, which delivered IBM Wave for z/VM, a simplified and cost-effective way to harness the consolidation capabilities of the IBM System z platform along with its ability to host the workloads of tens of thousands of commodity servers. The mainframe today can truly be a full-fledged cloud player.

And that just touches on the mainframe platform advantages. While others boast of virtualization capabilities, the mainframe comes 100% virtualized out of the box with virtualization at every level.  It also comes with a no-fail redundant architecture and built-in networking. 

Hybrid computing is another aspect of the mainframe that organizations are just beginning to tap.  Today’s multi-platform compound workloads are inherently hybrid, and the System z can manage the entire multi-platform workload from a single console.

The mainframe anniversary celebration, called Mainframe50, officially kicks off in April, but reports from this week’s Pulse 2014 conference suggest that Mainframe50 interest already is ramping up; IBM jumped the gun by emphasizing how the z provides new ways, never before thought possible, to innovate while tackling challenges previously out of reach.

Pulse 2014, it turns out, offered 38 sessions on System z topics, of which 27 featured analysts or IBM clients. These sessions promised to address key opportunities and challenges for today’s mainframe environments and the latest technology solutions for meeting them, including OMEGAMON, System Automation, NetView, GDPS, Workload Automation, Tivoli Asset Discovery for z/OS, and Cloud.

One session featured Phil Murphy, Vice President and Principal Analyst at Forrester Research, discussing the critical importance of a robust infrastructure in a mixed mainframe/distributed cloud environment—which is probably the future most DancingDinosaur readers face—and how it can help fulfill the promise of cloud value in real time.

Another featured mainframe analyst Dot Alexander from Wintergreen Research who looked at how mainframe shops view executing cloud workloads on System z. The session focused on the opportunities and challenges, private and hybrid cloud workload environments, and the impact of scalability, standards, and security.

But the big celebration is planned for April 8 in NYC. There IBM promises to make new announcements, launch new research projects, and generally focus on the mainframe’s future.  A highlight will be Showcase 20, which will focus on 20 breakthrough areas referred to by IBM as engines of progress.  The event is expected to sell out; you should probably talk to your System z rep if you want to attend. And it won’t stop on April 8. IBM expects to continue the Mainframe50 drumbeat all year with new announcements, deliverables, and initiatives. In February alone IBM already has made a slew of acquisitions and cloud announcements that will touch every mainframe shop with any cloud interests (which should be every mainframe shop at one point or another).

In coming weeks stay tuned to DancingDinosaur for more on Mainframe50. Also watch this space for details of the upcoming Edge 2014 conference, coming to Las Vegas in May with an emphasis on infrastructure innovation.

Please follow DancingDinosaur on Twitter, @mainframeblog

A Maturity Model for the New Mainframe Normal

February 3, 2014

Last week Compuware introduced its new mainframe maturity model designed to address what is emerging as the new mainframe normal. DancingDinosaur played a central role in the creation of this model.

A new mainframe maturity model is needed because the world of the mainframe is changing rapidly.  Did your data center team ever think they would be processing mainframe transactions from mobile phones? Did your development team ever imagine they would be architecting compound workloads across the mainframe and multiple distributed systems running both Windows and Linux? What about the prospect of your mainframe serving up millions or even billions of customer-facing transactions a day?  But that’s the mainframe story today.

Even IBM, the most stalwart of the mainframe vendors, repeats the driving trends—cloud, mobile, social, big data, analytics, Internet of things—like a mantra. As the mainframe celebrates its 50th anniversary year, it is fitting that a new maturity model be introduced because there is, indeed, a new mainframe normal rapidly evolving.

Things certainly are changing in ways most mainframe data center managers wouldn’t have anticipated 10 years ago, probably not even five years ago. Of these, perhaps the most disconcerting change for traditional mainframe shops is the need to accommodate distributed, open systems (systems of engagement) alongside the traditional mainframe environment (systems of record).

Since the rise of distributed systems two decades ago, there has existed both a technical and a cultural gap between the mainframe and distributed teams. The emergence of technologies like hybrid computing, middleware, and the cloud has gone far toward closing the technical gap. The cultural gap is not so amenable to immediate fixes. Still, navigating that divide is no longer optional; it has become a business imperative.  Crossing the gap is what the new maturity model addresses.

Many factors contribute to the gap, the largest of which appears to be that most organizations still approach the mainframe and distributed environments as separate worlds. One large financial company, for example, recently reported that to its distributed developers the mainframe is simply a source of MQ messages.

The new mainframe maturity model can be used as a guide to bridging both the technical and cultural gaps.  Specifically, the new model defines five levels of maturity. In the process, it incorporates distributed systems alongside the mainframe and recognizes the new workloads, processes and challenges that will be encountered. The five levels are:

  1. Ad-hoc:  The mainframe runs core systems and applications; these represent the traditional mainframe workloads and the green-screen approach to mainframe computing.
  2. Technology-centric:  An advanced mainframe is focused on ever-increasing volumes, higher capacity, and complex workload and transaction processing while keeping a close watch on MIPS consumption.
  3. Internal services-centric:  The focus shifts to mainframe-based services through a service delivery approach that strives to meet internal service level agreements (SLAs).
  4. External services-centric:  Mainframe and non-mainframe systems interoperate through a services approach that encompasses end-user expectations and tracks external SLAs.
  5. Business revenue-centric:  Business needs and the end-user experience are addressed through interoperability with cloud and mobile systems, services- and API-driven interactions, and real-time analytics to support revenue initiatives revolving around complex, multi-platform workloads.

Complicating things is the fact that most IT organizations will likely find themselves straddling different maturity levels. For example, although many have achieved level 4 or 5 when it comes to technology, the IT culture remains at level 1 or 2. Such disconnects mean IT still faces many obstacles preventing it from reaching optimal levels of service delivery and cost management. And this doesn’t just impact IT; there can be ramifications for the business itself, such as decreased customer satisfaction and slower revenue growth.

DancingDinosaur’s hope is that as the technical platforms come closer through technologies like Java, Linux, SOA, REST, hybrid computing, and mobile, organizations will begin to close the cultural gap too.

Follow DancingDinosaur on Twitter: @mainframeblog

SOA Software Enables New Ways to Tap Mainframe Code

January 30, 2014

Is the core enterprise processing role handled by the mainframe enough? Probably not; enterprises today often are running many different types of workloads built using different app dev styles. These consist of compound applications encompassing the mainframe and a variety of distributed systems (Linux, UNIX, Windows) and different programming models, data schemas, services, and more. Pieces of these workloads may be running on the public cloud, a partner’s private cloud, and a host of other servers, and the pieces are pulled together at runtime to support the particular workload.  Mainframe shops should want to play a big role in this game too.

“Mainframe applications still sit at the heart of enterprise operations, but mainframe managers also want to take advantage of these applications in new ways,” says Brent Carlson, SVP at SOA Software. The primary way of doing this is through SOA services, and mainframes have been playing in the SOA arena for years. But it has never been as seamless, easy, and flexible as it should be. And as social, mobile, and other new types of workloads get added to the services mix, the initial mainframe SOA approach has started to show its age. (Over the years, DancingDinosaur has written considerably on mainframe SOA and done numerous SOA studies.)

That’s why DancingDinosaur welcomes SOA Software’s Lifecycle Manager to the mainframe party.  It enables what the company calls a “RESTful Mainframe” through governance of REST APIs that front z/OS-based web services. This amounts to a unified platform from a governance perspective to manage both APIs and existing SOA assets. As Carlson explained, applying development governance to mainframe assets helps mainframe shops overcome the architectural challenges inherent in bringing legacy systems into the new API economy, where mobile apps need rapid, agile access to backend systems.

The company is aiming to make Lifecycle Manager into the system-of-record for all enterprise assets including mainframe-based SOAP services and RESTful APIs that expose legacy software functionality. The promise: seamless access to service discovery and impact analysis whether on mainframe, distributed systems, or partner systems. Both architects and developers should be able to map dependencies between APIs and mainframe assets at the development stage and manage those APIs across their full lifecycles.

Lifecycle Manager integrates with SOA Software’s Policy Manager to work either top down or bottom up.  The top-down approach relies on a service wrapping of existing mainframe programs. Think of this as the WSDL-first approach: design the web services first, then develop the programs on the mainframe to implement them.  The bottom-up approach starts with the copybook.  Either way, it is automated and intended to be seamless. It also promises to guide service developers on best practices like encryption, to assign and enforce correct policies, and more.

“Our point: automate whatever we can, and guide developers into good practices,” said Carlson.  In the process, it simplifies the task of exposing mainframe capabilities to a broader set of applications while not interfering with mainframe developers.  To distributed developers, the mainframe is just another service endpoint that is accessed as a service or API.  Nobody has to learn new things; it’s just a browser-based IDE using copybooks.
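
To a distributed developer, that “just another service endpoint” claim looks something like the following minimal Java sketch: a client calling a hypothetical REST API that a governance layer such as Lifecycle Manager might front for a copybook-backed CICS program. The host name, path, and response shape here are illustrative assumptions, not SOA Software’s actual interface.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class MainframeApiClient {
        public static void main(String[] args) throws Exception {
            // Hypothetical endpoint fronting a copybook-backed CICS program;
            // the host, path, and token are placeholders for illustration.
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://zgateway.example.com/api/accounts/123456"))
                    .header("Accept", "application/json")
                    .header("Authorization", "Bearer <token>")
                    .GET()
                    .build();

            // The gateway maps the JSON response from fields defined in the
            // program's copybook, so the caller never touches COBOL, CICS,
            // or 3270 screens.
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode());
            System.out.println(response.body());
        }
    }

The point of the sketch is the absence of anything mainframe-specific on the calling side; whether the backend is CICS or a commodity app server is invisible to the consumer.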

For performance, the Lifecycle Manager-based runtime environment is written in assembler, which makes it fast while minimizing MIPS consumption. It also comes with the browser-based IDE, copybook tool, and import mappings.

The initial adopters have come from financial services and the airlines.  The expectation is that usage will expand beyond that as mainframe shops and distributed developers seek to leverage core mainframe code for a growing array of workloads that weren’t on anybody’s radar screen even a few years ago.

There are other ways to do this on the mainframe, starting with basic SOA and web services tools and protocols, like WSDL. Many mainframe SOA efforts leverage CICS, and IBM offers additional tools, most recently SoftLayer, that address the new app dev styles.

This is healthy for mainframe data centers. If nothing else, SOA- and API-driven services workloads that include the mainframe help lower the mainframe’s cost per workload. They also put the mainframe at the center of today’s IT action.

Follow DancingDinosaur on Twitter: @mainframeblog

Next Generation zEnterprise Developers

April 19, 2013

Mainframe development keeps getting more complicated.  The latest complication can be seen in Doug Balog’s reference to mobile and social business on the zEnterprise, reported by DancingDinosaur here a few weeks ago. That is what the next generation of z developers face.

Forget talk about shortages of System z talent due to the retirement of mainframe veterans.  The bigger complication comes from the need for non-traditional mainframe development skills required to take advantage of mobile and social business as well as other recent areas of interest such as big data and analytics. These areas entail combining new skills like JSON, Atom, REST, Hadoop, Java, SOA, Linux, and hybrid computing with traditional mainframe development skills like CICS, COBOL, z/VM, SQL, VSAM, and IMS. This combination is next to impossible to find in one individual. Even assembling a coherent team encompassing all those skills presents a serious challenge.

The mainframe industry has been scrambling to address this in various ways.  CA Technologies added GUIs to its various tools, and BMC has similarly modernized its management and DB2 tools. IBM, of course, has been steadily bolstering the Rational RDz tool set.   RDz is an Eclipse-based IDE for z/OS.  RDz streamlines and refactors z/OS development processes into structured analysis, editing, and testing operations with modern GUI tools, wizards, and menus that, IBM notes, are perfect for new-to-the-mainframe twenty- and thirty-something developers, the next generation of z developers.

Compuware brings its mainframe workbench, described as a modernized interactive developer environment that introduces a new graphical user interface for managing mainframe application development activities.  The interactive toolset addresses every phase of the application lifecycle.

Most recently, Micro Focus announced the release of its new Enterprise Developer for IBM zEnterprise.  The product enables customers to optimize all aspects of mainframe application delivery and promises to drive down costs, increase productivity, and accelerate innovation. Specifically, it enables both on- and off-mainframe development, the latter without consuming mainframe resources, to provide a flexible approach to the delivery of new business functions. In addition, it allows full and flexible customization of the IDE to support unique development processes and provides deep integration into mainframe configuration management and tooling for a more comprehensive development environment. It also boasts of improved application quality with measurable improvement in delivery times.  These capabilities together promise faster developer adoption.

Said Greg Lotko, Vice President and Business Line Executive, IBM System z, about the new Micro Focus offering:  “We are continually working with our technology partners to help our clients maximize the value in their IBM mainframes, and this latest innovation from Micro Focus is a great example of that commitment.”

Behind all of this development innovation is an industry effort to cultivate the next generation of mainframe developers. Using a combination of trusted technology (COBOL and the mainframe) and new innovation (zEnterprise, hybrid computing, expert systems, and Eclipse), these new developers, having been raised on GUIs, mobile, and social, can leverage what they learned growing up to build the multi-platform, multi-device mainframe applications that organizations will need going forward.

As these people come on board as mainframe-enabled developers, organizations will have more confidence in continuing to invest in their mainframe software assets, which currently amount to an estimated 200-300 billion lines of source code and may even be growing as mainframes are added in developing markets, considered a growth market by IBM.  It makes more sense to leverage this proven code base than to try to replace it.

This was confirmed in a CA Technologies survey of mainframe users a year ago, which found that 1) the mainframe is playing an increasingly strategic role in managing the evolving needs of the enterprise; 2) the machine is viewed as an enabler of innovation as big data and cloud computing transform the face of enterprise IT—now add mobile; and 3) companies are seeking candidates with cross-disciplinary skill sets to fill critical mainframe workforce needs in the new enterprise IT thinking.

Similarly, a recent study by the Standish Group showed that 70 percent of CIOs saw their organizations’ mainframes as having a central and strategic role in their overall business success.  Using the new tools noted above organizations can maximize the value of the mainframe asset and cultivate the next generation mainframe developers.

System z Application Modernization

December 10, 2012

People still complain about how they are held back by old green-screen mainframe applications. It’s not the underlying business logic or application performance they usually are complaining about—that apparently remains rock solid and relevant and has been, in some cases, for decades—but the user interface. Granted, 3270 apps are clunky to use, require plowing through cumbersome screen sequences to complete even a simple task, and scream for modernization, but they can be modernized through CICS.

Another complaint is that the applications are difficult to change, especially now when organizations want to provide access to mainframe logic and data to users with smartphones or tablets. The question then is what degree of modernization: a pretty GUI facelift, something more structural, or maybe a migration to a new platform.  In the age of IBM hybrid computing, you actually have a lot more options than you did even a year ago.

IBM, mainly through the Rational Software group, offers a variety of ways to modernize z applications. You can start with the System z tools here. They enable you to develop mainframe-based applications in COBOL, PL/I, Assembler, C/C++, and Java, as well as workstation-based applications in COBOL, PL/I, and Java.

WebSphere, the app server, is another way to modernize z apps using Java and J2EE. IBM Rational Application Developer for WebSphere accelerates the development and deployment of not only Java, Java EE, and Web 2.0 applications but also mobile, portal, and service-oriented architecture (SOA) applications by providing integrated tools for development, testing, profiling, and delivery. Recent upgrades to CICS also make SOA-based modernization even more appealing with support for some of the latest goodies like Atom feeds, RESTful interfaces, and more.
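
To make the Atom angle concrete, here is a minimal Java sketch of a client reading the kind of Atom feed CICS can now expose over HTTP; the feed URL is a placeholder assumption, and the parsing is plain Atom XML handling rather than anything CICS-specific.

    import java.io.ByteArrayInputStream;
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.nio.charset.StandardCharsets;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    import org.w3c.dom.NodeList;

    public class CicsAtomFeedReader {
        public static void main(String[] args) throws Exception {
            // Placeholder URL for an Atom collection served by a CICS region.
            HttpResponse<String> response = HttpClient.newHttpClient().send(
                    HttpRequest.newBuilder()
                            .uri(URI.create("http://cicshost.example.com:3080/atom/orders"))
                            .header("Accept", "application/atom+xml")
                            .GET()
                            .build(),
                    HttpResponse.BodyHandlers.ofString());

            // Parse the Atom XML and print each entry title; on the CICS side
            // the entries would typically be backed by VSAM or DB2 data.
            DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
            factory.setNamespaceAware(true);
            Document doc = factory.newDocumentBuilder().parse(
                    new ByteArrayInputStream(
                            response.body().getBytes(StandardCharsets.UTF_8)));
            NodeList titles = doc.getElementsByTagNameNS(
                    "http://www.w3.org/2005/Atom", "title");
            for (int i = 0; i < titles.getLength(); i++) {
                System.out.println(titles.item(i).getTextContent());
            }
        }
    }

From the consumer’s perspective the CICS-hosted feed is indistinguishable from any other Atom feed on the web, which is exactly the point of this style of modernization.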

For several years DancingDinosaur has been touting SOA as the most direct way to modernize and repurpose mainframe logic and data. IBM Rational Developer for SOA Construction enables you to create and maintain RPG and COBOL applications as well as modernize them with a variety of techniques using IBM HATS. IBM’s developerWorks has the latest on SOA and web services. Ball State University has been using SOA to modernize its z applications for several years. For example, the school made the critical student schedule app, a CICS system, available to students anywhere, anytime, from any device.  You can read Independent Assessment’s case study here.

Since social business promises to be the next thing, you can develop social business applications through Linux on z, either Red Hat or SUSE, using IBM Connections and WebSphere.  Social business will become of interest to z shops as companies begin collecting social sentiment data on the z and want to analyze it fast.

System z shops actually have been doing some of this for a while.  IBM reports that an ISV seeking to increase efficiency and improve time to market for its z software products took advantage of the Metal C feature of the IBM z/OS XL C/C++ compiler, enabling its programmers to write code in C syntax and leverage the compiler’s advanced optimization technology. The Metal C feature cut development time by up to 66% while the company capitalized on its existing C programming skills.

Even IBM reports its CICS dev team tapped IBM Rational Team Concert and IBM Rational Developer for System z software to convert its product development cycle from the existing waterfall development processes to agile development methods. The team used the Rational products to create a highly configurable, end-to-end integrated development environment. Adopting an agile approach and using IBM Rational software has helped the team reduce the amount of preparation required for status meetings by 75% and improve the efficiency of those meetings, decreasing meeting times by 33%. Anything that shortens meetings is worth its weight in gold.

The point is that z shops can do all the sexy app dev stuff—Java, cloud, social, mobile, agile, SOA—to produce richer, more flexible apps faster. And they can do so without abandoning the z or eating their considerable investment in the mainframe, while still bringing the z’s compelling virtues to the party.

IBM System z goes mobile

April 13, 2010

A recent IBM Redbook documented how you could use the latest mobile phones (iPhone, Android, and others) with the mainframe. I found the document here, although it seems to have gone AWOL for the moment. (Resources sometimes appear, disappear, and reappear on the IBM website.)  Otherwise, try this.

The document, titled Access to z/OS from Smartphones, was authored by Alex Louwe Kooijmans, Lydia Parziale, Reginaldo Barosa, Dave Ellis, Ankur Goyal, Fabio Riva, and Kenichi Yoshimura. The goal is to “demonstrate that the traditional strengths of the mainframe to manage large volumes of data and run business transactions can be combined with the Web 2.0 paradigm… and show how mainframe data can be accessed by modern smartphones such as Android or iPhone.”

This really isn’t new. A year ago Viterra, the Saskatchewan grain cooperative, let its members access data through CICS via their BlackBerry devices, even while standing in the middle of a field. However, the Redbook authors take this even further by imagining new System z capabilities. “We can receive notifications in real time, for example, of successful/unsuccessful termination of a TWS job stream, or we can immediately get alerts about abends that occurred on a critical application,” they write.

Of particular interest to the authors are the iPhone, with its widely embraced intuitive interface, and phones running the Android OS, with its open software development environment. Combining the intuitive GUI of the iPhone with the open development model of Android leads the authors to imagine two scenarios:

  1. Accessing mainframe-based information proactively as it is delivered on demand in real time
  2. Notifying users when certain events occur on their mainframe computers

Except, maybe, for the mainframe delivering this information to this latest generation of smartphones, none of this is exactly new.

The tools and technologies to do this already exist, starting with SOA on the System z. Organizations as different as Ball State University, Aetna Insurance, and Sears Canada already use their mainframes in an SOA strategy that delivers mainframe data to users with the latest consumer devices. The authors describe how CICS, HATS, XML, AJAX, REST, Atom, and a slew of other Web 2.0 technologies can be readily combined to make this happen.
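
The second scenario, notification when something happens on the mainframe, could be sketched in Java as below. Everything here is a hypothetical illustration: the gateway URL, the events resource, and the JSON payload are assumptions rather than anything taken from the Redbook, and a real smartphone client would rely on the platform’s push notification service rather than a polling loop.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.time.Duration;

    public class ZosEventPoller {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newBuilder()
                    .connectTimeout(Duration.ofSeconds(10))
                    .build();

            // Hypothetical REST resource that surfaces z/OS events, such as
            // TWS job-stream completions or application abends, as JSON.
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://zgateway.example.com/events/abends?since=latest"))
                    .header("Accept", "application/json")
                    .GET()
                    .build();

            while (true) {
                HttpResponse<String> response =
                        client.send(request, HttpResponse.BodyHandlers.ofString());
                // Print whatever came back; a phone app would raise a local
                // notification instead.
                if (response.statusCode() == 200 && !response.body().isBlank()) {
                    System.out.println("Mainframe alert: " + response.body());
                }
                Thread.sleep(60_000);  // poll once a minute
            }
        }
    }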

What has inspired these mainframe IBMers is the rapid adoption of smartphones. Welcome to the party. Back in February, I suggested to a CFO readership at Business Finance Magazine that they ditch their laptops for smartphones. They didn’t exactly jump at the idea; even C-level execs don’t want to let go of their Excel spreadsheets. But with a smartphone they don’t have to. They can access all their spreadsheets and documents, and even mainframe data, through the cloud (private and/or public) with just a smartphone.

Think about it; the 3270 terminal of the future could very well be a smartphone. And with SOA and SaaS it could happen sooner than you think.

