Posts Tagged ‘DevOps’

Compuware Continues Mainframe Software Renaissance

January 19, 2017

While IBM focuses on its strategic imperatives, especially cognitive computing (which are doing quite well according to the latest statement released today; DancingDinosaur will take that up next week), Compuware is fueling a mainframe software renaissance on its own. Its two latest announcements bring Java-like unit testing to COBOL code via its Topaz product set and add automated, intelligently optimized batch processing through its acquisition of MVS Solutions. Both modernize and simplify the processes around legacy mainframe coding, thus the reference to a mainframe software renaissance.

Compuware Total Test process flow diagram

Let’s start with Compuware’s Topaz set of graphical tools. Since they are GUI-based, even novice developers can immediately validate and troubleshoot whatever changes, either intended or inadvertent, they make to existing COBOL applications. Compuware’s aim for Topaz for Total Test is to eliminate any notion that such applications are legacy code that cannot be updated as frequently and with the same confidence as other types of applications. Basically, mainframe DevOps.

By bringing fast, developer-friendly unit testing to COBOL applications, the new test tool also enables enterprises to deliver better customer experiences—since to create those experiences, IT needs its Agile/DevOps processes to encompass all platforms, from the mainframe to the cloud. As a result, z shops can gain increased digital agility along with higher quality, lower costs, and dramatically reduced dependency on the specialized knowledge of mainframe veterans aging out of the active IT workforce. In fact, the design of the Topaz tools enables z data centers to rapidly introduce the z to novice mainframe staff, who become productive virtually from the start—another cost saver.
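For readers who haven’t watched the distributed side work, here is roughly what the Java half of that comparison looks like: a plain JUnit unit test exercising one small piece of business logic in isolation. This is a generic sketch (the interest calculator is a made-up stand-in, not Compuware code); Topaz for Total Test brings this same fast, repeatable, program-level testing discipline to COBOL.

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    // Hypothetical class under test; think of it as standing in for the
    // business logic buried in a single COBOL program or paragraph.
    class InterestCalculator {
        static double simpleInterest(double principal, double rate, int years) {
            return principal * rate * years;
        }
    }

    public class InterestCalculatorTest {

        @Test
        public void accruesSimpleInterestOnPrincipal() {
            // 1,000.00 at 5% for one year should accrue 50.00
            assertEquals(50.00, InterestCalculator.simpleInterest(1000.00, 0.05, 1), 0.001);
        }

        @Test
        public void zeroTermAccruesNothing() {
            // No elapsed term, no interest
            assertEquals(0.00, InterestCalculator.simpleInterest(1000.00, 0.05, 0), 0.001);
        }
    }

Each test runs in milliseconds and can be wired into an automated build, which is exactly the kind of immediate feedback mainframe developers have historically gone without.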

Today, in 2017, does management still need to be reminded of the importance of the mainframe? Probably, even though many organizations—among them the world’s largest banks, insurance companies, retailers, and airlines—continue to run their business on mainframe applications, and recent surveys clearly indicate that situation is unlikely to change anytime soon. However, as Compuware points out, the ability of enterprises to quickly update those applications in response to ever-changing business imperatives is daily being hampered by manual, antiquated development and testing processes; the ongoing loss of specialized COBOL programming knowledge; and the risk and associated fear of introducing even the slightest defect into core mainframe systems of record. The entire Topaz design approach, from the very first tool, has been to make mainframe code accessible to novices. That has continued every quarter for the past two years.

This is not just a DancingDinosaur rant. IT analyst Rich Ptak from Ptak Associates also noted: “By eliminating a long-standing constraint to COBOL Compuware provides enterprise IT the ability to deliver more digital capabilities to the business at greater speed and with less risk.”

Gartner, in its latest Predicts 2017, chimes in with its DevOps equivalent of your mother’s reminder to brush your teeth after each meal: “Application leaders in IT organizations should adopt a continuous quality culture that includes practices to manage technical debt and automate tests focused on unit and API testing. It should also automate test lab operations to provide access to production-like environments, and enable testing of deployment through the use of DevOps pipeline tools.” OK, mom; everybody got the message.

The acquisition of MVS Solutions, Compuware’s fourth in the last year, adds to the company’s collection of mainframe software tools that promise agile, DevOps, and millennial-friendly management of the IBM z platform—a continuation of its efforts to make the mainframe accessible to novices. DancingDinosaur covered these acquisitions in early December here.

Batch processing accounts for the majority of peak mainframe workloads at large enterprises, providing essential back-end digital capabilities for customer-, employee- and partner-facing mobile, cloud, and web applications. As demands on these back-end mainframe batch processes intensify in terms of scale and performance, enterprises are under increasing pressure to ensure compliance with SLAs and control costs.

These challenges are exacerbated by the fact that responsibility for batch management is rapidly being shifted from platform veterans with decades of experience in mainframe operations to millennial ops staff who are unfamiliar with batch management. They also find native IBM z Systems management tools arcane and impractical, which increases the risk of critical batch operations being delayed or even failing. Run incorrectly, the batch workloads risk generating excessive peak utilization costs.

The solution, notes Compuware, lies in its new ThruPut Manager, which promises automatic, intelligently optimized batch processing. In the process it:

  • Provides immediate, intuitive insight into batch processing that even inexperienced operators can readily understand
  • Makes it easy to prioritize batch processing based on business policies and goals
  • Ensures proper batch execution by verifying that jobs have all the resources they need and proactively managing resource contention between jobs
  • Reduces the organization’s IBM Monthly License Charge (MLC) costs by minimizing rolling four-hour average (R4HA) processing peaks while avoiding counterproductive soft capping

Run in conjunction with Strobe, Compuware’s mainframe application performance management tool, ThruPut Manager also makes it easier to optimize batch workload and application performance as part of everyday mainframe DevOps tasks. ThruPut Manager promises more efficiency and greater throughput, resulting in a shorter batch window and reduced capacity consumption. These benefits also support better cross-platform DevOps, since distributed and cloud applications often depend on back-end mainframe batch processing.
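For readers unfamiliar with the R4HA mentioned above: IBM’s MLC software bills are driven by the peak rolling four-hour average of MSU consumption. Here is a minimal, hypothetical Java sketch of that arithmetic, assuming the conventional five-minute sampling interval; it illustrates the billing math only, not how ThruPut Manager is implemented.

    import java.util.ArrayDeque;
    import java.util.Deque;

    /** Illustrative rolling four-hour average (R4HA) over 5-minute MSU samples. */
    public class R4haTracker {
        private static final int WINDOW = 48;          // 48 x 5 minutes = 4 hours
        private final Deque<Double> samples = new ArrayDeque<>();
        private double sum = 0.0;
        private double peak = 0.0;

        /** Record the MSU consumption observed in the latest 5-minute interval. */
        public void addSample(double msu) {
            samples.addLast(msu);
            sum += msu;
            if (samples.size() > WINDOW) {
                sum -= samples.removeFirst();          // slide the 4-hour window
            }
            peak = Math.max(peak, current());
        }

        /** Current rolling four-hour average in MSUs. */
        public double current() {
            return samples.isEmpty() ? 0.0 : sum / samples.size();
        }

        /** Highest R4HA seen so far; roughly what drives the MLC bill. */
        public double peak() {
            return peak;
        }
    }

Keeping discretionary batch from piling onto that peak, without resorting to soft capping, is precisely the lever ThruPut Manager promises to pull automatically.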

Now, go out and hire some millennials and bring fresh blood into the mainframe. (Watch for DancingDinosaur’s upcoming post on why the mainframe is cool again.)

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.


BMC Mainframe Survey Confirms z System Is Here to Stay

November 11, 2016

No surprise there. BMC’s 11th annual mainframe survey, covering 1,200 mainframe executives and tech professionals, found that 58% of respondents report usage of the mainframe is increasing as they look to capitalize on every infrastructure advantage it provides and add more workloads. Another 23% consider the mainframe the best option to run critical work.


IBM z10

Driving the continuing interest in the mainframe are the new demands for data handling, scalable processing, analytics, and more. According to the BMC survey nearly 60% of companies are seeing increased data and transaction volumes. They opt to stay with the mainframe for its highly secure, superior data handling and transaction serving, particularly as digital business adds unpredictability and volatility to workloads.

Overall, respondents fell into three primary groups: 1) entrenched mainframe shops, the 58% that are on board for the long haul; 2) the 23% that intend to maintain a steady amount of work on the mainframe; and 3) the 19% that are moving away from the mainframe. The first two groups, the committed mainframe shops, amount to just over 80% of the respondents.

Many companies surveyed are focused on addressing the increased workload demands, especially the rapidly growing demand for new applications. But surprisingly, the survey does not directly touch on hybrid cloud, cognitive computing, or any of the latest technologies IBM has been promoting, not even DevOps, which can streamline mainframe application development and deployment. “We are not hearing much about hybrid cloud environments or blockchain yet. Most companies seem to be in the early tire-kicking stage,” observed John McKenny, BMC Vice President, Strategy and Operations.

Eighty-eight percent of companies in the first group, entrenched mainframe shops, for example, are looking to increase the Java workloads they run on the mainframe, primarily to address new application demands. It doesn’t hurt that Java on the mainframe can also help lower data center costs by directing workloads to lower-cost assist processors.

Other interesting BMC survey findings:

  • Half of the respondents report keeping 50% of their data on the mainframe and continue to invest in the platform for reasons you already know—security, availability, data serving capability
  • Continued steady growth of Linux in production on the z: 41% in 2014, 48% in 2015, 52% in 2016
  • Increased use of Java on the mainframe, with 67% of respondents citing the need to meet growing application demand

Those looking to reduce mainframe presence cited three reasons: 1) perception of high cost, 2) outdated management understanding, and 3) looking for ways to reduce workloads over time.  DancingDinosaur has spoken with mainframe shops intending to migrate off the z and they cite the usual reasons, especially #1 above.

Top mainframe priorities for 2016, according to the BMC survey: cost reduction/optimization (65%); data privacy, compliance, security (50%); application availability (49%); application modernization (41%). Responses indicated the priorities for next year haven’t changed at all.

Surprisingly, many of the latest technologies for the z that IBM has touted recently have not yet shown up in the BMC survey responses, except maybe Java and Linux. This would include hybrid clouds, blockchain, IoT, and cognitive computing. IDC, for example, already is projecting cognitive computing to grow at a CAGR of 55.1% from 2016 to 2020. For z shops, however, cognitive computing appears almost invisible.

In some cases with surveys like this you need to read between the lines. Where respondents report changes in activity levels driving application growth, growing interest in Java, frequent application changes, or references to operational analytics, they are making oblique references to mobile, big data, or even cognitive computing and other recent technologies for the z.

At its best, the BMC survey notes that digital technologies are transforming the ways in which mainframe shops conduct business and interact with their customers. Adds BMC mainframe customer Frank Cortell, Credit Suisse Director of Information Technology: “IT departments are moving toward centralized, virtualized, and highly automated environments. This is being pursued to drive cost and processing efficiencies. Many companies realize that the Mainframe has provided these benefits for many years and is a mature and stable environment.”

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.


Compuware Continues Mainframe GUI Tool Enhancements

July 1, 2016

Early in 2015 Compuware announced the first in what it promised would be a continuing stream of new mainframe tools and tool enhancements. Did anyone really believe them? Mainframe ISVs are not widely regarded for their fast release cycles. DancingDinosaur reported on it then here and has continued to follow up and report its progress through a handful of new releases. This past week, DancingDinosaur received new Compuware mainframe tool announcements. For a mainframe ISV this is almost unheard of. IBM sometimes releases new mainframe products in intense spurts but then quickly resumes its typical languid release pace.


Screen from Compuware’s ISPW for Continuous Delivery to the Mainframe

Let’s take a look at each of these new releases. First, ISPW Deploy, an advanced mainframe release automation solution that enables large enterprises to bring continuous delivery best practices to their IBM z/OS environments. ISPW Deploy, built on the ISPW technology Compuware acquired in January 2016, facilitates faster and more reliable mainframe software deployment. Specifically, according to Compuware, it helps in three ways:

  1. Automation that rapidly moves code through the deployment process, including test staging and approvals, while also providing greatly simplified full or partial rollbacks.
  2. Visualization that enables DevOps managers to quickly pinpoint deployment issues in order to both solve immediate rollout problems and address persistent bottlenecks in code promotion.
  3. Integrations with both third-party solutions and Compuware’s own industry-leading mainframe toolkit that allow IT to build complete SCM-to-production DevOps pipelines and to quickly launch associated remediation support tools if and when deployment issues occur.

Compuware is further empowering enterprises to achieve mainframe agility through integration. For instance, its ISPW and XebiaLabs’ cross-platform continuous delivery solutions together enable IT organizations to orchestrate and visualize their mainframe DevOps processes in a common manner with their broader cross-platform DevOps automation.

The second announcement focused on XebiaLabs, as noted above. The idea here is to deliver cross-platform continuous releases for the mainframe. As Compuware explained, enterprises using XebiaLabs’ solution suite and Compuware ISPW can now automate and monitor all phases of mainframe DevOps within the same continuous delivery management environment they use for their distributed, web, and cloud platforms. This automation and monitoring includes test/QA, pre-copy staging, and code promotion. The goal, as with all DevOps, is to speed digital agility for mainframe or distributed systems or both.

The third announcement concerned a partnership between Compuware and ConicIT that aims to help a new generation of IT ops staff proactively resolve emerging mainframe issues before they impact application service levels. It does so by integrating ConicIT’s predictive mainframe analytics with Compuware’s Strobe, which provides visually intuitive troubleshooting intelligence. Together, the two companies promise to enable even IT staff with relatively little hands-on mainframe experience to quickly identify and resolve a wide range of application performance problems.

The key to doing this is a reliance on the adoption of intuitive GUI interfaces. Compuware started this with its Topaz tools and has been continuing along this path for two years. Compuware’s CEO, Chris O’Malley, has been harping on these themes almost since he first arrived there.

Compuware customers apparently have gotten the message. As reported: “Market pressures are making it essential for us to deliver quality products and services to our clients more frequently, and the mainframe plays a critical role in that delivery,” according to Craig Danielson, Assistant Vice President for Commerce Bank. “We leverage ISPW to help in this capacity and its new capabilities will provide us the automation and visibility of our software deployment process to help us continuously improve our internal operations and services.” (note: DancingDinosaur did not validate this customer statement.)

Companies will need all the help modern mainframe tools can deliver. Mainframe data centers are facing unprecedented challenges that require unusual speed and agility. In short, they need DevOps fast. And they will have to respond with an increasingly aging core of experienced mainframe staff supplemented by millennials who have to be coaxed and cajoled onto the mainframe with easy graphical tools. If mainframe data centers can’t respond to these challenges—not just cloud, mobile, Linux, and analytics, but also IoT, blockchain, cognitive computing, and whatever else comes along next—how are they going to cope? Already their users, the line of business managers, are turning to shadow IT out of frustration with the slow response from mainframe data centers. And you know what comes next.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.


IBM to Acquire EZSource to Bolster IBM z System for Digital Transformation

June 9, 2016

Over the past two years you have been reading in DancingDinosaur about the new tricks your z System can do—real-time analytics, IoT, blockchain, and more. This is part of the digital transformation that is blowing through enterprises everywhere. EZSource facilitates and simplifies how you can play in these new areas. See the IBM announcement here.


EZSource Dashboard, Credit: EZSource

EZSource expedites digital transformations by unlocking core business logic and apps. Specifically, it will pinpoint your valuable mainframe code assets in preparation for leveraging them through a hybrid cloud strategy. In the process it will enable the understanding of business-critical assets in preparation for deployment of a z-centered hybrid cloud. This also enables enterprise DevOps, which is necessary to keep up with the pace of changes overtaking existing business processes.

Specifically, this can entail the need to:

  • Identify API candidates to play in the API economy
  • Embrace microservices to deliver versatile apps fast
  • Identify code quality concerns, including dead code, to improve reliability and maintainability
  • Mitigate risk of change through understanding code, data and schedule interdependencies
  • Aid in the sizing of a change effort
  • Automate documentation to improve understanding
  • Reduce learning curve as new people are on-boarded
  • Add application understanding to DevOps lifecycle information to identify opportunities for work optimization

Managers at z data centers often shy away from modifying aging business-critical applications for fear of breaking something; if it ain’t broke, don’t fix it often is the mantra. They also are rationing the use of their few remaining experienced z veterans with the domain expertise and deep knowledge of what turns out to be quite esoteric code. This is further aggravated by poorly documented mainframe code. The way to mitigate this risk of change is through understanding code, data, and interdependencies. EZSource can handle this visually and with ease; you no longer need to be an experienced z code expert.

So what do you have to do to get on the digital transformation bandwagon? Start by identifying your mainframe assets that are most often accessed. Most of them will be what the mobile apps are calling, usually a CICS routine or two or three. Then expose these business-critical services through APIs and microservices. This may require rewriting parts of them as platform-agnostic Java components to work within the context of a hybrid cloud. As noted just above, EZSource can help with much of this too.
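As a rough sketch of that last step, here is what exposing a mainframe-resident function through a REST API can look like using standard JAX-RS annotations. Everything here is illustrative: the CicsAccountService class is a hypothetical stand-in for whatever connector (CICS Transaction Gateway, z/OS Connect, or similar) a shop actually uses.

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.PathParam;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.MediaType;

    // Hypothetical REST facade over a business-critical CICS routine,
    // letting mobile and cloud apps consume it as a plain JSON API.
    @Path("/accounts")
    public class AccountResource {

        // Stand-in for a real connector such as CICS Transaction Gateway
        // or z/OS Connect.
        private final CicsAccountService backend = new CicsAccountService();

        @GET
        @Path("/{id}/balance")
        @Produces(MediaType.APPLICATION_JSON)
        public String balance(@PathParam("id") String accountId) {
            // Delegates to the existing mainframe logic rather than rewriting it.
            return "{\"account\":\"" + accountId + "\",\"balance\":"
                   + backend.lookupBalance(accountId) + "}";
        }
    }

    // Hypothetical wrapper; in practice this would invoke the CICS program.
    class CicsAccountService {
        double lookupBalance(String accountId) {
            return 0.0; // placeholder for the actual mainframe call
        }
    }

The point is that the COBOL logic itself stays put; only a thin, well-understood layer is added in front of it.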

In short, EZSource performs app discovery, which facilitates code quality improvement. It helps clean up code. It also applies analytics to DevOps, in effect enabling Cognitive DevOps, which makes sense in the dynamic hybrid cloud. The result: you focus only on the relevant code and, of that, what is particularly problematic.

The goal is to increase competitiveness and business innovation through digital forms of engagement; the engagement currently being fueled by mobile, social, analytic, and cognitive computing in a hybrid cloud environment. The expectation is that you will be able to tap the accumulated software assets for insights while modernizing business critical applications already resident on the z. IBM contends that this is the fastest and most cost effective way to drive new value and agility and DancingDinosaur agrees.

Is it worth it?  Most DancingDinosaur readers probably already believe that the mainframe and its data and software assets sit smack at the center of a digital enterprise. (Just a glimpse of the growth of monthly software peak workload charges should confirm that). It makes no sense not to leverage this complex resource to the max. EZSource, with its smart code analytics and visual capabilities, can save thousands of hours of work, avoid mistakes, and speed the delivery of the kind of slick new hybrid cloud apps that customers are demanding.  EZSource is primarily a Linux, Windows, and Java tool with only a few pieces residing on the z to handle necessary z-specific connectivity.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.


Ubuntu Linux (beta) for the z System is Available Now

April 8, 2016

As recently as February, DancingDinosaur was lauding IBM’s bolstering of the z System for Linux and support for the latest styles of app dev. As part of that it expected Ubuntu Linux for z by the summer. It arrived early. You can download it for LinuxONE and the z now, here.

Of course, the z has run Linux for over a decade. That was a customized version that required a couple of extra steps, mainly recompiling, if x86 Linux apps were to run seamlessly. This time Canonical and the Ubuntu community have committed to work with IBM to ensure that Ubuntu works seamlessly with IBM LinuxONE, z Systems, and Power Systems. The goal is to enable IBM’s enterprise platforms to play nicely with the latest app dev goodies, including NFV, containers, KVM, OpenStack, big data analytics, DevOps, and even IoT. To that end, all three parties (Canonical, the Ubuntu community, and IBM) commit to provide reference architectures, supported solutions, and cloud offerings, now and in the future.

Ubuntu is emerging as the platform of choice for organizations running scale-out, next-generation workloads in the cloud. According to Canonical, Ubuntu dominates public cloud guest volume and production OpenStack deployments with up to 70% market share. Global brands running Ubuntu at scale in the cloud include AT&T, Walmart, Deutsche Telecom, Bloomberg, Cisco and others.

The z and LinuxONE machines play right into this. They can support thousands of Linux images with no-fail high availability, security, and performance. When POWER9 processors come to market it gets even better. At a recent OpenPOWER gathering POWER9 generated tremendous buzz, with Google discussing its intention to build a new data center server based on an open POWER9 design that conforms to Facebook’s Open Compute Project server.

These systems will be aimed initially at hyperscale data centers. OpenPOWER processors combined with acceleration technology have the potential to fundamentally change server and data center design today and into the future.

According to Aaron Sullivan, Open Compute Project Incubation Committee Member and Distinguished Engineer at Rackspace, “OpenPOWER provides a great platform for the speed and flexibility needs of hyperscale operators as they demand ever-increasing levels of scalability.” This is true today, and with POWER9, a reportedly 14nm processor coming around 2017, it will be even more so. This particular roadmap looks out to 2020, when POWER10, a 10nm processor, is expected with the task of delivering extreme analytics optimization.

But for now, what is available for the z isn’t exactly chopped liver. Ubuntu is delivering scale-out capabilities for the latest development approaches to run on the z and LinuxONE. As Canonical promises: Ubuntu offers the best of open source for IBM’s enterprise customers along with unprecedented performance, security, and resiliency. The latest Ubuntu version, Ubuntu 16.04 LTS, is in beta and available to all IBM LinuxONE and z Systems customers. See the link above. Currently SUSE and Red Hat are the leading Linux distributions among z data centers. SUSE also just announced a new distro of openSUSE Linux for the z, to be called openSUSE Factory.

Also this week the OpenPOWER Foundation held its annual meeting, where it introduced technology to boost data center infrastructures with more choices, essentially allowing increased data workloads and analytics to drive better business results. Am hoping that the Open Mainframe Project will emulate the OpenPOWER group in a year or two by starting to introduce technology to boost mainframe computing along the same lines.

For instance OpenPOWER introduced more than 10 new OpenPOWER servers, offering expanded services for high performance computing and server virtualization. Or this: IBM, in collaboration with NVIDIA and Wistron, revealed plans to release its second-generation OpenPOWER high performance computing server, which includes support for the NVIDIA Tesla Accelerated Computing platform. The server will leverage POWER8 processors connected directly to the new NVIDIA Tesla P100 GPU accelerators via the NVIDIA NVLink, a high-speed interconnect technology.

In the same batch of announcements TYAN announced its GT75-BP012, a 1U, POWER8-based server solution with the ppc64 architecture. The ppc64 architecture is optimized for 64-bit big-endian PowerPC and Power Architecture processors. Also of interest to DancingDinosaur readers may be the little-endian variation of ppc64 (ppc64le), which on POWER8 enables the porting of x86 Linux-based software with minimal effort. BTW, the OpenPOWER-based platform reportedly offers exceptional capability for in-memory computing in a 1U implementation, part of the overall trend toward smaller, denser, and more efficient systems. The latest TYAN offerings will only drive more of it.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

Docker on IBM z System

January 7, 2016

“If you want Docker on z, you can do it in next to 30 seconds,” says Dale Hoffman, Program Director, Linux SW Ecosystem & Innovation Lab. At least if you’re running Linux on z, and preferably on a LinuxONE z. With all the work Hoffman’s team has done laying the groundwork for Docker on the z, you barely have to do anything yourself.


Containers are ideal for cloud computing or, more importantly, for hybrid clouds, defined as the connection of one or more clouds to other clouds. Hybrid clouds are where IBM sees the industry and the z going, and containers, particularly Docker containers, have emerged as the vehicle to get enterprises there. Click here for an FAQ on Docker with z.

z System shops can get there fast using tools Hoffman’s group has already built for the z. To get started, just click here. Or, simply go to IBM Bluemix, from which you can build and deploy Docker containers for the z and other platforms. Back in June IBM introduced enterprise class containers that make it easier for developers to deliver production applications across their hybrid environments.

IBM also offers its own IBM branded containers that allow organizations to deploy, manage, and run application components on the IBM Bluemix development platform by leveraging the open-source Docker container technology. IBM Bluemix now offers three infrastructure compute technology choices to deploy applications – Docker containers, OpenStack virtual machines, or Cloud Foundry apps. Designed for enterprise production workloads, IBM Containers can be securely deployed with integrated scalability and reliability, which enterprise customers rely upon.

In keeping with IBM’s policy of not going it alone, the company also has become a founding member of a coalition of partners and users behind the Open Container Initiative (OCI), which aims to ensure containers are interoperable. Features of the IBM Containers include integrated tools such as log analytics, performance monitoring and delivery pipeline, elastic scaling, zero-downtime deployments, automated image security/vulnerability scanning, and access to Bluemix’s catalog of over 100 cloud services, including Watson, Analytics, IoT, and Mobile.

Enterprise z shops want containers because they need to be as fast and agile as the born-in-the-cloud upstarts challenging them. Think survival. Containers like Docker provide ease of use, portability, and fast deployment almost anywhere to get new applications into production fast. Docker basically puts its engine/runtime on top of the OS and provides virtual containers into which software is deployed. The appeal is easy portability of the application to any Docker environment anywhere, plus fast deployment.
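To make that concrete, here is a minimal sketch of the standard Docker workflow driven from Java, assuming the open-source docker-java client library; the s390x image name is illustrative, and nothing in the calls is z-specific, which is the point.

    import com.github.dockerjava.api.DockerClient;
    import com.github.dockerjava.api.command.CreateContainerResponse;
    import com.github.dockerjava.core.DockerClientBuilder;
    import com.github.dockerjava.core.command.PullImageResultCallback;

    // Sketch: pull a mainframe-built (s390x) image and run it; the same
    // Docker workflow used on x86, which is the whole point of Docker on z.
    public class DockerOnZ {
        public static void main(String[] args) throws InterruptedException {
            DockerClient docker = DockerClientBuilder.getInstance().build();

            // Illustrative image name; any s390x-built image works the same way.
            String image = "s390x/ubuntu:16.04";

            // Pull the image and wait for the download to finish
            docker.pullImageCmd(image)
                  .exec(new PullImageResultCallback())
                  .awaitCompletion();

            // Create and start a container from it
            CreateContainerResponse container = docker.createContainerCmd(image)
                  .withCmd("echo", "hello from Linux on z")
                  .exec();

            docker.startContainerCmd(container.getId()).exec();
        }
    }

The identical code pulls and runs an image on a laptop or on LinuxONE; only the architecture the image was built for differs.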

Specifically the Docker technology provides application portability by utilizing open-source, standardized, light-weight, and self-sufficient container capabilities. IBM’s implementation of the Docker technology with enterprise capabilities further strengthens IBM’s support for hybrid cloud environments. Of course, not every application at every stage in its lifecycle will run in the public cloud—many if not most won’t ever–but IBM Containers enables the developers to determine when to run containers on premise and when to deploy to the public cloud on IBM Bluemix with full Internet connectivity. Image files created within IBM Containers support portability and can be instantiated as containers on any infrastructure that runs Docker.

Through the use of containers on z you can shape your environment using system virtualization and container elements according to your landscape and your requirements, with hardly any constraints on performance. In addition, Docker on z provides greater business agility to go to market quicker and solve business problems effectively through DevOps agility via Docker containers and microservices. Then add hybrid cloud portability, by which you move the same application across multiple clouds. In short, you can define your IT structures according to your needs, not your system constraints.

Finally, there is nothing threatening about Docker containers on z. Docker is Docker is Docker, even on z, says Hoffman; it relies on the same container technology of Linux, which has been available on z for many years. So get started with containers on z and let DancingDinosaur know when you have success deploying your z containers.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

Making the IBM Mainframe Agile and Swift

December 7, 2015

Do you remember what the mainframe was like when you started on the mainframe career path? Today IBM blurs distinctions between the mainframe and distributed environments through Linux and Java as well as cloud and mobile delivery models. Heck, you can run Windows natively on x86 cards in a zBX cabinet managed from a console on the z itself. Maybe it’s not the most efficient way to do it, and expect better ways coming from IBM, but it is doable now.

DevOps in the SDLC, Courtesy of Seasoft

More than just interact, the z and distributed environment must productively and seamlessly integrate and interoperate to produce a streamlined development, test, and deployment process. Compounding the challenge: they must do it fast. Organizations can no longer wait for six-month or nine-month release cycles to introduce new capabilities. If capabilities cannot be introduced in just a few weeks max, opportunities and revenue can be lost.  Agile and batch teams have no choice; they must work together.

This calls for data center adoption of DevOps, a combination of development, testing, and operations. Already IBM has instituted DevOps on the z System. The idea of bringing agile and batch together—it almost sounds like an oxymoron or the punchline from a bad computer systems joke—no longer is farfetched. Welcome to the world of hybrid computing where what was once considered disparate and incompatible systems are being brought together, often on the same platform.

The latest generations of the mainframes have been fully hybrid-capable platforms, starting with the z10. They are capable of running mixed workloads concurrently, some of which previously belonged in the distributed platform world only. Today, a mainframe shop with the latest z13 can run traditional z/OS COBOL workloads right alongside Java and Linux workloads. Those with a zBX extension cabinet can even run Windows workloads too under the same unified mainframe management console.

If that sounds a little too kludgy for you, just jump into the cloud. From Bluemix in the cloud you can get to DevOps and find just about everything you need already there, including IBM’s StrongLoop acquisition for API management and microservices.

So now the idea of bringing batch and agile computing together on the mainframe platform doesn’t seem so farfetched.  And it won’t stop there. IBM has been doing its enterprise thing with Apple for about a year. Expect more coming.

That said, an agile mainframe/distributed DevOps environment will become increasingly necessary. How often do you release software? Previously, if an IT organization released new software every year or even every 18 months, customers were satisfied. Not anymore. Today you can’t wait six months before the organization risks falling behind. LOB managers and customers won’t wait. There are too many competitors waiting for any chance to seize an advantage. Slow system refreshes and software updates just play into these competitors’ hands.

DevOps also is essential to the organization’s mobile strategy. Companies in every industry segment are deploying new mobile apps as fast as they can and then almost immediately updating them. For many of these mobile apps the mainframe is the back end, if not the middleware too. Each mobile request for information or to make a purchase or to schedule something triggers numerous back end processes that quickly make their way to the mainframe. It had gotten to the point where IBM had to discount mobile processing on the z or it would hinder mobile growth. DancingDinosaur covered it here.

Helping to drive mobile on the z, of course, is IBM’s relationship with Apple. Over the past year the two companies have been bringing out combined enterprise-mobile applications. Now Apple just announced that it is making its popular programming language, Swift, open source. It shouldn’t take much to get it onto Bluemix. Back in Sept. 2014  IBM announced it already had a preliminary version working through Bluemix.

Although Swift is known mainly for mobile client development, today it is described as combining the performance and efficiency of compiled languages with the simplicity and interactivity of popular scripting languages.  Apple’s Swift strategy seems coming right out of IBM’s recent playbook of embracing open source communities. You can get started at the Swift website, here.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

IBM z Systems at Edge2015

April 9, 2015

There are so many interesting z Systems sessions at IBM Edge2015 that DancingDinosaur can’t come close to attending them all or even writing about them.  Edge2015 will be in Las Vegas, May 10-15, at the Venetian, a huge hotel that just happens to have a faux Venice canal running within it (and Vegas is in the desert, remember).

The following offers a brief summation of a few z Systems sessions that jumped out at me. In the coming weeks DancingDinosaur will look at sessions on Storage, Power Systems, cross-platform topics, and middleware. IBM bills Edge2015 as the Infrastructure Innovation Conference, so this blog will try at least to touch on bits of all of it. Am including the session numbers and presenters, but please note that sessions and presenters may change.

Mobile as the next evolution, Courtesy of IBM (click to enlarge)

Session zBA1909; Mobile and Analytics Collide – A New Tipping Point; presenter Mark Simmonds

DancingDinosaur started following mobile on z in 2012 and was reporting IBM mobile successes as recently as last month; click here. In this session Simmonds observes organizations being driven to deliver more insight and smarter outcomes in pursuit of increasing revenue and profit while lowering business costs and risks. The ubiquity of mobile devices adds two important dimensions to business analytics: the time and location of customers. Now you have an opportunity to leverage both via the mobile channel, but only if your analytics strategy can respond to the demands of the mobile moment. At this session you’ll see how customers are using IBM solutions and the z to deliver business-critical insight across the mobile community and hear how organizations are setting themselves apart by delivering near real-time analytics.

Session zBA1822; Hadoop and z Systems; presenter Alan Fellwock

DancingDinosaur looked at Hadoop on z as early as 2011. At that point it was mainly an evolving promise. By this past fall it had gotten real, click here.  In this session, Fellwock notes that various use cases are emerging that require Hadoop processing in conjunction with z Systems. In one category, the data originates on the z Systems platform itself—this could be semi-structured or unstructured data held in DB2 z/OS, VSAM or log files in z/OS. In another category, the data originates outside z Systems –this could be social media data, email, machine data, etc.—but needs to be integrated with core data on z Systems. Security and z Systems governance becomes critical for use cases where data originates on z Systems. There are several z Hadoop approaches available, ranging from Hadoop on Linux to an outboard Hadoop cluster under z governance to a cloud model that integrates with SoftLayer.

Session zAD1876; Bluemix to Mainframe – Making Development Accessible in the Cloud; presenter Rosalind Radcliffe

Cloud capability and technology is changing the way enterprises go to market. DancingDinosaur interviewed Radcliffe for a posting on DevOps for the mainframe in March. DevOps is about bringing the entire organization together, including development and operations, to more efficiently deliver business value be it on premise, off premise, or in a hybrid cloud environment. This session promises to explore how IBM DevOps solutions can transform the enterprise into a high quality application factory by leveraging technology across platforms and exploiting both systems of record and systems of engagement applications. It will show how to easily expose your important data and customer applications to drive innovation in a nimble, responsive way, maintaining the logic and integrity of your time-tested systems.

Session zAD1620; APIs to the Enterprise: Unlocking Mainframe Assets for Mobile and Cloud Applications; presenter Asit Dan

The emergence of APIs has changed how organizations build innovative mobile and web applications, enter new markets, and integrate with cloud and third party applications. DancingDinosaur generally refers to this as the API economy and it will become only more important going forward. IBM z Systems data centers have valuable assets that support core business functions. Now they can leverage these assets by exposing them as APIs for both internal and external consumption. With the help of IBM API Management, these organizations can govern the way APIs are consumed and get detailed analytics on the success of the APIs and applications that are consuming them. This session shows how companies can expose z Systems based functions as APIs creating new business opportunities.

Session zAD1469; Java 8 on IBM z13 – An Unstoppable Force Meets an Immovable Object; presenter Elton De Souza

What happens when you combine the most powerful commercially available machine on the planet with the latest iteration of the most popular programming language on the planet? An up to 50% throughput improvement for your generic applications and up to 2x throughput improvement for your security-enabled applications – that’s what! This session covers innovation and performance of Java 8 and IBM z13. With features such as SMT, SIMD and cryptographic extensions (CPACF) exploitation, IBM z Systems is once again pushing the envelope on Java performance. Java 8 is packed with features such as lambdas and streams along with improved performance, RAS and monitoring that continues a long roadmap of innovation and integration with z Systems. Expect to hear a lot about z13 at Edge2015.
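For anyone who hasn’t yet played with Java 8, here is a small, self-contained taste of the lambdas and streams De Souza will discuss. There is nothing z-specific in the code itself; the z13 gains come from how the JVM exploits SMT, SIMD, and CPACF underneath it.

    import java.util.Arrays;
    import java.util.List;

    public class Java8Demo {
        public static void main(String[] args) {
            List<String> platforms = Arrays.asList("z13", "LinuxONE", "POWER8", "x86");

            // Lambda passed to forEach instead of an anonymous inner class
            platforms.forEach(p -> System.out.println("Platform: " + p));

            // Stream pipeline: filter and aggregate in one pass
            long zCount = platforms.stream()
                                   .filter(p -> p.startsWith("z") || p.contains("ONE"))
                                   .count();
            System.out.println("IBM z family entries: " + zCount);
        }
    }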

Of course, there is more at Edge2015 than just z Systems sessions. There also is free evening entertainment. This year the headliner act is Penn & Teller, a pair of magicians. DancingDinosaur’s favorite, however, is Grace Potter, who delivers terrific hard rock and roll. Check her out here.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing on Technologywriter.com and here. And join DancingDinosaur at IBM Edge2015. You will find me hanging out wherever people gather around available power outlets to recharge mobile devices.

IBM DevOps for the Mainframe

March 27, 2015

DevOps is not just for distributed platforms. IBM has a DevOps strategy for large enterprises (usually mainframe shops) too. Nationwide, a longtime mainframe shop, is an early adopter of DevOps and already is reporting significant gains: reduction in critical software defects by 80% and a 20% efficiency gain in its maintenance and support operations in just 18 months.

DevOps, an agile methodology, establishes a continuous feedback loop between software development and deployment/operations that speeds development and deployment while ensuring quality. This is a far cry from the waterfall development methodologies of the mainframe past.

DevOps adoption model

Courtesy of IBM (click to enlarge)

The IBM DevOps initiative, announced last November (link above), taps into the collaborative capabilities of IBM’s Cloud portfolio to speed the delivery of software that drives new models of engagement and business. Software has become the rock star of IT with software-driven innovation becoming a primary strategy for creating and delivering new value to customers. A survey of 400 business and IT executives by the IBM Institute for Business Value showed businesses that are more effective at software delivery are also more profitable than their peers nearly 70% of the time. As such, IBM notes, DevOps provides a way for businesses to remain competitive, applying lean and agile principles to software development to speed the delivery of software that meets new market requirements.

Some mainframe shops, however, continue to operate from a software standpoint as if client/server computing and PCs were still the new game in town. Meanwhile the business units keep complaining about how long it takes to make software changes while long backlogs drag on the IT budget.

DevOps is about continuous software development and deployment. That means continuous business planning, continuous collaborative dev, continuous testing, continuous release and deployment, continuous monitoring, and continuous feedback and optimization in a never ending cycle. Basically, continuous everything.  And it really works, as Nationwide can attest.

But DevOps makes traditional mainframe shops nervous. Mainframe applications are rock solid, and crashes and failures are almost unheard of. How can they switch to DevOps without risking everything the mainframe stands for: zero failure?

The answer: mainframe DevOps that leads straight into continuous testing, not deployment. The testing can and should be as rigorous and extensive as is necessary to reassure that everything works as it should and anything that will fail has failed. Only then does it go into production.

It would be comforting to the data centers to say that DevOps only addresses systems of engagement; those pesky mobile, collaborative, and social systems that suddenly are making demands on the core mainframe production applications. But that is not correct. DevOps is about integrating systems of engagement with systems of record, the enterprise’s mainframe crown jewels. The trick is to bring together the culture, processes, and tools across the entire software delivery lifecycle, as IBM says, to span it all—mobile to mainframe, slowing down only to conduct as exhaustive testing as the enterprise requires.

Mainframe tools from the era of waterfall methodologies won’t cut it. Rational offers a set of tools starting with Blue Agility. IBM also offers an expanded set of tools acquired through acquisitions such as UrbanCode (release automation) and GreenHat (software quality and testing solutions for the cloud and more) that offer an integrated developer experience on open cloud platforms such as Bluemix to expedite DevOps collaboration, according to IBM.

Expect push back from any attempt to introduce DevOps into a traditional mainframe development culture. Some shops have been developing systems the same way for 30 years or more. Resistance to change is normal. Plan to start gradually, implementing DevOps incrementally.

Some shops, however, may surprise you. Here the mainframe team senses they are falling behind. IBM, of course, has tools to help (see above). Some experts recommend focusing on automated testing early on; when testing is automated DevOps adoption gets easier, they say, and old school developers feel more reassured.

At IBM Edge2015, there are at least two sessions on DevOps: Light Up Performance of Your LAMP Apps and DevOps with a Power Optimized Stack; and CICS Cloud DevOps = Agility2. BTW, now is a good time to register for IBM Edge2015, while you can still get a discount. IBM Edge2015, being billed as the Infrastructure Innovation Conference, takes place May 11–15 at The Venetian in Las Vegas. DancingDinosaur will be there. Have just started poring over the list of sessions on hundreds of topics for every IBM platform and infrastructure subject. IBM Edge2015 combines what previously had been multiple conferences into one.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing on Technologywriter.com and here. And join DancingDinosaur at IBM Edge2015.

IBM Brings Cloud DevOps to the Mainframe

December 3, 2014

Is your organization ready for DevOps?  It should be coming to System z data centers almost any day now, riding in on newly announced IBM cloud-based DevOps services, software, and infrastructure designed to help large organizations develop and deliver quality software faster.


Launch of the Bluemix Garage in London

DevOps streamlines enterprise workflow by truncating the development, testing, and deployment process. It entails collaborative communications around the end-to-end enterprise workflow and incorporates continuous feedback to expedite the process. DevOps evolved out of Agile methodologies over a decade ago.

Agile was intended to streamline the traditional waterfall IT development process by putting developers and business unit people and the deployment folks together to build, test, and deploy new applications fast. Agile teams would deliver agreed upon and tested functionality within a month. Each deliverable was short, addressing only a subset of the total functionality. Each was followed by the next containing yet more functionality. In the process, previously delivered functionality might be modified or replaced with a new deliverable.

IBM is streamlining the process further by tapping into the collaborative power of the company’s Cloud portfolio and business transformation experience to speed the delivery of software that supports new models of engagement.  To be clear, IBM definitely is not talking about using DevOps with the organization’s systems of record—the core transaction systems that are hallmark of the z and the heartbeat of the enterprise. The most likely candidates will be systems of engagement, systems of innovation, and analytics systems.  These are systems that need to be delivered fast and will change frequently.

According to IBM software-driven innovation has emerged as a primary way businesses create and deliver new value to customers. A survey of 400 business and IT executives by the IBM Institute for Business Value showed businesses that are more effective at software delivery are also more profitable than their peers nearly 70 percent of the time. DevOps provides a way for businesses to remain competitive, applying lean and agile principles to software development to speed the delivery of software that meets new market requirements.

Agile represented a radical departure from the waterfall process, which called for developers to take a full set of business requirements, disappear for two years, and return with a finished application that worked right. Except that it often took longer for the developers to return with the code, and the application didn’t work as promised. By then the application was well over budget and late. System z shops know this well.

DevOps today establishes a continuous, iterative process flow between the development team and the deployment group and incorporates many Agile concepts, including the active involvement of the business people, frequent testing, and quick release cycles. As the IBM survey noted, DevOps was spurred by the rise of smartphones and mobile computing. Mobile users demand working functionality fast and expect frequent updates. Two-year release cycles were unacceptable; competitors would be out with newer and better apps long before. Even six-month release cycles seem unresponsive. This is one of the realities DevOps addresses. Another reality is extreme scaling, something z data centers understand.

According to IBM, the company’s new DevOps Innovation Services help address the challenge of scaling development, enabling enterprises to shorten their software delivery lifecycle. The hybrid cloud services combine IBM’s industry expertise from hundreds of organizational change and application development projects with the industry’s leading application development portfolio, especially Bluemix, IBM’s open DIY cloud PaaS platform. They also apply the flexibility of IBM’s enterprise-grade, hybrid cloud portfolio, which was recently ranked by Synergy Research Group as the leading hybrid and private cloud for the enterprise. These services are based on SoftLayer, IBM’s cloud infrastructure platform.

In a second DevOps-related announcement last month IBM described an initiative to bring a greater level of control, security and flexibility to cloud-based application development and delivery with a single-tenant version of Bluemix. The new initiative enables developers to build applications around their most sensitive data and deploy them in a dedicated cloud environment to help them capture the benefits of cloud while avoiding the compliance, regulatory and performance issues that are presented with public clouds. System z shops can appreciate this.

Major enterprise system vendors like IBM, EMC, Cisco, and Oracle are making noises about DevOps. As far as solid initiatives IBM appears far ahead, especially with the two November announcements.

DancingDinosaur is Alan Radding, an independent IT analyst and writer. Follow DancingDinosaur on Twitter, @mainframeblog. Find more of his IT writing at Technologywriter.com and here.

