Posts Tagged ‘IDE’

IBM Boosts DevOps with ADDI on Z

February 9, 2018

IBM’s Application Discovery and Delivery Intelligence (ADDI) is an analytical platform for application modernization. It uses cognitive technologies to analyze mainframe applications so you can quickly discover and understand interdependencies and the impacts of change. You can use this intelligence to transform and renew these applications faster than ever, capitalize on time-tested mainframe code to engage the API economy, and accelerate application transformation in your IBM Z hybrid cloud environment.

Formerly, ADDI was known as EZSource. Back then, EZSource was designed to expedite digital transformations by unlocking core business logic and apps. Specifically, it enabled the IT team to pinpoint specific mainframe code in preparation for leveraging IT through a hybrid cloud strategy. In effect, it enabled the team to understand business-critical assets in preparation for deployment of a z-centered hybrid cloud. This also enabled enterprise DevOps, which was necessary to keep up with the pace of changes overtaking existing business processes.

This wasn’t easy when EZSource initially arrived, and it still isn’t, although the intelligence built into ADDI makes it easier now. Originally it was intended to help the mainframe data center team to:

  • Identify API candidates to play in the API economy
  • Embrace microservices to deliver versatile apps fast
  • Identify code quality concerns, including dead code, to improve reliability and maintainability
  • Mitigate risk of change through understanding code, data, and schedule interdependencies
  • Aid in sizing the change effort
  • Automate documentation to improve understanding
  • Reduce the learning curve as new people are onboarded
  • Add application understanding to DevOps lifecycle information to identify opportunities for work optimization

Today, IBM describes Application Discovery and Delivery Intelligence (ADDI), its follow-up to EZSource, as an analytical platform for application modernization. It uses cognitive technologies to analyze mainframe applications so your team can quickly discover and understand interdependencies and impacts of any change. In theory you should be able to use this intelligence to transform and renew these applications more efficiently and productively. In short, it should allow you to leverage time-tested mainframe code to engage with the API economy and accelerate the application transformation on your IBM Z and hybrid cloud environment.

More specifically, it promises to enable your team to analyze a broad range of IBM and non-IBM programming languages, databases, workload schedulers, and environments. Enterprise application portfolios were built over decades using an ever-evolving set of technologies, so you need a tool with broad support, such as ADDI, to truly understand the relationships between application components and accurately determine the impacts of potential changes.

In practice, it integrates with mainframe environments and tools via a z/OS agent to automatically synchronize application changes. Unless your application analysis stays synchronized with the latest changes your developers make, IBM notes, the analysis goes stale and you risk missing critical changes.

In addition, it provides visual analysis integrated with leading IDEs. Data center managers are petrified of changing applications that still work, fearing they will inadvertently break them or slow performance. When modifying complex applications, you need to be able to quickly navigate the dependencies between application components and drill down to see relevant details. After you understand the code, you can then modify it effectively at much lower risk. The integration between ADDI and IBM Developer for z Systems (IDz) combines the leading mainframe IDE with the application understanding and analytics capabilities you need to safely and efficiently modify the code.

It also, IBM continues, cognitively optimizes your test suites. When you have a large code base to maintain and many tests to run, you need to run those tests as efficiently as possible. ADDI correlates code coverage data and code changes with test execution records to identify which regression tests are the most critical, allowing you to optimize time and resources while reducing risk. It exposes poorly tested or complex code and empowers test teams with cognitive insights that turn awareness of trends into mitigation of future risks.
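
IBM doesn’t publish ADDI’s internals, but the underlying test-impact idea is straightforward: rank regression tests by how much of the changed code they exercise. Here is a minimal Python sketch of that idea; the module and test names are invented for illustration.

    # Toy test-impact analysis: rank regression tests by how many of the
    # modules changed in the latest commit they actually exercise.
    # This illustrates the concept, not ADDI's implementation.
    def rank_regression_tests(coverage, changed_modules):
        """coverage: test name -> set of modules it exercises."""
        scores = {test: len(modules & changed_modules)
                  for test, modules in coverage.items()}
        # Run tests that touch changed code first; defer the rest.
        return [t for t, s in sorted(scores.items(), key=lambda kv: -kv[1])
                if s > 0]

    coverage = {
        "test_payments": {"PAYPGM01", "ACCTUTIL"},
        "test_billing":  {"BILLPGM2"},
        "test_reports":  {"RPTPGM03", "ACCTUTIL"},
    }
    print(rank_regression_tests(coverage, {"ACCTUTIL"}))
    # ['test_payments', 'test_reports'] -- test_billing can safely wait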

Finally, ADDI intelligently identifies performance degradations before they hit production. It correlates runtime performance data with application discovery data and test data to quickly pinpoint performance degradation and narrow down the code artifacts to those that are relevant to the cause of bad performance. This enables early detection of performance issues and speeds resolution.
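
Again, ADDI’s actual analytics are proprietary; the sketch below, with invented transaction names and numbers, only illustrates the kind of baseline comparison involved: flag transactions that have slowed past a threshold and map them back to recently changed code artifacts.

    # Toy performance-regression check: compare current response times
    # against a baseline and map offenders to changed code artifacts.
    # All data here is made up for illustration.
    BASELINE = {"PAY01": 0.12, "CUST02": 0.30}        # seconds, prior release
    CURRENT = {"PAY01": 0.13, "CUST02": 0.55}         # seconds, this release
    ARTIFACTS = {"CUST02": ["CUSTPGM", "ACCTUTIL"]}   # txn -> changed modules

    def regressions(threshold=1.25):
        """Report transactions more than 25% slower than baseline."""
        return {txn: ARTIFACTS.get(txn, [])
                for txn, t in CURRENT.items()
                if t > BASELINE[txn] * threshold}

    print(regressions())   # {'CUST02': ['CUSTPGM', 'ACCTUTIL']}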

What’s the biggest benefit of ADDI on the Z? It enables your data center to play a central role in digital transformation, a phrase invoked by every C-level executive today as a holy mantra. But more importantly, it will keep your mainframe relevant.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghostwriter. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing here.

Compuware Triples Down on Promised Quarterly z System Releases

October 14, 2016

Since January 2015, Compuware has been releasing enhancements to its mainframe software portfolio quarterly. The latest quarterly release, dated Oct. 3, delivers REST APIs for ISPW source code management and DevOps release automation, which let enterprises build their own custom cross-platform DevOps toolchains; an integration of Compuware Abend-AID with Syncsort Ironstream for faster insight into mainframe issues; and new SEA plug-ins for Compuware’s Topaz Workbench. The SEA plug-ins will help less skilled IBM z/OS developers manage mainframe batch processing along with other z platform tasks.

Compuware’s point is to position the mainframe at the heart of agile DevOps computing. As part of the effort, it needs to deliver slick, modern tools that will appeal to the non-mainframers who are increasingly moving into multi-platform development roles that include the mainframe. These people want to work as if they are dealing with a Windows or Linux machine. They aren’t going to wrestle with arcane mainframe constructs like abends or JCL. Traditional mainframe dev, test, and code promotion processes are simply too slow to meet the demands of today’s fast-moving markets. The new dev and ops people who are filling out data center ranks haven’t the patience to learn what they view as antiquated mainframe concepts. They need intelligent tools that visualize the issue and let them intuitively click, drag, drop, and swipe their way through whatever needs to be done.

This is driven by the long-expected attrition of veteran mainframers and the mainframe knowledge and application insight they brought. Only the recession that began in 2008 slowed the exit of aging mainframers. Now they are leaving; one mainframe credit card processor reportedly lost 50 mainframe staff in a month. The only way to replace this kind of experience is with intelligent, easy-to-learn tools and expert automation.

Compuware’s response has been to release new tools and enhancements every quarter. It started with Topaz in 2015; DancingDinosaur covered it in Jan. 2015 here. The beauty of Topaz lies in its graphical ease of use. Data center newbies didn’t need to know z/OS; they could understand what they were seeing and do meaningful work. With each quarterly release Compuware, in one way or another, has advanced this basic premise.

The most recent advances are streamlining the DevOps process in a variety of ways.  DevOps has emerged as critical with mainframe shops scrambling to remain relevant and effective in a rapidly evolving app dev environment. Just look at Bluemix if you want to see where things are heading.

In the first announcement, Compuware extended mainframe DevOps innovation with REST APIs for ISPW SCM and release automation. The new APIs enable large enterprises to flexibly integrate their numerous other mainframe and non-mainframe DevOps tools with ISPW to create their own custom cross-platform DevOps toolchains. Part of that was the acquisition of the assets associated with Itegrations’ source code management (SCM) migration practice and methodology, which will enable Compuware users to more easily migrate to ISPW from Agile-averse products such as CA Endevor, CA Panvalet, CA Librarian, and Micro Focus/Serena ChangeMan, as well as from internally developed SCM systems.
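
To make the toolchain point concrete, here is a hedged sketch of what calling such an API from a distributed pipeline might look like. The host, paths, and parameters below are assumptions for illustration, not Compuware’s documented contract; consult the ISPW REST API documentation for the real endpoints.

    # Hypothetical sketch: a non-mainframe pipeline step promoting an
    # ISPW assignment over REST. Endpoint shape and names are assumed
    # for illustration; check Compuware's API docs for the actual contract.
    import requests

    CES = "https://ces.example.com"              # assumed server host
    HEADERS = {"Authorization": "my-api-token"}  # assumed auth scheme

    def promote_assignment(srid, assignment_id, level):
        """Ask ISPW to promote the tasks in an assignment to `level`."""
        url = f"{CES}/ispw/{srid}/assignments/{assignment_id}/tasks/promote"
        resp = requests.post(url, headers=HEADERS, params={"level": level})
        resp.raise_for_status()
        return resp.json()  # e.g., an ID to poll for completion

    # A Jenkins stage (or any CI tool) could call this once tests pass:
    # promote_assignment("ISPW", "PLAY000123", "QA1")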

According to Compuware, these DevOps toolchains are becoming increasingly important for two reasons:

  • Enterprises must aggressively adopt DevOps disciplines in their mainframe environments to fulfill business requirements for digital agility. Traditional mainframe dev, test and code promotion processes are simply too slow to meet the demands of today’s fast-moving markets to counter new, digitally nimble market disruptors.
  • Data centers need to better integrate the toolchains that support their newly adopted mainframe DevOps workflows with those that support DevOps across their various other platforms. This is because mainframe applications and data so often function as back-end systems-of-record for front-end web and mobile systems-of-engagement in multi-tier/cross-platform environments.

In the second announcement, Compuware integrated Abend-AID and Syncsort’s Ironstream to give fast, clear insight into mainframe issues. Specifically, the integration of Abend-AID and Ironstream enables IT to more quickly discover and act upon correlations between application faults and broader conditions in the mainframe environment. This is particularly important, notes Compuware, as enterprises, out of necessity, shift operational responsibilities for the platform to staffs with limited experience on z/OS. Just put yourself into the shoes of a distributed system manager now dealing with a mainframe. What might appear to be a platform issue may turn out to be a software fault, and vice versa. The retired 30-year mainframe veterans would probably see it immediately (but not always). Mainframe newcomers need a tool with the intelligence to recognize it for them.
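
The correlation itself is conceptually simple, even if the products do far more. A minimal sketch, with invented records, of pairing each abend with the environment events that preceded it within a time window:

    # Toy correlation of application abends with nearby environment events.
    # Record shapes and data are invented; the real integration is richer.
    from datetime import datetime, timedelta

    def correlate(abends, events, window=timedelta(minutes=5)):
        """Pair each abend with events in the preceding time window,
        which are candidate root causes worth investigating first."""
        return [(a, [e for e in events
                     if a["time"] - window <= e["time"] <= a["time"]])
                for a in abends]

    abends = [{"job": "PAYJOB", "code": "S0C4",
               "time": datetime(2016, 10, 3, 14, 10)}]
    events = [{"msg": "DASD volume offline",
               "time": datetime(2016, 10, 3, 14, 7)}]
    print(correlate(abends, events))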

With the last announcement, Compuware and Software Engineering of America (SEA) introduced the release of SEA’s JCLplus+ Remote Plug-In and $AVRS Plug-In for Compuware’s Topaz Workbench mainframe IDE. Again, think about mainframe neophytes. The new plug-ins for Topaz significantly ease challenging JCL- and output-related tasks, according to Compuware, effectively enabling both expert and novice IT staff to perform those tasks more quickly and more accurately in the context of their other mainframe DevOps activities.

An encouraging aspect of this is that Compuware is not doing it alone. The company is teaming up with SEA and with Syncsort to make this happen. As mainframe vendors work to make mainframe computing easier and more accessible to less experienced people, it will be good for the mainframe industry as a whole and may even help lower the cost of mainframe operations.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghostwriter. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing here.

SOA Software Enables New Ways to Tap Mainframe Code

January 30, 2014

Is the core enterprise processing role handled by the mainframe enough? Enterprises today often run different types of workloads built using different app dev styles. These consist of compound applications encompassing the mainframe and a variety of distributed systems (Linux, UNIX, Windows) and different programming models, data schema, services, and more. Pieces of these workloads may be running on the public cloud, a partner’s private cloud, and a host of other servers. The pieces are pulled together at runtime to support the particular workload. Mainframe shops should want to play a big role in this game too.

“Mainframe applications still sit at the heart of enterprise operations, but mainframe managers also want to take advantage of these applications in new ways,” says Brent Carlson, SVP at SOA Software. The primary way of doing this is through SOA services, and mainframes have been playing in the SOA arena for years. But it has never been as seamless, easy, and flexible as it should be. And as social, mobile, and other new types of workloads get added to the services mix, the initial mainframe SOA approach has started to show its age. (Over the years, DancingDinosaur has written considerably on mainframe SOA and done numerous SOA studies.)

That’s why DancingDinosaur welcomes SOA Software’s Lifecycle Manager to the mainframe party. It enables what the company calls a “RESTful Mainframe” through governance of REST APIs that front z/OS-based web services. This amounts to a unified platform, from a governance perspective, to manage both APIs and existing SOA assets. As Carlson explained, applying development governance to mainframe assets helps mainframe shops overcome the architectural challenges inherent in bringing legacy systems into the new API economy, where mobile apps need rapid, agile access to backend systems.

The company is aiming to make Lifecycle Manager into the system-of-record for all enterprise assets including mainframe-based SOAP services and RESTful APIs that expose legacy software functionality. The promise: seamless access to service discovery and impact analysis whether on mainframe, distributed systems, or partner systems. Both architects and developers should be able to map dependencies between APIs and mainframe assets at the development stage and manage those APIs across their full lifecycles.
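
The dependency mapping Lifecycle Manager promises boils down to impact analysis over a graph of assets. A minimal sketch of the idea follows; the asset names and dependency data are invented, and the real product of course manages this metadata for you.

    # Toy impact analysis: given "depends on" edges from REST APIs down
    # to mainframe programs, find everything affected by a program change.
    # Asset names and edges are invented for illustration.
    from collections import defaultdict, deque

    DEPENDS_ON = {
        "GET /customers": ["CUSTSVC"],
        "POST /payments": ["PAYSVC"],
        "CUSTSVC": ["CUSTPGM", "ACCTUTIL"],
        "PAYSVC": ["PAYPGM", "ACCTUTIL"],
    }

    def impacted_by(changed):
        """Return every asset that transitively depends on `changed`."""
        used_by = defaultdict(list)
        for user, deps in DEPENDS_ON.items():
            for dep in deps:
                used_by[dep].append(user)
        seen, queue = set(), deque([changed])
        while queue:
            for user in used_by[queue.popleft()]:
                if user not in seen:
                    seen.add(user)
                    queue.append(user)
        return seen

    print(sorted(impacted_by("ACCTUTIL")))
    # ['CUSTSVC', 'GET /customers', 'PAYSVC', 'POST /payments']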

Lifecycle Manager integrates with SOA Software’s Policy Manager to work either top down or bottom up. The top-down approach relies on a service wrapping of existing mainframe programs; think of this as the WSDL-first approach of designing web services and then developing programs on the mainframe to implement them. The bottom-up approach starts with the copybook. Either way, it is automated and intended to be seamless. It also promises to guide services developers on best practices like encryption, assign and enforce correct policies, and more.
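
As a rough illustration of the bottom-up direction, consider deriving a JSON-friendly schema from copybook field definitions. This toy Python sketch, with an invented copybook and a deliberately naive PIC-clause mapping, shows the flavor of it; the actual product handles the full copybook grammar.

    # Toy "bottom up" illustration: map elementary copybook fields to
    # rough JSON types. Real copybooks and real tools are far richer.
    import re

    COPYBOOK = """
           01  CUSTOMER-REC.
               05  CUST-ID        PIC 9(8).
               05  CUST-NAME      PIC X(30).
               05  CUST-BALANCE   PIC S9(7)V99.
    """

    FIELD = re.compile(r"\d+\s+(\S+)\s+PIC\s+(\S+)\.")

    def schema_from_copybook(text):
        """Treat numeric PIC clauses as numbers, everything else as strings."""
        return {name: ("number" if "9" in pic else "string")
                for name, pic in FIELD.findall(text)}

    print(schema_from_copybook(COPYBOOK))
    # {'CUST-ID': 'number', 'CUST-NAME': 'string', 'CUST-BALANCE': 'number'}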

“Our point: automate whatever we can, and guide developers into good practices,” said Carlson. In the process, it simplifies the task of exposing mainframe capabilities to a broader set of applications while not interfering with mainframe developers. To distributed developers the mainframe is just another service endpoint accessed as a service or API. Nobody has to learn new things; it’s just a browser-based IDE using copybooks.

For performance, the Lifecycle Manager-based runtime environment is written in assembler, which makes it fast while minimizing MIPS consumption. It also comes with the browser-based IDE, copybook tool, and import mappings.

The initial adopters have come from financial services and the airlines.  The expectation is that usage will expand beyond that as mainframe shops and distributed developers seek to leverage core mainframe code for a growing array of workloads that weren’t on anybody’s radar screen even a few years ago.

There are other ways to do this on the mainframe, starting with basic SOA and web services tools and protocols, like WSDL. Many mainframe SOA efforts leverage CICS, and IBM offers additional tools, most recently SoftLayer, that address the new app dev styles.

This is healthy for mainframe data centers. If nothing else SOA- and API-driven services workloads that include the mainframe help lower the cost per workload of the mainframe. It also puts the mainframe at the center of today’s IT action.

Follow DancingDinosaur on Twitter: @mainframeblog
