Posts Tagged ‘Compuware’

May 4, 2018

Compuware Tackles Mainframe Workforce Attrition and Batch Processing

While IBM works furiously to deliver quantum computing and expand AI and blockchain into just about everything, many DancingDinosaur readers are still wrestling with traditional headaches: boosting the quality and efficiency of mainframe operations and optimizing the most traditional mainframe activity there is, batch processing. It would be nice if quantum computing could handle multiple batch operations simultaneously, but that's not high on IBM's list of quantum priorities.

So Compuware is stepping up, as it has been doing quarterly, by delivering new tools to expedite and facilitate conventional mainframe processes. Its zAdviser promises actionable analytic insight to continuously improve quality, velocity, and efficiency on the mainframe, while its ThruPut Manager enables next-gen IT staff to optimize mainframe batch execution through new, visually intuitive workload scheduling.

zAdviser captures data about developers’ behaviors

zAdviser uses machine learning to continuously measure and improve an organization’s mainframe DevOps processes and development outcomes. Based on key performance indicators (KPIs), zAdviser measures application quality, as well as development speed and the efficiency of a development team. The result: managers can now make evidence-based decisions in support of their continuous improvement efforts.

The new tool leverages a set of analytic models that uncover correlations between mainframe developer behaviors and mainframe DevOps KPIs. These correlations represent the best available empirical evidence regarding the impact of process, training and tooling decisions on digital business outcomes. Compuware is offering zAdviser free to customers on current maintenance.

Long mainframe software backlogs are no longer acceptable. Improving mainframe DevOps has become an urgent imperative for large enterprises that find themselves more dependent on mainframe applications, not less. According to a recent Forrester Consulting study commissioned by Compuware, 57 percent of enterprises with a mainframe run more than half of their business-critical workloads on the mainframe. That percentage is expected to reach 64 percent by 2019, even as enterprises fail to replace the expert mainframe workforce they have lost to attrition. Hence the need for modern, automated, intelligent tools that shorten the learning curve for workers groomed on Python or Node.js.

Meanwhile, IBM hasn't exactly been twiddling its thumbs in regard to DevOps analytics for the Z. Its zAware delivers a self-contained firmware IT analytics offering that helps systems and operations professionals rapidly identify problematic messages and unusual system behavior in near real time, so systems administrators can take corrective action.

ThruPut Manager brings a new web interface that offers visually intuitive insight for mainframe staff, especially new staff, into how batch jobs are being initiated and executed, as well as the impact of those jobs on mainframe software licensing costs.

By implementing ThruPut Manager, Compuware explains, enterprises can better safeguard the performance of both batch and non-batch applications while avoiding the significant adverse economic impact of preventable spikes in utilization as measured by Rolling 4-Hour Averages (R4HA). Reducing the R4HA is a key way data centers can contain mainframe costs.
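To make the R4HA concrete, here is a minimal sketch, not Compuware or IBM code, of computing the peak rolling four-hour average from interval utilization samples, assuming the common 5-minute sampling interval (so 48 samples span four hours):

```python
from collections import deque

def peak_r4ha(msu_samples, window=48):
    """Return the peak rolling 4-hour average (R4HA) of MSU samples.

    Assumes one sample every 5 minutes, so 48 samples cover 4 hours.
    The monthly peak R4HA is the basis for IBM's MLC billing.
    """
    buf = deque(maxlen=window)
    peak = 0.0
    for sample in msu_samples:
        buf.append(sample)
        peak = max(peak, sum(buf) / len(buf))
    return peak

# A one-hour batch spike to 400 MSU gets averaged across the window.
samples = [100] * 48 + [400] * 12 + [100] * 36
print(peak_r4ha(samples))  # peaks at 175.0, well below the 400 spike
```

The arithmetic explains the product strategy: a scheduler that keeps big batch jobs from piling into the same four-hour window directly lowers the billable peak.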

More importantly, with the new ThruPut Manager, enterprises can successfully transfer batch management responsibilities to the next generation of IT staff with far less hands-on platform experience—without exposing themselves to related risks such as missed batch execution deadlines, missed SLAs, and excess costs.

With these new releases, Compuware is providing a way to reduce the mainframe software backlog (the longstanding complaint that mainframe shops cannot deliver newly requested functionality fast enough) while offering a way to offset attrition among aging mainframe staff with young staff who don't have years of mainframe experience to fall back on. And if the new tools lower some mainframe costs, however modestly, in the process, no one but IBM will complain.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.

Mainframe ISVs Advance the Mainframe While IBM Focuses on Think

March 30, 2018

Last week IBM reveled in the attention of upwards of 30,000 visitors to its Think conference, reportedly a record for an IBM conference. Meanwhile, Syncsort and Compuware stayed home pushing new mainframe initiatives. Specifically, Syncsort introduced innovations that deliver mainframe log and application data in real time directly to Elastic for deeper next-generation analytics alongside tools like Splunk, Hadoop, and the Elastic Stack.

Syncsort Ironstream for next-gen analytics

Compuware reported that the percentage of organizations running at least half their business-critical applications on the mainframe is expected to increase next year, although the loss of skilled mainframe staff, and the failure to subsequently fill those positions, pose significant threats to application quality, velocity, and efficiency. Compuware has been taking the lead in modernizing the mainframe developer experience to make it comparable to the familiar x86 experience.

According to David Hodgson, Syncsort’s chief product officer, many organizations are using Elastic’s Kibana to visualize Elasticsearch data and navigate the Elastic Stack. These organizations, like others, are turning to tools like Hadoop and Splunk to get a 360-degree view of their mainframe data enterprise-wide. “In keeping with our proven track record of enabling our customers to quickly extract value from their critical data anytime, anywhere, we are empowering enterprises to make better decisions by making mission-critical mainframe data available in another popular analytics platform,” he adds.
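Ironstream's transport is Syncsort's own, but for readers unfamiliar with the Elastic side of the equation, here is a hypothetical sketch of what landing a parsed z/OS SYSLOG record in Elasticsearch looks like with the official Python client; the field names are invented for illustration:

```python
from datetime import datetime, timezone

from elasticsearch import Elasticsearch  # pip install elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumed dev cluster

# Hypothetical, already-parsed SYSLOG record; the real field mapping
# is defined by Ironstream and is not shown in the announcement.
event = {
    "@timestamp": datetime.now(timezone.utc).isoformat(),
    "sysplex": "PLEX1",
    "jobname": "PAYROLL1",
    "message": "IEF404I PAYROLL1 - ENDED",
}

es.index(index="mainframe-syslog", document=event)  # 8.x client signature
```

Once indexed, the record is searchable immediately and can be charted in Kibana like any other log source.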

For cost management, Syncsort now offers Ironstream with the flexibility of MSU-based (capacity) or ingestion-based pricing.

Compuware took a more global view of the mainframe. The mainframe, the company notes, is becoming more important to large enterprises, with the percentage of organizations running at least half their business-critical applications on the platform expected to increase next year. However, the loss of skilled mainframe staff, and the failure to subsequently fill those positions, pose significant threats to application quality, velocity, and efficiency.

These are among the findings of research and analysis conducted by Forrester Consulting on behalf of Compuware. According to the study, “As mainframe workload increases—driven by modern analytics, blockchain and more mobile activity hitting the platform—customer-obsessed companies should seek to modernize application delivery and remove roadblocks to innovation.”

The survey of mainframe decision-makers and developers in the US and Europe also revealed the mainframe's growing importance: 64 percent of enterprises will run more than half of their critical applications on the platform within the next year, up from 57 percent this year. And just to ratchet up the pressure a few notches, 72 percent of customer-facing applications at these enterprises are completely or very reliant on mainframe processing.

That means the loss of essential mainframe staff hurts, putting critical business processes at risk. Overall, enterprises reported losing an average of 23 percent of specialized mainframe staff in the last five years while 63 percent of those positions have not been filled.

There is more to the study, but these findings alone suggest that mainframe investments, culture, and management practices need to evolve fast in light of the changing market realities. As Forrester puts it: “IT decision makers cannot afford to treat their mainframe applications as static environments bound by long release cycles, nor can they fail to respond to their critical dependence with a retiring workforce. Instead, firms must implement the modern tools necessary to accelerate not only the quality, but the speed and efficiency of their mainframe, as well as draw [new] people to work on the platform.”

Nobody has 10 years or even three years to cultivate a new mainframer. You need to attract and cultivate talented x86 or ARM people now, equip each—him or her—with the sexiest, most efficient tools, and get them working on the most urgent items at the top of your backlog.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.


Compuware Brings Multi-Platform DevOps to the Z

January 19, 2018

The rush to DevOps for the Z has started. IBM jumped on the bandwagon with an updated release of IBM Developer for z Systems (IDz) V14.1.1, which lets Z organizations deliver new capabilities and product maintenance to users sooner than IBM's traditional release models allowed.

Even more recently Compuware, which has described DevOps and the mainframe as the ultimate win-win, announced a program to advance DevOps on the mainframe with integrated COBOL code coverage metrics for multi-platform DevOps. This will make it possible for all developers in the organization to fluidly handle multi-platform code, including mainframe code, in a fast-delivery DevOps approach.

SonarSource-Compuware DevOps Dashboard

The new Compuware-SonarSource integrations are expected to help enterprise DevOps teams track and validate code coverage of COBOL application testing with the same ease, and through the same processes, as they do with Java and other more mainstream code. This ability to automate code coverage tracking across platforms is yet another example of empowering enterprise IT to apply the same proven and essential Agile, DevOps, and Continuous Integration/Continuous Delivery (CI/CD) disciplines to core systems-of-record (mainframe) as well as systems-of-engagement (mostly distributed systems).

Code coverage metrics promise insight into the degree to which source code is executed during a test. They identify which lines of code have been executed and what percentage of an application has been tested. These measurements allow IT teams to understand the scope and effectiveness of their testing as code moves toward production.
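As a back-of-the-envelope illustration, not Topaz code, line coverage reduces to a set computation over executable versus executed lines:

```python
def line_coverage(executable_lines, executed_lines):
    """Percentage of executable lines exercised by a test run."""
    covered = executable_lines & executed_lines
    return 100.0 * len(covered) / len(executable_lines)

# Toy example: a six-line COBOL paragraph where tests drove four lines.
executable = {10, 11, 12, 13, 14, 15}
executed = {10, 11, 12, 15}
print(f"{line_coverage(executable, executed):.1f}% covered")  # 66.7%
```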

DevOps has become increasingly critical to mainframe shops that risk becoming irrelevant and even replaceable if they cannot turn around code improvements fast enough. The mainframe continues to be valued as the secure repository of the organization’s critical data but that won’t hold off those who feel the mainframe is a costly extravagance, especially when mainframe shops can’t turn out code updates and enhancements as fast as systems regarded as more inherently agile.

As Compuware puts it, the latest integrations automatically feed code coverage results captured by its Topaz for Total Test into SonarSource’s SonarQube. This gives DevOps teams an accurate, unified view of quality metrics and milestones across platforms enterprise-wide.

For z shops specifically, such continuous code quality management across platforms promises high value to large enterprises, enabling them to bring new digital deliverables to market, which increasingly is contingent on simultaneously updating code across both back-end mainframe systems-of-record and front-end mobile/web and distributed systems-of-engagement.

Specifically, notes Compuware, integration between Topaz for Total Test and SonarQube enables DevOps teams to:

  • Gain insight into the coverage of code being promoted for all application components across all platforms
  • Improve the rigor of digital governance with strong enforcement of mainframe QA policies for coding errors, data leakage, credential vulnerabilities, and more
  • Shorten feedback loops to speed time-to-benefit and more promptly address shortfalls in COBOL skills and bottlenecks in mainframe DevOps processes

Topaz for Total Test captures code coverage metrics directly from the source code itself, rather than from a source listing, as is the case with outdated mainframe tools. This direct capture is more accurate and eliminates extra work for development, Compuware reported.
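Compuware's feed into SonarQube is handled by its own integration, but SonarQube also documents a generic test coverage XML format that any tool can emit. A sketch of producing that format, with paths and line numbers invented:

```python
import xml.etree.ElementTree as ET

def write_generic_coverage(report_path, file_hits):
    """Write SonarQube's generic test coverage XML.

    file_hits maps a source path to {line_number: was_covered}.
    SonarQube picks the report up at analysis time via the
    sonar.coverageReportPaths property.
    """
    root = ET.Element("coverage", version="1")
    for src, lines in file_hits.items():
        f = ET.SubElement(root, "file", path=src)
        for line_no, covered in sorted(lines.items()):
            ET.SubElement(f, "lineToCover",
                          lineNumber=str(line_no),
                          covered=str(covered).lower())
    ET.ElementTree(root).write(report_path, encoding="utf-8",
                               xml_declaration=True)

write_generic_coverage("coverage.xml",
                       {"src/cobol/ACCTPAY.cbl": {12: True, 13: False}})
```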

The new integration actually encompasses a range of tools and capabilities. For instance:

From within a Compuware Xpediter debug session, a developer can kick off a Compuware Topaz for Total Test automated unit test and set it up to collect code coverage information as it runs. Code coverage metrics can then be automatically fed into SonarSource's SonarQube, where they can be displayed in a dashboard along with other quality metrics, such as lines going to subprograms.

It also integrates with Jenkins as a Continuous Integration (CI) platform; Jenkins acts as a process orchestrator and interacts with an SCM tool, such as Compuware ISPW, which automates software quality checks and pushes metrics into SonarQube, among other things. ISPW is also where code gets promoted through the various stages of the lifecycle and ultimately deployed. Finally, Topaz is Compuware's Eclipse-based IDE, from which developers drive all these activities.
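To see how the loop closes in practice, here is a hedged sketch of the kind of post-analysis step a Jenkins job might run: querying SonarQube's documented quality-gate endpoint and failing the build on a red gate. The host and project key are made up:

```python
import sys

import requests  # pip install requests

SONAR_URL = "https://sonarqube.example.com"  # assumed host
PROJECT_KEY = "mainframe-payments"           # hypothetical project

# /api/qualitygates/project_status is SonarQube's documented endpoint
# for the quality gate status of a project's last analysis.
resp = requests.get(
    f"{SONAR_URL}/api/qualitygates/project_status",
    params={"projectKey": PROJECT_KEY},
    auth=("ci-token", ""),  # SonarQube tokens go in the username field
    timeout=30,
)
resp.raise_for_status()
status = resp.json()["projectStatus"]["status"]
print(f"Quality gate: {status}")
sys.exit(0 if status == "OK" else 1)  # a non-zero exit fails the build
```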

The Compuware announcement further delivers on its promise to mainstream the mainframe; that is, provide a familiar, modern, and intuitive multi-platform mainframe development environment, integrated with state-of-the-art DevOps tools, for veteran mainframe developers and, more importantly, for newcomers arriving from the distributed world. In short, this is how you keep your Z relevant and invaluable going forward.

** Special note regarding last week's DancingDinosaur report on chip problems here: don't count on an immediate solution from the vendors anytime soon, not Google, IBM, Intel, AMD, ARM, or others. The word among chip geeks is that the dependencies are too complex to be fully fixed with a patch. This probably requires new chip designs and fabrication. DancingDinosaur will keep you posted.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

Compuware Brings the Mainframe to AWS

October 6, 2017

IBM talks about the power of the cloud for the mainframe and has turned Bluemix into a cloud development and deployment platform for open systems. Where’s the Z?

For the past several years Compuware has been making quarterly advances in its mainframe tooling, and those advances are now available through AWS. Not only has that string of Topaz releases made mainframe management and operations more intuitive and graphical, but with AWS the tooling is now accessible from anywhere. DancingDinosaur has been reporting on Compuware's string of Topaz advances for two years, here, here, and here.

By tapping the power of both the cloud and the mainframe, enterprises can deploy Topaz to their global development workforce in minutes, accelerating the modernization of their mainframe environments. As Compuware noted, mainframe shops now have the choice of deploying Topaz on premise or on AWS. By leveraging the cloud, they can deploy Topaz more quickly and securely, and scale without capital costs, all while benefiting from new Topaz features as soon as the company delivers them.

To make Topaz work on AWS, Compuware turned to Amazon AppStream 2.0, which provides global development, test, and ops teams with immediate and secure cloud access to Compuware's entire mainframe Agile/DevOps solution stack, mainly Topaz. Amazon AppStream 2.0 is a fully managed, secure application streaming service that allows users to stream desktop applications from AWS to any device running a web browser.
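Compuware provisions the AppStream 2.0 environment behind the offering, but the underlying AWS API is public. A sketch of generating a browser streaming URL with boto3, with the stack and fleet names invented:

```python
import boto3  # pip install boto3

appstream = boto3.client("appstream", region_name="us-east-1")

# Stack and fleet names are hypothetical; in the real offering they
# would point at Compuware's published Topaz image.
response = appstream.create_streaming_url(
    StackName="topaz-stack",
    FleetName="topaz-fleet",
    UserId="dev.jane",
    Validity=3600,  # seconds the signed URL remains valid
)
print(response["StreamingURL"])  # open this in any web browser
```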

Cloud-based deployment of Topaz, Compuware notes, allows for significantly faster implementation, simple administration, a virtual integrated development environment (IDE), adaptive capacity, and immediate developer access to software updates. The last of these is important, since Compuware has been maintaining a quarterly upgrade release schedule, in effect delivering new capabilities every 90 days.

Compuware is in the process of patenting technology to offer an intuitive, streamlined configuration menu that leverages AWS best practices to make it easy for mainframe admins to quickly configure secure connectivity between Topaz on AWS and their mainframe environment. It also enables the same connectivity to their existing cross-platform enterprise DevOps toolchains running on premise, in the cloud, or both. The upshot: organizations can put Topaz in front of their global development workforce in minutes.

Using Topaz on AWS, notes Compuware, mainframe shops can benefit in a variety of ways, specifically:

  • Modify, test and debug COBOL, PL/I, Assembler and other mainframe code via an Eclipse-based virtual IDE
  • Visualize complex and/or undocumented application logic and data relationships
  • Manage source code and promote artifacts through the DevOps lifecycle
  • Perform common tasks such as job submission, review, print and purge
  • Leverage a single data editor to discover, visualize, edit, compare, and protect mainframe files and data

The move to the Eclipse-based IDE represents a giant step for traditional mainframe shops trying to modernize. Eclipse is a leading open source IDE with IBM as a founding member. In addition to Eclipse, Compuware also integrates with other modern tools, including Jenkins, SonarSource, and Atlassian. Jenkins is an open source automation server written in Java that helps automate the non-human parts of the software development process with continuous integration while facilitating the technical aspects of continuous delivery. SonarSource enables visibility into mainframe application quality. Atlassian develops products for software developers, project managers, and content management and is best known for Jira, its issue tracking application.

Unlike many mainframe ISVs, Compuware has been actively partnering with various innovative vendors to extend the mainframe’s tool footprint and bring the kind of tools to the mainframe that young developers, especially Millennials, want. Yes, it is possible to access the sexy REST-based Web and mobile tools through IBM’s Bluemix, but for mainframe shops it appears kludgy. By giving its mainframe customers access through AWS to advanced tools, Compuware improves on this. And AWS beats Bluemix in terms of cloud penetration and low cost.

All mainframe ISVs should make their mainframe products accessible through the cloud if they want to keep their mainframe products relevant. IBM has its cloud; of course there is AWS, Microsoft has Azure, and Google rounds out the top four. These and others will keep cloud economics competitive for the foreseeable future. Hope to see you in the cloud.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.


Latest Compuware Tools Bring Mainframe and DevOps Together

July 7, 2017

At the end of June, Compuware announced the integration of Topaz for Total Test, an automated unit testing tool for COBOL, with Jenkins, SonarQube, and Compuware ISPW. Together, the technologies enable enterprises to nimbly, easily, and efficiently update their core mainframe applications in response to ever-changing business requirements. This continues the company's cadence of quarterly releases updating and modernizing its mainframe tools.

The latest releases enable legacy mainframe technologies to participate in integrated modern DevOps. They allow enterprise IT to better orchestrate changes to mainframe systems of record with changes to systems of engagement, a significant benefit given that customer-facing digital services often rely on code running across multiple platforms, legacy and distributed.

Compuware Topaz for Total Test

The days when a mainframe shop could get by with leisurely updates to its systems, especially its business-critical applications, are long gone. Organizations need to modernize and integrate their tools to deliver the kind of fast response attributed to DevOps.

Of course, successful DevOps, whether mainframe or distributed, is less a matter of tools than of culture, communication, and process. Still, there's no doubt that modern, integrated, context-aware tools, along with automation, help by speeding the process and reducing mistakes.

Topaz for Total Test appears to cover all the tool bases. It brings together automated unit testing for COBOL with Jenkins, SonarQube, and Compuware ISPW. Jenkins is an open-source continuous integration tool written in Java for testing and reporting on isolated changes in a larger code base in real time. The real-time aspect is critical for DevOps, where speed counts. The software enables developers to find and fix defects in a code base rapidly and to automate testing of their builds. SonarQube (formerly Sonar) is an open source platform for continuous inspection of code quality. Again, error elimination counts.

The problem, as Compuware sees it, comes from mainframe shops’ historical inability to update their business-critical COBOL applications fast enough due to antiquated tools, excessive dependence on specialized expertise, and risk concerns. All these combine to produce long delays in updating code.

The addition of Jenkins and SonarQube, along with Compuware's ISPW source code management and deployment, produces a fairly complete DevOps package for mainframes. In addition, Compuware strengthened support for DB2. That support entails new stubbing for DB2 databases, which allows developers to run unit tests without requiring an active connection to a live DB2 database. While Topaz for Total Test can be used to test code that processes all types of mainframe data, its stubbing capability covers not just DB2 but also VSAM and QSAM data types. This makes it easier to create repeatable tests fast. Data stubs are created automatically and do not require recompiling.
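The stubbing idea will feel familiar to distributed developers. As a loose Python analogy, and not how Topaz implements it, replacing the live database connection with a stub that returns recorded data lets the test run anywhere:

```python
import unittest
from unittest import mock

def post_payment(conn, account, amount):
    """Toy routine standing in for COBOL logic that reads DB2."""
    balance = conn.fetch_balance(account)
    return balance + amount

class PostPaymentTest(unittest.TestCase):
    def test_posts_against_stubbed_db2(self):
        # The stub plays back a recorded result, so the test needs no
        # active DB2 connection; the same idea as Topaz's data stubs.
        conn = mock.Mock()
        conn.fetch_balance.return_value = 100.00
        self.assertEqual(post_payment(conn, "ACCT1", 25.00), 125.00)

if __name__ == "__main__":
    unittest.main()
```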

Although much of the world’s business activity still revolves in one way or another around the mainframe, many mainframe shops struggle when it comes to updating those applications to reflect rapidly changing business demands. Typically, they are hampered by manual development and testing processes; ongoing loss of specialized COBOL programming knowledge; and the fear of introducing even the slightest defect into core mainframe systems of record, notes Compuware.

And it gets worse. “Given the abject failure of re-platforming initiatives, large enterprises hoping to avoid digital irrelevance must aggressively modernize their mainframe DevOps practices,” said Rich Ptak of IT analyst firm Ptak Associates in Compuware’s Topaz for Total Test announcement. “Key to the modernization and ‘de-legacing’ of mainframe applications is the adoption of unit testing for COBOL code that is equivalent to and well-integrated with unit testing as practiced across the rest of the enterprise codebase.”

Compuware Topaz for Total Test transforms mainframe application development by automatically breaking COBOL code down into units and creating tests for those logical units. Developers at all skill levels, not just mainframe cowboys but preferably those with distributed and open systems skills, or even systems novices, can quickly and easily perform unit testing on COBOL code just as they do in Java, PHP, and other popular programming languages. In fact, Topaz is actually more advanced than typical Java tools because it requires no coding and automatically generates default unit test result assertions for developers. So yes, novices are welcome.
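One way to picture automatically generated default assertions is snapshot testing: record the observed result on the first run, then assert against it on every later run. This is a loose analogy, not Compuware's implementation:

```python
import json
import pathlib

def record_or_assert(name, actual, baseline_dir=pathlib.Path("baselines")):
    """Snapshot-style default assertion.

    The first run records the observed result as the baseline; later
    runs fail if the behavior diverges. No hand-written expectations,
    which is the spirit of Topaz's generated assertions.
    """
    baseline_dir.mkdir(exist_ok=True)
    snapshot = baseline_dir / f"{name}.json"
    if not snapshot.exists():
        snapshot.write_text(json.dumps(actual, indent=2, sort_keys=True))
        return
    assert json.loads(snapshot.read_text()) == actual, \
        f"{name} diverged from its recorded baseline"

record_or_assert("interest-calc", {"acct": "A1", "interest": 1.25})
```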

With the recently released integrations and enhancements, Compuware has now delivered mainframe innovations for eleven consecutive quarters. Few mainframe vendors even try to match that pace, not even IBM. This reflects Compuware's commitment to improving innovation throughput and quality using the latest Agile and DevOps methods.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.


Syncsort Drives zSystem and Distributed Data Integration

June 8, 2017

IBM appears to be so busy pursuing its strategic imperatives—security, blockchain, quantum computing, and cognitive computing—that it seems to have forgotten the daily activities that make up the bread-and-butter of mainframe data centers. Stepping up to fill the gap have been mainframe ISVs like Compuware, Syncsort, Data Kinetics, and a few others.

IBM’s Project DataWorks taps into unstructured data often missed

IBM hasn’t completely ignored this need. For instance, Project DataWorks uses Watson Analytics and natural language processing to analyze and create complex visualizations. Syncsort, on the other hand, latched onto open Apache technologies, starting in the fall of 2015. Back then it introduced a set of tools to facilitate data integration through Apache Kafka and Apache Spark, two of the most active Big Data open source projects for handling real-time, large-scale data processing, feeds, and analytics.

Syncsort's primary integration vehicle then revolved around the Intelligent Execution capabilities of its DMX data integration product suite with Apache Spark. Intelligent Execution allows users to visually design data transformations once and then run them anywhere: across Hadoop, MapReduce, Spark, Linux, Windows, or Unix, on premise or in the cloud.

Since then Syncsort announced, in March, another big data integration solution. This time its DMX-h is integrated with Cloudera Director, enabling organizations to easily deploy DMX-h along with Cloudera Enterprise on Amazon Web Services, Microsoft Azure, or Google Cloud. By deploying DMX-h with CDH, Syncsort explained, organizations can quickly pull data into new, ready-to-work clusters in the cloud. This accelerates how quickly they can take advantage of big data cloud benefits, including cost savings and Data-as-a-Service (DaaS) delivery.

A month before that, this past February, Syncsort introduced new enhancements to its big data integration solution, again deploying DMX-h, this time to deliver integrated workflow capabilities and Spark 2.0 integration. Together these simplify Hadoop and Spark application development, letting mainframe data centers extract maximum value from their enterprise data assets regardless of where the data resides, whether on the mainframe, in distributed systems, or in the cloud.

Syncsort's new integrated workflow capability also gives organizations a simpler, more flexible way to create and manage their data pipelines. It does so through the company's design-once, deploy-anywhere architecture with support for Apache Spark 2.0, which makes it easy for organizations to take advantage of Spark 2.0 and integrated workflow without spending time and resources redeveloping their jobs.

Assembling such an end-to-end data pipeline can be time-consuming and complicated, with various workloads executed on multiple platforms, all of which need to be orchestrated and kept up to date. Delays in such complicated development, however, can prevent organizations from getting the timely insights they need for effective decision-making.

Enter Syncsort's Integrated Workflow, which helps organizations manage various workloads, such as batch ETL on large repositories of historical data, by referencing business rules during data ingest in a single workflow. In effect it simplifies and speeds development of the entire data pipeline, from accessing critical enterprise data, to transforming that data, to ultimately analyzing it for business insights.
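In spirit, a single workflow that applies business rules at ingest, then transforms the data and hands it off, looks like the following sketch; the function names are illustrative, not Syncsort DMX-h APIs:

```python
def ingest(records, rules):
    """Admit only records that pass every business rule at ingest."""
    for rec in records:
        if all(rule(rec) for rule in rules):
            yield rec

def transform(records):
    """Normalize units on the way to the analytics sink."""
    for rec in records:
        yield {**rec, "amount_usd": rec["amount_cents"] / 100}

def run_pipeline(source, rules, sink):
    for rec in transform(ingest(source, rules)):
        sink(rec)

run_pipeline(
    [{"acct": "A1", "amount_cents": 1250}],
    rules=[lambda r: r["amount_cents"] > 0],  # business rule at ingest
    sink=print,
)
```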

Finally, in October 2016 Syncsort announced new capabilities in its Ironstream software that allow organizations to access and integrate mainframe log data in real time with Splunk IT Service Intelligence (ITSI). Further, the integration of Ironstream and Compuware's Application Audit software delivers audit data to Splunk Enterprise Security (ES) for Security Information and Event Management (SIEM). This integration improves an organization's ability to detect threats against critical mainframe data, correlate them with related information and events, and satisfy compliance requirements.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.


Compuware-Syncsort-Splunk to Boost Mainframe Security

April 6, 2017

The mainframe has proven to be remarkably secure over the years, racking up the highest security certifications available. But there is still room for improvement. Earlier this week Compuware announced Application Audit, a software tool that aims to transform mainframe cybersecurity and compliance through real-time capture of user behavior.

Capturing user behavior, especially in real time, is seemingly impossible if you have to rely on the data you collect from various logs and SMF records. Compuware's solution, Application Audit, in conjunction with Syncsort and Splunk, fully captures and analyzes start-to-finish mainframe application user behavior.

As Compuware explains, most enterprises still rely on disparate logs and SMF data from security products such as RACF, CA-ACF2, and CA-Top Secret to piece together user behavior. This is too slow if you want to catch bad behavior while it's going on. Some organizations try to apply analytics to these logs, but that too is slow. By the time you have collected enough logs to deduce who did what and when, the damage may have been done. Throw in the escalating demands of cross-platform enterprise cybersecurity and increasingly burdensome global compliance mandates, and you haven't a chance without an automated tool optimized for the job.

Fortunately, the mainframe provides rich and comprehensive session data that you can run through Application Audit and analyze in conjunction with the organization's security information and event management (SIEM) systems to see more quickly and effectively what is really happening (a sketch of what shipping such an event to a SIEM might look like follows the list below). Specifically, it can:

  • Detect, investigate, and respond to inappropriate behavior by internal users with access
  • Detect, investigate, and respond to hacked or illegally accessed user accounts
  • Support criminal/legal investigations with complete and credible forensics
  • Fulfill compliance mandates regarding protection of sensitive data
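The Compuware/Syncsort integration handles the actual forwarding, but for orientation, here is a hypothetical sketch of posting one audit event to Splunk's standard HTTP Event Collector; the endpoint and header format are stock Splunk, while the field names are invented:

```python
import requests  # pip install requests

HEC_URL = "https://splunk.example.com:8088/services/collector/event"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"  # placeholder token

# Hypothetical Application Audit-style record; the real field names
# are defined by the Compuware/Syncsort integration.
audit_event = {
    "user": "OPER01",
    "application": "CICSPROD",
    "action": "BROWSE",
    "dataset": "PROD.CUSTOMER.MASTER",
}

resp = requests.post(
    HEC_URL,
    headers={"Authorization": f"Splunk {HEC_TOKEN}"},
    json={"sourcetype": "mainframe:audit", "event": audit_event},
    timeout=30,
)
resp.raise_for_status()
```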

IBM, by the way, is not ignoring the advantages of analytics for z security. Back in February you read about IBM bringing its cognitive system to the z on DancingDinosaur. IBM continues to flog cognitive on z for real-time analytics and security, promising faster customer insights, business insights, and systems insights, with decisions based on real-time analysis of both current and historical data, delivered on an analytics platform designed for availability, optimized for flexibility, and engineered with the highest levels of security. Check out IBM's full cognitive for z pitch.

The data Compuware and Syncsort collect with Application Audit is particularly valuable for maintaining control of privileged mainframe user accounts. Both private- and public-sector organizations are increasingly concerned about insider threats to both mainframe and non-mainframe systems. Privileged user accounts can be misused by their rightful owners, motivated by everything from financial gain to personal grievances, as well as by malicious outsiders who have illegally acquired the credentials for those accounts. You can imagine what havoc they could wreak.

In addition, with Application Audit Compuware is orchestrating a number of players to deliver the full security picture. Specifically, through collaboration with CorreLog, Syncsort and Splunk, Compuware is enabling enterprise customers to integrate Application Audit’s mainframe intelligence with popular SIEM solutions such as Splunk, IBM QRadar, and HPE Security ArcSight ESM. Additionally, Application Audit provides an out-of-the-box Splunk-based dashboard that delivers value from the start. As Compuware explains, these integrations are particularly useful for discovering and addressing security issues associated with today’s increasingly common composite applications, which have components running on both mainframe and non-mainframe platforms. SIEM integration also ensures that security, compliance and other risk management staff can easily access mainframe-related data in the same manner as they access data from other platforms.

“Effective IT management requires effective monitoring of what is happening for security, cost reduction, capacity planning, service level agreements, compliance, and other purposes,” noted Stu Henderson, Founder and President of the Henderson Group in the Compuware announcement. “This is a major need in an environment where security, technology, budget, and regulatory pressures continue to escalate.”

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.



Compuware Continues Mainframe Software Renaissance

January 19, 2017

While IBM focuses on its strategic imperatives, especially cognitive computing (which is doing quite well according to the latest financial statement that came out today; more on that next week), Compuware is fueling a mainframe software renaissance on its own. Its latest two announcements bring Java-like unit testing to COBOL code via its Topaz product set and automate and intelligently optimize the processing of batch jobs through its acquisition of MVS Solutions. Both modernize and simplify the processes around legacy mainframe coding, hence the reference to a mainframe software renaissance.

Compuware Topaz for Total Test process flow diagram

Let's start with Compuware's Topaz set of graphical tools. Since they are GUI-based, even novice developers can immediately validate and troubleshoot whatever changes, either intended or inadvertent, they have made to existing COBOL applications. Compuware's aim for Topaz for Total Test is to eliminate any notion that such applications are legacy code and therefore cannot be updated as frequently and with the same confidence as other types of applications. Basically, mainframe DevOps.

By bringing fast, developer-friendly unit testing to COBOL applications, the new test tool also enables enterprises to deliver better customer experiences, since to create those experiences IT needs its Agile/DevOps processes to encompass all platforms, from the mainframe to the cloud. As a result, z shops can gain increased digital agility along with higher quality, lower costs, and dramatically reduced dependency on the specialized knowledge of mainframe veterans aging out of the active IT workforce. In fact, the design of the Topaz tools enables z data centers to rapidly introduce the platform to novice mainframe staff, who become productive virtually from the start, another cost saver.

Does management today, in 2017, still need to be reminded of the importance of the mainframe? Probably, even though many organizations—among them the world's largest banks, insurance companies, retailers, and airlines—continue to run their business on mainframe applications, and recent surveys clearly indicate that situation is unlikely to change anytime soon. However, as Compuware points out, enterprises' ability to quickly update those applications in response to ever-changing business imperatives is hampered daily by manual, antiquated development and testing processes; the ongoing loss of specialized COBOL programming knowledge; and the risk and associated fear of introducing even the slightest defect into core mainframe systems of record. The entire Topaz design approach, from the very first tool, was to make mainframe code accessible to novices. That has continued every quarter for the past two years.

This is not just a DancingDinosaur rant. IT analyst Rich Ptak of Ptak Associates also noted: “By eliminating a long-standing constraint to COBOL, Compuware provides enterprise IT the ability to deliver more digital capabilities to the business at greater speed and with less risk.”

Gartner, in its latest Predicts 2017, chimes in with its DevOps equivalent of your mother's reminder to brush your teeth after each meal: “Application leaders in IT organizations should adopt a continuous quality culture that includes practices to manage technical debt and automate tests focused on unit and API testing. It should also automate test lab operations to provide access to production-like environments, and enable testing of deployment through the use of DevOps pipeline tools.” OK, mom; everybody got the message.

The acquisition of MVS Solutions, Compuware's fourth in the last year, adds to the company's collection of mainframe software tools that promise agile, DevOps and millennial-friendly management of the IBM z platform, a continuation of its efforts to make the mainframe accessible to novices. DancingDinosaur covered these acquisitions in early December here.

Batch processing accounts for the majority of peak mainframe workloads at large enterprises, providing essential back-end digital capabilities for customer-, employee- and partner-facing mobile, cloud, and web applications. As demands on these back-end mainframe batch processes intensify in terms of scale and performance, enterprises are under increasing pressure to ensure compliance with SLAs and control costs.

These challenges are exacerbated by the fact that responsibility for batch management is rapidly being shifted from platform veterans with decades of experience in mainframe operations to millennial ops staff who are unfamiliar with batch management. They also find native IBM z Systems management tools arcane and impractical, which increases the risk of critical batch operations being delayed or even failing. Run incorrectly, the batch workloads risk generating excessive peak utilization costs.

The solution, notes Compuware, lies in its new ThruPut Manager, which promises automatic, intelligently optimized batch processing (a toy illustration of policy-based dispatch follows the list below). In the process it:

  • Provides immediate, intuitive insight into batch processing that even inexperienced operators can readily understand
  • Makes it easy to prioritize batch processing based on business policies and goals
  • Ensures proper batch execution by verifying that jobs have all the resources they need and proactively managing resource contention between jobs
  • Reduces the organization's IBM Monthly License Charge (MLC) costs by minimizing rolling four-hour average (R4HA) processing peaks while avoiding counterproductive soft capping
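As a conceptual illustration only, since ThruPut Manager's policy engine is far more sophisticated, policy-based batch prioritization boils down to dispatching from a priority queue keyed on business policy:

```python
import heapq

# Toy policy table: lower number = higher business priority.
POLICY = {"payments": 0, "regulatory": 1, "reporting": 2, "housekeeping": 3}

def dispatch(jobs):
    """Yield job names in policy order; jobs is [(name, policy_class)]."""
    queue = [(POLICY[cls], name) for name, cls in jobs]
    heapq.heapify(queue)
    while queue:
        _, name = heapq.heappop(queue)
        yield name

jobs = [("GLRPT01", "reporting"), ("PAYBATCH", "payments"),
        ("TAPECLN", "housekeeping"), ("FINRA01", "regulatory")]
print(list(dispatch(jobs)))
# ['PAYBATCH', 'FINRA01', 'GLRPT01', 'TAPECLN']
```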

Run in conjunction with Strobe, Compuware's mainframe application performance management tool, ThruPut Manager also makes it easier to optimize batch workload and application performance as part of everyday mainframe DevOps tasks. ThruPut promises more efficiency and greater throughput, resulting in a shorter batch window and reduced processing capacity. These benefits also support better cross-platform DevOps, since distributed and cloud applications often depend on back-end mainframe batch processing.

Now, go out and hire some millennials and bring fresh blood into the mainframe. (Watch for DancingDinosaur's upcoming post on why the mainframe is cool again.)

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.


Compuware Acquires Standardware COPE IMS to Speed DevOps and Save Money

December 16, 2016

Compuware, in early December, acquired the assets of Standardware, the leading provider of IMS virtualization technology. Standardware's COPE reduces the considerable time, cost and technical difficulty associated with the development and testing of IMS systems, enabling z-based data centers to significantly increase their digital business agility while also enabling less mainframe-experienced staff to perform IMS-related DevOps tasks. In addition, it allows IMS to run as a virtualized image, saving significantly on software charges.


Standardware’s COPE IMS, courtesy of Compuware

All three of Compuware's acquisitions this year (Standardware, ISPW, and Itegrations) aimed to facilitate mainframe code management or app dev. The company's acquisition of ISPW brought source code management and release automation. Itegrations eased the migration to ISPW from CA Endevor. Now Standardware brings IMS virtualization technology.

IBM's mainframe Information Management System (IMS) provides a hierarchical database and information management system with extensive transaction processing capabilities, a completely different database model from the common relational model behind IBM's DB2. IMS continues as a foundational database and transaction management technology for systems of record at large global mainframe enterprises, especially in industries such as banking, insurance, and airlines. Its stability, dependability, and high efficiency at scale make it particularly valuable as a back-end resource for high-traffic, customer-facing apps.

IBM touts IMS as the most secure, highest performing, and lowest cost hierarchical database management software for online transaction processing (OLTP). IMS is used by many of the top Fortune 1000 companies worldwide. Collectively these companies process more than 50 billion transactions per day through IMS, and they do so securely.

As Compuware puts it, IMS remains a deeply foundational database and transaction management technology for systems of record at large global enterprises, especially in the core mainframe segments like financial services or transportation. Its stability, dependability and high efficiency ensure it can continue to play an important role as a back-end resource for high-traffic customer-facing apps. All that’s needed is to reduce the effort required to use it.

Conventional approaches to the development and testing of IMS systems, however, can be excessively slow, technically challenging, and expensive; that is too high a technical price to pay in today's agile, fast-iteration app dev environment. For example, setting up an IMS application development environment requires configuring dedicated IMS regions and databases, which is especially time-consuming; additional resources must be defined and compiled for each instance and at every stage of development: testing, training, and systems integration. Worse yet, these tasks typically require experienced DBAs and system programmers with IMS-specific skills, an increasingly problematic and costly constraint given the generational shift underway in IT, which makes those skills increasingly rare.

As a result of these bottlenecks and resource constraints, large enterprises can find themselves far less nimble than their smaller competitors and unable to fully leverage their current IMS assets in response to digital requirements.  That leaves the mainframe shop at a distinct disadvantage.

Since COPE comes well integrated with Compuware Xpediter, an automated mainframe debugging tool, many such problems go away. Xpediter, which is interactive, can be used within the Standardware virtualized environment and COPE. When a problem occurs, developers can quickly set up an interactive test session with minimal effort and resolve it. When they're done, they can confidently move the application into production. And now that Xpediter is integrated with COPE, IMS virtualization lets multiple developers debug application code in the same or different logical IMS systems within the virtualized COPE IMS environment.

And therein lies the savings for mainframe shops. As Tyler Allman, Compuware's COPE product manager, explains, COPE converts IMS to run in a virtual environment. It takes a COPE expert to set it up initially, but once set up, it can run as a logical IMS system with almost no ongoing maintenance, which results in administrative savings.

On the software side, IMS is licensed as part of the usual rolling four-hour average workload software billing. Once the environment has been virtualized with COPE, you can run multiple logical IMS regions at no additional cost. The savings experienced by mainframe data centers, Allman suggests, can amount to tens if not hundreds of thousands of dollars. These savings alone can justify COPE.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.


Compuware Triples Down on Promised Quarterly z System Releases

October 14, 2016

Since January 2015, Compuware has been releasing enhancements to its mainframe software portfolio quarterly. The latest quarterly release, dated Oct. 3, delivers REST APIs for ISPW source code management and DevOps release automation; integration of Compuware Abend-AID with Syncsort Ironstream, so enterprises can create their own custom cross-platform DevOps toolchains; and a new Seasoft plug-in for Topaz Workbench. The Seasoft plug-in will help less skilled IBM z/OS developers manage mainframe batch processing along with other z platform tasks.

Compuware's blended ecosystem

Compuware's point is to position the mainframe at the heart of agile DevOps computing. As part of the effort, it needs to deliver slick, modern tools that will appeal to the non-mainframers who are increasingly moving into multi-platform development roles that include the mainframe. These people want to work as if they were dealing with a Windows or Linux machine. They aren't going to wrestle with arcane mainframe constructs like abends or JCL. Traditional mainframe dev, test, and code promotion processes are simply too slow to meet the demands of today's fast-moving markets. The new dev and ops people filling out data center ranks haven't the patience to learn what they view as antiquated mainframe concepts. They need intelligent tools that visualize the issue and let them intuitively click, drag, drop, and swipe their way through whatever needs to be done.

This is driven by the long-expected attrition of veteran mainframers and the mainframe knowledge and application insight they take with them. Only the recession that began in 2008 slowed the exit of aging mainframers. Now they are leaving; one mainframe credit card processor reportedly lost 50 mainframe staff in a month. The only way to replace this kind of experience is with intelligent, easy-to-learn tools and expert automation.

Compuware's response has been to release new tools and enhancements every quarter. It started with Topaz in 2015; DancingDinosaur covered it in January 2015 here. The beauty of Topaz lies in its graphical ease of use. Data center newbies don't need to know z/OS; they can understand what they are seeing and do meaningful work. With each quarterly release, Compuware has, in one way or another, advanced this basic premise.

The most recent advances are streamlining the DevOps process in a variety of ways.  DevOps has emerged as critical with mainframe shops scrambling to remain relevant and effective in a rapidly evolving app dev environment. Just look at Bluemix if you want to see where things are heading.

In the first announcement, Compuware extended mainframe DevOps innovation with REST APIs for ISPW SCM and release automation. The new APIs enable large enterprises to flexibly integrate their numerous other mainframe and non-mainframe DevOps tools with ISPW to create their own custom cross-platform DevOps toolchains. Part of that was the acquisition of the assets associated with Itegrations' source code management (SCM) migration practice and methodology, which will enable Compuware users to more easily migrate their SCM systems to ISPW from Agile-averse products such as CA Endevor, CA Panvalet, CA Librarian, and Micro Focus/Serena ChangeMan, as well as from internally developed SCM systems.
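What driving ISPW through REST might look like is sketched below; the host, resource path, and parameters are assumptions for illustration, so consult Compuware's ISPW API documentation for the real shapes:

```python
import requests  # pip install requests

CES = "https://ces.example.com:48226"  # assumed CES host
TOKEN = "personal-access-token"        # placeholder credential

# Hypothetical promote call: move the tasks in an assignment to the
# next lifecycle level. Path and parameter names are assumptions.
resp = requests.post(
    f"{CES}/ispw/MYHOST/assignments/PLAY0001/tasks/promote",
    headers={"Authorization": TOKEN},
    params={"level": "QA"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```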

According to Compuware, these DevOps toolchains are becoming increasingly important for two reasons:

  • Enterprises must aggressively adopt DevOps disciplines in their mainframe environments to fulfill business requirements for digital agility. Traditional mainframe dev, test and code promotion processes are simply too slow to meet the demands of today’s fast-moving markets to counter new, digitally nimble market disruptors.
  • Data centers need to better integrate the toolchains that support their newly adopted mainframe DevOps workflows with those that support DevOps across their various other platforms. This is because mainframe applications and data so often function as back-end systems-of-record for front-end web and mobile systems-of-engagement in multi-tier/cross-platform environments.

In the second announcement, Compuware integrated Abend-AID and Syncsort's Ironstream to give fast, clear insight into mainframe issues. Specifically, the integration of Abend-AID and Ironstream enables IT to more quickly discover and act upon correlations between application faults and broader conditions in the mainframe environment. This is particularly important, notes Compuware, as enterprises, out of necessity, shift operational responsibilities for the platform to staffs with limited experience on z/OS. Just put yourself in the shoes of a distributed systems manager now dealing with a mainframe: what might appear to be a platform issue may turn out to be a software fault, and vice versa. The retired 30-year mainframe veterans would probably see it immediately (but not always). Mainframe newcomers need a tool with the intelligence to recognize it for them.

With the last announcement, Compuware and Software Engineering of America (SEA) introduced SEA's JCLplus+ Remote Plug-In and $AVRS Plug-In for Compuware's Topaz Workbench mainframe IDE. Again, think about mainframe neophytes. The new plug-ins for Topaz significantly ease challenging JCL- and output-related tasks, according to Compuware, effectively enabling both expert and novice IT staff to perform those tasks more quickly and more accurately in the context of their other mainframe DevOps activities.

An encouraging aspect of this is that Compuware is not doing it alone; the company is teaming up with SEA and with Syncsort to make this happen. As mainframe vendors work to make mainframe computing easier and more accessible to less-experienced people, it will be good for the mainframe industry as a whole and may even help lower the cost of mainframe operations.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghostwriter. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.


