Posts Tagged ‘COBOL’

IBM On-Premises Cognitive Means z Systems Only

February 16, 2017

Just in case you missed the incessant drumbeat coming out of IBM: the company is committed to cognitive computing. That works out well for z data centers, since IBM’s cognitive system is available on premises only for the z. Another z first: IBM just introduced Machine Learning (key for cognitive) for the private cloud, starting with the z.


There are three ways to get IBM cognitive computing solutions: the IBM Cloud, Watson, or the z System, notes Donna Dillenberger, IBM Fellow, IBM Enterprise Solutions. The z, however, is the only platform that IBM supports for cognitive computing on premises (sorry, no Power). As such, the z represents the apex of programmatic computing, at least as IBM sees it. It also is the only IBM platform that supports cognitive natively; mainly in the form of Hadoop and Spark, both of which are programmatic tools.

What if your z told you that a given strategy had a 92% chance of success? Until now it couldn’t tell you that; with IBM’s recently released cognitive system for z, it can.

Your z system today represents the peak of programmatic computing. That’s what everyone working in computers grew up with, going all the way back to Assembler, COBOL, and FORTRAN. Newer languages and operating systems have arrived since; today your mainframe can respond to Java or Linux and now Python and Anaconda. Still, all are based on the programmatic computing model.

IBM believes the future lies in cognitive computing. Cognitive has become the company’s latest strategic imperative, apparently trumping its previous strategic imperatives: cloud, analytics, big data, and mobile. Maybe only security, which quietly slipped in as a strategic imperative sometime in 2016, can rival cognitive, at least for now.

Similarly, IBM describes itself as a cognitive solutions and cloud platform company. IBM’s infatuation with cognitive starts with data. Only cognitive computing, the argument goes, will enable organizations to understand the flood of myriad data pouring in—structured, local data at first, but going beyond it to unlock the world of global unstructured data; and then to move from decision-tree-driven, deterministic applications to, eventually, probabilistic systems that co-evolve with their users by learning along with them.

You need cognitive computing, IBM argues; it is the only way to move beyond the constraints of programmatic computing. In the process, cognitive can take you past keyword-based search, which merely provides a list of locations where an answer might be found, to an intuitive, conversational means of discovering a set of confidence-ranked possibilities.

Dillenberger suggests it won’t be difficult to get to the IBM cognitive system on z. You don’t even program a cognitive system. At most, you train it, and even then the cognitive system will do the heavy lifting by finding the most appropriate training models. If you don’t have preexisting training models, “just use what the cognitive system thinks is best,” she adds. Then the cognitive system will see what happens and learn from it, tweaking the models as necessary based on the results and new data it encounters. This also is where machine learning comes in.
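The model-finding Dillenberger describes can be pictured with a small sketch. This is illustrative Python, not IBM’s actual system or API, and both candidate models and data are invented: the system tries each candidate against held-out data and keeps whichever predicts best, then would keep retraining as new data arrives.

```python
# Illustrative sketch of automated model selection, the kind of
# "heavy lifting" Dillenberger describes; not IBM's actual API.
# Candidate models and sample data are invented for illustration.

def train_mean(history):
    """Model A: predict the running mean of past values."""
    mean = sum(history) / len(history)
    return lambda: mean

def train_last(history):
    """Model B: predict the most recent value (naive persistence)."""
    last = history[-1]
    return lambda: last

def pick_best_model(history, holdout):
    """Train each candidate, score it on held-out data, keep the winner."""
    candidates = {"mean": train_mean, "last": train_last}
    best_name, best_err = None, float("inf")
    for name, trainer in candidates.items():
        model = trainer(history)
        err = sum(abs(model() - y) for y in holdout) / len(holdout)
        if err < best_err:
            best_name, best_err = name, err
    return best_name, best_err

name, err = pick_best_model([10, 12, 11, 13], holdout=[13, 14])
print(name)  # prints "last" -- persistence wins on this trending series
```

The point is not the toy models but the loop: the system, not the programmer, chooses among models by watching results.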

IBM has yet to document payback and ROI data. Dillenberger, however, has spoken with early adopters. The big promised payback, of course, will come from the new insights uncovered, and it will be as astronomical or as meager as your execution on those insights.

But there also is the promise of a quick technical payback for z data center managers. When the data resides on the z—a huge advantage for the z—you just run analytics where the data is. In such cases you can realize up to 3x the performance, Dillenberger noted. Even if you have to pull data from some other location too, you still run faster, maybe 2x faster. Other z advantages include large amounts of memory, multiple levels of cache, and multiple I/O processors that get at data without impacting CPU performance.

When the data and IBM’s cognitive system reside on the z you can save significant money. “ETL consumed huge amounts of MIPS. But when the client did it all on the z, it completely avoided the costly ETL process,” Dillenberger noted. As a result, that client reported savings of $7-8 million a year by completely bypassing the x86 layer and ETL and running Spark natively on the z.

As Dillenberger describes it, cognitive computing on the z is here now, able to deliver a payback fast, and an even bigger payback going forward as you execute on the insights it reveals. And you already have a z, the only on-premises way to IBM’s Cognitive System.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

BMC MainView for Java Manages Critical Digital Business

May 16, 2016

A large European financial services firm increasingly handles critical transaction functions with Java running through CICS and WebSphere. As the firm looks forward, its managers see Java playing a bigger and more critical role in its core business as it shifts more of its business functionality to agile programming in Java. The firm is not thinking of ever abandoning its workhorse COBOL code, but all new work is being directed to Java.


With that in mind, BMC last week announced MainView for Java Environments, part of BMC’s MainView integrated systems management suite of tools that provides insight into how Java is consuming resources and affecting application performance on the z System.  It is no surprise, therefore, that the firm became an early beta user for MainView for Java Environments.

According to a recent BMC survey, 93% of mainframe organizations said Java usage is growing or steady, and Java is the language of choice for writing new or rewriting existing mainframe applications. BMC MainView for Java Environments provides insight into Java resource usage and how it impacts other workloads and applications. For example, it automatically discovers all the Java Virtual Machines (JVMs) across z/OS. That alone can help identify performance problems in the effort to find and fix problems fast.

Java is the key to both performance and cost savings because it runs on zIIP assist processors. Java workloads, however, can affect performance and availability on the mainframe, as they consume system resources without regard for the needs of other applications or services, which is another reason why zIIP offload is essential. An integrated management approach also gives IT operations a holistic view of the environment to quickly and easily discover JVMs and manage the effect of their resource consumption on application performance.
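The rollup a monitor performs here can be sketched in a few lines. This is an illustrative Python analogy, not MainView’s actual interface; the JVM names, LPARs, CPU figures, and threshold are all invented:

```python
# Hypothetical sketch of the kind of rollup a monitor like MainView
# performs: discover JVMs, look at their resource use, and flag the
# heavy consumers. All sample data below is invented for illustration.

jvm_samples = [
    {"jvm": "CICSJVM1", "lpar": "PROD1", "cpu_pct": 12.0},
    {"jvm": "WASJVM2",  "lpar": "PROD1", "cpu_pct": 31.5},
    {"jvm": "BATCHJVM", "lpar": "TEST1", "cpu_pct": 4.2},
]

def heavy_jvms(samples, threshold_pct):
    """Return JVMs whose CPU share exceeds the threshold, worst first."""
    hot = [s for s in samples if s["cpu_pct"] > threshold_pct]
    return sorted(hot, key=lambda s: s["cpu_pct"], reverse=True)

for s in heavy_jvms(jvm_samples, threshold_pct=10.0):
    print(s["jvm"], s["cpu_pct"])  # WASJVM2 first, then CICSJVM1
```

The real product adds discovery, history, and correlation with other workloads, but the triage idea is the same: find the JVMs consuming resources obliviously, before they hurt everything else.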

Java was the first object oriented programming language DancingDinosaur tried.  Never got good enough to try it on real production work, but here’s what made it appealing: fully object oriented, produces truly portable write-once, run-anywhere code (mainly because it results in Java virtual machine bytecode) and had automatic garbage collection. For a run-of-the-mill programmer, Java was a joy compared to C or, ugh, COBOL. Some of the new languages becoming popular today, the ones driving mobile and cloud and analytics apps look even easier, but DancingDinosaur would feel too embarrassed to sit in a programming class with twenty-somethings the age of his daughters.

Java usage today, according to the BMC survey, is growing or steady, and Java has become the language of choice for writing new or rewriting existing mainframe applications. The only drawback may be that Java workloads can affect performance and resource availability on the mainframe, as JVMs consume system resources oblivious to the needs of other applications and services, and to the cost of that uncontrolled consumption. An integrated management approach that allows a holistic view of the environment can quickly and easily discover JVMs and constrain the effect of their resource consumption on application performance, offsetting that drawback.

Explained Tim Grieser, program vice president for Enterprise System Management Software at IDC: “Since Java manages its own resources it can consume excessive amounts of processor time and memory resources leading to performance or availability problems if not proactively managed.” The key being proactively managed. BMC’s MainView for Java Environments promises exactly that kind of proactive management by monitoring z/OS Java runtime environments and providing a consolidated view of all the resources being consumed. This enables system admins and operators to identify and manage performance issues before they impact end users.

“Java on the mainframe is being used to develop and deploy new applications faster and more economically to meet dynamically changing digital business needs and to take advantage of widely available programming skills” IDC’s Grieser continued. Something like BMC’s MainView for Java Environments can be used to constrain Java. IBM’s Omegamon can fulfill a similar function.

According to the financial firm’s beta test manager, BMC’s MainView for Java Environments unlocks Java’s potential on the mainframe, which is vital in a changing application and systems environment, as part of an integrated performance management solution that discovers and monitors JVMs. It provides a single graphical console that enables you to quickly understand a Java application’s impact on resources and its effect on the performance of other applications and transactions. The solution promises to improve application performance and ensure availability while reducing Mean Time to Repair (MTTR) and lowering Monthly License Charges (MLC) by monitoring zIIP offloading, which is the key to both performance and cost management.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer and occasional wanna-be programmer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

Play the Cloud-Mobile App Dev Game with z/OS Client Web Enablement

April 15, 2016

Is your z team feeling a little nervous that it is missing an important new game? Are business managers bugging you about running slick cloud and mobile applications through the z? Worse, are they turning to third-party contractors to build apps that try to connect your z to the cloud and mobile world? If so, it is time to take a close look at IBM’s z/OS Client Web Enablement Toolkit.


Accessing backend systems through a mobile device

If you’re a z shop running Linux on z or a LinuxONE shop, you don’t need z/OS Web Enablement. The issue only comes up when you need to connect z/OS applications to cloud, web, and mobile apps. IBM has been talking up the z/OS Client Web Enablement Toolkit since early this year. Prior to the availability of the toolkit, native z/OS applications had few or no easy options for participating as a web services client.

You undoubtedly know the z in its role as a no-fail transaction workhorse. More recently you’ve watched as it learned new tricks like managing big data or big data analytics through IBM’s own tools and more recently with Spark. The z absorbed the services wave with SOA and turned CICS into a handler for Web transactions. With Linux it learned an entire new way to relate to the broader distributed world. The z has rolled with all the changes and generally came out ahead.

Now the next change for z data centers has arrived. This is the cloud/web-mobile-analytics execution environment that seemingly is taking over the known world. It almost seems like nobody wants a straight DB2 CICS transaction without a slew of other devices getting involved, usually as clients. Now everything is HTTP REST to handle x86 clients and JSON along with a slew of even newer scripting languages. Heard about Python and Ruby? And they aren’t even the latest.  The problem: no easy way to perform HTTP REST calls or handle JSON parsing on z/OS. This results from the utter lack of native JSON services built into z/OS, according to Steve Warren, IBM’s z/OS Client Web Enablement guru.

Starting, however, with z/OS V2.2 and now available in z/OS V2.1 via a couple of service updates,  Warren reports, the new z/OS Client Web Enablement Toolkit changes the way a z/OS-based data center can think about z/OS applications communicating with another web server. As he explains it, the toolkit provides an easy-to-use, lightweight solution for applications looking to easily participate as a client, in a client/server web application. Isn’t that what all the kids are doing with Bluemix? So why not with the z and z/OS?

Specifically, the z/OS Toolkit provides a built-in protocol enabler using interfaces similar in nature to other industry-standard APIs along with a z/OS JSON parser to parse JSON text coming from any source and the ability to build new or add to existing JSON text, according to Warren.  Suddenly, it puts z/OS shops smack in the middle of this hot new game.
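The JSON half of the toolkit is the easier one to picture. By analogy (this is stdlib Python, not the toolkit’s actual callable services, and the payload is invented), a client parses the JSON text a REST service returns, reads or adds fields, and serializes the result:

```python
# What the toolkit's JSON services let a native z/OS program do,
# shown by analogy with Python's stdlib json module: parse JSON
# from a (hypothetical) REST response, read a field, add one, and
# re-serialize. The payload below is invented for illustration.
import json

response_body = '{"account": "12345", "balance": 250.75}'

doc = json.loads(response_body)        # parse incoming JSON text
doc["currency"] = "USD"                # add to the existing JSON
reply = json.dumps(doc, sort_keys=True)  # build new JSON text

print(doc["balance"])   # 250.75
print(reply)
```

In the toolkit itself the same parse/read/add/serialize steps are callable services invoked from COBOL, PL/I, C/C++, or Assembler, which is exactly what native z/OS applications lacked before.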

While almost all environments on z/OS can take advantage of these new services, Warren adds, traditional z/OS programs running in a native environment (apart from a z/OS UNIX or JVM environment) stand to benefit the most. Before the toolkit, native z/OS applications, as noted above, had little or no easy options available to them to participate as a web services client. Now they do.

Programs running as a batch job, a started procedure, or in almost any address space on a z/OS system have APIs they can utilize in a similar manner to any standard z/OS APIs provided by the OS. Programs invoke these APIs in the programming language of their choice. Among z languages, C/C++, COBOL, PL/I, and Assembler are fully supported, and the toolkit provides samples for C/C++, COBOL, PL/I initially. Linux on z and LinuxONE shops already can do this.

Businesses with z data centers are being forced by the market to adopt Web applications utilizing published Web APIs that can be used by something as small as the watch you wear, noted Warren. As a result, the proliferation of Web services applications in recent years has been staggering, and it’s not by coincidence. Representational state transfer (REST) applications are simple, use the ubiquitous HTTP protocol—which helps them to be platform-independent—and are easy to organize.  That’s what the young developers—the millennials—have been doing with Bluemix and other cloud-based development environments for their cloud, mobile, and  web-based applications.  With the z/OS web enablement toolkit now any z/OS shop can do the same. As IoT ramps up expect more demands for these kinds of applications and with a variety of new devices and APIs.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

BMC and Compuware to Drive Down Mainframe Costs

February 20, 2015

This year jumped off to an active start for the mainframe community. The introduction of the z13 in January got things going. Now Compuware and BMC are partnering to integrate offerings of some their mainframe tools to deliver cost-aware workload and performance management. The combined tools promise to reduce mainframe OPEX even as z systems shops try to leverage their high-value mainframe applications, data, and processing capacity to meet rapidly evolving business challenges.


Not that things had been quiet before, especially if you consider IBM scrambling to reverse successive quarters of poor financial performance with a slew of initiatives. During that time Compuware went private last fall; about a year earlier BMC went private. Now the two companies are collaborating to deliver tools that will help mainframe shops reduce their software costs. DancingDinosaur has covered previous cost-saving and efficiency initiatives from each of these companies here and here.

Driving this collaboration is the incessant growth of new mainframe workloads, which will likely accelerate with the new z13. Such workload growth is continually driving up the Monthly License Charge (MLC) for IBM mainframe software, which for sub-capacity environments is generally driven by the highest rolling four-hour average (R4HA) of mainframe utilization across all applications on each LPAR, as measured in MSUs. IBM is helping with discounts for mobile workloads and its new ICAP and country multiplex pricing, which DancingDinosaur covered here, but more is needed.

The trick requires continually managing those workloads. In effect, IT can most effectively reduce its sizable IBM z Systems software costs by both 1) tuning each application to minimize its individual consumption of mainframe resources and 2) orchestrating application workloads to minimize the LPAR utilization peaks they generate collectively at any given time.  Good idea but not easy to implement in practice. You need automated tools.
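The R4HA mechanics behind that tuning target can be sketched. This is an illustrative Python sketch, not any vendor’s tool, and the hourly MSU figures are invented: average consumption over each rolling four-hour window and take the peak, since that peak is what sub-capacity MLC is billed against.

```python
# Sketch of the rolling four-hour average (R4HA) that drives
# sub-capacity MLC: average MSU use over every 4-hour window and
# take the highest. Hourly figures below are invented.

def peak_r4ha(hourly_msus, window=4):
    """Highest average MSU consumption over any rolling window."""
    peaks = [
        sum(hourly_msus[i:i + window]) / window
        for i in range(len(hourly_msus) - window + 1)
    ]
    return max(peaks)

hourly = [100, 120, 400, 420, 410, 380, 150, 110]
print(peak_r4ha(hourly))  # 402.5 -- the window covering hours 3-6
```

The two levers the partners describe map directly onto this: tuning shrinks the numbers inside the peak window, and orchestration shifts workloads so their peaks stop coinciding.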

According to Frank DeSalvo, former research director at Gartner: “The partnership between BMC and Compuware launches an integrated opportunity for mainframe customers to manage workload inefficiencies in a manner that has not been achievable to-date.” The partnership, he added in the announcement, “helps organizations leverage their IT budgets by enabling them to continuously optimize their mainframe workloads, resulting in cost effective decisions for both current and future spending.”

Specifically, the Compuware-BMC collaboration brings together three products: BMC Cost Analyzer, BMC MainView, and Compuware Strobe.

  • BMC Cost Analyzer for zEnterprise brings a financially intelligent workload management tool that enables z data centers to identify MLC cost drivers and take appropriate measures to reduce those costs.
  • BMC MainView provides real-time identification of application performance issues, enabling customers to quickly eliminate wasteful MSU consumption.
  • Compuware Strobe delivers deep, granular and highly actionable insight into the behavior of application code in the z systems environment.

The partners integrated the products so they actually work together. One integration, for instance, allows BMC Cost Analyzer to call Compuware Strobe for a detailed analysis of the specific application components driving peak MLC periods, enabling customers to proactively tune the applications that have the greatest impact on their monthly software licensing costs. A second integration, with BMC MainView, allows customers to either automatically or manually invoke Strobe performance analysis, empowering mainframe staff to perform cost-saving tuning tasks more quickly, efficiently, and consistently.
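The logic of that first integration can be sketched too. This is illustrative Python with invented application names and MSU figures, not the products’ actual interfaces: during the peak MLC window, rank applications by MSU contribution so profiling and tuning effort lands where it cuts the bill most.

```python
# Illustrative sketch (invented data) of the Cost Analyzer-to-Strobe
# handoff logic: during the peak MLC window, rank applications by
# MSU contribution so tuning effort goes where it matters most.

peak_window_usage = {      # MSUs consumed during the peak R4HA window
    "BILLING":  180,
    "CLAIMS":    95,
    "REPORTING": 40,
}

def tuning_candidates(usage, top_n=2):
    """Apps to hand to a profiler like Strobe, biggest MSU users first."""
    ranked = sorted(usage.items(), key=lambda kv: kv[1], reverse=True)
    return [app for app, _ in ranked[:top_n]]

print(tuning_candidates(peak_window_usage))  # ['BILLING', 'CLAIMS']
```

The real integration then profiles those candidates down to the code level; the sketch only shows why the cost tool, not the performance tool, picks the targets.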

Compuware-BMC screen shot, courtesy of Compuware

BTW, at the same time Compuware introduced the latest version of Strobe, v 5.2. It promises deep insight into how application code—including DB2, COBOL 5.1, IMS, and MQ processes—consumes resources in z environments. By providing these insights while making it easy for multi-discipline mainframe ops teams to collaborate around them, Strobe 5.2 enables IT to further drive down mainframe costs. At the same time it improves application responsiveness.

Besides the software licensing savings that can result, the organization also benefits from performance gains in these applications. These too can be valuable since they positively impact end-user productivity and, more importantly, the customer experience.

DancingDinosaur feels that any technology you can use to automate and streamline your systems operations will benefit you because people are always more expensive and less efficient than technology.

Alan Radding is DancingDinosaur. Follow this blog on Twitter, @mainframeblog. View my other IT writing at Technologywriter.com and here.

The Next Generation of Mainframers

March 6, 2014

With seemingly every young person with any technology inclination aiming to become the next WhatsApp and walk away with some of Facebook’s millions, it is fair to wonder: where is the next generation of mainframers going to come from, and who are they going to be?

The answer: IBM is lining them up now. As the mainframe turns 50 you’ll have a chance to meet some of these up-and-coming mainframers as part of IBM’s 50th Mainframe Anniversary celebration in New York, April 8, when IBM announces the winners of the World Championship round of its popular Master the Mainframe competition.

According to IBM, the Championship is designed to assemble the best university students from around the globe who have demonstrated superior technical skills through participation in their regional IBM Master the Mainframe Contests. Out of the 20,000 students who have engaged in country-level Master the Mainframe Contests over the last three years, the top 44 students from 22 countries have been invited to participate in the inaugural IBM Master the Mainframe World Championship.

These students will spend the month of March working through the Systems of Engagement concept, an expansion of the traditional Systems of Record—core transaction systems—that have been the primary workload of mainframe computing. The students will deploy Systems of Record mainframe business applications written in Java and COBOL using DB2 for z/OS APIs to demonstrate how the Systems of Engagement concept takes full advantage of the mainframe’s advanced capabilities. In short, the mainframe is designed to support tomorrow’s most demanding complex workloads: Big Data, Cloud, and Mobile computing, all with the most effective enterprise-class security. The students will showcase their applications on April 7, 2014 in New York City, where judges will determine which student earns the distinction of “Master the Mainframe World Champion.”

Representing the United States are Mugdha Kadam from the University of Florida, Elton Cheng from the University of California San Diego, and Rudolfs Dambis from the University of Nevada Las Vegas. You can follow the progress of the competitors here.  After March 17 the site will include a leaderboard so you can follow your favorites. No rumors of betting pools being formed yet but it wouldn’t surprise DancingDinosaur.  Win or not, each competitor should be a prime candidate if your organization needs mainframe talent.

This is part of IBM’s longstanding System z Academic Initiative, which has been expanding worldwide and now encompasses over 64,000 students at more than 1000 schools across 67 countries.  And now high school students are participating in the Master the Mainframe competition. Over 360 companies are actively recruiting from these students, including Baldor, Dillards, JB Hunt, Wal-mart, Cigna, Compuware, EMC, Fidelity, JP Morgan Chase, and more.

Said Jeff Gill, at VISA: “Discovering IBM’s Academic Initiative has been a critical success factor in building a lifeline to our future—a new base of Systems Engineers and Applications Developers who will continue to evolve our mainframe applications into flexible open enterprise solutions while maintaining high volume / high availability demands. Without the IBM Academic Initiative, perhaps we could have found students with aptitude – but participation in the Academic Initiative demonstrates a student’s interest in mainframe technology which, to us, translates to a wise long-term investment.” Gill is one of the judges of the Master the Mainframe World Championship.

Added Martin Kennedy of Citigroup: “IBM’s Master the Mainframe Contest offers a great resource to secure candidates and helps the company get critical skills as quickly as possible.”

The Master the Mainframe World Championship, and even the entire 50th Anniversary celebration that will continue all year, are not really IBM’s primary mainframe thrust this year. IBM’s real focus is on emphasizing the forward-moving direction of the mainframe. As IBM puts it: “By continually adapting to trends and evolving IT, we’re driving new approaches to cloud, analytics, security and mobile computing to help tackle challenges never before thought possible. The pioneering innovations of the mainframe all serve one mission—deliver game-changing technology that makes the extraordinary possible and improves the way the world works.”

DancingDinosaur covers the mainframe and other enterprise-class technology. Watch this blog for more news on the mainframe and other enterprise systems including Power, enterprise storage, and enterprise-scale cloud computing.

With that noted, please plan to attend Edge 2014, May 19-23 in Las Vegas. Being billed as an infrastructure and storage technology conference, it promises to be an excellent follow-on to last year’s Edge conference.  DancingDinosaur will be there, no doubt hanging out in the blogger’s lounge where everyone is welcome. Watch this blog for upcoming details on the most interesting sessions.

And follow DancingDinosaur on Twitter, @mainframeblog

New Ways to Lower System z Costs

April 26, 2013

The latest System z capacity offerings provide new ways to boost z usage at lower cost. The offerings were developed jointly with z users in response to their specific business requirements.

The offerings, reflecting IBM’s willingness to be flexible on pricing, enable z users who typically handle operations like development and testing on cheaper x86 platforms to move those operations to the z while getting the additional capacity they would need at a lower cost. In the process, they eliminate the extra steps involved in deploying the finished production system on the z. You can find more info on IBM System z software here.

With the new capacity offerings and other initiatives, IBM is demonstrating its intention to drive down the cost of mainframe computing in a variety of ways. For example, with the System z capacity offering for Cloud, IBM offers the flexibility to increase capacity, then move portions of that incremental capacity within a 12-18 month period. This enables clients to grow before they know exactly where they’ll want to run the work, a welcome sign of flexibility. 

For System z disaster recovery in the cloud, users again gain more flexibility by moving workloads between systems. For clients working aggressively toward business resiliency and disaster recovery, this can be very valuable, and it removes the previous restrictions on the number of tests that can be run.

Specifically, this allows active capacity mobility between zEnterprise primary servers and disaster recovery servers (mirrored data center) for more than just a one-time test.  IBM also offers comparable deals in the form of active multiplex pricing for GDPS Active/Active workloads.  While the DR offering requires all workload moves to the DR box at one time, the active multiplex offering allows fractional workload movement.

And finally, with the System z Test and Development offering, IBM now allows discounts for clients who want to do their testing on the platform. Previously, IBM was willing to lower the cost of development; now, by supporting both development and test on the platform, it is making the mainframe more attractive again.

None of this is exactly new. Last June DancingDinosaur reported that IBM was moving in this direction with its System z capacity offering for the cloud.  For more, click here.

IBM also announced new System z software for development, deployment, and automation of workloads, described as simple-to-use tools for mainframe development. They start with a new enterprise COBOL compiler that promises significant performance improvements to meet the increasingly narrow batch windows organizations face, along with a new Rational Developer for System z and Rational Developer for Enterprise.

Given increasing demands for new ways to connect the z to mobile activities, IBM also announced enhancements to CICS, specifically the CICS TS feature pack for mobile extension, the IBM Mobile messaging client, and Cognos Mobile on z/OS, among others. Organizations have been connecting mobile applications to the z for years using SOA and gateways in one form or another. These just provide another, possibly more efficient, way to do it.

After you build the app you need to deploy it. For this IBM announced a new Business Process Manager for z/OS, the Operational Decision Manager for z/OS, and Integration Bus on z/OS (previously called IBM WebSphere Message Broker for z/OS). Organizations also can rapidly deploy Java workloads with the new CICS Transaction Server for z/OS, Value Unit Edition. Finally, Tivoli System Automation on z/OS can provide automated end-to-end deployment and management.

At the same briefing IBM introduced Algar Telecom, a Brazilian telco that offers other services as well. A new z user, Algar consolidated large numbers of Intel servers on a z196 and zBX, an example of z-based hybrid computing.  It offers an interesting experience DancingDinosaur will take up in a later post here along with the experience of a z196 shop that upgraded to a zEC12 to create a z-based production systems core around a slew of Intel blades. Both organizations report good results.

Finally, please note: the IBM Edge Conference 2013 is coming up in Las Vegas, June 10-14. Last year Edge was primarily a storage event. This year there continues to be a large amount of storage material, including considerable new material around System z storage, but it appears IBM has expanded the program beyond storage. DancingDinosaur covered it last year and will begin covering Edge 2013 in a series of posts leading up to the event. Please join me in Las Vegas.  If you register here by 4/28 you can save a few bucks. Look for me there; I’ll be the blogger wearing the Mainframes Rule t-shirt.

Monitor System z CICS on an iPad

September 15, 2010

The iPad apparently is hot among System z admins. Unlike last week’s post about managing System z networking through the iPad, the folks at PlexSpy (Matter of Fact Software) only talk about monitoring CICS through the iPad, not managing it. CICS is far too complicated and critical to trust actual management to anything but a full blown mainframe CICS management tool.

There is no shortage of tools to manage CICS; IBM, CA, and BMC all provide robust, industrial-strength options.

CICS, even more than JCL or Assembler, has emerged as the litmus test for mainframe competency. Those who have mastered CICS enjoy a small extra measure of status, it seems, when mainframers gather. The IBM Academic Initiative curriculum includes a slew of courses on COBOL, Assembler, and JCL but just one course on CICS.

That shouldn’t be surprising. CICS, the mainframe’s high performance, high volume transaction monitor, is powerful and highly complex. Anything CICS touches almost always is mission-critical. And as organizations extend the mainframe into new areas CICS will become more important than ever. It plays a key role in strategies for delivering mainframe capabilities as services and as part of a SOA effort.

Because of the criticality and complexity of CICS, PlexSpy, which runs on the mainframe, is a read-only tool that just monitors CICS. That eliminates the chance of a user screwing something up. It is accessed via any browser, including the iPad, which is what the company uses to demo the tool.

However, PlexSpy takes a broad view of the CICS infrastructure, which makes it easier to start troubleshooting CICS when problems occur. Its purpose is to help administrators who may not be particularly adept at CICS to respond to complaints by identifying the likely problem. Simply entering the name of a business application, for example, will bring up a view of all the relevant CICS infrastructure components—regions, files, whatever—that impact the application.

The tool flags likely discrepancies. From there, the admin can turn over the likely problem to skilled CICS staff, saving them the time it takes to laboriously trace possible problems throughout the extensive CICS infrastructure. To actually resolve the problem, they will turn to their regular CICS management tools. The value of PlexSpy lies in its ability to identify likely problems fast and without requiring deep CICS skills, not in its use of the iPad. In the case of PlexSpy, any browser will do. The iPad is just sexier than, say, a clunky Windows laptop, which would work just as well.
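That first-pass triage can be pictured with a small sketch. This is illustrative Python with an invented topology and statuses, not PlexSpy’s actual data model: map a business application to its CICS components and surface whichever ones look wrong, for handoff to the skilled CICS staff.

```python
# Illustrative sketch (invented data) of the first-pass triage a
# read-only monitor like PlexSpy supports: map a business application
# to its CICS components and flag any that are not healthy.

topology = {
    "ORDERS": [
        {"component": "CICSRGN1", "type": "region", "status": "OK"},
        {"component": "ORDFILE",  "type": "file",   "status": "CLOSED"},
        {"component": "DB2CONN",  "type": "db2",    "status": "OK"},
    ]
}

def triage(app):
    """Return the app's components that look wrong, for handoff
    to the skilled CICS staff and their real management tools."""
    return [c for c in topology.get(app, []) if c["status"] != "OK"]

for c in triage("ORDERS"):
    print(c["component"], c["status"])  # ORDFILE CLOSED
```

Because the sketch only reads the topology and never changes it, it mirrors the read-only design choice that keeps a less-skilled user from screwing anything up.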

At a time when IT is pressured to contain costs, mainly by reducing staffing, monitoring tools like PlexSpy play a worthwhile role. They make it possible to reduce the workload on more costly CICS experts by having less skilled (meaning less costly) staff do the initial troubleshooting.

