Posts Tagged ‘Java’

Get a Next-Gen Datacenter with IBM-Nutanix POWER8 System

July 14, 2017

First announced by IBM on May 16 here, this solution, driven by client demand for a simplified hyperconverged—combined server, network, storage, hardware, software—infrastructure, is designed for data-intensive enterprise workloads. It is aimed at companies increasingly looking for the ease of deployment, use, and management that hyperconverged solutions promise, and it is being offered as an integrated hardware and software offering to deliver on that expectation.

Music made with IBM servers, storage, and infrastructure

IBM’s new POWER8 hyperconverged solutions enable a public cloud-like experience on on-premises infrastructure, combining top virtualization and automation capabilities with Nutanix’s public and on-premises cloud capabilities. They provide reliable storage, fast networks, and extremely powerful computing in modular, manageable building blocks that scale simply by adding nodes when needed.

Over time, IBM suggests, a roadmap of offerings will roll out as more configurations are needed to satisfy client demand and as features and functions are brought into both the IBM Cognitive Systems portfolio and the Nutanix portfolio. Full integration is key to the value proposition of this offering, so more roadmap options will be delivered as new features ship and integration testing can be completed.

Here are three classes of workloads you might run immediately on these systems:

  1. Mission-critical workloads, such as databases, large data warehouses, web infrastructure, and mainstream enterprise apps
  2. Cloud-native workloads, including full-stack open source middleware, enterprise databases, and containers
  3. Next-generation cognitive workloads, including big data, machine learning, and AI

Note, however, the change in IBM’s pricing strategy. The products will be priced with the goal of remaining neutral on total cost of acquisition (TCA) relative to comparable offerings on x86. In short, IBM promises to be competitive with comparable x86 systems in terms of TCA. This is a significant deviation from IBM’s traditional pricing, but as we have started to see already and will continue to see going forward, IBM clearly is ready to use pricing flexibility to win deals on the products it wants to push.

IBM envisions the new hyperconverged systems to bring data-intensive enterprise workloads like EDB Postgres, MongoDB and WebSphere into a simple-to-manage, on-premises cloud environment. Running these complex workloads on IBM Hyperconverged Nutanix POWER8 system can help an enterprise quickly and easily deploy open source databases and web-serving applications in the data center without the complexity of setting up all of the underlying infrastructure plumbing and wrestling with hardware-software integration.

And maybe more to IBM’s ultimate aim, these operational data stores may become the foundational building blocks enterprises will use to build a data center capable of taking on cognitive workloads. These ever-advancing workloads in advanced analytics, machine learning and AI will require the enterprise to seamlessly tap into data already housed on premises. Soon expect IBM to bring new offerings to market through an entire family of hyperconverged systems that will be designed to simply and easily deploy and scale a cognitive cloud infrastructure environment.

Currently, IBM offers two systems: the IBM CS821 and IBM CS822. These servers are the industry’s first hyperconverged solutions that marry Nutanix’s one-click software simplicity and scalability with the proven performance of the IBM POWER architecture, which is designed specifically for data-intensive workloads. The IBM CS822 (the larger of the two offerings) sports 22 POWER8 processor cores. That’s 176 compute threads, with up to 512 GB of memory and 15.36 TB of flash storage in a compact server that meshes seamlessly with simple Nutanix Prism management.

This server runs Nutanix Acropolis with AHV and little endian Linux. If IBM honors its stated pricing policy promise, the cost should be competitive on the total cost of acquisition for comparable offerings on x86. DancingDinosaur is not a lawyer (to his mother’s disappointment), but it looks like there is considerable wiggle room in this promise. IBM Hyperconverged-Nutanix Systems will be released for general availability in Q3 2017. Specific timelines, models, and supported server configurations will be announced at the time of availability.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

Latest Compuware Tools Bring Mainframe and DevOps Together

July 7, 2017

At the end of June Compuware announced the integration of Topaz for Total Test, an automated unit testing tool for COBOL, with Jenkins, SonarQube, and Compuware ISPW. Together, the technologies enable enterprises to nimbly, easily, and efficiently update their core mainframe applications in response to ever-changing business requirements. This continues the company’s ongoing quarterly releases of updates and modernization of mainframe tools.

The latest releases enable mainframe legacy technologies to participate in integrated, modern DevOps. They allow enterprise IT to better orchestrate changes to mainframe systems of record with changes to systems of engagement—a significant benefit given that customer-facing digital services often rely on code running across multiple platforms, legacy and distributed.

Compuware Topaz for Total Test

The days when a mainframe shop could get by with leisurely updates of its systems, especially its business-critical applications, are long gone.  Organizations need to modernize and integrate their tools to deliver the kind of fast response attributed to DevOps.

Of course, successful DevOps, whether mainframe or distributed, is less a matter of tools than of culture, communication, and process.  Still, there’s no doubt that modern, integrated, and context-aware tools along with automation help by speeding the process and reducing mistakes.

Topaz for Total Test appears to cover all the tool bases. It brings together automated unit testing for COBOL with Jenkins, SonarQube, and Compuware ISPW. Jenkins is an open-source continuous integration tool, written in Java, for testing and reporting on isolated changes in a larger code base in real time. The real-time aspect is critical for DevOps, where speed counts. The software enables developers to find and solve defects in a code base rapidly and to automate testing of their builds. SonarQube (formerly Sonar) is an open source platform for continuous inspection of code quality. Again, error elimination counts.

The problem, as Compuware sees it, comes from mainframe shops’ historical inability to update their business-critical COBOL applications fast enough due to antiquated tools, excessive dependence on specialized expertise, and risk concerns. All these combine to produce long delays in updating code.

The addition of Jenkins and SonarQube along with Compuware’s ISPW source code management and deployment produces a pretty complete DevOps package for mainframes. In addition, Compuware strengthened support for DB2. That support entails new stubbing for DB2 databases, which allows developers to run unit tests without requiring an active connection to a live DB2 database. Topaz for Total Test can be used to test code that processes all types of mainframe data, and its stubbing capability covers not just DB2 but also the VSAM and QSAM data types. This makes it easier to create repeatable tests fast. Data stubs are created automatically and do not require re-compiling.
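Compuware’s stubs are generated automatically for COBOL programs, but the concept will be familiar from everyday Java unit testing: swap the live database dependency for a stub that returns canned records, so tests run fast, repeatably, and without a DB2 connection. A minimal Java sketch of the idea (all class and method names here are hypothetical illustrations, not Compuware’s API):

```java
import java.util.Map;

// The data-access interface the business logic depends on. In production it
// would be backed by a live DB2 connection via JDBC; in a unit test a stub
// stands in, so no database connection is required.
interface AccountStore {
    Map<String, Object> fetchAccount(String accountId);
}

// Stub standing in for DB2: canned data, no connection, repeatable results.
class StubAccountStore implements AccountStore {
    @Override
    public Map<String, Object> fetchAccount(String accountId) {
        return Map.of("id", accountId, "balance", 1250.00, "status", "OPEN");
    }
}

public class InterestCalcTest {
    public static void main(String[] args) {
        AccountStore store = new StubAccountStore();  // no live DB2 needed
        double balance = (Double) store.fetchAccount("A-1001").get("balance");
        double interest = balance * 0.02;             // the logic under test
        if (Math.abs(interest - 25.00) > 1e-9) {
            throw new AssertionError("unexpected interest: " + interest);
        }
        System.out.println("test passed: interest = " + interest);
    }
}
```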

Although much of the world’s business activity still revolves in one way or another around the mainframe, many mainframe shops struggle when it comes to updating those applications to reflect rapidly changing business demands. Typically, they are hampered by manual development and testing processes; ongoing loss of specialized COBOL programming knowledge; and the fear of introducing even the slightest defect into core mainframe systems of record, notes Compuware.

And it gets worse. “Given the abject failure of re-platforming initiatives, large enterprises hoping to avoid digital irrelevance must aggressively modernize their mainframe DevOps practices,” said Rich Ptak of IT analyst firm Ptak Associates in Compuware’s Topaz for Total Test announcement. “Key to the modernization and ‘de-legacing’ of mainframe applications is the adoption of unit testing for COBOL code that is equivalent to and well-integrated with unit testing as practiced across the rest of the enterprise codebase.”

Compuware Topaz for Total Test transforms mainframe application development by automatically breaking COBOL code down into units and creating tests for those logical units. Developers at all skill levels—not just mainframe cowboys but preferably those with distributed and open system skills or even systems novices—can quickly and easily perform unit testing on COBOL code just as they do in Java, PHP and other popular programming languages. In fact, Topaz is actually more advanced than typical Java tools, because it requires no coding and automatically generates default unit test result assertions for developers.  So yes, novices are welcome.
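Topaz generates the test scaffolding and the default assertions for COBOL automatically, with no coding. For comparison, here is roughly what the hand-written Java equivalent looks like: a minimal JUnit 4 sketch against a hypothetical payroll routine (the routine and its values are invented for illustration).

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class PayrollUnitTest {

    // Hypothetical Java port of a COBOL paragraph that computes gross pay,
    // with time-and-a-half for hours over 40.
    static double grossPay(double hours, double rate) {
        double overtime = Math.max(0, hours - 40) * rate * 0.5;
        return hours * rate + overtime;
    }

    @Test
    public void regularWeek() {
        assertEquals(800.0, grossPay(40, 20.0), 0.001);
    }

    @Test
    public void overtimeWeek() {
        // 45 hrs at $20: 45 * 20 + 5 * 20 * 0.5 = 950
        assertEquals(950.0, grossPay(45, 20.0), 0.001);
    }
}
```

Tests like these are what Jenkins runs on every commit and what SonarQube mines for quality metrics; the announced integrations pull COBOL unit tests into that same pipeline.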

With the recently released integrations and enhancements, Compuware has now delivered mainframe innovations for eleven consecutive quarters. Few mainframe vendors even try to match that pace, not even IBM. This reflects Compuware’s commitment to improving innovation throughput and quality using the latest Agile and DevOps methods.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM On-Premises Cognitive Means z Systems Only

February 16, 2017

Just in case you missed the incessant drumbeat coming out of IBM, the company is committed to cognitive computing. That works well for z data centers, since IBM’s cognitive system is available on-premises only for the z. Another z first: IBM just introduced Machine Learning (key for cognitive) for the private cloud, starting with the z.


There are three ways to get IBM cognitive computing solutions: the IBM Cloud, Watson, or the z System, notes Donna Dillenberger, IBM Fellow, IBM Enterprise Solutions. The z, however, is the only platform on which IBM supports cognitive computing on premises (sorry, no Power). As such, the z represents the apex of programmatic computing, at least as IBM sees it. It also is the only IBM platform that supports cognitive natively, mainly in the form of Hadoop and Spark, both of which are programmatic tools.

What if your z told you that a given strategy had a 92% chance of success? It couldn’t do that until now, with IBM’s recently released cognitive system for z.

Your z system today represents the peak of programmatic computing. That’s what everyone working in computers grew up with, going all the way back to Assembler, COBOL, and FORTRAN. Newer languages and operating systems have arrived since; today your mainframe can respond to Java or Linux and now Python and Anaconda. Still, all are based on the programmatic computing model.

IBM believes the future lies in cognitive computing. Cognitive has become the company’s latest strategic imperative, apparently trumping its previous strategic imperatives: cloud, analytics, big data, and mobile. Maybe only security, which quietly slipped in as a strategic imperative sometime in 2016, can rival cognitive, at least for now.

Similarly, IBM describes itself as a cognitive solutions and cloud platform company. IBM’s infatuation with cognitive starts with data. Only cognitive computing, the argument goes, will enable organizations to understand the flood of myriad data pouring in: starting with structured, local data but going beyond it to unlock the world of global unstructured data, and moving from decision tree-driven, deterministic applications to, eventually, probabilistic systems that co-evolve with their users by learning along with them.

You need cognitive computing, IBM insists; it is the only way to move beyond the constraints of programmatic computing. In the process, cognitive can take you past keyword-based search, which provides a list of locations where an answer might be located, to an intuitive, conversational means of discovering a set of confidence-ranked possibilities.

Dillenberger suggests it won’t be difficult to get to the IBM cognitive system on z. You don’t even program a cognitive system. At most, you train it, and even then the cognitive system will do the heavy lifting by finding the most appropriate training models. If you don’t have preexisting training models, “just use what the cognitive system thinks is best,” she adds. Then the cognitive system will see what happens and learn from it, tweaking the models as necessary based on the results and new data it encounters. This also is where machine learning comes in.

IBM has yet to document payback and ROI data. Dillenberger, however, has spoken with early adopters.  The big promised payback, of course, will come from the new insights uncovered, and that payback will be as astronomical or as meager as your execution on those insights.

But there also is the promise of a quick technical payback for z data center managers. When the data resides on the z—a huge advantage for the z—you just run the analytics where the data is. In such cases you can realize up to 3x the performance, Dillenberger noted.  Even if you have to pull data from some other location too, you still run faster, maybe 2x faster. Other z advantages include large amounts of memory, multiple levels of cache, and multiple I/O processors that get at data without impacting CPU performance.

When the data and IBM’s cognitive system reside on the z, you can save significant money. “ETL consumed huge amounts of MIPS. But when the client did it all on the z, it completely avoided the costly ETL process,” Dillenberger noted. As a result, that client reported savings of $7-8 million a year by completely bypassing the x86 layer and ETL and running Spark natively on the z.
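The pattern Dillenberger describes, running the analytics where the data lives rather than ETLing it off to an x86 cluster, is ordinary Spark code; what changes is where the cluster runs. A minimal Java sketch, assuming a Spark installation on the z and a hypothetical z-resident file (the dataset path and column names are invented; IBM’s z/OS Spark offering also bundles connectors for reaching DB2, VSAM, and IMS data in place):

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class InPlaceAnalytics {
    public static void main(String[] args) {
        // Spark running natively on the z: no extract/transform/load hop
        // to an x86 cluster; the data never leaves the platform.
        SparkSession spark = SparkSession.builder()
                .appName("z-resident-analytics")
                .getOrCreate();

        // Hypothetical z-resident transaction file.
        Dataset<Row> txns = spark.read()
                .option("header", "true")
                .option("inferSchema", "true")
                .csv("/data/transactions.csv");

        // Aggregate in place, where the data already resides.
        txns.groupBy("branch")
            .sum("amount")
            .show();

        spark.stop();
    }
}
```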

As Dillenberger describes it, cognitive computing on the z is here now, able to deliver a payback fast, and an even bigger payback going forward as you execute on the insights it reveals. And you already have a z, the only on-premises way to IBM’s Cognitive System.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

BMC Mainframe Survey Confirms z System Is Here to Stay

November 11, 2016

No surprise there. BMC’s 11th annual mainframe survey covering 1,200 mainframe executives and tech professionals found 58% of respondents reported usage of the mainframe is increasing as they look to capitalize on every infrastructure advantage it provides and add more workloads. Another 23% consider the mainframe as the best option to run critical work.

IBM z10

Driving the continuing interest in the mainframe are the new demands for data handling, scalable processing, analytics, and more. According to the BMC survey nearly 60% of companies are seeing increased data and transaction volumes. They opt to stay with the mainframe for its highly secure, superior data handling and transaction serving, particularly as digital business adds unpredictability and volatility to workloads.

Overall, respondents fell into three primary groups: 1) entrenched mainframe shops, the 58% that are on board for the long haul; 2) the 23% that intend to maintain a steady amount of work on the mainframe; and 3) the 19% that are moving away from the mainframe.  The first two groups, the committed mainframe shops, amount to just over 80% of the respondents.

Many companies surveyed are focused on addressing the increased workload demands, especially the rapidly growing demand for new applications. But surprisingly, the survey does not directly touch on hybrid cloud, cognitive computing, or any of the latest technologies IBM has been promoting, not even DevOps, which can streamline mainframe application development and deployment. “We are not hearing much about hybrid cloud environments or blockchain yet. Most companies seem to be in the early tire-kicking stage,” observed John McKenny, BMC Vice President, Strategy and Operations.

Eighty-eight percent of companies in the first group, entrenched mainframe shops, for example, are looking to increase the workloads they run in Java on the mainframe, primarily to address new application demands. It doesn’t hurt that Java on the mainframe also can help lower data center costs by directing workloads to lower-cost assist processors.

Other interesting BMC survey findings:

  • Half of the respondents report keeping 50% of their data on the mainframe and continue to invest in the platform for reasons you already know—security, availability, data serving capability
  • Continued steady growth of Linux in production on the z: 41% in 2014, 48% in 2015, 52% in 2016
  • Increased use of Java on the mainframe reported, with 67% of respondents citing the need to meet growing application demand

Those looking to reduce mainframe presence cited three reasons: 1) perception of high cost, 2) outdated management understanding, and 3) looking for ways to reduce workloads over time.  DancingDinosaur has spoken with mainframe shops intending to migrate off the z and they cite the usual reasons, especially #1 above.

Top mainframe priorities for 2016, according to the BMC survey: cost reduction/optimization (65%); data privacy, compliance, and security (50%); application availability (49%); and application modernization (41%). Responses indicated the priorities for next year haven’t changed at all.

Surprisingly, many of the latest technologies for the z that IBM has touted recently have not yet shown up in the BMC survey responses, except maybe Java and Linux. This would include hybrid clouds, blockchain, IoT, and cognitive computing. IDC, for example, already is projecting cognitive computing to grow at a CAGR of 55.1% from 2016 to 2020. For z shops, however, cognitive computing appears almost invisible.

In some cases with surveys like this you need to read between the lines. Where respondents report changes in activity levels driving application growth, growing interest in Java, more frequent application changes, or references to operational analytics, they’re making oblique references to mobile or big data or even cognitive computing or other recent technologies for the z.

At its best, BMC notes that digital technologies are transforming the ways in which mainframe shops conduct business and interact with their customers.  Adds Frank Cortell, Credit Suisse Director of Information Technology, a BMC mainframe customer: “IT departments are moving toward centralized, virtualized, and highly automated environments. This is being pursued to drive cost and processing efficiencies. Many companies realize that the Mainframe has provided these benefits for many years and is a mature and stable environment.”

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

 

Compuware Continues Mainframe GUI Tool Enhancements

July 1, 2016

Early in 2015 Compuware announced the first in what it promised would be a continuing stream of new mainframe tools and tool enhancements. Did anyone really believe them? Mainframe ISVs are not widely regarded for their fast release cycles. DancingDinosaur reported on it then here and has continued to follow up and report its progress through a handful of new releases. This past week, DancingDinosaur received new Compuware mainframe tool announcements. For a mainframe ISV this is almost unheard of. IBM sometimes releases new mainframe products in intense spurts but then quickly resumes its typical languid release pace.

Screen from Compuware’s ISPW for Continuous Delivery to the Mainframe

Let’s take a look at each of these new releases. First, ISPW Deploy, an advanced mainframe release automation solution that enables large enterprises to bring continuous delivery best practices to their IBM z/OS environments. ISPW Deploy, built on the ISPW technology Compuware acquired in January 2016, facilitates faster and more reliable mainframe software deployment. Specifically, it helps, according to Compuware, in three ways, through:

  1. Automation that rapidly moves code through the deployment process, including test staging and approvals, while also providing greatly simplified full or partial rollbacks.
  2. Visualization that enables DevOps managers to quickly pinpoint deployment issues in order to both solve immediate rollout problems and address persistent bottlenecks in code promotion.
  3. Integrations with both third-party solutions and Compuware’s own industry-leading mainframe toolkit that allow IT to build complete SCM-to-production DevOps pipelines and to quickly launch associated remediation support tools if and when deployment issues occur.

Compuware is further empowering enterprises to achieve mainframe agility through integrations. For instance, integrating its ISPW with XebiaLabs’ cross-platform continuous delivery solutions enables IT organizations to orchestrate and visualize their mainframe DevOps processes in a common manner with their broader cross-platform DevOps automation.

The second announcement focused on XebiaLabs, as noted above. The idea here is to deliver cross-platform continuous delivery for the mainframe. As Compuware explained, enterprises using XebiaLabs’ solution suite and Compuware ISPW can now automate and monitor all phases of mainframe DevOps within the same continuous delivery management environment they use for their distributed, web, and cloud platforms. This automation and monitoring includes test/QA, pre-copy staging, and code promotion. The goal, as with all DevOps, is to speed digital agility, for mainframe or distributed systems or both.

The third announcement concerned a partnership between Compuware and ConicIT that aims to help a new generation of IT ops staff proactively resolve emerging mainframe issues before they impact application service levels. It does so by integrating ConicIT’s predictive mainframe analytics with Compuware’s Strobe, which provides visually intuitive troubleshooting intelligence. Together, the two companies promise to enable even IT staff with relatively little hands-on mainframe experience to quickly identify and resolve a wide range of application performance problems.

The key to doing this is a reliance on the adoption of intuitive GUI interfaces. Compuware started this with its Topaz tools and has been continuing along this path for two years. Compuware’s CEO, Chris O’Malley, has been harping on these themes almost since he first arrived there.

Compuware customers apparently have gotten the message. As reported: “Market pressures are making it essential for us to deliver quality products and services to our clients more frequently, and the mainframe plays a critical role in that delivery,” according to Craig Danielson, Assistant Vice President for Commerce Bank. “We leverage ISPW to help in this capacity and its new capabilities will provide us the automation and visibility of our software deployment process to help us continuously improve our internal operations and services.” (note: DancingDinosaur did not validate this customer statement.)

Companies will need all the help modern mainframe tools can deliver. Mainframe data centers are facing unprecedented challenges that require unusual speed and agility. In short, they need DevOps fast. And they will have to respond with an increasingly aging core of experienced mainframe staff supplemented by millennials who have to be coaxed and cajoled onto the mainframe with easy graphical tools. If mainframe data centers can’t respond to these challenges—not just cloud, mobile, Linux, and analytics, but also IoT, blockchain, cognitive computing, and whatever else comes along next—how are they going to cope? Already their users, the line of business managers, are turning to shadow IT out of frustration with the slow response from the mainframe data centers. And you know what comes next.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM to Acquire EZSource to Bolster IBM z System for Digital Transformation

June 9, 2016

Over the past two years you have been reading in DancingDinosaur about the new tricks your z System can do—real-time analytics, IoT, blockchain, and more. This is part of the digital transformation that is blowing through enterprises everywhere. EZSource facilitates and simplifies how you can play in these new areas. See the IBM announcement here.

EZSource Dashboard, Credit: EZSource

EZSource expedites digital transformations by unlocking core business logic and apps. Specifically, it will pinpoint your valuable mainframe code assets in preparation for leveraging them through a hybrid cloud strategy. In the process it will enable the understanding of business-critical assets in preparation for deployment of a z-centered hybrid cloud. This also enables enterprise DevOps, which is necessary to keep up with the pace of change overtaking existing business processes.

Specifically, this can entail the need to:

  • Identify API candidates to play in the API economy
  • Embrace microservices to deliver versatile apps fast
  • Identify code quality concerns, including dead code, to improve reliability and maintainability
  • Mitigate risk of change through understanding code, data and schedule interdependencies
  • Aid in the sizing of a change effort
  • Automate documentation to improve understanding
  • Reduce learning curve as new people are on-boarded
  • Add application understanding to DevOps lifecycle information to identify opportunities for work optimization

Managers at z data centers often shy away from modifying aging business-critical applications for fear of breaking something; if it ain’t broke, don’t fix it often is the mantra. They also are rationing the use of their few remaining experienced z veterans with the domain expertise and deep knowledge of what turns out to be quite esoteric code.  This is further aggravated by poorly documented mainframe code. The way to mitigate the risk of change is through understanding code, data, and interdependencies. EZSource can handle this visually and with ease; you no longer need to be an experienced z code expert.

So what do you have to do to get on the digital transformation bandwagon? Start by identifying your mainframe assets that are most often accessed. Most of them will be what the mobile apps are calling, usually a CICS routine or two or three. Then expose these business-critical services through APIs and microservices. This may require re-writing parts of them in a platform-agnostic language such as Java to work within the context of a hybrid cloud. As noted just above, EZSource can help with much of this too.
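What “exposing a CICS routine through an API” looks like in practice: one common shape is a thin Java REST facade in front of the mainframe call. This is an illustrative sketch only, not IBM’s or EZSource’s prescribed method; the CicsGateway client and program name are hypothetical stand-ins for whatever connectivity layer (CICS Transaction Gateway, z/OS Connect, etc.) actually fronts the routine.

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// A thin JAX-RS facade over a business-critical mainframe routine.
@Path("/accounts")
public class AccountResource {

    private final CicsGateway gateway = new CicsGateway();

    @GET
    @Path("/{id}/balance")
    @Produces(MediaType.APPLICATION_JSON)
    public String balance(@PathParam("id") String accountId) {
        // The COBOL/CICS routine stays where it is; only the interface
        // is modernized for mobile and cloud callers.
        return gateway.call("ACCTBAL", accountId);
    }
}

// Hypothetical stand-in for the real connectivity layer.
class CicsGateway {
    String call(String program, String arg) {
        return "{\"account\":\"" + arg + "\",\"balance\":1250.75}";
    }
}
```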

In short, EZSource performs app discovery, which facilitates code quality improvement. It helps clean up code. It also applies analytics to DevOps, in effect enabling Cognitive DevOps, which makes sense in the dynamic hybrid cloud. The result: you focus only on the relevant code and, of that, what is particularly problematic.

The goal is to increase competitiveness and business innovation through digital forms of engagement; the engagement currently being fueled by mobile, social, analytic, and cognitive computing in a hybrid cloud environment. The expectation is that you will be able to tap the accumulated software assets for insights while modernizing business critical applications already resident on the z. IBM contends that this is the fastest and most cost effective way to drive new value and agility and DancingDinosaur agrees.

Is it worth it?  Most DancingDinosaur readers probably already believe that the mainframe and its data and software assets sit smack at the center of a digital enterprise. (Just a glimpse of the growth of monthly software peak workload charges should confirm that). It makes no sense not to leverage this complex resource to the max. EZSource, with its smart code analytics and visual capabilities, can save thousands of hours of work, avoid mistakes, and speed the delivery of the kind of slick new hybrid cloud apps that customers are demanding.  EZSource is primarily a Linux, Windows, and Java tool with only a few pieces residing on the z to handle necessary z-specific connectivity.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

BMC MainView for Java Manages Critical Digital Business

May 16, 2016

A large European financial services firm increasingly handles a lot of critical transaction functions with Java running through CICS and WebSphere.  As the firm looks forward, its managers see Java playing a bigger and more critical role in its core business as it shifts more of its business functionality to agile programming in Java. This firm is not even thinking about abandoning its workhorse COBOL code ever, but all new work is being directed to Java.


With that in mind, BMC last week announced MainView for Java Environments, part of BMC’s MainView integrated systems management suite of tools that provides insight into how Java is consuming resources and affecting application performance on the z System.  It is no surprise, therefore, that the firm became an early beta user for MainView for Java Environments.

According to a recent BMC survey, 93% of mainframe organizations said Java usage is growing or steady, and Java is the language of choice for writing new or rewriting existing mainframe applications. BMC MainView for Java Environments provides insight into Java resource usage and how it impacts other workloads and applications. For example, it automatically discovers all the Java Virtual Machines (JVMs) across z/OS. That alone can help with identifying performance problems and finding and fixing them fast.

Java is the key to both performance and cost savings through running on zIIP assist processors. Java workloads, however, can affect performance and availability on the mainframe, as they consume system resources without regard for the needs of other applications or services, which is another reason why the zIIP is essential. Also, an integrated management approach gives IT operations a holistic view of the environment to quickly and easily discover JVMs and to manage the effect of their resource consumption on application performance.

Java was the first object-oriented programming language DancingDinosaur tried.  Never got good enough to try it on real production work, but here’s what made it appealing: it is fully object oriented, produces truly portable write-once, run-anywhere code (mainly because it compiles to Java virtual machine bytecode), and has automatic garbage collection. For a run-of-the-mill programmer, Java was a joy compared to C or, ugh, COBOL. Some of the new languages becoming popular today, the ones driving mobile and cloud and analytics apps, look even easier, but DancingDinosaur would feel too embarrassed to sit in a programming class with twenty-somethings the age of his daughters.

Java usage today, according to the BMC survey, is growing or steady, and Java has become the language of choice for writing new or rewriting existing mainframe applications. The only drawback may be that Java workloads can affect performance and resource availability on the mainframe, as JVMs consume system resources oblivious to the needs of other applications or services or to the cost of uncontrolled resource consumption, which is what unrestrained Java produces. An integrated management approach that allows a holistic view of the environment can quickly and easily discover JVMs and constrain the effect of their resource consumption on application performance, offsetting that drawback.

Explained Tim Grieser, program vice president at IDC’s Enterprise System Management Software: “Since Java manages its own resources it can consume excessive amounts of processor time and memory resources leading to performance or availability problems if not proactively managed.” The key phrase being proactively managed.  BMC’s MainView for Java Environments promises exactly that kind of proactive management by monitoring z/OS Java runtime environments and providing a consolidated view of all resources being consumed. This will enable system admins and operators to identify and manage performance issues before they impact end users.
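MainView’s collection and consolidation mechanics are proprietary, but the kind of per-JVM numbers involved (heap in use, system load, cumulative garbage collection cost) are exactly what the JVM’s standard JMX management beans expose. A minimal sketch of a JVM reporting its own resource consumption; a monitoring product gathers the same data remotely across every JVM it discovers and correlates it with the rest of the system:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.OperatingSystemMXBean;

public class JvmResourceReport {
    public static void main(String[] args) {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();

        // Heap consumption: what an unmanaged JVM quietly grows.
        System.out.printf("heap used: %d MB%n",
                mem.getHeapMemoryUsage().getUsed() / (1024 * 1024));

        // Overall system load as the JVM sees it (-1 if unavailable).
        System.out.printf("system load average: %.2f%n",
                os.getSystemLoadAverage());

        // Cumulative garbage collection cost, one line per collector.
        for (GarbageCollectorMXBean gc :
                ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("%s: %d collections, %d ms%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
    }
}
```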

“Java on the mainframe is being used to develop and deploy new applications faster and more economically to meet dynamically changing digital business needs and to take advantage of widely available programming skills,” IDC’s Grieser continued. Something like BMC’s MainView for Java Environments can be used to constrain Java. IBM’s Omegamon can fulfill a similar function.

According to the financial firm’s beta test manager, BMC’s MainView for Java Environments effectively unlocks Java’s potential on the mainframe, vital in a changing application and systems environment, as part of an integrated performance management solution that discovers and monitors JVMs. As such, it provides a single graphical console that enables you to quickly understand a Java application’s impact on resources and its effect on the performance of other applications and transactions. The solution promises to improve application performance and ensure availability while reducing Mean Time to Repair (MTTR) and lowering Monthly License Charges (MLC) by monitoring zIIP offloading, which is the key to both performance and cost management.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer and occasional wanna-be programmer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

State of z System CICS in the Modern Enterprise

March 25, 2016

You should be very familiar with the figures describing the continued strength of mainframe computing in the enterprise today. Seventy percent of enterprise data resides on a mainframe, 71 percent of all Fortune 500 companies run their core businesses on the mainframe, and 92 of the top 100 banks rely on the mainframe to provide at-your-fingertips banking services to their customers (many via mobile).  CICS, according to IBM, handles 1.1 million transactions every second, every day. By comparison, Google handles a mere 59,421 searches every second.

CICS at IBM Interconnect 2015

H&W, a top mainframe ISV, recently released its State of CICS in the Modern Enterprise study. Find a copy of the study here.  For starters, it found that nearly two-thirds of respondents run 51-100% of their business-critical applications online through CICS. Within government, 32% of respondents reported running 75-100% of business-critical applications through CICS.

A different study suggests that CICS applications handle more than 30 billion transactions per day and process more than $1 trillion worth of business each week. Mainframe data also still drives information systems worldwide. Approximately 60 percent of organizations responding to a 2013 Arcati survey said they manage 40 to 100 percent of their enterprise data on the mainframe.

Integrating legacy systems is a strategy mainframe sites continue to adopt. In fact, 74 percent of respondents in that survey said specifically they are web-enabling CICS subsystems. However, as organizations pursue this strategy, challenges can include unlocking the data, keeping the applications and data available to users, and maintaining data integrity in an efficient and cost-effective manner. Nothing new for data center managers about this.

According to the H&W study, online CICS usage has gone up in the last three years: the share of respondents running over half of their business-critical applications through CICS rose from 54% to 62% in 2015. Maybe people will finally stop talking about the mainframe heading toward extinction.

CICS also has carved out a place on the web and with mobile. Sixty-five percent of respondents say at least some of their business-critical applications are available via PC, phone, tablet, and web-based interfaces, while 11% more reported plans to mobile- and web-enable their mainframe apps in the future. Thirteen percent reported no plans to do so. Government sector respondents reported that they were significantly more likely not to make their applications available for online access; so much for open government and transparency.

CICS availability raised no concerns, although a few respondents were concerned with performance. Based on the 2012 study results, some predicted that companies would be moving away from CICS by now. These predictions, apparently, have not come to pass, at least not yet.

In fact, as far as the future of CICS, the technology seems to be facing a remarkably stable outlook for the next 3-5 years. The largest number of respondents, 37%, expected the number of CICS applications to remain the same in that period while 34% said they would be decreasing. More encouragingly, 27% of respondents planned to increase their number of CICS applications accessible online. In the financial services segment, 38% planned to increase the number of online CICS applications while only 10% expected to decrease the number of online applications. Given the demands by banking customers for mobile apps the increase in the number of CICS applications makes perfect sense.

The researchers concluded that CICS continues to play an important role for the majority of mainframe shops surveyed and an increasingly important role for a significant chunk of them.  The respondents also reported that, in general, they were satisfied with CICS performance even in the face of increasingly complex online workloads.

Mainframe CICS may see even more action going forward depending on what companies do with Internet of Things. As with mobile traffic, companies may turn to CICS to handle critical aspects of backend IoT activity, which has the potential to become quite large.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

Exploiting the IBM z13 for Maximum Price/Performance Advantage

February 4, 2016

The z13 is the most powerful general purpose computer IBM has ever made. The key to capturing the maximum value from the z13, however, lies in how you plan, design, configure, and optimize your systems and software for everything from COBOL and Java to process parallelization and analytics. What you do in this regard will have significant impact on not only the price/performance you experience but on your success at achieving the business outcomes you are expecting.

IBM System z13

This really becomes a software configuration challenge. By tapping approximately 600 internal processors, IBM already has optimized the hardware, input, output, memory, and networking/communications about as much as they can be. Your job is to optimize the software you are running, which will require working closely with your ISV.

The place to start is by leveraging the z13’s new compiler technology, parallelism, zIIP and assist processors. This will enable you to save significant money while boosting workload performance. You will literally be doing more for less.

Similarly, in the not too distant past Moore’s Law would virtually guarantee a 15-20% price/performance gain automatically just by taking a new machine out of the box and plugging it in. That’s no longer the case. Now you will have to partner with your ISV to exploit advanced software to maximize the hardware payback and continue the ride along the favorable Moore’s Law price/performance slope.

Then look at the latest COBOL V5.x and its compiler on the z13. Out of the box it is better optimized than previous compilers. In general, the strategic value of COBOL V5.x comes from migrating high CPU usage programs as quickly as possible, effectively saving organizations considerable money by running optimized code.

Some organizations report a 15% on average reduction of CPU time, which adds up to significant savings in monthly CPU charges. How significant? Up to $150k less on a $1 million bill, with some reporting even higher percentage reductions producing even greater savings. Just migrate to COBOL V5.2 (or at least V5.1) to achieve the savings. In general, staying on the software curve with the latest releases of the OS, languages, and compilers with applications optimized for them is the best way to ensure your workloads are achieving top performance in the most cost-effective way.

For example, the new z13 processor leverages a new Vector Facility for certain COBOL statements and expands the use of the Decimal Floating Point Facility for packed decimal calculations. Well-structured, compute-intensive batch applications running on the z13 and compiled with the Enterprise COBOL V5.2 compiler have shown CPU usage reductions of up to 14% over the same applications running on the zEC12 (compiled with the GA release of Enterprise COBOL V5.1), according to IBM. The result: improved workload price/performance.

Enterprise COBOL V5.2 also includes new features to improve programmability, developer productivity, and application modernization. Supporting JSON, for instance, will provide mobile applications easy access to data and the processing they need from business critical production applications written in COBOL.

The z13 and its z siblings, the latest LinuxONE dedicated Linux models, were designed and optimized from the start for cloud, mobile, and analytics. They were intended to run alongside traditional mainframe workloads, with z/OS or Linux running on the appropriate models.

Finally, plan to take advantage of the new assist processors and expanded memory capacity to further boost performance and lower cost. With the z13, there is a mandatory migration of all zAAP-enabled applications to zIIP. Expect the usage of the zIIP assist processors to surge when all those Java applications move from the zAAP.  ISVs like Compuware should be able to help with this.  In addition, if you enable SMT on the z13, you’ll immediately get more Java capacity.  Applications that run under IBM WebSphere (WAS) on z/OS will benefit too.

The z13 and especially the LinuxONE are breaking new ground. IBM has established, in conjunction with the Linux Foundation, an Open Mainframe Project to support and advance ongoing open source Linux innovation on the mainframe. IBM also is breaking with its traditional mainframe pricing model by offering a pay-per-use option in the form of a fixed monthly payment with costs scaling up or down based on usage. It also offers per-core pricing with software licenses for designated cores. See DancingDinosaur here.

An upcoming DancingDinosaur will look at more of the enhancements being added to these machines, including some of the latest LinuxONE enhancements like support for Google’s Go language and Cloudant’s NoSQL services. The message: the new z System can take you to the places you will want to be in this emerging cloud-mobile-analytics era.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

Making the IBM Mainframe Agile and Swift

December 7, 2015

Do you remember what the mainframe was like when you started on the mainframe career path? Today IBM blurs distinctions between the mainframe and distributed environments through Linux and Java as well as cloud and mobile delivery models.  Heck, you can run Windows natively on x86 cards in a zBX cabinet managed from a console on the z itself. Maybe it’s not the most efficient way to do it, and expect better ways to come from IBM, but it is doable now.

DevOps in the SDLC, Courtesy Seasoft

More than just interact, the z and distributed environment must productively and seamlessly integrate and interoperate to produce a streamlined development, test, and deployment process. Compounding the challenge: they must do it fast. Organizations can no longer wait for six-month or nine-month release cycles to introduce new capabilities. If capabilities cannot be introduced in just a few weeks max, opportunities and revenue can be lost.  Agile and batch teams have no choice; they must work together.

This calls for data center adoption of DevOps, a combination of development, testing, and operations. Already IBM has instituted DevOps on the z System. The idea of bringing agile and batch together—it almost sounds like an oxymoron or the punchline from a bad computer systems joke—no longer is farfetched. Welcome to the world of hybrid computing where what was once considered disparate and incompatible systems are being brought together, often on the same platform.

The latest generations of the mainframes have been fully hybrid-capable platforms, starting with the z10. They are capable of running mixed workloads concurrently, some of which previously belonged in the distributed platform world only. Today, a mainframe shop with the latest z13 can run traditional z/OS COBOL workloads right alongside Java and Linux workloads. Those with a zBX extension cabinet can even run Windows workloads too under the same unified mainframe management console.

If that sounds a little too kludgy for you, just jump into the cloud. From Bluemix in the cloud you can get to DevOps and find just about everything you need already there, including IBM’s StrongLoop acquisition for API management and microservices.

So now the idea of bringing batch and agile computing together on the mainframe platform doesn’t seem so farfetched.  And it won’t stop there. IBM has been doing its enterprise thing with Apple for about a year. Expect more coming.

That said, an agile mainframe/distributed DevOps environment will become increasingly necessary. How often do you release software? Previously, if an IT organization released new software every year or even every 18 months, customers were satisfied. Not anymore.  Today you can’t wait six months before the organization risks falling behind. LOB managers and customers won’t wait. There are too many competitors waiting for any chance to seize an advantage. Slow system refreshes and software updates just play into these competitors’ hands.

DevOps also is essential to the organization’s mobile strategy. Companies in every industry segment are deploying new mobile apps as fast as they can and then almost immediately updating them. For many of these mobile apps the mainframe is the back end, if not the middleware too. Each mobile request for information or to make a purchase or to schedule something triggers numerous back end processes that quickly make their way to the mainframe. It had gotten to the point where IBM had to discount mobile processing on the z or it would hinder mobile growth. DancingDinosaur covered it here.

Helping to drive mobile on the z, of course, is IBM’s relationship with Apple. Over the past year the two companies have been bringing out combined enterprise-mobile applications. Now Apple just announced that it is making its popular programming language, Swift, open source. It shouldn’t take much to get it onto Bluemix. Back in Sept. 2014  IBM announced it already had a preliminary version working through Bluemix.

Although Swift is known mainly for mobile client development, today it is described as combining the performance and efficiency of compiled languages with the simplicity and interactivity of popular scripting languages.  Apple’s Swift strategy seems to come right out of IBM’s recent playbook of embracing open source communities. You can get started at the Swift website, here.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

