Posts Tagged ‘CICS’

IBM Launches New IoT Collaborative Initiative

February 23, 2017

Collaboration partners can pull hundreds of millions of dollars in new revenue from IoT, according to IBM’s recent IoT announcement. Having reached what it describes as a tipping point in IoT innovation, the company now boasts over 6,000 clients and partners around the world, many of whom want to join its new global Watson IoT center to co-innovate. Already Avnet, BNP Paribas, Capgemini, and Tech Mahindra will collocate development teams at the IBM Munich center to work on IoT collaborations.


IBM Opens New Global Center for Watson IoT

The IBM center also will act as an innovation space for the European IoT standards organization EEBus.  The plan, according to Harriet Green, General Manager, IBM Watson IoT, Cognitive Engagement and Education (pictured above left), calls for building a new global IoT innovation ecosystem that will explore how cognitive and IoT technologies will transform industries and our daily lives.

IoT and, more recently, cognitive are naturals for the z System, and Power Systems have been the platform for natural language processing and cognitive computing since Watson won Jeopardy in 2011. With the latest enhancements IBM has brought to the z in the form of on-premises cognitive and machine learning, the z should assume an important role as it gathers, stores, and processes IoT data for cognitive analysis. DancingDinosaur first reported on this late in 2014 and again just last week. As IoT and cognitive workloads ramp up on z, don’t be surprised to see monthly workload charges rise.

Late last year IBM announced that car maker BMW will collocate part of its research and development operations at IBM’s new Watson IoT center to help reimagine the driving experience. Now, IBM is announcing four more companies that have signed up to join its special industry “collaboratories” where clients and partners work together with 1,000 Munich-based IBM IoT experts to tap into the latest design thinking and push the boundaries of the possible with IoT.

Let’s look at the four newest participants, starting with Avnet. According to IBM, Avnet, an IT distributor and global IBM partner, will open a new joint IoT Lab within IBM’s Watson IoT HQ to develop, build, demonstrate, and sell IoT solutions powered by IBM Watson. Working closely with IBM’s leading technologists and IoT experts, Avnet also plans to enhance its IoT technical expertise through hands-on training and on-the-job learning. Avnet’s team of IoT and analytics experts will also partner with IBM on joint business development opportunities across multiple industries, including smart buildings, smart homes, industry, transportation, medical, and consumer.

As reported by BNP Paribas, Consorsbank, its retail digital bank in Germany, will partner with IBM’s new Watson IoT Center. The company will collocate a team of solution architects, developers, and business development personnel at the Watson facility. Together with IBM’s experts, they will explore how IoT and cognitive technologies can drive transformation in the banking industry and help innovate new financial products and services, such as investment advice.

Similarly, global IT consulting and technology services provider Capgemini will collocate a team of cognitive IoT experts at the Watson center. Together they will help customers maximize the potential of Industry 4.0 and develop and take to market sector-specific cognitive IoT solutions. Capgemini plans a close link between its Munich Applied Innovation Exchange and IBM’s new Customer Experience zones to collaborate with clients in an interactive environment.

Finally, the Indian multinational provider of enterprise and communications IT and networking technology, Tech Mahindra, is one of IBM’s global systems integrators, with over 3,000 specialists focused on IBM technology around the world. The company will locate a team of six developers and engineers within the Watson IoT HQ to help deliver on Tech Mahindra’s vision of generating substantial new revenue based on IBM’s Watson IoT platform. Tech Mahindra will use the center to co-create and showcase new solutions based on the platform for Industry 4.0 and manufacturing, precision farming, healthcare, insurance and banking, and automotive.

To facilitate connecting the z to IoT, IBM offers a simple recipe. It requires four basic ingredients and four steps: Texas Instruments’ SensorTag, a Bluemix account, IBM z/OS Connect Enterprise Edition, and a back-end service like CICS. Start by exposing an existing z Systems application as a RESTful API; this is where z/OS Connect Enterprise Edition comes in. Then connect your SensorTag device to Watson IoT Quick Start. From there, connect the cloud to your on-premises hybrid cloud. Finally, enable the published IoT data to trigger a RESTful API. Sounds pretty straightforward, but—full disclosure—DancingDinosaur has not tried it, lacking the necessary pieces. If you try it, please tell DancingDinosaur how it works (info@radding.net). Good luck.
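That final step, an IoT reading triggering a RESTful API on the z, looks roughly like the sketch below. The URL, path, and payload fields are invented for illustration; z/OS Connect EE would put its own generated endpoint in front of the CICS program, and DancingDinosaur has not run this against a real system.

```python
import json
import urllib.request

# Hypothetical endpoint: z/OS Connect EE would expose the CICS-backed service
# behind a URL like this. Path and payload shape are illustrative only.
ZOS_CONNECT_URL = "https://zosconnect.example.com/iot/readings"

def build_event(device_id, temp_c):
    """Package a SensorTag-style reading as a JSON request body."""
    return json.dumps({"deviceId": device_id, "tempC": temp_c}).encode("utf-8")

def trigger_cics_service(device_id, temp_c):
    """POST a published IoT reading to the z/OS-hosted RESTful service."""
    req = urllib.request.Request(
        ZOS_CONNECT_URL,
        data=build_event(device_id, temp_c),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    # Needs a live endpoint; shown only to illustrate the final recipe step.
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

The point of the recipe is that the cloud side never sees CICS at all; it just POSTs JSON to a REST endpoint like any other web service.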

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM Cheers Beating Estimates But Losing Streak Continues

January 26, 2017

It has been 19 quarters since IBM reported positive revenue growth, but the noises coming out of IBM with the latest 4Q16 and full-year 2016 financials are upbeat: the company beat analyst consensus revenue estimates, and its strategic initiatives are starting to generate serious revenue. Although systems revenues were down again (12 percent), the accountants at least had something positive to say about the z: “gross profit margins improved driven by z Systems performance.”


EZSource: Dashboard visualizes changes to mainframe code

IBM doesn’t detail which z models were contributing, but you can guess they would be the LinuxONE models (Emperor and Rockhopper) and the z13. DancingDinosaur expects z performance to improve significantly in 2017, when a new z, heavily hinted at in the 3Q2016 results reported here, is expected to ship.

With its latest financials IBM is outright crowing about its strategic initiatives: fourth-quarter cloud revenues increased 33 percent. The annual exit run rate for cloud as-a-service revenue increased to $8.6 billion from $5.3 billion at year-end 2015. Revenues from analytics increased 9 percent, revenues from mobile increased 16 percent, and revenues from security increased 7 percent.

For the full year, revenues from strategic imperatives increased 13 percent. Cloud revenues increased 35 percent to $13.7 billion. The annual exit run rate for cloud as-a-service revenue increased 61 percent year over year. Revenues from analytics increased 9 percent, revenues from mobile increased 34 percent, and revenues from security increased 13 percent.

Of course, cognitive computing is IBM’s strategic imperative darling for the moment, followed by blockchain. Cognitive, for which IBM appears to use an expansive definition, is primarily a cloud play as far as IBM is concerned.  There is, however, a specific role for the z, which DancingDinosaur will get into in a later post. Blockchain, on the other hand, should be a natural z play.  It is, essentially, extremely secure OLTP on steroids.  As blockchain scales up it is a natural to drive z workloads.

As far as IBM’s financials go, the strategic imperatives indeed are doing well. Other business units, however, continue to struggle. For instance:

  • Global Business Services (includes consulting, global process services and application management) — revenues of $4.1 billion, down 4.1 percent.
  • Systems (includes systems hardware and operating systems software; remember, this is where the z and Power platforms reside) — revenues of $2.5 billion, down 12.5 percent. But as noted above, gross profit margins improved, driven by z Systems performance.
  • Global Financing (includes financing and used equipment sales) — revenues of $447 million, down 1.5 percent.

A couple of decades ago, when this blogger first started covering IBM and the mainframe as a freelancer writing for any technology publication that would pay real money, IBM was struggling (if $100 billion behemoths can be thought to struggle). The buzz among the financial analysts who followed the company was that IBM should be broken up into its parts and sold off. IBM didn’t take that advice, at least not exactly, but it did begin a rebound that included laying off tons of people and selling some assets. Since then it has invested heavily in things like Linux on z and open systems.

In December IBM SVP Tom Rosamilia talked about new investments in z/OS and z software like DB2, CICS, and IMS, and as best your blogger can tell, he is still there. (Rumors suggest Rosamilia is angling for Rometty’s job in two years.) If the new z actually arrives in 2017 and key z software is refreshed, then z shops can rest easy, at least for another few quarters. But whatever happens, you can follow it here.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

Compuware Continues Mainframe Software Renaissance

January 19, 2017

While IBM focuses on its strategic imperatives, especially cognitive computing (which are doing quite well according to the latest financial statement that came out today; more on that next week), Compuware is fueling a mainframe software renaissance on its own. Its two latest announcements bring Java-like unit testing to COBOL code via its Topaz product set and automate and intelligently optimize the processing of batch jobs through its acquisition of MVS Solutions. Both modernize and simplify the processes around legacy mainframe coding, hence the reference to a mainframe software renaissance.

compuware-total-test-graphic-process-flow-diagram

Let’s start with Compuware’s Topaz set of graphical tools. Since they are GUI-based, even novice developers can immediately validate and troubleshoot whatever changes, intended or inadvertent, they make to existing COBOL applications. Compuware’s aim for Topaz for Total Test is to eliminate any notion that such applications are legacy code that cannot be updated as frequently, and with the same confidence, as other types of applications. Basically, mainframe DevOps.

By bringing fast, developer-friendly unit testing to COBOL applications, the new test tool also enables enterprises to deliver better customer experiences—since to create those experiences, IT needs its Agile/DevOps processes to encompass all platforms, from the mainframe to the cloud. As a result, z shops can gain increased digital agility along with higher quality, lower costs, and dramatically reduced dependency on the specialized knowledge of mainframe veterans aging out of the active IT workforce. In fact, the design of the Topaz tools enables z data centers to rapidly introduce the z to novice mainframe staff, who become productive virtually from the start—another cost saver.
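Total Test’s internals are Compuware’s own, but the unit-testing discipline it brings to COBOL is the same one Java and Python developers take for granted: isolate a routine, pin its expected outputs, and let any inadvertent change fail loudly. A rough sketch of the pattern in Python (the interest routine stands in for an isolated COBOL paragraph and is invented for illustration):

```python
import unittest

def compute_interest(principal, annual_rate_pct, days):
    # Stand-in for an isolated COBOL paragraph: simple-interest calculation.
    return round(principal * (annual_rate_pct / 100) * (days / 365), 2)

class TestComputeInterest(unittest.TestCase):
    # Each test pins one behavior, so an unintended change breaks the build
    # instead of surfacing in production.
    def test_typical_loan(self):
        self.assertEqual(compute_interest(10000, 5.0, 365), 500.00)

    def test_zero_days_accrues_nothing(self):
        self.assertEqual(compute_interest(10000, 5.0, 0), 0.00)

# Run with: python -m unittest <module>
```

The value isn’t in the arithmetic; it’s that a novice can change the routine and know within seconds whether anything broke, which is exactly the confidence Topaz aims to bring to COBOL.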

Today, in 2017, does management still need to be reminded of the importance of the mainframe? Probably, even though many organizations—among them the world’s largest banks, insurance companies, retailers, and airlines—continue to run their business on mainframe applications, and recent surveys clearly indicate that situation is unlikely to change anytime soon. However, as Compuware points out, enterprises’ ability to quickly update those applications in response to ever-changing business imperatives is being hampered daily by manual, antiquated development and testing processes; the ongoing loss of specialized COBOL programming knowledge; and the risk, and associated fear, of introducing even the slightest defect into core mainframe systems of record. The entire Topaz design approach, from the very first tool, was to make mainframe code accessible to novices. That has continued every quarter for the past two years.

This is not just a DancingDinosaur rant. IT analyst Rich Ptak from Ptak Associates also noted: “By eliminating a long-standing constraint to COBOL Compuware provides enterprise IT the ability to deliver more digital capabilities to the business at greater speed and with less risk.”

Gartner, in its latest Predicts 2017, chimes in with its DevOps equivalent of your mother’s reminder to brush your teeth after each meal: “Application leaders in IT organizations should adopt a continuous quality culture that includes practices to manage technical debt and automate tests focused on unit and API testing. It should also automate test lab operations to provide access to production-like environments, and enable testing of deployment through the use of DevOps pipeline tools.” OK, mom; everybody got the message.

The acquisition of MVS Solutions, Compuware’s fourth in the last year, adds to the company’s collection of mainframe software tools that promise agile, DevOps, and millennial-friendly management of the IBM z platform—a continuation of its efforts to make the mainframe accessible to novices. DancingDinosaur covered these acquisitions in early December here.

Batch processing accounts for the majority of peak mainframe workloads at large enterprises, providing essential back-end digital capabilities for customer-, employee- and partner-facing mobile, cloud, and web applications. As demands on these back-end mainframe batch processes intensify in terms of scale and performance, enterprises are under increasing pressure to ensure compliance with SLAs and control costs.

These challenges are exacerbated by the fact that responsibility for batch management is rapidly being shifted from platform veterans with decades of experience in mainframe operations to millennial ops staff who are unfamiliar with batch management. They also find native IBM z Systems management tools arcane and impractical, which increases the risk of critical batch operations being delayed or even failing. Run incorrectly, the batch workloads risk generating excessive peak utilization costs.

The solution, notes Compuware, lies in its new ThruPut Manager, which promises automatic, intelligent optimized batch processing. In the process it:

  • Provides immediate, intuitive insight into batch processing that even inexperienced operators can readily understand
  • Makes it easy to prioritize batch processing based on business policies and goals
  • Ensures proper batch execution by verifying that jobs have all the resources they need and proactively managing resource contention between jobs
  • Reduces the organization’s IBM Monthly License Charges (MLC) by minimizing rolling four-hour average (R4HA) processing peaks while avoiding counterproductive soft capping
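The R4HA mechanics behind that last point are simple enough to sketch: MLC charges key off the month’s highest rolling four-hour average of MSU consumption, so rescheduling one badly timed batch spike can lower the whole bill. The MSU figures below are invented for illustration:

```python
from collections import deque

def rolling_4hr_peak(msu_by_hour):
    """Highest rolling four-hour average (R4HA) over hourly MSU readings,
    the figure IBM's monthly license charges key off."""
    window = deque(maxlen=4)  # only the last four hourly readings count
    peak = 0.0
    for msu in msu_by_hour:
        window.append(msu)
        peak = max(peak, sum(window) / len(window))
    return peak

# One batch spike at hour 3 drives the billable peak,
# even though the rest of the day is quiet.
print(rolling_4hr_peak([100, 110, 120, 400, 130, 120, 110]))  # 192.5
```

Shift that 400-MSU job into the quiet overnight hours and the billable peak drops toward the steady-state average, which is precisely the optimization ThruPut Manager automates.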

Run in conjunction with Strobe, Compuware’s mainframe application performance management tool, ThruPut Manager also makes it easier to optimize batch workload and application performance as part of everyday mainframe DevOps tasks. ThruPut Manager promises more efficiency and greater throughput, resulting in a shorter batch window and reduced demand on processing capacity. These benefits also support better cross-platform DevOps, since distributed and cloud applications often depend on back-end mainframe batch processing.

Now, go out and hire some millennials and bring fresh blood into the mainframe. (Watch for DancingDinosaur’s upcoming post on why the mainframe is cool again.)

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM Introduces New DS8880 All-Flash Arrays

January 13, 2017

Yesterday IBM introduced three new members of the DS8000 line, each an all-flash product.  The new, all-flash storage products are designed for midrange and large enterprises, where high availability, continuous up-time, and performance are critical.

ibm-flash-ds8888-mainframe-ficon

IBM envisions these boxes for more than the z’s core OLTP workloads. According to the company, they are built to provide the speed and reliability needed for workloads ranging from enterprise resource planning (ERP) and financial transactions to cognitive applications like machine learning and natural language processing. The solutions are designed to support cognitive workloads, which can be used to uncover trends and patterns that help improve decision-making, customer service, and ROI. ERP and financial transactions certainly constitute conventional OLTP but the cognitive workloads are more analytical and predictive.

The three products:

  • IBM DS8884 F
  • IBM DS8886 F
  • IBM DS8888 F

The F signifies all-flash.  Each was designed with High-Performance Flash Enclosures Gen2. IBM did not just slap flash into existing hard drive enclosures.  Rather, it reports undertaking a complete redesign of the flash-to-z interaction. As IBM puts it: through deep integration between the flash and the z, IBM has embedded software that facilitates data protection, remote replication, and optimization for midrange and large enterprises. The resulting new microcode is ideal for cognitive workloads on z and Power Systems requiring the highest availability and system reliability possible. IBM promises that the boxes will deliver superior performance and uncompromised availability for business-critical workloads. In short, fast enough to catch bad guys before they leave the cash register or teller window. Specifically:

  • The IBM DS8884 F—labelled the business-class offering—boasts the lowest entry cost for midrange enterprises (prices starting at $90,000 USD). It runs on an IBM Power Systems S822 with a 6-core POWER8 processor, 256 GB of cache (DRAM), 32 Fibre Channel/FICON ports, and 6.4–154 TB of flash capacity.
  • The IBM DS8886 F—the enterprise-class offering for large organizations seeking high performance—sports a 24-core POWER8 processor per S824. It offers 2 TB of cache (DRAM), 128 Fibre Channel/FICON ports, and 6.4–614.4 TB of flash capacity. That’s over half a petabyte of high-performance flash storage.
  • The IBM DS8888 F—labelled the analytics-class offering—promises the highest performance for faster insights. It runs on the IBM Power Systems E850 with a 48-core POWER8 processor, 2 TB of cache (DRAM), 128 Fibre Channel/FICON ports, and 6.4 TB–1.22 PB of flash capacity. Guess crossing the petabyte level qualifies it as an analytics and cognitive device, along with the bigger processor complex.

As IBM emphasized in the initial briefing, it engineered these storage devices to surpass the typical big flash storage box. For starters, IBM bypassed the device adapter to connect the z directly to the high performance storage controller. IBM’s goal was to reduce latency and optimize all-flash storage, not just navigate a simple replacement by swapping new flash for ordinary flash or, banish the thought, HDD.

“We optimized the data path,” explained Jeff Barber, IBM systems VP for HE Storage BLE (DS8, DP&R and SAN). To that end, IBM switched from a 1U to a 4U enclosure, runs on shared-nothing clusters, and boosted throughput performance. The resulting storage, he added, “does database better than anyone; we can run real-time analytics.” The typical analytics system—a shared system running Hadoop—won’t even come close to these systems, he added. With the DS8888, you can deploy a real-time cognitive cluster with minimal-latency flash.

DancingDinosaur always appreciates hearing from actual users. Working through a network of offices, supported by a team of over 850 people, Health Insurance Institute of Slovenia (Zavod za zdravstveno zavarovanje Slovenije), provides health insurance to approximately two million customers. In order to successfully manage its new customer-facing applications (such as electronic ordering processing and electronic receipts) its storage system required additional capacity and performance. After completing research on solutions capable of managing these applications –which included both Hitachi and EMC –the organization deployed the IBM DS8886 along with DB2 for z/OS data server software to provide an integrated data backup and restore system. (Full disclosure: DancingDinosaur has not verified this customer story.)

“As long-time users of IBM storage infrastructure and mainframes, our upgrade to the IBM DS8000 with IBM business partner Comparex was an easy choice. Since then, its high performance and reliability have led us to continually deploy newer DS8000 models as new features and functions have provided us new opportunities,” said Bojan Fele, CIO of Health Insurance Institute of Slovenia. “Our DS8000 implementation has improved our reporting capabilities by reducing time to actionable insights. Furthermore, it has increased employee productivity, ensuring we can better serve our clients.”

For full details and specs on these products, click here.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

z System-Power-Storage Still Live at IBM

January 5, 2017

A mid-December briefing by Tom Rosamilia, SVP, IBM Systems, reassured some that IBM wasn’t putting its systems and platforms on the backburner after racking up financial quarterly losses for years. Expect new IBM systems in 2017. A few days later IBM announced that Japan-based APLUS Co., Ltd., which operates credit card and settlement service businesses, selected IBM LinuxONE as its mission-critical system for credit card payment processing. Hooray!

linuxone-emperor-2

LinuxONE’s security and industry-leading performance will ensure APLUS achieves its operational objectives as online commerce heats up and companies rely on cloud applications to draw and retain customers. Especially in Japan, where online and mobile shopping has become increasingly popular, the use of credit cards has grown, with more than 66 percent of consumers choosing that method for conducting online transactions. And with 80 percent enterprise hybrid cloud adoption predicted by 2017, APLUS is well positioned to connect cloud transactions leveraging LinuxONE. Throw in IBM’s expansion of blockchain capabilities and the APLUS move looks even smarter.

The growth in spending by international visitors, IBM notes, and the emergence of FinTech firms in Japan have led to a diversification of payment methods to which the local financial industry struggles to respond. APLUS, which issues well-known credit cards such as T Card Plus, plans to offer leading-edge financial services by merging groups to achieve lean operations and improved productivity and efficiency. By updating its credit card payment system with LinuxONE infrastructure, APLUS will gain an advanced IT environment that supports its business growth by helping provide near-constant uptime. In addition to updating its server architecture, APLUS has deployed IBM storage to manage mission-critical data: the IBM DS8880 mainframe-attached storage, which delivers integration with IBM z Systems and LinuxONE environments.

LinuxONE, however, was only one part of the IBM Systems story Rosamilia set out to tell. There also is the z13s for encrypted hybrid clouds, the z/OS platform for Apache Spark data analytics, and even more secure cloud services via blockchain on LinuxONE, by way of Bluemix or on premises.

z/OS will get attention in 2017 too. “z/OS is the best damn OLTP system in the world,” declared Rosamilia. He went on to imply that enhancements and upgrades to key z systems were coming in 2017, especially CICS, IMS, and a new release of DB2. Watch for new announcements coming soon as IBM tries to push z platform performance and capacity for z/OS and OLTP.

Rosamilia also talked up the POWER story. Specifically, Google and Rackspace have been developing OpenPOWER systems for the Open Compute Project. He also cited new POWER LC servers running POWER8 with the NVIDIA NVLink interconnect, more innovations through the OpenCAPI Consortium, and the IBM-Nvidia collaboration to deliver PowerAI, part of IBM’s cognitive efforts.

As much as Rosamilia may have wanted to talk about platforms and systems, IBM continues to avoid using terms like systems and platforms. So Rosamilia’s real intent was to discuss z and Power in conjunction with IBM’s strategic initiatives. Remember these: cloud, big data, mobile, analytics. Lately, it seems, those initiatives have been culled down to cloud, hybrid cloud, and cognitive systems.

IBM’s current message is that IT innovation no longer comes from just the processor. Instead, it comes through scaling performance by workload and sustaining leadership through ecosystem partnerships. We’ve already seen some of the fruits of that innovation through the Power community. It would be nice to see some of that coming to the z too, maybe through the Open Mainframe Project. But that isn’t about z/OS. Any boost in CICS, DB2, and IMS will have to come from the core z team. The Open Mainframe Project is about Linux on z.

The first glimpse we had of this came last spring in a system dubbed Minsky, which was described back then by commentator Timothy Prickett Morgan. With the Minsky machine, IBM is using NVLink ports on the updated POWER8 CPU, which was shown in April at the OpenPOWER Summit and is making its debut in systems actually manufactured by ODM Wistron and rebadged, sold, and supported by IBM. The NVLink ports are bundled in a quad to deliver 80 GB/sec of bandwidth between a pair of GPUs and between each GPU and the updated POWER8 CPU.

The IBM version, Morgan describes, aims to create a very brawny node with very tight coupling of GPUs and CPUs so they can better share memory, have fewer overall GPUs, and more bandwidth between the compute elements. IBM is aiming Minsky at HPC workloads, according to Morgan, but there is no reason it cannot be used for deep learning or even accelerated databases.

Is this where today’s z data center managers want to go?  No one is likely to spurn more performance, especially if it is accompanied with a price/performance improvement.  Whether rank-and-file z data centers are queueing up for AI or cognitive workloads will have to be seen. The sheer volume and scale of expected activity, however, will require some form of automated intelligent assist.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

Compuware Acquires Standardware COPE IMS to Speed DevOps and Save Money

December 16, 2016

Compuware, in early December, acquired the assets of Standardware, the leading provider of IMS virtualization technology.  Standardware’s COPE reduces the considerable time, cost and technical difficulty associated with the development and testing of IMS systems, enabling z-based data centers to significantly increase their digital business agility while also enabling less mainframe-experienced staff to perform IMS-related DevOps tasks. In addition, it allows IMS to run as a virtualized image, saving significantly on software charges.


Standardware’s COPE IMS, courtesy of Compuware

All three Compuware acquisitions this year—Standardware, ISPW, Itegrations—aimed to facilitate mainframe code management or app dev. The company’s acquisition of ISPW brought source code management and release automation. Itegrations eased the migration to ISPW from CA Endevor. Now Standardware brings IMS virtualization technology.

IMS continues as a foundational database and transaction management technology for systems of record at large global mainframe enterprises, especially in industries such as banking, insurance, and airlines. Its stability, dependability, and high efficiency at scale make it particularly valuable as a back-end resource for high-traffic, customer-facing apps. IBM’s mainframe Information Management System (IMS) provides a hierarchical database and information management system with extensive transaction processing capabilities. It offers a completely different database model from the common relational model behind IBM’s DB2.

IBM touts IMS as the most secure, highest performing, and lowest cost hierarchical database management software for online transaction processing (OLTP). IMS is used by many of the top Fortune 1000 companies worldwide. Collectively these companies process more than 50 billion transactions per day through IMS, and they do so securely.
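The difference between the two models is easy to see in miniature. In IMS’s hierarchical model, child segments hang off a root segment and are reached by navigating down from the parent; in DB2’s relational model, the same facts live in flat tables joined on a key. A toy sketch (the segment and column names are invented):

```python
# IMS-style hierarchy: ORDER segments live under their parent CUSTOMER root.
customer_segment = {
    "CUSTNO": "C100",
    "NAME": "Acme Corp",
    "ORDERS": [  # child segment occurrences
        {"ORDNO": "O1", "AMOUNT": 250},
        {"ORDNO": "O2", "AMOUNT": 400},
    ],
}

# DB2-style relational: the same facts as two flat tables joined on CUSTNO.
customers = [("C100", "Acme Corp")]
orders = [("O1", "C100", 250), ("O2", "C100", 400)]

def total_hierarchical(cust):
    # Navigate down from the root segment to its children.
    return sum(order["AMOUNT"] for order in cust["ORDERS"])

def total_relational(custno):
    # Select child rows by foreign key instead of navigating.
    return sum(amt for (_ordno, c, amt) in orders if c == custno)

print(total_hierarchical(customer_segment), total_relational("C100"))  # 650 650
```

Hierarchical access is blazingly fast when your query follows the parent-child path the database was designed around, which is why IMS still anchors so much high-volume OLTP.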

As Compuware puts it, IMS remains a deeply foundational database and transaction management technology for systems of record at large global enterprises, especially in the core mainframe segments like financial services or transportation. Its stability, dependability and high efficiency ensure it can continue to play an important role as a back-end resource for high-traffic customer-facing apps. All that’s needed is to reduce the effort required to use it.

Conventional approaches to the development and testing of IMS systems, however, can be excessively slow, technically challenging, and expensive. That is too high a technical price to pay in today’s agile, fast-iteration app dev environment. For example, setting up IMS application development environments requires configuring dedicated IMS regions and databases, which is especially time-consuming; additional resources must be defined and compiled for each instance, and at every stage of development, testing, training, and systems integration. Worse yet, these tasks typically require experienced DBAs and system programmers with IMS-specific skills, an increasingly problematic and costly constraint given the generational shift underway in IT, which makes those skills increasingly rare.

As a result of these bottlenecks and resource constraints, large enterprises can find themselves far less nimble than their smaller competitors and unable to fully leverage their current IMS assets in response to digital requirements.  That leaves the mainframe shop at a distinct disadvantage.

Since COPE comes well integrated with Compuware Xpediter, an automated mainframe debugging tool, many such problems go away. Xpediter, which is interactive, can be used within the Standardware virtualized environment and COPE. When a problem occurs, developers can quickly set up an interactive test session with minimal effort and resolve the problem. When they’re done, they can confidently move the application into production. And now that Xpediter is integrated with COPE, IMS virtualization lets multiple developers debug application code in the same or different logical IMS systems within the virtualized COPE IMS environment.

And therein lies the savings for mainframe shops. As Tyler Allman, Compuware’s COPE product manager explains, COPE converts IMS to run in a virtual environment. It takes a COPE expert to set it up initially, but once set up, it can run as a logical IMS system with almost no ongoing maintenance, which results in administrative savings.

On the software side, IMS is licensed as part of the usual rolling four-hour average workload software billing. Once the environment has been virtualized with COPE, you can run multiple logical IMS regions at no additional cost. The savings experienced by mainframe data centers, Allman suggests, can amount to tens if not hundreds of thousands of dollars. These savings alone can justify COPE.
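The arithmetic behind that claim is straightforward, even if the real numbers vary by shop. If each separately billed IMS environment carries its own monthly software cost, collapsing them into logical regions inside one virtualized environment leaves a single charge. The dollar figures below are invented for illustration:

```python
def separate_environments_cost(num_envs, monthly_cost_per_env):
    # Each dev/test IMS environment billed on its own.
    return num_envs * monthly_cost_per_env

def cope_virtualized_cost(monthly_cost_per_env):
    # Extra logical IMS regions run inside one virtualized environment
    # at no additional software charge, per Compuware.
    return monthly_cost_per_env

per_env = 20_000  # hypothetical monthly cost of one IMS environment
savings = separate_environments_cost(6, per_env) - cope_virtualized_cost(per_env)
print(savings)  # 100000
```

Six environments collapsed to one at a hypothetical $20,000 apiece lands squarely in the "tens if not hundreds of thousands of dollars" range Allman describes.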

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM Discounts z/OS Cloud Activity

August 12, 2016

The latest iteration of IBM’s z/OS workload pricing aims to lower the cost of running cloud workloads. In a recent announcement, z Systems Workload Pricing for Cloud (zWPC) for z/OS seeks to minimize the impact of new public cloud workload transaction growth on Sub-Capacity license charges. IBM did the same thing with mobile workloads when they started driving up the 4-hour workload averages on the z. As more z workloads interact with public clouds, this should start to add up, if it hasn’t already.

bluemix garage -ni_5554516560

Bluemix Garages in the Cloud

As IBM puts it: zWPC applies to any organization that has implemented Sub-Capacity pricing via the basic AWLC or AEWLC pricing mechanisms for the usual MLC software suspects. These include z/OS, CICS, DB2, IMS, MQ and WebSphere Application Server (WAS).  An eligible transaction is one classified as Public Cloud-originated, connecting to a z/OS hosted transactional service and/or data source via a REST or SOAP web service.  Public cloud workloads are defined as transactions processed by named Public cloud application transactions identified as originating from a recognized Public Cloud offering, including but not limited to, Amazon Web Services (AWS), Microsoft Azure, IBM Bluemix, and more.

IBM appears to have simplified how you identify eligible workloads. As the company notes: zWPC does not require you to isolate the public cloud work in separate partitions, but rather offers an enhanced way of reporting. The z/OS Workload Manager (WLM) allows clients to use WLM classification rules to distinguish cloud workloads, effectively easing the data collection requirements for public cloud workload transactions.

So how much will you save? zWPC reportedly reduces eligible hourly values by 60 percent, producing an adjusted Sub-Capacity value for each reporting hour. What that translates into on your monthly IBM software invoice, once all the calculations and fine print are considered, amounts to a guess at this point. But at least you’ll save something. The first eligible billing under this program starts Dec. 1, 2016.
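To make the arithmetic concrete, here is a minimal sketch of the discount idea, assuming the announced 60 percent reduction applies to the MSU share attributable to eligible public-cloud transactions in a reporting hour. This is not IBM’s actual billing algorithm, just an illustration of the concept; the function name and sample figures are hypothetical.

```python
# Illustrative sketch (not IBM's actual billing calculation): zWPC reduces the
# hourly values attributable to eligible public-cloud transactions by 60%,
# yielding an adjusted Sub-Capacity value for each reporting hour.

CLOUD_DISCOUNT = 0.60  # assumption based on the announced 60% reduction


def adjusted_hourly_msu(total_msu, cloud_msu):
    """Return the reporting-hour MSU after discounting the cloud-eligible share."""
    other = total_msu - cloud_msu
    return other + cloud_msu * (1 - CLOUD_DISCOUNT)


# Hypothetical reporting hour: 500 MSU total, 200 MSU driven by cloud transactions
print(adjusted_hourly_msu(500, 200))  # 380.0
```

In this toy example, a reporting hour that would otherwise count as 500 MSU counts as only 380 MSU, since 60 percent of the 200 cloud-driven MSU is forgiven.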

DancingDinosaur expects IBM to eventually follow with discounted z/OS workload pricing for IoT and blockchain transactions and maybe even cognitive activity. Right now the volume of IoT and blockchain activity is probably too low to impact anybody’s monthly license charges. Expect those technologies ramp up in coming years with many industry pundits projecting huge numbers—think billions and trillions—that will eventually impact the mainframe data center and associated software licensing charges.

Overall, Workload License Charges (WLC) constitute a monthly software license pricing metric applicable to IBM System z servers running z/OS or z/TPF in z/Architecture (64-bit) mode. The driving principle of WLC amounts to pay-for-what-you-use, a laudable concept. In effect it lowers the cost of incremental growth while further reducing software costs through proactive management of peak workload utilization.
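The pay-for-what-you-use principle hinges on the peak rolling four-hour average (R4HA) of MSU consumption, which drives the monthly bill. The sketch below shows the rolling-average idea, with the simplifying assumption of hourly samples; real sub-capacity reporting via SCRT works from SMF data at finer granularity.

```python
# Sketch of the rolling four-hour average (R4HA) concept behind sub-capacity
# WLC billing: the bill is driven by the peak 4-hour average utilization, not
# total consumption. Hourly samples here are a simplification of real SMF data.

def peak_r4ha(hourly_msu):
    """Peak rolling 4-hour average over a list of hourly MSU samples."""
    window = 4
    return max(
        sum(hourly_msu[i:i + window]) / window
        for i in range(len(hourly_msu) - window + 1)
    )


# A quiet day with one busy afternoon spike dominating the bill
samples = [100, 120, 110, 105, 400, 420, 410, 390, 150, 130]
print(peak_r4ha(samples))  # 405.0
```

This is also why the cost-reduction mini-industry mentioned above focuses on smoothing or shifting workload peaks: flattening that one four-hour spike lowers the billable number even if total consumption stays the same.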

Generally, DancingDinosaur applauds anything IBM does to lower the cost of mainframe computing.  Playing with workload software pricing in this fashion, however, seems unnecessary. Am convinced there must be simpler ways to lower software costs without the rigmarole of metering and workload distribution tricks. In fact, a small mini-industry has cropped up among companies offering tools to reduce costs, primarily through various ways to redistribute workloads to avoid peaks.

Modifications to WLC, namely the variable WLC (VWLC) along with the AWLC (Advanced) and EWLC (Entry) variants, align with most of the z machines introduced over the past couple of years. The result, according to IBM, is a granular cost structure based on MSU (CPU) capacity that applies to VWLC and associated pricing mechanisms.

From there you can further tweak the cost by deploying Sub-Capacity and Soft Capping techniques. Defined Capacity (DC), according to IBM, allows the sizing of an LPAR in MSU such that the LPAR will not exceed the designated MSU amount. Group Capacity Limit (GCL) extends the Defined Capacity principle for a single LPAR to a group of LPARs, allowing MSU resources to be shared accordingly. BTW, a potential downside of GCL is that one LPAR in the group can consume all available MSUs due to a rogue transaction. Again, an entire mini industry, or maybe not so mini, has emerged to help handle workload and capacity pricing on the z.
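The GCL caveat is easy to see in a toy model. The sketch below is not PR/SM’s actual weighting logic, just a first-come allocation that illustrates how a group cap bounds the LPARs collectively, so a single runaway LPAR can soak up MSUs the others expected to share. The LPAR names and figures are hypothetical.

```python
# Toy illustration (not actual PR/SM weighting logic) of the GCL caveat: the
# group cap bounds the LPARs collectively, so one rogue LPAR can starve the rest.

def allocate_group_capacity(demands, group_limit):
    """Grant each LPAR its demand in order until the group limit is exhausted."""
    grants, remaining = {}, group_limit
    for lpar, demand in demands.items():
        grants[lpar] = min(demand, remaining)
        remaining -= grants[lpar]
    return grants


# Normally three LPARs share a 300 MSU group cap comfortably...
print(allocate_group_capacity({"PROD": 100, "TEST": 80, "DEV": 60}, 300))
# ...but a rogue transaction in PROD leaves almost nothing for the others.
print(allocate_group_capacity({"PROD": 290, "TEST": 80, "DEV": 60}, 300))
```

In the second run, TEST gets only 10 MSU and DEV gets none, exactly the failure mode the post warns about.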

At some point in most of the conference pricing sessions the eyes of many attendees glaze over.  By Q&A time the few remaining pop up holding a copy of a recent invoice and ask what the hell this or that means and what the f$#%@#$ they can do about it.

Have to admit that DancingDinosaur did not attend the most recent SHARE conference, where pricing workshops can get quite energetic, so cannot attest to the latest fallout. Still, the general trend with mobile and now with cloud pricing discounts should be lower costs.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM to Acquire EZSource to Bolster IBM z System for Digital Transformation

June 9, 2016

Over the past two years you have been reading in DancingDinosaur about the new tricks your z System can do—real time analytics, IoT, Blockchain, and more. This is part of the digital transformation blowing through enterprises everywhere. EZSource facilitates and simplifies how you can play in these new areas. See the IBM announcement here.

zOS Security2 PNG ez Complexity

EZSource Dashboard, Credit: EZSource

EZSource expedites digital transformation by unlocking core business logic and apps. Specifically, it will pinpoint your valuable mainframe code assets in preparation for leveraging them through a hybrid cloud strategy. In the process it enables the understanding of business-critical assets in preparation for deployment of a z-centered hybrid cloud. This also enables enterprise DevOps, which is necessary to keep up with the pace of change overtaking existing business processes.

Specifically, this can entail the need to:

  • Identify API candidates to play in the API economy
  • Embrace micro services to deliver versatile apps fast
  • Identify code quality concerns, including dead code, to improve reliability and maintainability
  • Mitigate risk of change through understanding code, data and schedule interdependencies
  • Aid in the sizing of a change effort
  • Automate documentation to improve understanding
  • Reduce learning curve as new people are on-boarded
  • Add application understanding to DevOps lifecycle information to identify opportunities for work optimization
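The dead-code identification in that list rests on a well-known static-analysis technique: build a call graph of the code base, walk it from the known entry points, and flag whatever is never reached. EZSource’s actual analysis is proprietary and far richer; the sketch below, with a hypothetical COBOL module call graph, shows only the core idea.

```python
# The dead-code idea in miniature: anything unreachable from an entry point in
# the call graph is a candidate for removal. (EZSource's real analysis also
# accounts for data and schedule interdependencies.)

from collections import deque


def find_dead_code(call_graph, entry_points):
    """Return the set of modules unreachable from any entry point."""
    reached, queue = set(entry_points), deque(entry_points)
    while queue:
        module = queue.popleft()
        for callee in call_graph.get(module, []):
            if callee not in reached:
                reached.add(callee)
                queue.append(callee)
    return set(call_graph) - reached


# Hypothetical COBOL module call graph
graph = {
    "ORDERTXN": ["VALIDATE", "PRICING"],
    "VALIDATE": ["DATEFMT"],
    "PRICING": [],
    "DATEFMT": [],
    "OLDRPT": ["DATEFMT"],   # nothing calls OLDRPT anymore
}
print(find_dead_code(graph, ["ORDERTXN"]))  # {'OLDRPT'}
```

The same graph traversal underpins the interdependency and change-sizing items on the list: the set of modules reachable *from* a changed module is the blast radius of that change.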

Managers at z data centers often shy away from modifying aging business-critical applications for fear of breaking something—if it ain’t broke, don’t fix it—often is the mantra. They also are rationing the use of their few remaining experienced z veterans with the domain expertise and deep knowledge of what turns out to be quite esoteric code. This is further aggravated by poorly documented mainframe code. The way to mitigate the risk of change is through understanding code, data, and interdependencies. EZSource can handle this visually and with ease; you no longer need to be an experienced z code expert.

So what do you have to do to get on the digital transformation bandwagon? Start by identifying the mainframe assets that are most often accessed. Most of them will be what the mobile apps are calling, usually a CICS routine or two or three. Then expose these business-critical services through APIs and microservices. This may require re-writing parts of them in platform-agnostic languages and as Java components to work within the context of a hybrid cloud. As noted just above, EZSource can help with much of this too.
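From the caller’s side, an exposed CICS routine just looks like any other REST endpoint. The sketch below shows that consumer view; the host, path, and payload fields are entirely hypothetical, and a real deployment would publish them through a gateway such as z/OS Connect.

```python
# Hedged sketch of what "expose a CICS routine through an API" looks like to a
# consuming app. Endpoint and payload are hypothetical; we build the request
# but stop short of the actual network call.

import json
import urllib.request

payload = json.dumps({"accountId": "12345", "action": "balanceInquiry"}).encode()
request = urllib.request.Request(
    "https://zos.example.com/api/accounts/inquiry",  # hypothetical endpoint
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(request) would invoke the CICS-backed service.
print(request.get_method(), request.full_url)
```

The point is that once the routine sits behind an API like this, the mobile or cloud client neither knows nor cares that a mainframe transaction is doing the work.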

In short, EZSource performs app discovery, which facilitates code quality improvement. It helps clean up code. It also applies analytics to DevOps, in effect enabling Cognitive DevOps, which makes sense in the dynamic hybrid cloud. The result: you focus only on the relevant code and, of that, what is particularly problematic.

The goal is to increase competitiveness and business innovation through digital forms of engagement; the engagement currently being fueled by mobile, social, analytic, and cognitive computing in a hybrid cloud environment. The expectation is that you will be able to tap the accumulated software assets for insights while modernizing business critical applications already resident on the z. IBM contends that this is the fastest and most cost effective way to drive new value and agility and DancingDinosaur agrees.

Is it worth it?  Most DancingDinosaur readers probably already believe that the mainframe and its data and software assets sit smack at the center of a digital enterprise. (Just a glimpse of the growth of monthly software peak workload charges should confirm that). It makes no sense not to leverage this complex resource to the max. EZSource, with its smart code analytics and visual capabilities, can save thousands of hours of work, avoid mistakes, and speed the delivery of the kind of slick new hybrid cloud apps that customers are demanding.  EZSource is primarily a Linux, Windows, and Java tool with only a few pieces residing on the z to handle necessary z-specific connectivity.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

BMC MainView for Java Manages Critical Digital Business

May 16, 2016

A large European financial services firm increasingly handles a lot of critical transaction functions with Java running through CICS and WebSphere.  As the firm looks forward, its managers see Java playing a bigger and more critical role in its core business as it shifts more of its business functionality to agile programming in Java. This firm is not even thinking about abandoning its workhorse COBOL code ever, but all new work is being directed to Java.

bmc mainview java 2

With that in mind, BMC last week announced MainView for Java Environments, part of BMC’s MainView integrated systems management suite of tools that provides insight into how Java is consuming resources and affecting application performance on the z System.  It is no surprise, therefore, that the firm became an early beta user for MainView for Java Environments.

According to a recent BMC survey, 93% of mainframe organizations said Java usage is growing or steady, and Java is the language of choice for writing new or rewriting existing mainframe applications. BMC MainView for Java Environments provides insight into Java resource usage and how it impacts other workloads and applications. For example, it automatically discovers all the Java Virtual Machines (JVMs) across z/OS. That alone can help identify performance problems in an effort to find and fix them fast.

Java is the key to both performance and cost savings by running on zIIP assist processors. Java workloads, however, can affect performance and availability on the mainframe, as they consume system resources without regard for the needs of other applications or services, which is another reason why zIIP is essential. Also, an integrated management approach gives IT operations a holistic view of the environment to quickly and easily discover JVMs and to manage the effect of their resource consumption on application performance.

Java was the first object oriented programming language DancingDinosaur tried.  Never got good enough to try it on real production work, but here’s what made it appealing: fully object oriented, produces truly portable write-once, run-anywhere code (mainly because it results in Java virtual machine bytecode) and had automatic garbage collection. For a run-of-the-mill programmer, Java was a joy compared to C or, ugh, COBOL. Some of the new languages becoming popular today, the ones driving mobile and cloud and analytics apps look even easier, but DancingDinosaur would feel too embarrassed to sit in a programming class with twenty-somethings the age of his daughters.

The only drawback may be that Java workloads can affect performance and resource availability on the mainframe, as JVMs consume system resources oblivious to the needs of other applications or services and to the cost of uncontrolled resource consumption, which is what Java unrestrained produces. An integrated management approach that allows for a holistic view of the environment can quickly and easily discover JVMs, constrain the effect of their resource consumption on application performance, and offset that drawback.

Explained Tim Grieser, program vice president, at IDC’s Enterprise System Management Software: “Since Java manages its own resources it can consume excessive amounts of processor time and memory resources leading to performance or availability problems if not proactively managed.” The key being proactively managed.  BMC’s MainView for Java Environments promises exactly that kind of proactive management by monitoring z/OS Java runtime environments and provides a consolidated view of all resources being consumed. This will enable system admins and operators to identify and manage performance issues before they impact end users.

“Java on the mainframe is being used to develop and deploy new applications faster and more economically to meet dynamically changing digital business needs and to take advantage of widely available programming skills,” IDC’s Grieser continued. Something like BMC’s MainView for Java Environments can be used to constrain Java. IBM’s Omegamon can fulfill a similar function.

According to the financial firm’s beta test manager, BMC’s MainView for Java Environments unlocks Java’s potential on the mainframe, vital in a changing application and systems environment, as part of an integrated performance management solution that discovers and monitors JVMs. It provides a single graphical console that enables you to quickly understand a Java application’s impact on resources and its effect on the performance of other applications and transactions. The solution promises to improve application performance and ensure availability while reducing Mean Time to Repair (MTTR) and lowering Monthly License Charges (MLC) by monitoring zIIP offloading, the key to both performance and cost management.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer and occasional wanna-be programmer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

Play the Cloud-Mobile App Dev Game with z/OS Client Web Enablement

April 15, 2016

Is your z team feeling a little nervous that it is missing an important new game? Are business managers bugging you about running slick cloud and mobile applications through the z? Worse, are they turning to third-party contractors to build apps that try to connect your z to the cloud and mobile world? If so, it is time to take a close look at IBM’s z/OS Client Web Enablement Toolkit.

mobile access backend data 1800FLOWERS

Accessing backend system through a mobile device

If you’re a z shop running Linux on z or a LinuxONE shop, you don’t need z/OS web enablement. The issue only comes up when you need to connect z/OS applications to cloud, web, and mobile apps. IBM has been talking up the z/OS Client Web Enablement Toolkit since early this year. Prior to the availability of the toolkit, native z/OS applications had little or no easy way to participate as a web services client.

You undoubtedly know the z in its role as a no-fail transaction workhorse. More recently you’ve watched as it learned new tricks like managing big data or big data analytics through IBM’s own tools and more recently with Spark. The z absorbed the services wave with SOA and turned CICS into a handler for Web transactions. With Linux it learned an entire new way to relate to the broader distributed world. The z has rolled with all the changes and generally came out ahead.

Now the next change for z data centers has arrived. This is the cloud/web-mobile-analytics execution environment that seemingly is taking over the known world. It almost seems like nobody wants a straight DB2 CICS transaction without a slew of other devices getting involved, usually as clients. Now everything is HTTP REST to handle x86 clients and JSON along with a slew of even newer scripting languages. Heard about Python and Ruby? And they aren’t even the latest.  The problem: no easy way to perform HTTP REST calls or handle JSON parsing on z/OS. This results from the utter lack of native JSON services built into z/OS, according to Steve Warren, IBM’s z/OS Client Web Enablement guru.

Starting, however, with z/OS V2.2 and now available in z/OS V2.1 via a couple of service updates,  Warren reports, the new z/OS Client Web Enablement Toolkit changes the way a z/OS-based data center can think about z/OS applications communicating with another web server. As he explains it, the toolkit provides an easy-to-use, lightweight solution for applications looking to easily participate as a client, in a client/server web application. Isn’t that what all the kids are doing with Bluemix? So why not with the z and z/OS?

Specifically, the z/OS Toolkit provides a built-in protocol enabler using interfaces similar in nature to other industry-standard APIs along with a z/OS JSON parser to parse JSON text coming from any source and the ability to build new or add to existing JSON text, according to Warren.  Suddenly, it puts z/OS shops smack in the middle of this hot new game.
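To show what that parse-read-build flow amounts to, here is the same sequence in Python, purely for illustration: a native z/OS program would call the toolkit’s HTTP and JSON parser services from C/C++, COBOL, PL/I, or Assembler instead. The response text and field names are hypothetical.

```python
# The toolkit's JSON services parse text from any source and let the program
# build new or add to existing JSON. The equivalent flow in Python (a native
# z/OS program would use the toolkit's parser services instead):

import json

# 1. Parse JSON arriving from a web-service response
response_text = '{"customer": "ACME", "balance": 1250.75}'
doc = json.loads(response_text)

# 2. Read a value out of the parsed document
balance = doc["balance"]

# 3. Add to the existing JSON and serialize it back out
doc["currency"] = "USD"
print(json.dumps(doc, sort_keys=True))
```

Before the toolkit, even this three-step round trip was the hard part for a native z/OS batch job or started task; with it, the flow above becomes a handful of standard API calls in the language the shop already uses.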

While almost all environments on z/OS can take advantage of these new services, Warren adds, traditional z/OS programs running in a native environment (apart from a z/OS UNIX or JVM environment) stand to benefit the most. Before the toolkit, native z/OS applications, as noted above, had little or no easy options available to them to participate as a web services client. Now they do.

Programs running as a batch job, a started procedure, or in almost any address space on a z/OS system have APIs they can utilize in a similar manner to any standard z/OS APIs provided by the OS. Programs invoke these APIs in the programming language of their choice. Among z languages, C/C++, COBOL, PL/I, and Assembler are fully supported, and the toolkit provides samples for C/C++, COBOL, PL/I initially. Linux on z and LinuxONE shops already can do this.

Businesses with z data centers are being forced by the market to adopt Web applications utilizing published Web APIs that can be used by something as small as the watch you wear, noted Warren. As a result, the proliferation of Web services applications in recent years has been staggering, and it’s not by coincidence. Representational state transfer (REST) applications are simple, use the ubiquitous HTTP protocol—which helps them to be platform-independent—and are easy to organize.  That’s what the young developers—the millennials—have been doing with Bluemix and other cloud-based development environments for their cloud, mobile, and  web-based applications.  With the z/OS web enablement toolkit now any z/OS shop can do the same. As IoT ramps up expect more demands for these kinds of applications and with a variety of new devices and APIs.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

