Posts Tagged ‘CICS’

IBM Introduces New DS8880 All-Flash Arrays

January 13, 2017

Yesterday IBM introduced three new members of the DS8000 line, each an all-flash product. The new all-flash storage products are designed for midrange and large enterprises, where high availability, continuous uptime, and performance are critical.


IBM envisions these boxes for more than the z’s core OLTP workloads. According to the company, they are built to provide the speed and reliability needed for workloads ranging from enterprise resource planning (ERP) and financial transactions to cognitive applications like machine learning and natural language processing—workloads that can uncover trends and patterns to improve decision-making, customer service, and ROI. ERP and financial transactions certainly constitute conventional OLTP, but the cognitive workloads are more analytical and predictive.

The three products:

  • IBM DS8884 F
  • IBM DS8886 F
  • IBM DS8888 F

The F signifies all-flash.  Each was designed with High-Performance Flash Enclosures Gen2. IBM did not just slap flash into existing hard drive enclosures.  Rather, it reports undertaking a complete redesign of the flash-to-z interaction. As IBM puts it: through deep integration between the flash and the z, IBM has embedded software that facilitates data protection, remote replication, and optimization for midrange and large enterprises. The resulting new microcode is ideal for cognitive workloads on z and Power Systems requiring the highest availability and system reliability possible. IBM promises that the boxes will deliver superior performance and uncompromised availability for business-critical workloads. In short, fast enough to catch bad guys before they leave the cash register or teller window. Specifically:

  • The IBM DS8884 F—labelled as the business class offering—boasts the lowest entry cost for midrange enterprises (prices starting at $90,000 USD). It is built around an IBM Power Systems S822 with a 6-core POWER8 processor, 256 GB of cache (DRAM), 32 Fibre Channel/FICON ports, and 6.4–154 TB of flash capacity.
  • The IBM DS8886 F—the enterprise class offering for large organizations seeking high performance—runs an IBM Power Systems S824 with a 24-core POWER8 processor. It offers 2 TB of cache (DRAM), 128 Fibre Channel/FICON ports, and 6.4–614.4 TB of flash capacity. That’s over one-half petabyte of high performance flash storage.
  • The IBM DS8888 F—labelled an analytics class offering—promises the highest performance for faster insights. It runs on an IBM Power Systems E850 with a 48-core POWER8 processor. It also comes with 2 TB of cache (DRAM), 128 Fibre Channel/FICON ports, and 6.4 TB–1.22 PB of flash capacity. Guess crossing the petabyte level, along with the bigger processor complex, qualifies it as an analytics and cognitive device.

As IBM emphasized in the initial briefing, it engineered these storage devices to surpass the typical big flash storage box. For starters, IBM bypassed the device adapter to connect the z directly to the high performance storage controller. IBM’s goal was to reduce latency and optimize all-flash storage, not just do a simple swap of new flash for ordinary flash or, banish the thought, HDD.

“We optimized the data path,” explained Jeff Barber, IBM Systems VP for HE Storage BLE (DS8, DP&R and SAN). To that end, IBM switched from a 1u to a 4u enclosure, runs on shared-nothing clusters, and boosted throughput performance. The resulting storage, he added, “does database better than anyone; we can run real-time analytics.” The typical analytics system—a shared system running Hadoop—won’t even come close to these systems, he added. With the DS8888, you can deploy a real-time cognitive cluster with minimal latency flash.

DancingDinosaur always appreciates hearing from actual users. Working through a network of offices, supported by a team of over 850 people, Health Insurance Institute of Slovenia (Zavod za zdravstveno zavarovanje Slovenije) provides health insurance to approximately two million customers. In order to successfully manage its new customer-facing applications (such as electronic ordering processing and electronic receipts), its storage system required additional capacity and performance. After researching solutions capable of managing these applications—which included both Hitachi and EMC—the organization deployed the IBM DS8886 along with DB2 for z/OS data server software to provide an integrated data backup and restore system. (Full disclosure: DancingDinosaur has not verified this customer story.)

“As long-time users of IBM storage infrastructure and mainframes, our upgrade to the IBM DS8000 with IBM business partner Comparex was an easy choice. Since then, its high performance and reliability have led us to continually deploy newer DS8000 models as new features and functions have provided us new opportunities,” said Bojan Fele, CIO of Health Insurance Institute of Slovenia. “Our DS8000 implementation has improved our reporting capabilities by reducing time to actionable insights. Furthermore, it has increased employee productivity, ensuring we can better serve our clients.”

For full details and specs on these products, click here.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

z System-Power-Storage Still Live at IBM

January 5, 2017

A mid-December briefing by Tom Rosamilia, SVP, IBM Systems, reassured some that IBM wasn’t putting its systems and platforms on the back burner after racking up quarterly financial losses for years. Expect new IBM systems in 2017. A few days later IBM announced that Japan-based APLUS Co., Ltd., which operates credit card and settlement service businesses, selected IBM LinuxONE as its mission-critical system for credit card payment processing. Hooray!


LinuxONE’s security and industry-leading performance will ensure APLUS achieves its operational objectives as online commerce heats up and companies rely on cloud applications to draw and retain customers. Especially in Japan, where online and mobile shopping has become increasingly popular, the use of credit cards has grown, with more than 66 percent of consumers choosing that method for conducting online transactions. And with 80 percent enterprise hybrid cloud adoption predicted by 2017, APLUS is well positioned to connect cloud transactions leveraging LinuxONE. Throw in IBM’s expansion of blockchain capabilities and the APLUS move looks even smarter.

The growth of spending by international visitors, IBM notes, and the emergence of FinTech firms in Japan have led to a diversification of payment methods to which the local financial industry struggles to respond. APLUS, which issues well-known credit cards such as T Card Plus, plans to offer leading-edge financial services by merging groups to achieve lean operations and improved productivity and efficiency. By choosing to update its credit card payment system with LinuxONE infrastructure, APLUS gains an advanced IT environment to support its business growth by helping provide near-constant uptime. In addition to updating its server architecture, APLUS has deployed IBM DS8880 mainframe-attached storage, which delivers tight integration with IBM z Systems and LinuxONE environments, to manage mission-critical data.

LinuxONE, however, was only one part of the IBM Systems story Rosamilia set out to tell. There also is the z13s for encrypted hybrid clouds, the z/OS platform for Apache Spark data analytics, and even more secure cloud services via blockchain on LinuxONE, by way of Bluemix or on premises.

z/OS will get attention in 2017 too. “z/OS is the best damn OLTP system in the world,” declared Rosamilia. He went on to imply that enhancements and upgrades to key z systems were coming in 2017, especially CICS, IMS, and a new release of DB2. Watch for new announcements coming soon as IBM tries to push z platform performance and capacity for z/OS and OLTP.

Rosamilia also talked up the POWER story. Specifically, Google and Rackspace have been developing OpenPOWER systems for the Open Compute Project. He also pointed to new POWER LC servers running POWER8 with the NVIDIA NVLink interconnect, more innovations coming through the OpenCAPI Consortium, and IBM teaming with NVIDIA to deliver PowerAI as part of IBM’s cognitive efforts.

As much as Rosamilia may have wanted to talk about platforms and systems, IBM continues to avoid using terms like systems and platforms. So Rosamilia’s real intent was to discuss z and Power in conjunction with IBM’s strategic initiatives. Remember these: cloud, big data, mobile, analytics. Lately, it seems, those initiatives have been culled down to cloud, hybrid cloud, and cognitive systems.

IBM’s current message is that IT innovation no longer comes from just the processor. Instead, it comes through scaling performance by workload and sustaining leadership through ecosystem partnerships. We’ve already seen some of the fruits of that innovation through the Power community. Would be nice to see some of that coming to the z too, maybe through the Open Mainframe Project. But that isn’t about z/OS. Any boost in CICS, DB2, and IMS will have to come from the core z team. The Open Mainframe Project is about Linux on z.

The first glimpse we had of this came last spring in a system dubbed Minsky, which was described back then by commentator Timothy Prickett Morgan. With the Minsky machine, IBM is using NVLink ports on the updated Power8 CPU, which was shown in April at the OpenPower Summit and is making its debut in systems actually manufactured by ODM Wistron and rebadged, sold, and supported by IBM. The NVLink ports are bundled up in a quad to deliver 80 GB/sec bandwidth between a pair of GPUs and between each GPU and the updated Power8 CPU.

The IBM version, Morgan describes, aims to create a very brawny node with very tight coupling of GPUs and CPUs so they can better share memory, have fewer overall GPUs, and more bandwidth between the compute elements. IBM is aiming Minsky at HPC workloads, according to Morgan, but there is no reason it cannot be used for deep learning or even accelerated databases.

Is this where today’s z data center managers want to go? No one is likely to spurn more performance, especially if it is accompanied by a price/performance improvement. Whether rank-and-file z data centers are queueing up for AI or cognitive workloads remains to be seen. The sheer volume and scale of expected activity, however, will require some form of automated intelligent assist.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

Compuware Acquires Standardware COPE IMS to Speed DevOps and Save Money

December 16, 2016

Compuware, in early December, acquired the assets of Standardware, the leading provider of IMS virtualization technology. Standardware’s COPE reduces the considerable time, cost, and technical difficulty associated with the development and testing of IMS systems, enabling z-based data centers to significantly increase their digital business agility while allowing less mainframe-experienced staff to perform IMS-related DevOps tasks. In addition, it allows IMS to run as a virtualized image, saving significantly on software charges.


Standardware’s COPE IMS, courtesy of Compuware

All three Compuware acquisitions this year—Standardware, ISPW, Itegrations—aimed to facilitate mainframe code management or app dev. The company’s acquisition of ISPW brought source code management and release automation. Itegrations eased the migration to ISPW from CA Endevor. Now Standardware brings IMS virtualization technology.

IMS continues as a foundational database and transaction management technology for systems of record at large global mainframe enterprises, especially in industries such as banking, insurance, and airlines. Its stability, dependability, and high efficiency at scale make it particularly valuable as a back-end resource for high-traffic, customer-facing apps. IBM’s mainframe Information Management System (IMS) provides a hierarchical database and information management system with extensive transaction processing capabilities. It offers a completely different database model from the common relational model behind IBM’s DB2.

IBM touts IMS as the most secure, highest performing, and lowest cost hierarchical database management software for online transaction processing (OLTP). IMS is used by many of the top Fortune 1000 companies worldwide. Collectively these companies process more than 50 billion transactions per day through IMS, and they do so securely.

As Compuware puts it, IMS remains a deeply foundational database and transaction management technology for systems of record at large global enterprises, especially in the core mainframe segments like financial services or transportation. Its stability, dependability and high efficiency ensure it can continue to play an important role as a back-end resource for high-traffic customer-facing apps. All that’s needed is to reduce the effort required to use it.

Conventional approaches to the development and testing of IMS systems, however, can be excessively slow, technically challenging, and expensive. That is too high a technical price to pay in today’s agile, fast-iteration app dev environment. For example, setting up IMS application development environments requires configuring dedicated IMS regions and databases, which is especially time-consuming; additional resources must be defined and compiled for each instance and at every stage of development, testing, training, and systems integration. Worse yet, these tasks typically require experienced DBAs and system programmers with IMS-specific skills, making them an increasingly problematic and costly constraint given the generational shift underway in IT, which makes those skills increasingly rare.

As a result of these bottlenecks and resource constraints, large enterprises can find themselves far less nimble than their smaller competitors and unable to fully leverage their current IMS assets in response to digital requirements.  That leaves the mainframe shop at a distinct disadvantage.

Since COPE comes well integrated with Compuware Xpediter, an automated mainframe debugging tool, many such problems go away. Xpediter, which is interactive, can be used within the Standardware virtualized environment and COPE. When a problem occurs, developers can quickly set up an interactive test session with minimal effort and resolve the problem. When they’re done, they can confidently move the application into production. And now that Xpediter is integrated with COPE, IMS virtualization lets multiple developers debug application code in the same or different logical IMS systems within the virtualized COPE IMS environment.

And therein lies the savings for mainframe shops. As Tyler Allman, Compuware’s COPE product manager, explains, COPE converts IMS to run in a virtual environment. It takes a COPE expert to set it up initially, but once set up, it can run as a logical IMS system with almost no ongoing maintenance, which results in administrative savings.

On the software side, IMS is licensed under the usual rolling 4-hour average workload software billing. Once the environment has been virtualized with COPE, you can run multiple logical IMS regions at no additional cost. The savings experienced by mainframe data centers, Allman suggests, can amount to tens if not hundreds of thousands of dollars. These savings alone can justify COPE.
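To see why, consider a rough, hypothetical sketch of the billing effect, in Python. It assumes, as a simplification, that each dedicated IMS test LPAR adds its own peak rolling 4-hour average (R4HA) to the IMS bill at an invented per-MSU price; actual sub-capacity pricing combines LPAR peaks differently and involves far more fine print.

```python
# Hypothetical illustration only; MSU figures and the per-MSU price are invented.
PRICE_PER_MSU = 150.0  # assumed $/MSU/month for IMS MLC

def r4ha_peak(hourly_msus):
    """Peak rolling 4-hour average over a list of hourly MSU readings."""
    return max(sum(hourly_msus[i:i + 4]) / 4
               for i in range(len(hourly_msus) - 3))

prod_msus = [520, 610, 580, 640, 600, 590]   # production LPAR running IMS
test_msus = [[60, 75, 70, 65, 80, 70]] * 3   # three dedicated IMS test LPARs

# Without COPE: each dedicated test LPAR adds billable IMS MSUs of its own.
without_cope = r4ha_peak(prod_msus) + sum(r4ha_peak(t) for t in test_msus)

# With COPE: the test regions run as logical IMS systems inside an
# already-licensed environment, adding no new IMS license exposure.
with_cope = r4ha_peak(prod_msus)

print(f"IMS MLC without COPE: ${without_cope * PRICE_PER_MSU:,.0f}/month")
print(f"IMS MLC with COPE:    ${with_cope * PRICE_PER_MSU:,.0f}/month")
```

Even with these modest invented numbers, the gap lands in the tens of thousands of dollars per month, consistent with the range Allman suggests.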

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM Discounts z/OS Cloud Activity

August 12, 2016

The latest iteration of IBM’s z/OS workload pricing aims to lower the cost of running cloud workloads. In a recent announcement, z Systems Workload Pricing for Cloud (zWPC) for z/OS seeks to minimize the impact of new public cloud workload transaction growth on Sub-Capacity license charges. IBM did the same thing with mobile workloads when they started driving up the 4-hour workload averages on the z. As more z workloads interact with public clouds, this should start to add up, if it hasn’t already.


Bluemix Garages in the Cloud

As IBM puts it: zWPC applies to any organization that has implemented Sub-Capacity pricing via the basic AWLC or AEWLC pricing mechanisms for the usual MLC software suspects. These include z/OS, CICS, DB2, IMS, MQ, and WebSphere Application Server (WAS). An eligible transaction is one classified as public cloud-originated, connecting to a z/OS-hosted transactional service and/or data source via a REST or SOAP web service. Public cloud workloads are defined as transactions identified as originating from a recognized public cloud offering, including but not limited to Amazon Web Services (AWS), Microsoft Azure, and IBM Bluemix.

IBM appears to have simplified how you identify eligible workloads. As the company notes: zWPC does not require you to isolate the public cloud work in separate partitions, but rather offers an enhanced way of reporting. The z/OS Workload Manager (WLM) allows clients to use WLM classification rules to distinguish cloud workloads, effectively easing the data collection requirements for public cloud workload transactions.

So how much will you save? It reportedly reduces eligible hourly values by 60 percent. The discount produces an adjusted Sub-Capacity value for each reporting hour. What that translates into on your monthly IBM software invoice once all the calculations and fine print are considered amounts to a guess at this point. But at least you’ll save something. The first billing eligible under this program starts Dec. 1, 2016.
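For a back-of-the-envelope feel for those mechanics, here is a minimal sketch in Python with invented MSU numbers: the announced 60 percent reduction is applied to the cloud-eligible portion of each reporting hour, and the adjusted values then feed the usual rolling 4-hour average peak that drives the bill.

```python
# Invented numbers; the real invoice math involves considerably more fine print.
DISCOUNT = 0.60  # zWPC reduces eligible hourly values by 60 percent

# (total MSUs, cloud-eligible MSUs) per reporting hour, per WLM classification
hours = [(500, 120), (540, 160), (610, 220),
         (650, 260), (600, 210), (560, 170)]

def r4ha_peak(values):
    """Peak rolling 4-hour average over hourly MSU values."""
    return max(sum(values[i:i + 4]) / 4 for i in range(len(values) - 3))

# Subtract the discounted share of cloud-eligible MSUs from each hour
adjusted = [total - DISCOUNT * cloud for total, cloud in hours]

print(f"R4HA peak without zWPC: {r4ha_peak([t for t, _ in hours]):.0f} MSU")
print(f"R4HA peak with zWPC:    {r4ha_peak(adjusted):.0f} MSU")
```

The lower adjusted peak is what the Sub-Capacity calculation sees, which is where the savings, whatever they turn out to be, come from.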

DancingDinosaur expects IBM to eventually follow with discounted z/OS workload pricing for IoT and blockchain transactions and maybe even cognitive activity. Right now the volume of IoT and blockchain activity is probably too low to impact anybody’s monthly license charges. Expect those technologies to ramp up in coming years, with many industry pundits projecting huge numbers—think billions and trillions—that will eventually impact the mainframe data center and associated software licensing charges.

Overall, Workload License Charges (WLC) constitute a monthly software license pricing metric applicable to IBM System z servers running z/OS or z/TPF in z/Architecture (64-bit) mode. The driving principle of WLC amounts to pay-for-what-you-use, a laudable concept. In effect it lowers the cost of incremental growth while further reducing software costs by proactively managing associated peak workload utilization.

Generally, DancingDinosaur applauds anything IBM does to lower the cost of mainframe computing.  Playing with workload software pricing in this fashion, however, seems unnecessary. Am convinced there must be simpler ways to lower software costs without the rigmarole of metering and workload distribution tricks. In fact, a small mini-industry has cropped up among companies offering tools to reduce costs, primarily through various ways to redistribute workloads to avoid peaks.

Modifications to WLC, the variable WLC (VWLC) along with the AWLC (Advanced) and EWLC (Entry) variants, align with most of the z machines introduced over the past couple of years. The result, according to IBM, forms a granular cost structure based on MSU (CPU) capacity that applies to VWLC and associated pricing mechanisms.

From there you can further tweak the cost by deploying Sub-Capacity and Soft Capping techniques. Defined Capacity (DC), according to IBM, allows the sizing of an LPAR in MSU such that the LPAR will not exceed the designated MSU amount. Group Capacity Limit (GCL) extends the Defined Capacity principle for a single LPAR to a group of LPARs, allowing MSU resources to be shared accordingly. BTW, a potential downside of GCL is that one LPAR in the group can consume all available MSUs due to a rogue transaction. Again, an entire mini industry, or maybe not so mini, has emerged to help handle workload and capacity pricing on the z.
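A minimal sketch of the capping arithmetic, with hypothetical numbers: Defined Capacity caps a single LPAR's billable R4HA, while a group cap is shared across LPARs. (WLM actually apportions a group cap using LPAR weights; the naive pro-rata split below is a deliberate simplification that just shows how one greedy LPAR can squeeze its group mates.)

```python
def billable(r4ha, defined_capacity):
    """Defined Capacity: an LPAR's billable MSUs never exceed its cap."""
    return min(r4ha, defined_capacity)

print(billable(480, 400))  # a 480-MSU R4HA peak soft-capped to 400 billable MSUs

# Group Capacity Limit: three LPARs share a 700-MSU pool (hypothetical values)
group_cap = 700
r4ha = {"PROD1": 480, "PROD2": 350, "TEST1": 90}

demand = sum(r4ha.values())
# Naive pro-rata apportionment; WLM really uses LPAR weights. Note how a
# rogue LPAR with runaway demand grabs a proportionally huge slice of the cap.
share = {name: (group_cap * v / demand if demand > group_cap else v)
         for name, v in r4ha.items()}

for name, msu in share.items():
    print(f"{name}: {msu:.0f} of {group_cap} group-cap MSUs")
```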

At some point in most of the conference pricing sessions the eyes of many attendees glaze over.  By Q&A time the few remaining pop up holding a copy of a recent invoice and ask what the hell this or that means and what the f$#%@#$ they can do about it.

Have to admit that DancingDinosaur did not attend the most recent SHARE conference, where pricing workshops can get quite energetic, so cannot attest to the latest fallout. Still, the general trend with mobile and now with cloud pricing discounts should be lower costs.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM to Acquire EZSource to Bolster IBM z System for Digital Transformation

June 9, 2016

Over the past two years you have been reading in DancingDinosaur about the new tricks your z System can do—real-time analytics, IoT, blockchain, and more. This is part of the digital transformation that is blowing through enterprises everywhere. EZSource facilitates and simplifies how you can play in these new areas. See the IBM announcement here.


EZSource Dashboard, Credit: EZSource

EZSource expedites digital transformations by unlocking core business logic and apps. Specifically, it will pinpoint your valuable mainframe code assets in preparation for leveraging them through a hybrid cloud strategy. In the process it will enable understanding of business-critical assets in preparation for deployment of a z-centered hybrid cloud. This also enables enterprise DevOps, which is necessary to keep up with the pace of changes overtaking existing business processes.

Specifically, this can entail the need to:

  • Identify API candidates to play in the API economy
  • Embrace micro services to deliver versatile apps fast
  • Identify code quality concerns, including dead code, to improve reliability and maintainability
  • Mitigate risk of change through understanding code, data and schedule interdependencies
  • Aid in the sizing of a change effort
  • Automate documentation to improve understanding
  • Reduce learning curve as new people are on-boarded
  • Add application understanding to DevOps lifecycle information to identify opportunities for work optimization

Managers of z data centers often shy away from modifying aging business-critical applications for fear of breaking something; if it ain’t broke, don’t fix it is often the mantra. They also are rationing the use of their few remaining experienced z veterans with the domain expertise and deep knowledge of what turns out to be quite esoteric code. This is further aggravated by poorly documented mainframe code. The way to mitigate this risk of change is through understanding code, data, and interdependencies. EZSource can handle this visually and with ease; you no longer need to be an experienced z code expert.

So what do you have to do to get on the digital transformation bandwagon? Start by identifying your mainframe assets that are most often accessed. Most of them will be what the mobile apps are calling, usually a CICS routine or two or three. Then expose these business-critical services through APIs and microservices. This may require re-writing parts of them in platform-agnostic languages and Java components to work within the context of a hybrid cloud. As noted just above, EZSource can help with much of this too.

In short, EZSource performs app discovery, which facilitates code quality improvement. It helps clean up code. It also applies analytics to DevOps, in effect enabling Cognitive DevOps, which makes sense in the dynamic hybrid cloud. The result: you focus only on the relevant code and, of that, what is particularly problematic.

The goal is to increase competitiveness and business innovation through digital forms of engagement; the engagement currently being fueled by mobile, social, analytic, and cognitive computing in a hybrid cloud environment. The expectation is that you will be able to tap the accumulated software assets for insights while modernizing business critical applications already resident on the z. IBM contends that this is the fastest and most cost effective way to drive new value and agility and DancingDinosaur agrees.

Is it worth it?  Most DancingDinosaur readers probably already believe that the mainframe and its data and software assets sit smack at the center of a digital enterprise. (Just a glimpse of the growth of monthly software peak workload charges should confirm that). It makes no sense not to leverage this complex resource to the max. EZSource, with its smart code analytics and visual capabilities, can save thousands of hours of work, avoid mistakes, and speed the delivery of the kind of slick new hybrid cloud apps that customers are demanding.  EZSource is primarily a Linux, Windows, and Java tool with only a few pieces residing on the z to handle necessary z-specific connectivity.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

BMC MainView for Java Manages Critical Digital Business

May 16, 2016

A large European financial services firm increasingly handles a lot of critical transaction functions with Java running through CICS and WebSphere.  As the firm looks forward, its managers see Java playing a bigger and more critical role in its core business as it shifts more of its business functionality to agile programming in Java. This firm is not even thinking about abandoning its workhorse COBOL code ever, but all new work is being directed to Java.


With that in mind, BMC last week announced MainView for Java Environments, part of BMC’s MainView integrated systems management suite of tools that provides insight into how Java is consuming resources and affecting application performance on the z System.  It is no surprise, therefore, that the firm became an early beta user for MainView for Java Environments.

According to a recent BMC survey, 93% of mainframe organizations said Java usage is growing or steady, and Java is the language of choice for writing new or rewriting existing mainframe applications. BMC MainView for Java Environments provides insight into Java resource usage and how it impacts other workloads and applications. For example, it automatically discovers all the Java Virtual Machines (JVMs) across z/OS. That alone can help identify performance problems in an effort to find and fix problems fast.

Java is the key to both performance and cost savings through its ability to run on zIIP assist processors. Java workloads, however, can affect performance and availability on the mainframe, as they consume system resources without regard for the needs of other applications or services, which is another reason why zIIP is essential. Also, an integrated management approach gives IT operations a holistic view of the environment to quickly and easily discover Java Virtual Machines (JVMs) and to manage the effect of their resource consumption on application performance.

Java was the first object oriented programming language DancingDinosaur tried. Never got good enough to try it on real production work, but here’s what made it appealing: fully object oriented, producing truly portable write-once, run-anywhere code (mainly because it compiles to Java virtual machine bytecode), with automatic garbage collection. For a run-of-the-mill programmer, Java was a joy compared to C or, ugh, COBOL. Some of the new languages becoming popular today, the ones driving mobile, cloud, and analytics apps, look even easier, but DancingDinosaur would feel too embarrassed to sit in a programming class with twenty-somethings the age of his daughters.

The only drawback may be that Java workloads can affect performance and resource availability on the mainframe as JVMs consume system resources oblivious to the needs of other applications or services, or to the cost of uncontrolled resource consumption, which is what unrestrained Java produces. An integrated management approach that allows a holistic view of the environment can quickly and easily discover JVMs, constrain the effect of their resource consumption on application performance, and offset that drawback.

Explained Tim Grieser, program vice president at IDC’s Enterprise System Management Software: “Since Java manages its own resources it can consume excessive amounts of processor time and memory resources leading to performance or availability problems if not proactively managed.” The key being proactively managed. BMC’s MainView for Java Environments promises exactly that kind of proactive management, monitoring z/OS Java runtime environments and providing a consolidated view of all resources being consumed. This will enable system admins and operators to identify and manage performance issues before they impact end users.

“Java on the mainframe is being used to develop and deploy new applications faster and more economically to meet dynamically changing digital business needs and to take advantage of widely available programming skills,” IDC’s Grieser continued. Something like BMC’s MainView for Java Environments can be used to constrain Java. IBM’s Omegamon can fulfill a similar function.

According to the financial firm’s beta test manager, BMC’s MainView for Java Environments can effectively unlock Java’s potential on the mainframe, vital in a changing application and systems environment, as part of an integrated performance management solution that discovers and monitors JVMs. As such, it provides a single graphical console that enables you to quickly understand a Java application’s impact on resources and its effect on the performance of other applications and transactions. The solution promises to improve application performance and ensure availability while reducing Mean Time to Repair (MTTR) and lowering Monthly License Charges (MLC) by monitoring zIIP offloading, which is the key to both performance and cost management.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer and occasional wanna-be programmer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

Play the Cloud-Mobile App Dev Game with z/OS Client Web Enablement

April 15, 2016

Is your z team feeling a little nervous that it is missing an important new game? Are business managers bugging you about running slick cloud and mobile applications through the z? Worse, are they turning to third-party contractors to build apps that will try to connect your z to the cloud and mobile world? If so, it is time to take a close look at IBM’s z/OS Client Web Enablement Toolkit.


Accessing backend system through a mobile device

If you’re a z shop running Linux on z or a LinuxONE shop, you don’t need z/OS web enablement. The issue only comes up when you need to connect z/OS applications to cloud, web, and mobile apps. IBM has been talking up the z/OS Client Web Enablement Toolkit since early this year. Prior to the availability of the toolkit, native z/OS applications had few easy options for participating as a web services client.

You undoubtedly know the z in its role as a no-fail transaction workhorse. More recently you’ve watched as it learned new tricks like managing big data or big data analytics through IBM’s own tools and more recently with Spark. The z absorbed the services wave with SOA and turned CICS into a handler for Web transactions. With Linux it learned an entire new way to relate to the broader distributed world. The z has rolled with all the changes and generally came out ahead.

Now the next change for z data centers has arrived. This is the cloud/web-mobile-analytics execution environment that seemingly is taking over the known world. It almost seems like nobody wants a straight DB2 CICS transaction without a slew of other devices getting involved, usually as clients. Now everything is HTTP REST to handle x86 clients and JSON along with a slew of even newer scripting languages. Heard about Python and Ruby? And they aren’t even the latest.  The problem: no easy way to perform HTTP REST calls or handle JSON parsing on z/OS. This results from the utter lack of native JSON services built into z/OS, according to Steve Warren, IBM’s z/OS Client Web Enablement guru.

Starting, however, with z/OS V2.2 and now available in z/OS V2.1 via a couple of service updates,  Warren reports, the new z/OS Client Web Enablement Toolkit changes the way a z/OS-based data center can think about z/OS applications communicating with another web server. As he explains it, the toolkit provides an easy-to-use, lightweight solution for applications looking to easily participate as a client, in a client/server web application. Isn’t that what all the kids are doing with Bluemix? So why not with the z and z/OS?

Specifically, the z/OS Toolkit provides a built-in protocol enabler using interfaces similar in nature to other industry-standard APIs along with a z/OS JSON parser to parse JSON text coming from any source and the ability to build new or add to existing JSON text, according to Warren.  Suddenly, it puts z/OS shops smack in the middle of this hot new game.
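The toolkit’s native APIs are documented for C/C++, COBOL, PL/I, and Assembler; the sketch below uses Python purely to illustrate the client flow those services enable: issue an HTTP REST request, parse the JSON reply, and build new JSON to pass along. The endpoint URL and field names here are hypothetical.

```python
import json
import urllib.request

# 1. Issue an HTTP REST request to a web service (hypothetical endpoint)
url = "https://api.example.com/v1/accounts/12345/balance"
with urllib.request.urlopen(url, timeout=10) as resp:
    body = resp.read().decode("utf-8")

# 2. Parse the JSON response text (hypothetical fields)
doc = json.loads(body)
print("balance:", doc["balance"])

# 3. Build new JSON text for the next call in the chain
update = json.dumps({"account": "12345", "lastChecked": doc["asOf"]})
print(update)
```

A native z/OS program gets the equivalent of those three steps from the toolkit’s built-in HTTP services and JSON parser, without leaving its traditional environment.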

While almost all environments on z/OS can take advantage of these new services, Warren adds, traditional z/OS programs running in a native environment (apart from a z/OS UNIX or JVM environment) stand to benefit the most. Before the toolkit, native z/OS applications, as noted above, had little or no easy options available to them to participate as a web services client. Now they do.

Programs running as a batch job, a started procedure, or in almost any address space on a z/OS system have APIs they can utilize in a similar manner to any standard z/OS APIs provided by the OS. Programs invoke these APIs in the programming language of their choice. Among z languages, C/C++, COBOL, PL/I, and Assembler are fully supported, and the toolkit provides samples for C/C++, COBOL, PL/I initially. Linux on z and LinuxONE shops already can do this.

Businesses with z data centers are being forced by the market to adopt Web applications utilizing published Web APIs that can be used by something as small as the watch you wear, noted Warren. As a result, the proliferation of Web services applications in recent years has been staggering, and it’s not by coincidence. Representational state transfer (REST) applications are simple, use the ubiquitous HTTP protocol—which helps them to be platform-independent—and are easy to organize.  That’s what the young developers—the millennials—have been doing with Bluemix and other cloud-based development environments for their cloud, mobile, and  web-based applications.  With the z/OS web enablement toolkit now any z/OS shop can do the same. As IoT ramps up expect more demands for these kinds of applications and with a variety of new devices and APIs.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

State of z System CICS in the Modern Enterprise

March 25, 2016

You should be very familiar with the figures describing the continued strength of mainframe computing in the enterprise today. Seventy percent of enterprise data resides on a mainframe, 71 percent of all Fortune 500 companies run their core businesses on the mainframe, and 92 of the top 100 banks rely on the mainframe to provide at-your-fingertips banking services to their customers (many via mobile). CICS, according to IBM, handles 1.1 million transactions every second, every day. By comparison, Google handles a mere 59,421 searches every second.


CICS at IBM Interconnect 2015

H&W, a top mainframe ISV, recently released its State of CICS in the Modern Enterprise study. Find a copy of the study here. For starters, it found that nearly two-thirds of respondents run 51-100% of their business-critical applications online through CICS. Within government, 32% of respondents reported running 75-100% of business-critical applications through CICS.

A different study suggests that CICS applications handle more than 30 billion transactions per day and process more than $1 trillion dollars’ worth of business each week. Mainframe data also still drives information systems worldwide. Approximately 60 percent of organizations responding to a 2013 Arcati survey said they manage 40 to 100 percent of their enterprise data on the mainframe.

Integrating legacy systems is a strategy mainframe sites continue to adopt. In fact, 74 percent of respondents in that survey said specifically they are web-enabling CICS subsystems. However, as organizations pursue this strategy, challenges can include unlocking the data, keeping the applications and data available to users, and maintaining data integrity in an efficient and cost-effective manner. Nothing new for data center managers about this.

According to the H&W study, online CICS usage has gone up in the last 3 years, from 54% of respondents reporting running over half of their business-crit applications through CICS to 62% in 2015. Hope people will finally stop talking about the mainframe heading toward extinction.

CICS also has carved out a place on the web and with mobile. Sixty-five percent of respondents say at least some of their business-crit applications are available via PC, phone, tablet, and web-based interfaces while 11% more reported plans to mobile- and web-enable their mainframe apps in the future. Thirteen percent reported no plans to do so. Government sector respondents reported that they were significantly more likely to not make the applications available for online access; so much for open government and transparency.

CICS availability raised no concerns, although a few respondents were concerned with performance. Based on the 2012 study results, some predicted that companies would be moving away from CICS by now. Those predictions, apparently, have not come to pass, at least not yet.

In fact, as far as the future of CICS, the technology seems to be facing a remarkably stable outlook for the next 3-5 years. The largest number of respondents, 37%, expected the number of CICS applications to remain the same in that period while 34% said they would be decreasing. More encouragingly, 27% of respondents planned to increase their number of CICS applications accessible online. In the financial services segment, 38% planned to increase the number of online CICS applications while only 10% expected to decrease the number of online applications. Given the demands by banking customers for mobile apps the increase in the number of CICS applications makes perfect sense.

The researchers concluded that CICS continues to play an important role for the majority of mainframe shops surveyed and an increasingly important role for a significant chunk of them.  The respondents also reported that, in general, they were satisfied with CICS performance even in the face of increasingly complex online workloads.

Mainframe CICS may see even more action going forward depending on what companies do with Internet of Things. As with mobile traffic, companies may turn to CICS to handle critical aspects of backend IoT activity, which has the potential to become quite large.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

Exploiting the IBM z13 for Maximum Price/Performance Advantage

February 4, 2016

The z13 is the most powerful general purpose computer IBM has ever made. The key to capturing the maximum value from the z13, however, lies in how you plan, design, configure, and optimize your systems and software for everything from COBOL and Java to process parallelization and analytics. What you do in this regard will have significant impact on not only the price/performance you experience but on your success at achieving the business outcomes you are expecting.


IBM System z13

This really becomes a software configuration challenge. By tapping approximately 600 internal processors IBM already has optimized the hardware, input, output, memory, and networking/communications about as much as it can be. Your job is to optimize the software you are running, which will require working closely with your ISV.

The place to start is by leveraging the z13’s new compiler technology, parallelism, zIIP and assist processors. This will enable you to save significant money while boosting workload performance. You will literally be doing more for less.

Similarly, in the not too distant past Moore’s Law would virtually guarantee a 15-20% price/performance gain automatically just by taking a new machine out of the box and plugging it in. That’s no longer the case. Now you will have to partner with your ISV to exploit advanced software to maximize the hardware payback and continue the ride along the favorable Moore’s Law price/performance slope.

Then look at the latest COBOL V5.x and its compiler on the z13. Out of the box it is better optimized than previous compilers. In general, the strategic value of COBOL V5.x comes from migrating high CPU usage programs as quickly as possible, effectively saving organizations considerable money by running optimized code.

Some organizations report a 15% on average reduction of CPU time, which adds up to significant savings in monthly CPU charges. How significant? Up to $150k less on a $1 million bill, with some reporting even higher percentage reductions producing even greater savings. Just migrate to COBOL V5.2 (or at least V5.1) to achieve the savings. In general, staying on the software curve with the latest releases of the OS, languages, and compilers with applications optimized for them is the best way to ensure your workloads are achieving top performance in the most cost-effective way.

For example, the new z13 processor leverages a new Vector Facility for certain COBOL statements and expands the use of the Decimal Floating Point Facility for packed decimal calculations. Well-structured, compute-intensive batch applications running on z13 and compiled with the Enterprise COBOL V5.2 compiler have shown CPU usage reductions of up to 14% over the same applications running on zEC12 (compiled with the GA release of Enterprise COBOL V5.1), according to IBM. The result: improved workload price/performance.

Enterprise COBOL V5.2 also includes new features to improve programmability, developer productivity, and application modernization. Supporting JSON, for instance, will provide mobile applications easy access to data and the processing they need from business critical production applications written in COBOL.

The z13 and its z sister, the latest LinuxONE dedicated Linux models, were designed and optimized from the start for cloud, mobile, and analytics. They were intended to run alongside traditional mainframe workloads with z/OS or Linux running on the appropriate models.

Finally, plan to take advantage of the new assist processors and expanded memory capacity to further boost performance and lower cost. With the z13, there is a mandatory migration of all zAAP-enabled applications to zIIP. Expect the usage of the zIIP assist processors to surge when all those Java applications move from the zAAP.  ISVs like Compuware should be able to help with this.  In addition, if you enable SMT on the z13, you’ll immediately get more Java capacity.  Applications that run under IBM WebSphere (WAS) on z/OS will benefit too.

The z13 and especially the LinuxONE are breaking new ground. IBM has established, in conjunction with the Linux Foundation, an Open Mainframe Project to support and advance ongoing open source Linux innovation on the mainframe. IBM also is breaking with its traditional mainframe pricing model by offering a pay-per-use option in the form of a fixed monthly payment with costs scaling up or down based on usage. It also offers per-core pricing with software licenses for designated cores. See DancingDinosaur here.

An upcoming DancingDinosaur will look at more of the enhancements being added to these machines, including some of the latest LinuxONE enhancements like support for Google’s Go language and Cloudant’s NoSQL services. The message: the new z System can take you to the places you will want to be in this emerging cloud-mobile-analytics era.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

Mobile Financial App Security Appears Shaky

January 15, 2016

IBM has made mobile a key strategic imperative going forward, even discounting mobile software license charges on z. However, a recent study suggests that mobile apps may be less secure than app users think. For example, 83% of the app users surveyed felt their applications were adequately secure. Yet, 90% of the applications Arxan Technologies tested were vulnerable to at least two of the Open Web Application Security Project (OWASP) Mobile Top 10 Risks.


The OWASP Top Ten is an awareness document for web application security. It represents a broad consensus about the most critical web application security flaws. Security experts use the list as a first step in changing security awareness and the software development culture in organizations around the world. You can find the Arxan report here.

In the latest study, 41% of mobile finance app users expect their finance apps to be hacked within the next six months. That’s not exactly a vote of confidence. Even worse, 42% of executive IT decision makers, those who have oversight or insight into the security of the mobile finance apps they produce, feel the same way.  Does this bother you?

It should. The researchers found that 81% of app users would change providers if apps offered by similar providers were more secure. While millennials are driving the adoption of mobile apps, their views on the importance of app security were equally as strong as the older non-millennials. Overall, survey results showed very little geographical discrepancies across the US, UK, Germany, and Japan.

This sentiment makes it sound like mobile finance applications are at a hopeless state of security where, despite Herculean efforts to thwart attackers, adversaries are expected to prevail. But the situation is not hopeless; it’s careless. Half the organizations aren’t even trying. Fully 50% of organizations have zero budget allocated for mobile app security—0, nothing, nada—according to the researchers.  By failing to step up their mobile security game organizations risk losing customers to competitors who offer alternative apps that are more secure.

How bad is the mobile security situation? When put to the test, the majority of mobile apps failed critical security tests and could easily be hacked, according to the researchers.  Among 55 popular mobile finance apps tested for security vulnerabilities, 92% were shown to have at least two OWASP Mobile Top 10 Risks. Such vulnerabilities could allow the apps to be tampered and reverse-engineered, which could clearly put sensitive financial information in the wrong hands or, even worse, potentially redirect the flow of money. Ouch!

Think about all the banks and insurance companies that are scrambling to deploy new mobile apps. As it turns out, financial services organizations, the researchers report, also are among the top targets of hackers seeking high-value payment data, intellectual property (IP), and other sensitive information. Specifically, employee, customer, and soft IP data are the top three targets of cyber-attacks in the financial services market; while at the same time theft of hard IP soared 183% in 2015, according to PwC, another firm researching the segment.

With the vast majority of cyber-attacks happening at the application layer, one would think that robust application security would be a fundamental security measure being aggressively implemented and increasingly required by regulators, particularly given the financial services industry’s rapid embrace of mobile financial apps. But apparently it is not.

So where does the financial mobile app industry stand? Among the most prevalent OWASP Mobile Top 10 Risks identified among the mobile finance apps tested the top 2 risks were:

1) Lack of binary protection (98%) – this was the most prevalent vulnerability

2) Insufficient transport layer protection (91%).

A distant third, at 58%, was unintended data leakage. All these vulnerabilities, the top two especially, make the mobile financial applications susceptible to reverse-engineering and tampering in addition to privacy violations and identity theft.

Says Arxan CTO Sam Rehman: “The impact for financial institutions and mobile finance app users can be devastating. Imagine having your mobile finance app leak your personal financial information and identity, or your app maliciously redirecting your money.” The customer outrage and bad press that would follow wouldn’t be pretty, not to mention the costly lawsuits.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

