Posts Tagged ‘IMS’

Rocket z/SQL Accesses Non-SQL Mainframe Data

August 2, 2013

Rocket Software’s z/SQL enables access to non-SQL mainframe data using standard SQL commands and queries. The company is offering a z/SQL free trial; you can install it at no charge and get full access for as many users as you want. The only caveat: the free version is limited to three files. You can download the free trial here.

z/SQL will run standard ANSI 92 SQL queries against a wide range of mainframe data sources. “The tool won’t even know it is running relational data,” explained Gregg Willhoit, managing director of the Rocket Data Lab. That means you can run it against VSAM, IMS, Adabas, DB2 for z/OS, and physical sequential files. In addition, you can use z/SQL to make real-time SQL queries directly to mainframe programs, including CICS TS, IMS TM, CA IDMS, and Natural.
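For a sense of what that looks like from the application side, here is a minimal sketch of a Java client issuing an ANSI 92 SQL query against a VSAM file exposed through a JDBC driver. The driver URL, credentials, and virtual table name are invented for illustration; they are not Rocket’s documented names.

```java
// Minimal sketch: querying a VSAM file through a SQL access layer from Java.
// The JDBC URL, credentials, and table name below are illustrative placeholders.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ZsqlQueryExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details; substitute your site's driver and host.
        String url = "jdbc:example-zsql://mainframe.example.com:1200/SUBSYS";
        try (Connection conn = DriverManager.getConnection(url, "USERID", "PASSWORD");
             Statement stmt = conn.createStatement();
             // Standard ANSI SQL issued against a mapped VSAM file.
             ResultSet rs = stmt.executeQuery(
                     "SELECT ACCOUNT_ID, BALANCE FROM VSAM_ACCOUNTS WHERE BALANCE > 1000")) {
            while (rs.next()) {
                System.out.printf("%s %s%n",
                        rs.getString("ACCOUNT_ID"), rs.getBigDecimal("BALANCE"));
            }
        }
    }
}
```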

By diverting up to 99% of processing-intensive data mapping and transformation from the mainframe’s CPU to the zIIP, z/SQL lowers MIPS capacity usage and its associated costs, effectively reducing TCO. It also opens up the zIIP to extend programs and systems-of-record data to the full range of environments noted above.

z/SQL automatically detects the presence of the z’s zIIP assist processor and applies its patent-pending technology to further exploit the zIIP’s performance advantages. The key attributes of the zIIP processor—low cost, speeds often greater than those of the general-purpose engines (which may run at reduced speed under a sub-capacity mainframe license), and typically low utilization—are fully exploited by z/SQL to lower a mainframe shop’s TCO while accelerating ROI.

Rocket z/SQL is built on Metal C, a z/OS compiler option that provides C-language extensions allowing you to specify assembly statements that call system services directly. The DRDA support and the ANSI 92 SQL engine have been developed using what amounts to a new language, which allows even more of z/SQL’s work to run on the zIIP. One of Metal C’s key benefits is that it lets z/SQL optimize its code paths for the hardware it’s running on. So whether you’re running on an older z9 or z10 or on the latest zEC12 and zBC12 processors, z/SQL chooses the code path best optimized for your hardware.

With z/SQL you can expand your System z analytics effort and push a wider range of mainframe data analytics to near real time. Plus, the usual ETL and all of its associated disadvantages are no longer a factor. As such, z/SQL promises to be a disruptive technology: it eliminates the need for ETL by pushing the analytics to where the data resides, rather than bringing the data to the analytics as ETL must. The latter approach, noted Willhoit, is fraught with performance and data-currency issues.

It’s not that you couldn’t access non-SQL data before z/SQL; it was just more cumbersome and slower. You typically had to replicate the data, often shipping it via FTP to something like Excel. Rocket instead relies on assembler to generate an optimized SQL engine for the z9, z10, z196, zEC12, and now the zBC12. With z/SQL the process is remarkably simple: no replication, no rewriting of code, just a recompile. The optimized assembler is generated for you, so no assembler work is required on your part.

Query performance, reportedly, is quite good. This is due in part to the engine being written in assembler, but also to its use of the z’s multi-threading: it reads the non-relational data source with one thread and uses a second thread to process the network I/O. This parallel I/O architecture promises game-changing performance, especially for big data, through significant parallelism of network and database I/O. It also takes full advantage of the System z hardware by using buffer pools and large frames, essentially eliminating dynamic address translation.
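The parallel I/O idea is easy to picture with a generic producer/consumer sketch: one thread reads records from the data source while a second thread drains them for the network. This illustrates the pattern only; it is not Rocket’s implementation, and all names are invented.

```java
// Illustrative two-thread pattern: a reader thread feeds records into a bounded
// buffer while a sender thread drains them for network I/O.
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ParallelIoSketch {
    private static final byte[] EOF = new byte[0];  // sentinel marking end of data

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<byte[]> buffer = new ArrayBlockingQueue<>(1024);

        Thread reader = new Thread(() -> {
            try {
                // Read records from the (stubbed) data source and enqueue them.
                for (int i = 0; i < 10_000; i++) {
                    buffer.put(("record-" + i).getBytes());
                }
                buffer.put(EOF);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread sender = new Thread(() -> {
            long sent = 0;
            try {
                // Drain records; a real sender would write each to the client socket.
                for (byte[] record = buffer.take(); record != EOF; record = buffer.take()) {
                    sent++;
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            System.out.println("sent " + sent + " records");
        });

        reader.start();
        sender.start();
        reader.join();
        sender.join();
    }
}
```

Because reading the data source and pushing bytes onto the network proceed concurrently, neither side sits idle waiting for the other, which is the essence of the performance claim above.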

z/SQL brings its own diagnostics, providing a real-time view into transaction threads with comprehensive trace/browse capabilities. It enables a single, integrated approach to identifying, diagnosing, and correcting data connectivity issues between distributed ODBC, ADO.NET, and JDBC client drivers and mainframes. Similarly, z/SQL provides dynamic load balancing and a virtual connection facility that reduces the possibility of application failures, improves application availability and performance, and supports virtually unlimited concurrent users and transaction rates, according to the company. Finally, it integrates with mainframe RACF, CA-TopSecret, and CA-ACF2 as well as SSL and client-side, certificate-based authentication on distributed platforms. z/SQL fully participates in the choreography of SSL between the application platform and the mainframe.

By accessing mainframe programs and data stored in an array of relational and non-relational formats, z/SQL lets you leave mainframe data in place, on the z where it belongs, and avoids the cost and risk of replication or migration. z/SQL thus becomes another way to turn the z into an enterprise analytics server for both SQL and non-SQL data.

Rocket calls z/SQL the world’s most advanced mainframe access and integration software. That is a pretty bold claim that begs to be proven through data center experience. Test it in your data center for free; as noted above, you can download the free trial here. If you do, please let me know how it works out. (I promise it won’t be publicized here.)

Next Generation zEnterprise Developers

April 19, 2013

Mainframe development keeps getting more complicated. The latest complication can be seen in Doug Balog’s reference to mobile and social business on the zEnterprise, reported by DancingDinosaur here a few weeks ago. That is what the next generation of z developers faces.

Forget talk about shortages of System z talent due to the retirement of mainframe veterans. The bigger complication comes from the need for non-traditional mainframe development skills required to take advantage of mobile and social business, as well as other recent areas of interest such as big data and analytics. These areas entail combining new skills like JSON, Atom, REST, Hadoop, Java, SOA, Linux, and hybrid computing with traditional mainframe development skills like CICS, COBOL, z/VM, SQL, VSAM, and IMS. This combination is next to impossible to find in one individual. Even assembling a coherent team encompassing all those skills presents a serious challenge.

The mainframe industry has been scrambling to address this in various ways. CA Technologies added GUIs to its various tools, and BMC has similarly modernized its management and DB2 tools. IBM, of course, has been steadily bolstering the Rational RDz tool set. RDz is an Eclipse-based IDE for z/OS development. It streamlines and refactors z/OS development processes into structured analysis, editing, and testing operations with modern GUI tools, wizards, and menus that, IBM notes, are perfect for new-to-the-mainframe twenty- and thirty-something developers, the next generation of z developers.

Compuware brings its mainframe workbench, described as a modernized interactive developer environment that introduces a new graphical user interface for managing mainframe application development activities.  The interactive toolset addresses every phase of the application lifecycle.

Most recently, Micro Focus announced the release of its new Enterprise Developer for IBM zEnterprise. The product enables customers to optimize all aspects of mainframe application delivery and promises to drive down costs, increase productivity, and accelerate innovation. Specifically, it enables both on- and off-mainframe development, the latter without consuming mainframe resources, to provide a flexible approach to delivering new business functions. In addition, it allows full and flexible customization of the IDE to support unique development processes and provides deep integration into mainframe configuration management and tooling for a more comprehensive development environment. It also boasts improved application quality with measurable improvement in delivery times. Together, these capabilities promise faster developer adoption.

Said Greg Lotko, Vice President and Business Line Executive, IBM System z, about the new Micro Focus offering: “We are continually working with our technology partners to help our clients maximize the value in their IBM mainframes, and this latest innovation from Micro Focus is a great example of that commitment.”

Behind all of this development innovation is an industry effort to cultivate the next generation of mainframe developers. Using a combination of trusted technology (COBOL and the mainframe) and new innovation (zEnterprise, hybrid computing, expert systems, and Eclipse), these new developers, having been raised on GUIs, mobile, and social, can leverage what they learned growing up to build the multi-platform, multi-device mainframe applications that organizations will need going forward.

As these people come on board as mainframe-enabled developers, organizations will have more confidence in continuing to invest in their mainframe software assets, which currently amount to an estimated 200-300 billion lines of source code and may even be growing as mainframes are added in developing markets, which IBM considers a growth opportunity. It only makes sense to leverage this proven code base rather than try to replace it.

This was confirmed in a CA Technologies survey of mainframe users a year ago, which found that 1) the mainframe is playing an increasingly strategic role in managing the evolving needs of the enterprise; 2) the machine is viewed as an enabler of innovation as big data and cloud computing transform the face of enterprise IT—now add mobile; and 3) companies are seeking candidates with cross-disciplinary skill sets to fill critical mainframe workforce needs in the new enterprise IT thinking.

Similarly, a recent study by the Standish Group showed that 70 percent of CIOs saw their organizations’ mainframes as having a central and strategic role in their overall business success. Using the new tools noted above, organizations can maximize the value of the mainframe asset and cultivate the next generation of mainframe developers.

IBM Big Data Innovations Heading to System z

April 4, 2013

Earlier this week IBM announced new technologies intended to help companies and governments tackle Big Data by making it simpler, faster and more economical to analyze massive amounts of data. Its latest innovations, IBM suggested, would drive reporting and analytics results as much as 25 times faster.

The biggest of IBM’s innovations is BLU Acceleration, targeted initially for DB2. It combines a number of techniques to dramatically improve analytical performance and simplify administration. A second innovation, referred to as the enhanced Big Data Platform, improves the use and performance of the InfoSphere BigInsights and InfoSphere Streams products. Finally, it announced the new IBM PureData System for Hadoop, designed to make it easier and faster to deploy Hadoop in the enterprise.

BLU Acceleration is the most innovative of the announcements, probably a bona fide industry first, although others, notably Oracle, are scrambling to do something similar. BLU Acceleration enables much faster access to information by extending the capabilities of in-memory systems. It loads data into RAM for faster performance instead of leaving it on hard disks, and dynamically moves unused data back to storage. It even works, according to IBM, when data sets exceed the size of the available memory.

Another innovation included in BLU Acceleration is data skipping, which allows the system to skip over irrelevant data that doesn’t need to be analyzed, such as duplicate information. Other innovations include the ability to analyze data in parallel across different processors; the ability to analyze data transparently to the application, without the need to develop a separate layer of data modeling; and actionable compression, where data no longer has to be decompressed to be analyzed because the data order has been preserved. Finally, it leverages parallel vector processing, which enables multi-core and SIMD (Single Instruction Multiple Data) parallelism.
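To make data skipping concrete, here is a minimal sketch of one common approach: keep min/max synopsis metadata for each block of column values and skip any block that cannot possibly satisfy the predicate. This illustrates the general idea only, not IBM’s specific design; all names are invented.

```java
// Illustrative data skipping: per-block min/max metadata lets whole blocks be
// skipped without reading their values.
import java.util.List;

public class DataSkippingSketch {
    // Synopsis for one block of a column: the range of values it contains.
    static final class BlockSynopsis {
        final long min, max;
        final long[] values;
        BlockSynopsis(long min, long max, long[] values) {
            this.min = min; this.max = max; this.values = values;
        }
    }

    // Count rows with value > threshold, reading only blocks that might match.
    static long countGreaterThan(List<BlockSynopsis> blocks, long threshold) {
        long count = 0;
        for (BlockSynopsis block : blocks) {
            if (block.max <= threshold) {
                continue;  // whole block skipped: no value in it can exceed the threshold
            }
            for (long v : block.values) {
                if (v > threshold) count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        List<BlockSynopsis> blocks = List.of(
                new BlockSynopsis(1, 50, new long[]{5, 17, 42}),
                new BlockSynopsis(60, 90, new long[]{60, 75, 90}));
        System.out.println(countGreaterThan(blocks, 55));  // first block never read; prints 3
    }
}
```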

During testing, IBM reported, some queries in a typical analytics workload ran more than 1000x faster when using the combined innovations of BLU Acceleration. It also delivered 10x storage space savings during beta tests. BLU Acceleration will be used first in DB2 10.5 and Informix 12.1 TimeSeries for reporting and analytics, and will be extended to other data workloads and other products in the future.

BLU Acceleration promises to be as easy to use as load-and-go. BLU tables coexist with traditional row tables, using the same schema, storage, and memory. You can query any combination of row and BLU (columnar) tables, and IBM promises easy conversion of conventional tables to BLU tables.
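As a rough idea of what that coexistence looks like in practice, the sketch below creates a column-organized table next to a conventional row table and joins the two in one query. The host, schema, and table names are placeholders; ORGANIZE BY COLUMN is the DB2 10.5 clause for BLU tables, but verify it against your own DB2 level and settings before relying on it.

```java
// Minimal sketch: a column-organized (BLU) table coexisting with a row table
// in DB2 10.5, queried together over JDBC. Names and host are placeholders.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class BluTableSketch {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:db2://dbhost.example.com:50000/SAMPLE";  // placeholder host/db
        try (Connection conn = DriverManager.getConnection(url, "USERID", "PASSWORD");
             Statement stmt = conn.createStatement()) {

            // Column-organized table for analytics.
            stmt.executeUpdate(
                "CREATE TABLE SALES_FACT (SALE_DATE DATE, STORE_ID INT, AMOUNT DECIMAL(12,2)) "
                + "ORGANIZE BY COLUMN");

            // Conventional row-organized table in the same schema.
            stmt.executeUpdate(
                "CREATE TABLE STORE_DIM (STORE_ID INT, REGION VARCHAR(32)) ORGANIZE BY ROW");

            // Queries can mix row and columnar tables transparently.
            try (ResultSet rs = stmt.executeQuery(
                    "SELECT D.REGION, SUM(F.AMOUNT) AS TOTAL FROM SALES_FACT F "
                    + "JOIN STORE_DIM D ON F.STORE_ID = D.STORE_ID GROUP BY D.REGION")) {
                while (rs.next()) {
                    System.out.println(rs.getString("REGION") + " " + rs.getBigDecimal("TOTAL"));
                }
            }
        }
    }
}
```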

DancingDinosaur likes seeing the System z included as an integral part of the BLU Acceleration program.  The z has been a DB2 workhorse and apparently will continue to be as organizations move into the emerging era of big data analytics. On top of its vast processing power and capacity, the z brings its unmatched quality of service.

Specifically, IBM has called out the z for:

  • InfoSphere BigInsights via the zEnterprise zBX for data exploration and online archiving
  • IDAA (in-memory Netezza technology) for reporting and analytics as well as operational analytics
  • DB2 for SQL and NoSQL transactions with enhanced Hadoop integration in DB2 11 (beta)
  • IMS for highest-performance transactions with enhanced Hadoop integration in IMS 13 (beta)

Of course, the zEnterprise is a full player in hybrid computing through the zBX, so zEnterprise shops have a few options to tap when they want to leverage BLU Acceleration and IBM’s other big data innovations.

Finally, IBM announced the new IBM PureData System for Hadoop, which should simplify and streamline the deployment of Hadoop in the enterprise. Hadoop has become the de facto open systems approach to organizing and analyzing vast amounts of unstructured as well as structured data, such as posts to social media sites, digital pictures and videos, online transaction records, and cell phone location data. The problem with Hadoop is that it is not intuitive for conventional relational DBMS staff and IT. Vendors everywhere are scrambling to overlay a familiar SQL approach on Hadoop’s map/reduce method.
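A tiny example shows why SQL-minded staff find map/reduce unintuitive: a query that in SQL is simply SELECT EVENT_TYPE, COUNT(*) FROM EVENTS GROUP BY EVENT_TYPE becomes a separate mapper and reducer in Hadoop’s MapReduce API. The class and field layout below are invented for illustration; only the Hadoop classes themselves are real.

```java
// Generic MapReduce job counting records per event type, roughly equivalent to
// SELECT event_type, COUNT(*) FROM events GROUP BY event_type in SQL.
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class EventCountMR {
    // Mapper: emit (eventType, 1) for each comma-separated input line.
    public static class EventMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        @Override
        protected void map(LongWritable key, Text line, Context ctx)
                throws IOException, InterruptedException {
            String[] fields = line.toString().split(",");
            ctx.write(new Text(fields[0]), ONE);  // fields[0] assumed to hold the event type
        }
    }

    // Reducer: sum the counts for each event type.
    public static class EventReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text eventType, Iterable<IntWritable> counts, Context ctx)
                throws IOException, InterruptedException {
            int total = 0;
            for (IntWritable c : counts) total += c.get();
            ctx.write(eventType, new IntWritable(total));
        }
    }
}
```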

The new IBM PureData System for Hadoop promises to reduce from weeks to minutes the ramp-up time organizations need to adopt enterprise-class Hadoop technology with powerful, easy-to-use analytic tools and visualization for both business analysts and data scientists. It also provides enhanced big data tools for management, monitoring, development, and integration with many more enterprise systems.  The product represents the next step forward in IBM’s overall strategy to deliver a family of systems with built-in expertise that leverages its decades of experience in reducing the cost and complexity associated with information technology.

Updated Software for IBM zEC12

October 11, 2012

Everyone gets excited by a new piece of hardware, but it is the software that enables the new machine to work its magic. This certainly is the case with the zEC12. On Oct. 3 IBM announced upgrades to zEnterprise workhorse software like CICS, Omegamon, Cognos, and zSecure, intended to better tap the capabilities of the zEC12. Even IMS and Sterling are getting a refresh.

Also getting increased attention is Netezza, which has emerged as a key component of IBM’s data analytics approach. Netezza enables IBM to counter Oracle’s Exalytics, another in-memory data analytics appliance. In fact, IBM’s announcement of the newest PureSystems, the PureData System, earlier this week gives IBM another counter punch.

For the zEnterprise, IBM adds a flexible storage capability that provides the performance of the IDAA while removing the cost of storage from the z; Netezza will work with whatever IBM storage the organization prefers. A new incremental update capability propagates data changes as they occur, making it possible to analyze activity almost immediately. This resolves the problem of data currency, in effect providing as close to real-time analytics as most organizations will get or need.

CICS, which had already become a mainframe workhorse through SOA and web services, now adds rich cloud capabilities too. CICS v5.1 brings new web app capabilities built on the WAS Liberty Profile. New PaaS capabilities enable it to host SaaS apps based on CICS applications. It also employs a new lightweight Java web container that combines Java Servlets and JSPs with fast local access to CICS applications. IBM reports the enhanced CICS v5.1 delivers a 25% performance gain.
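As a rough sketch of what that web container hosts, here is a minimal servlet of the kind that could run in the CICS Liberty JVM server. The servlet itself is standard Java EE; the actual call out to an existing CICS program would go through the JCICS API, which is only noted in a comment here. All names and the URL mapping are illustrative.

```java
// Minimal servlet sketch for a Liberty-based CICS web container.
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet("/balance")
public class BalanceServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        String account = req.getParameter("account");

        // In a real CICS Liberty application, this is where the servlet would
        // LINK to an existing CICS/COBOL program via the JCICS API
        // (com.ibm.cics.server.*), passing the account number in a commarea or
        // channel. That call is omitted from this sketch.

        resp.setContentType("text/plain");
        resp.getWriter().println("Balance lookup requested for account " + account);
    }
}
```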

Various online discussion groups are buzzing about the zEC12 software enhancements.  A sampling:

  • IBM provides DB2 10 performance enhancements for z/OS. As importantly for mixed-platform (hybrid) shops, DB2 10 LUW (Linux, UNIX, Windows) will also provide similar performance improvements.
  • There is added support in DB2 10 for Oracle’s PL/SQL for stored procedures and for Oracle application interfaces for Java, Pro*C, Pro*COBOL, and Forms.
  • IBM also announced significant transactional performance improvements when running WebSphere on the zEC12.
  • IBM has started a beta testing program for the new CICS Transaction Server 5.1 release, which has a significant number of enhancements to support web applications and CICS application modernization, mainly through IBM’s Rational HATS.
  • IBM has also improved the performance of the C/C++ V1.13 compiler, the Metal C feature of the IBM z/OS XL C/C++ compiler, and the PL/I V4.3 compiler for the zEC12.

Perhaps less of a buzz generator, IBM Sterling also gets a boost with Sterling B2B Integrator V5.2.4 and Sterling File Gateway V2.2.4 for integration and file-based exchanges. IBM’s zSecure suite V1.13.1 brings new integration with QRadar, expanded integration points with DB2, enhanced RACF database cleanup capabilities, and support for the new enhanced CICS Transaction Server.

IBM also used the announcement to promote the relaunch of the zEnterprise Analytics System 9710 (previously called the IBM Smart Analytics System 9710), a combined data and decision system for analytics. It joins high-performance data warehouse management with System z availability and recoverability using the z114. When the IDAA is added, the result is a hybrid system of MPP and SMP technologies that combines mixed workload capabilities—both transactional and high-speed analytical applications—on a single platform tuned for operational business analytics.

Independent Assessment, publisher of DancingDinosaur, has finally released its newest white paper, zEnterprise BladeCenter Extension (zBX): the Case for Adopting Hybrid Computing. It is the most updated look at the zBX yet, including details on the zEC12. Available for free. Click here.

IMS on System z gets faster and cheaper

July 5, 2010

IMS is IBM’s decades-old hierarchical transaction database technology. It sits behind many of the big transaction processing systems used around the world, from airlines to telcos to banks. IBM has been teaching it new tricks—SOA, Web 2.0—but others have taught it tricks too, including BMC and NEON.

NEON, which is battling IBM over its zPrime technology, apparently never misses an opportunity to get into IBM’s face. Last week NEON announced zPrime for IMS at a cost of just $1!  The catch: organizations must make a two-year commitment at $1 per year and install a new version of NEON zPrime in production by December 31, 2010. Between the $1 price tag and the considerable mainframe software licensing cost savings zPrime enables, this can be an incredible bargain for a gutsy organization looking to reduce costs in a big way.

Why do you need to be gutsy? Because IBM has unleashed a barrage of FUD (fear, uncertainty, doubt) against NEON, zPrime, and those organizations willing to try it. Give NEON credit; this is a brazen strategy to help zPrime gain traction while lawsuits and counter-suits fly. Initial jury selection has already been scheduled, but not until March of 2012. DancingDinosaur covered this here just a couple of weeks ago.

To recap: zPrime enables organizations to run traditional z/OS workloads on zIIP and zAAP specialty processors. The zIIP will be the target processor for IMS applications, no doubt. By doing so, the organization avoids the hefty software licensing charges entailed when running on z/OS.

BMC’s latest IMS announcement, by comparison, seems mundane since no big legal fireworks are involved. Still, for those organizations that rely on IMS for mission-critical work, the announcement of Fast Path Online Restructure/EP should be interesting. Basically, it allows organizations to implement database restructuring changes with minimal downtime, as little as 10 minutes and maybe less, according to Nick Griffin, BMC’s IMS product manager.

The process is straightforward. The tool takes a mirror copy of the IMS database offline and captures copies of ongoing changes to the primary database, which continues working as usual while the admins do whatever restructuring they need to do offline. When they are done, the tool synchronizes the changes it captured while the mirrored copy was offline. Then it flips the mirrored copy, making it the production copy, now restructured and up to date. By comparison, a conventional IMS restructuring without Fast Path can take many hours, during which the IMS database is unavailable.
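Purely as an illustration of the workflow just described (mirror, capture, restructure, apply, swap), here is the sequence in code form. The interfaces and method names are invented for this sketch and do not correspond to BMC’s actual product.

```java
// Illustrative outline of an online database restructure: the primary stays in
// service while a mirror is restructured, caught up, and swapped in.
public class OnlineRestructureSketch {
    interface ImsDatabase {
        ImsDatabase mirrorCopy();                 // take an offline mirror of the database
        ChangeLog startChangeCapture();           // begin capturing updates to the primary
        void restructureOffline();                // apply structural changes to the mirror
        void applyChanges(ChangeLog capturedLog); // replay captured updates onto the mirror
    }

    interface ChangeLog { void stop(); }

    static void restructure(ImsDatabase primary) {
        ChangeLog log = primary.startChangeCapture(); // primary keeps serving work
        ImsDatabase mirror = primary.mirrorCopy();
        mirror.restructureOffline();
        mirror.applyChanges(log);                     // catch the mirror up with live updates
        log.stop();
        swapToProduction(mirror);                     // brief switch-over window
    }

    static void swapToProduction(ImsDatabase restructured) {
        // In the real product this swap is the short outage (minutes) described above.
    }
}
```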

How many organizations can actually use this capability is not clear. BMC reports that 27 banks currently run Fast Path and that 80% of all banks use IMS. Griffin thinks there are maybe 50 likely candidates for the product. “We didn’t get into this for the money,” he notes.

Maybe, or maybe not; IBM is a little vague on how many active IMS shops there are. It could be a few thousand, especially if you count telcos, airlines, and other high-volume IMS transaction processing shops. Over the last few years, BMC has introduced five new IMS products. They’re not doing it just for fun, or even for a buck. IMS may be a niche product, but you can expect it to stick around for a long time to come.

