Posts Tagged ‘SQL’

Syncsort Brings z System Integration Software to Open Source Tools

October 13, 2015

In a series of announcements last month, Syncsort integrated its DMX-h data integration software with Apache Kafka, an open distributed messaging system. This will enable mainframe shops to tap DMX-h’s easy-to-use GUI to subscribe, transform, enrich, and distribute enterprise-wide data for real-time Kafka messaging.

[Spark graphic, courtesy of IBM]

Syncsort also delivered an open source contribution of an IBM z Systems mainframe connector that makes mainframe data available to the Apache Spark open-source analytics platform. Not stopping there, Syncsort is integrating the Intelligent Execution capabilities of its DMX data integration product suite with Apache Spark too. Intelligent Execution allows users to visually design data transformations once and then run them anywhere – across Hadoop, MapReduce, Spark, Linux, Windows, or Unix, on premise or in the cloud.

Said Tendü Yoğurtçu, General Manager of Syncsort’s big data business, in the latest announcement: “We are seeing increased demand for real-time analytics in industries such as healthcare, financial services, retail, and telecommunications.” With these announcements, Syncsort sees itself delivering the next generation streaming ETL and Internet of Things data integration platform.

Of course, the Syncsort offering should be unnecessary for most z System users except those that are long-term Syncsort shops or are enamored of Syncsort’s GUI. IBM already offers Spark native on z/OS and Linux on z, so there is no additional cost. Note, too, that Syncsort itself was just acquired; what happens with its various products remains to be seen.

Still, IBM has been on a 12-year journey to expand mainframe workloads—from Linux to Hadoop and Spark and beyond—and the company has been urging mainframe shops to become fully engaged in big data, open source, and more as fast as possible. The Syncsort announcements come at a propitious time: mainframe data centers can more easily participate in the hottest use cases—real-time data analytics, streaming analytics across diverse data sources, and more—just when the need for such analytics is increasing.

Apache Spark and some of these other technologies should already be a bit familiar to z System data centers; Apache Kafka will be less familiar. DancingDinosaur noted Spark and the others here, when LinuxONE was introduced.

To refresh, Apache Spark consists of a fast engine for large-scale data processing that provides over 80 high-level operators to make it easy to build parallel apps or use it interactively from the Scala, Python, and R shells. It also offers a stack of libraries including SQL and DataFrames, MLlib for machine learning, GraphX, and Spark Streaming. You can combine these libraries seamlessly in the same application. As noted above, Syncsort offers an open source version of the IBM z Systems mainframe connector that makes mainframe data available to the Apache Spark open-source analytics platform.
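To give a flavor of that operator style, the sketch below approximates Spark’s chained transformations in plain Python; a real job would use PySpark (e.g., `rdd.map(...).filter(...).reduce(...)`), and the record data here is purely illustrative.

```python
from functools import reduce

# Toy dataset: "source,cpu_seconds" records, standing in for a distributed RDD
records = ["mainframe,120", "spark,300", "mainframe,80", "kafka,50"]

# Chain of lazy transformations, mimicking Spark's .map() / .filter() / .reduce()
parsed = (r.split(",") for r in records)                          # like .map()
mainframe_only = ((k, int(v)) for k, v in parsed
                  if k == "mainframe")                            # like .filter()
total = reduce(lambda acc, kv: acc + kv[1], mainframe_only, 0)    # like .reduce()

print(total)  # total CPU seconds attributed to mainframe records
```

In real Spark the same chain would be distributed across a cluster and evaluated lazily; the generator expressions above capture only the programming style, not the parallelism.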

Spark already has emerged as one of the most active big data open source projects, initially as a fast memory-optimized processing engine for machine learning and now as the single compute platform for all types of workloads including real-time data processing, interactive queries, social graph analysis, and others. Given Spark’s success, there is a growing need to securely access data from a diverse set of sources, including mainframes, and to transform the data into a format that is easily understandable by Spark.

Apache Kafka, essentially an enterprise service bus, is less widely known. Apache Kafka brings a fast, scalable, durable, and fault-tolerant publish-subscribe messaging system. Kafka is often used in place of traditional JMS- and AMQP-based message brokers because of its higher throughput, reliability, and replication. Syncsort has integrated its data integration software with Apache Kafka’s distributed messaging system so that users can leverage DMX-h’s GUI to subscribe, transform, enrich, and distribute enterprise-wide data for real-time Kafka messaging.
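Kafka’s core model—producers append to a topic’s log while each consumer reads independently from its own offset—can be sketched in plain Python. A real client would use a Kafka library; the `Topic` class below is a toy stand-in for illustration only.

```python
class Topic:
    """Toy append-only log, mimicking a single Kafka topic partition."""
    def __init__(self):
        self.log = []

    def publish(self, message):
        self.log.append(message)          # producers only ever append

    def read(self, offset):
        """Each consumer tracks its own offset, as Kafka consumers do."""
        return self.log[offset:]

topic = Topic()
topic.publish({"txn": 1, "amount": 250})
topic.publish({"txn": 2, "amount": 75})

# Two independent subscribers at different offsets see different slices
print(topic.read(0))   # consumer A, starting at the beginning, sees both
print(topic.read(1))   # consumer B, already at offset 1, sees only the second
```

The key property this illustrates is that the broker never removes messages to satisfy one consumer; durability plus per-consumer offsets is what lets Kafka fan the same stream out to many subscribers.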

According to Matei Zaharia, creator of Apache Spark and co-founder and CTO of Databricks: “Organizations look to Spark to enable a variety of use cases, including streaming data analytics across diverse data sources.” He continues: “Syncsort has recognized the importance of Spark in the big data ecosystem for real-time streaming applications and is focused on making it easy to bring diverse data sets into Spark.” IBM certainly recognizes this too, and the z System is the right platform for making all of this happen.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing here.

Next Generation zEnterprise Developers

April 19, 2013

Mainframe development keeps getting more complicated.  The latest complication can be seen in Doug Balog’s reference to mobile and social business on the zEnterprise, reported by DancingDinosaur here a few weeks ago. That is what the next generation of z developers face.

Forget talk about shortages of System z talent due to the retirement of mainframe veterans. The bigger complication comes from the need for non-traditional mainframe development skills to take advantage of mobile and social business, as well as other recent areas of interest such as big data and analytics. These areas entail combining new skills like JSON, Atom, REST, Hadoop, Java, SOA, Linux, and hybrid computing with traditional mainframe development skills like CICS, COBOL, z/VM, SQL, VSAM, and IMS. This combination is next to impossible to find in one individual. Even assembling a coherent team encompassing all those skills presents a serious challenge.

The mainframe industry has been scrambling to address this in various ways. CA Technologies added GUIs to its various tools, and BMC has similarly modernized its various management and DB2 tools. IBM, of course, has been steadily bolstering the Rational RDz tool set. RDz is an Eclipse-based IDE for z/OS development. It streamlines and refactors z/OS development processes into structured analysis, editing, and testing operations with modern GUI tools, wizards, and menus that, IBM notes, are perfect for new-to-the-mainframe twenty- and thirty-something developers, the next generation of z developers.

Compuware, meanwhile, offers its mainframe workbench, described as a modernized interactive developer environment that introduces a new graphical user interface for managing mainframe application development activities. The interactive toolset addresses every phase of the application lifecycle.

Most recently, Micro Focus announced the release of its new Enterprise Developer for IBM zEnterprise. The product enables customers to optimize all aspects of mainframe application delivery and promises to drive down costs, increase productivity, and accelerate innovation. Specifically, it enables both on- and off-mainframe development, the latter without consuming mainframe resources, to provide a flexible approach to the delivery of new business functions. In addition, it allows full and flexible customization of the IDE to support unique development processes and provides deep integration into mainframe configuration management and tooling for a more comprehensive development environment. It also boasts improved application quality, with measurable improvements in delivery times. Together, these capabilities promise faster developer adoption.

Said Greg Lotko, Vice President and Business Line Executive, IBM System z, about the new Micro Focus offering: “We are continually working with our technology partners to help our clients maximize the value in their IBM mainframes, and this latest innovation from Micro Focus is a great example of that commitment.”

Behind all of this development innovation is an industry effort to cultivate the next generation of mainframe developers. Using a combination of trusted technology (COBOL and mainframe) and new innovation (zEnterprise, hybrid computing, expert systems, and Eclipse), these new developers, having been raised on GUIs, mobile, and social, can leverage what they learned growing up to build the multi-platform, multi-device mainframe applications that organizations will need going forward.

As these people come on board as mainframe-enabled developers, organizations will have more confidence in continuing to invest in their mainframe software assets, which currently amount to an estimated 200–300 billion lines of source code and may even be growing as mainframes are added in developing markets, considered a growth market by IBM. It only makes sense to leverage this proven code base rather than try to replace it.

This was confirmed in a CA Technologies survey of mainframe users a year ago, which found that 1) the mainframe is playing an increasingly strategic role in managing the evolving needs of the enterprise; 2) the machine is viewed as an enabler of innovation as big data and cloud computing transform the face of enterprise IT—now add mobile; and 3) companies are seeking candidates with cross-disciplinary skill sets to fill critical mainframe workforce needs in the new enterprise IT thinking.

Similarly, a recent study by the Standish Group showed that 70 percent of CIOs saw their organizations’ mainframes as having a central and strategic role in their overall business success. Using the new tools noted above, organizations can maximize the value of the mainframe asset and cultivate the next generation of mainframe developers.

BMC Tools for DB2 10 Drive z/OS Savings

April 25, 2011

This month BMC announced that it has upgraded 23 tools for managing DB2 10 databases running on System z9, z10, and zEnterprise/z196. When IBM introduced DB2 10 in 2010, it implied the database would reduce costs and optimize performance. Certainly running it on the z10 or the z196 with the latest zIIP engine would do both, but BMC’s updated tools make it easier to capture and expand those benefits.

IBM estimated a 5–10% improvement in CPU performance out of the box. BMC’s solutions for DB2 10 for z/OS will help IT organizations further maximize cost savings and enhance the performance of their applications and databases—by as much as 20% when deployed using the upgraded tools.

These DB2 improvements, which IBM refers to as operational efficiencies, revolve mainly around reducing CPU usage. This is possible because, as IBM explains it, DB2 10 optimizes processor times and memory access, leveraging the latest processor improvements, increased memory, and z/OS enhancements. Improved scalability and a reduced virtual storage constraint add to the savings. Continued productivity improvements for database and systems administrators can drive even more savings.

The key to the improvements may lie in your ability to fully leverage the zIIP assist processor. The zIIP co-processors take over some of the processing from the main CPU, saving money for those organizations that pay for their systems by MIPS (millions of instructions per second).

When IBM introduced version 10 of DB2 for z/OS in 2010, it promised customers that upgrading to this version would boost performance due to DB2’s use of these co-processors. Even greater gains in performance would be possible if the customer were also willing to do some fine-tuning of the system. This is where the new BMC tools come in; some of the tools specifically optimize the use of the zIIP co-processors.

Some of BMC’s enhanced capabilities help offload the DB2 workload to the zIIP environment thereby reducing general purpose processor utilization. The amount of processing offloaded to zIIP engines varies. With the new release, for example, up to 80 percent of the data collection work for BMC SQL Performance for DB2 can be offloaded.
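Because zIIP time is not charged like general-purpose capacity, the savings from that offload are easy to estimate. A back-of-the-envelope sketch (the 500-MIPS workload figure below is illustrative, not from BMC):

```python
def gp_mips_after_offload(workload_mips, offload_fraction):
    """General-purpose MIPS remaining after moving a fraction of the work to zIIP."""
    return workload_mips * (1 - offload_fraction)

# e.g., 500 MIPS of SQL-performance data collection, 80% zIIP-eligible
remaining = gp_mips_after_offload(500, 0.80)
print(remaining)  # MIPS still running on the chargeable general-purpose engines
```

With 80 percent of the collection work moved to zIIP engines, only a fifth of that workload still counts against the MIPS-based software bill, which is where the cost savings come from.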

The BMC tools also help companies tune application and database performance in other ways that increase efficiency and lower cost. For example, BMC’s SQL Performance Workload Compare Advisor and Workload Index Advisor detect performance issues associated with changes in DB2 environments. Administrators can see the impact of changes before they are implemented, thereby avoiding performance problems.

An early adopter of BMC’s new DB2 10 tools is Florida Hospital, based in Orlando. The hospital, with seven campuses, considers itself the largest hospital in the US, and relies on DB2 running on a z10 to support dozens of clinical and administrative applications. The hospital currently runs a mix of DB2 8 and DB2 10, although it expects to be all DB2 10 within a year.

Of particular value to the hospital is DB2 10 support for temporal data—snapshots of data that let you see how data changes over time. This makes DB2 10 particularly valuable in answering time-oriented questions. Based on that capability, the hospital is deploying a second instance of DB2 10 for its data warehouse, for which it also will take full advantage of BMC’s SQL performance monitoring tools.
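The idea behind temporal tables—every row carries the period for which it was valid, so queries can ask what the data looked like “as of” a past moment—can be sketched with SQLite via Python’s stdlib. DB2 10 maintains these period columns automatically; here they are kept by hand, and the table and values are invented purely for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE bed_count_hist (
    campus TEXT, beds INTEGER,
    valid_from TEXT, valid_to TEXT)""")   # DB2 10 manages such periods itself

rows = [
    ("Orlando", 800, "2010-01-01", "2011-01-01"),   # superseded row is kept
    ("Orlando", 900, "2011-01-01", "9999-12-31"),   # currently valid row
]
conn.executemany("INSERT INTO bed_count_hist VALUES (?, ?, ?, ?)", rows)

# The "AS OF" question: which row was current on 2010-06-15?
(beds,) = conn.execute(
    "SELECT beds FROM bed_count_hist "
    "WHERE valid_from <= ? AND ? < valid_to",
    ("2010-06-15", "2010-06-15")).fetchone()
print(beds)
```

In DB2 10 the same question is a single `FOR SYSTEM_TIME AS OF` query with no hand-maintained period columns, which is what makes the feature attractive for time-oriented warehouse reporting.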

But the crowning achievement of the hospital’s data warehouse, says Robert Goodman, lead DBA at Florida Hospital, will be the deployment of IBM’s Smart Analytics Optimizer (SAO) with DB2 10 and the data warehouse. The SAO runs queries in a massively parallel in-memory infrastructure that bolts onto the z10 to deliver extremely fast performance. Watch for more details coming on this development.

DancingDinosaur doesn’t usually look at tool upgrades, but DB2 10, especially when combined with the updated BMC tools, promises to be a game changer. That certainly appears to be the case at Florida Hospital, even before it adds SAO capabilities.

