Posts Tagged ‘IMS’

IBM Discounts z/OS Cloud Activity

August 12, 2016

The latest iteration of IBM’s z/OS workload pricing aims to lower the cost of running cloud workloads.  In a recent announcement, z Systems Workload Pricing for Cloud (zWPC) for z/OS seeks to minimize the impact of new public cloud workload transaction growth on Sub-Capacity license charges. IBM did the same thing with mobile workloads when they started driving up the 4-hour workload averages on the z. As more z workloads interact with public clouds, this should start to add up, if it hasn’t already.


Bluemix Garages in the Cloud

As IBM puts it: zWPC applies to any organization that has implemented Sub-Capacity pricing via the basic AWLC or AEWLC pricing mechanisms for the usual MLC software suspects. These include z/OS, CICS, DB2, IMS, MQ, and WebSphere Application Server (WAS).  An eligible transaction is one classified as Public Cloud-originated, connecting to a z/OS hosted transactional service and/or data source via a REST or SOAP web service.  Public cloud workloads are defined as transactions processed by named public cloud applications identified as originating from a recognized public cloud offering, including but not limited to Amazon Web Services (AWS), Microsoft Azure, IBM Bluemix, and more.

IBM appears to have simplified how you identify eligible workloads. As the company notes: zWPC does not require you to isolate the public cloud work in separate partitions, but rather offers an enhanced way of reporting. The z/OS Workload Manager (WLM) allows clients to use WLM classification rules to distinguish cloud workloads, effectively easing the data collection requirements for public cloud workload transactions.

So how much will you save? It reportedly reduces eligible hourly values by 60 percent. The discount produces an adjusted Sub-Capacity value for each reporting hour. What that translates into on your monthly IBM software invoice, once all the calculations and fine print are considered, amounts to a guess at this point. But at least you’ll save something. The first eligible billing under this program starts Dec. 1, 2016.
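A back-of-envelope sketch in Python shows how a 60 percent reduction of the cloud-eligible portion could flow through to the peak hourly value the monthly bill keys off. All the numbers here are invented for illustration; the actual adjustment is defined by IBM’s sub-capacity reporting rules, not this formula.

```python
# Illustrative only: hypothetical hourly MSU values, not IBM's actual method.

def adjusted_hourly_msu(total_msu, cloud_msu, discount=0.60):
    """Reduce the cloud-eligible portion of an hour's MSU value by the discount."""
    return total_msu - cloud_msu * discount

# Hypothetical reporting hours: (total hourly MSU value, cloud-eligible MSU)
hours = [(500, 0), (620, 200), (700, 300), (580, 100)]

adjusted = [adjusted_hourly_msu(t, c) for t, c in hours]

# Monthly sub-capacity charges key off the peak hourly value.
peak_before = max(t for t, _ in hours)   # 700
peak_after = max(adjusted)               # 520: the cloud-heavy peak shrinks most
```

Note the shape of the benefit: hours with heavy cloud traffic shrink the most, so the discount chips away precisely at the peaks that cloud transaction growth would otherwise create.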

DancingDinosaur expects IBM to eventually follow with discounted z/OS workload pricing for IoT and blockchain transactions and maybe even cognitive activity. Right now the volume of IoT and blockchain activity is probably too low to impact anybody’s monthly license charges. Expect those technologies to ramp up in coming years, with many industry pundits projecting huge numbers (think billions and trillions) that will eventually impact the mainframe data center and associated software licensing charges.

Overall, Workload License Charges (WLC) constitute a monthly software license pricing metric applicable to IBM System z servers running z/OS or z/TPF in z/Architecture (64-bit) mode.  The driving principle of WLC amounts to pay-for-what-you-use, a laudable concept. In effect it lowers the cost of incremental growth while further reducing software costs by proactively managing associated peak workload utilization.

Generally, DancingDinosaur applauds anything IBM does to lower the cost of mainframe computing.  Playing with workload software pricing in this fashion, however, seems unnecessary. Am convinced there must be simpler ways to lower software costs without the rigmarole of metering and workload distribution tricks. In fact, a small mini-industry has cropped up among companies offering tools to reduce costs, primarily through various ways to redistribute workloads to avoid peaks.

Modifications to WLC include the variable WLC (VWLC) along with the AWLC (Advanced) and EWLC (Entry) variants, which align with most of the z machines introduced over the past couple of years.  The result, according to IBM, is a granular cost structure based on MSU (CPU) capacity that applies to VWLC and associated pricing mechanisms.

From there you can further tweak the cost by deploying Sub-Capacity and Soft Capping techniques.  Defined Capacity (DC), according to IBM, allows the sizing of an LPAR in MSU such that the LPAR will not exceed the designated MSU amount.  Group Capacity Limit (GCL) extends the Defined Capacity principle for a single LPAR to a group of LPARs, allowing MSU resources to be shared accordingly.  BTW, a potential downside of GCL is that one LPAR in the group can consume all available MSUs due to a rogue transaction. Again, an entire mini industry, or maybe not so mini, has emerged to help handle workload and capacity pricing on the z.
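The difference between the two capping styles, including that rogue-LPAR downside, can be sketched in a few lines of Python. This is a conceptual toy with invented numbers; real soft capping is enforced by PR/SM and WLM, not application code.

```python
# Conceptual sketch of Defined Capacity vs. Group Capacity Limit.
# All MSU figures are hypothetical.

def capped_usage(demand_msu, defined_capacity):
    """An LPAR's chargeable usage cannot exceed its own Defined Capacity."""
    return min(demand_msu, defined_capacity)

def group_capped_usage(demands, group_limit):
    """A naive first-come share-out of a Group Capacity Limit: a greedy LPAR
    can starve the rest -- the rogue-LPAR downside noted above."""
    remaining = group_limit
    usage = []
    for d in demands:
        take = min(d, remaining)
        usage.append(take)
        remaining -= take
    return usage

print(capped_usage(450, 300))                   # 300: demand capped at DC
print(group_capped_usage([900, 100, 50], 800))  # [800, 0, 0]: group limit consumed
```

The second call shows the failure mode in miniature: the group limit is shared, so nothing stops one runaway LPAR from eating the whole allocation.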

At some point in most of the conference pricing sessions the eyes of many attendees glaze over.  By Q&A time the few remaining pop up holding a copy of a recent invoice and ask what the hell this or that means and what the f$#%@#$ they can do about it.

Have to admit that DancingDinosaur did not attend the most recent SHARE conference, where pricing workshops can get quite energetic, so cannot attest to the latest fallout. Still, the general trend with mobile and now with cloud pricing discounts should be lower costs.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

Latest New Mainframe puts Apache Spark Native on the z System

April 1, 2016

IBM keeps rolling out new offerings for the z System.  The latest is the z/OS Platform for Apache Spark, announced earlier this week. The new offering is optimized for marketers, data analysts, and developers eager to apply advanced analytics to the z’s rich, resident data sets for real-time insights.


z/OS Platform for Apache Spark

Data is everything in the new economy; the more and better data you can grab, and the faster you can analyze it, the more likely you are to win. The z, already the center of a large, expansive data environment, is well positioned to drive winning data-fueled strategies.

IBM z/OS Platform for Apache Spark enables Spark, an open-source analytics framework, to run natively on z/OS. According to IBM, the new system is available now. Its key advantage:  to enable data scientists to analyze data in place on the system of origin. This eliminates the need to perform extract, transform and load (ETL), a cumbersome, slow, and costly process. Instead, with Spark the z breaks the bind between the analytics library and underlying file system.

Apache Spark provides an open-source cluster computing framework with in-memory processing to speed analytic applications up to 100 times faster compared to other technologies on the market today, according to IBM. Apache Spark can help reduce data interaction complexity, increase processing speed, and enhance mission-critical applications by enabling analytics that deliver deep intelligence. Considered highly versatile in many environments, Apache Spark is most regarded for its ease of use in creating algorithms that extract insight from complex data.
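The ease-of-use claim comes from Spark’s style of chaining filter, map, and aggregate steps over data held in memory rather than landing intermediate copies on disk. The sketch below mimics that style in plain Python so it stands alone; real Spark code would express the same logic through the PySpark DataFrame API, and the records and field names here are invented.

```python
# Plain-Python mimicry of a Spark-style in-memory pipeline.
# In PySpark this would read roughly: df.filter(...).groupBy(...).sum(...).

transactions = [
    {"account": "A1", "amount": 120.0, "channel": "cloud"},
    {"account": "A2", "amount": 75.5,  "channel": "branch"},
    {"account": "A1", "amount": 300.0, "channel": "cloud"},
]

# Filter step: keep only cloud-originated transactions.
cloud = [t for t in transactions if t["channel"] == "cloud"]

# Aggregate step: total amount per account, all in memory.
totals = {}
for t in cloud:
    totals[t["account"]] = totals.get(t["account"], 0.0) + t["amount"]

print(totals)   # {'A1': 420.0}
```

The point of running this natively on z/OS is that `transactions` can be the system-of-record data itself, not an extracted copy.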

IBM’s goal lies not in eliminating the overhead of ETL but in fueling interest in cognitive computing. With cognitive computing, data becomes a fresh natural resource—an almost infinite and forever renewable asset—that can be used by computer systems to understand, reason and learn. To succeed in this cognitive era businesses must be able to develop and capitalize on insights before the insights are no longer relevant. That’s where the z comes in.

With this offering, according to IBM, accelerators from z Systems business partners can help organizations more easily take advantage of z Systems data and capabilities to understand market changes alongside individual client needs. With this kind of insight managers should be able to make the necessary business adjustments in real-time, which will speed time to value and advance cognitive business transformations among IBM customers.

At this point IBM has identified three business partners:

  1. Rocket Software, long a mainframe ISV, is bringing its new Rocket Launchpad solution, which allows z shops to try the platform using data on z/OS.
  2. DataFactZ is a new partner working with IBM to develop Spark analytics based on Spark SQL and MLlib for data and transactions processed on the mainframe.
  3. Zementis brings its in-transaction predictive analytics offering for z/OS with a standards-based execution engine for Apache Spark. The product promises to allow users to deploy and execute advanced predictive models that can help them anticipate end users’ needs, compute risk, or detect fraud in real-time at the point of greatest impact, while processing a transaction.

This last point—detecting problems in real time at the point of greatest impact—is really the whole reason for Spark on z/OS.  You have to leverage your insight before the prospect makes the buying decision or the criminal gets away with a fraudulent transaction. After that your chances are slim to none of getting a prospect to reverse the decision or to recover stolen goods. Having the data and logic processing online and in-memory on the z gives you the best chance of getting the right answer fast while you can still do something.

As IBM also notes, the z/OS Platform for Apache Spark includes Spark open source capabilities consisting of the Apache Spark core, Spark SQL, Spark Streaming, Machine Learning Library (MLlib), and GraphX, combined with the industry’s only mainframe-resident Spark data abstraction solution. The new platform helps enterprises derive insights more efficiently and securely. In the process, the platform can streamline development to speed time to insight and decision, and simplify data access through familiar data access formats and Apache Spark APIs.

Best of all, however, are the in-memory capabilities as noted above. Apache Spark uses an in-memory approach for processing data to deliver results quickly. The platform includes data abstraction and integration services that enable z/OS analytics applications to leverage standard Spark APIs.  It also allows analysts to collect unstructured data and use their preferred formats and tools to sift through data.

At the same time developers and analysts can take advantage of the familiar tools and programming languages, including Scala, Python, R, and SQL to reduce time to value for actionable insights. Of course all the familiar z/OS data formats are available too: IMS, VSAM, DB2 z/OS, PDSE or SMF along with whatever you get through the Apache Spark APIs.

This year we already have seen the z13s and now the z/OS Platform for Apache Spark. Add to that the z System LinuxOne last year. z-Based data centers suddenly have a handful of radically different new mainframes to consider.  Can Watson, a POWER-based system, be far behind? Your guess is as good as anyone’s.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

IBM Edge Rocks 6000 Strong for Digital Transformation

May 15, 2015

Unless you’ve been doing the Rip Van Winkle thing, you have to have noticed that a profound digital transformation is underway, fueled, in this case, from the bottom. “This is being driven by people embracing technology,” noted Tom Rosamilia, Senior Vice President, IBM Systems. And it will only get greater with quantum computing, a peek at which was provided at Edge2015 by Arvind Krishna, senior vice president and director, IBM Research.


(Quantum computing, courtesy of IBM, click to enlarge)

Need proof? Just look around. New cars are now hot spots, and it’s not just luxury cars. Retailers are adding GPS inside their stores and are using it to follow and understand the movement of shoppers in real time. Eighty-two percent of millennials do their banking from their mobile phones.  As Rosamilia noted, it amounts to “an unprecedented digital disruption” in the way people go about their lives. Dealing with this digital transformation and the challenges and opportunities it presents was what IBM Edge 2015 was about. With luck you can check out much from Edge2015 at the media center here.

The first day began with a flurry of product announcements starting with a combined package of new servers and storage software and solutions aimed to accelerate the development of hybrid cloud computing.  Hybrid cloud computing was big at Edge2015. To further stimulate hybrid computing IBM introduced new flexible software licensing of its middleware to help companies speed their adoption of hybrid cloud environments.

Joining in the announcement was Rocket Software, which sponsored the entertainment, including the outstanding Grace Potter concert. As for Rocket’s actual business, the company announced Rocket Data Access Service on Bluemix for z Systems, intended to provide companies a simplified connection to data on the IBM z Systems mainframe for development of mobile applications through Bluemix. Starting in June, companies can access a free trial of the service, which works with a range of database storage systems, including VSAM, Adabas, IMS, CICS, and DB2, and enables access through common mobile application interfaces, including MongoDB, JDBC, and the REST protocol.  Now z shops have no excuse not to connect their systems with mobile and social business.

Hardware also grabbed the spotlight. IBM introduced the IBM Power System E850, a four-socket server with flexible capacity and up to 70% guaranteed utilization. The E850 targets cloud service providers and medium or large enterprises looking to securely and efficiently deploy multi-tenancy workloads while speeding access to data through larger in-memory databases with up to 4TB of installed memory.

The IBM Power System E880, designed to scale to 192 cores, is suitable for IBM DB2 with BLU Acceleration, enhancing the efficiency of cloud deployments; and the PurePOWER System, a converged infrastructure for cloud. It is intended to help deliver insights via the cloud, and is managed with OpenStack.

The company also will be shipping IBM Spectrum Control Storage Insights, a new software-defined storage offering that provides data management as a hybrid cloud service to optimize on-premises storage infrastructures. Storage Insights is designed to simplify storage management by improving storage visibility while applying analytics to ease capacity planning, enhance performance monitoring, and improve storage utilization. It does this by reclaiming under-utilized storage. Thank you analytics.

Finally for storage, the company announced IBM XIV GEN 3, designed for cloud with real-time compression that enables scaling as demand for data storage capacity expands. You can get more details on all the announcements at Edge 2015 here.

Already announced is IBM Edge 2016, again at the Venetian in Las Vegas in October 2016. That gives IBM 18 months to pack it with even more advances. Doubt there will be a new z by then; a new business class version of the z13 is more likely.

DancingDinosaur will take up specific topics from Edge2015 in the coming week. These will include social business on z, real-time analytics on z, and Jon Toigo sorting through the hype on SDS.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing on Technologywriter.com and here.

BMC and Compuware to Drive Down Mainframe Costs

February 20, 2015

This year jumped off to an active start for the mainframe community. The introduction of the z13 in January got things going. Now Compuware and BMC are partnering to integrate some of their mainframe tools to deliver cost-aware workload and performance management. The combined tools promise to reduce mainframe OPEX even as z systems shops try to leverage their high-value mainframe applications, data, and processing capacity to meet rapidly evolving business challenges.


Not that things had been quiet before, especially if you consider IBM scrambling to reverse successive quarters of poor financial performance with a slew of initiatives. During that time Compuware went private last fall; about a year earlier BMC went private. Now the two companies are collaborating to deliver tools that will help mainframe shops reduce their software costs. DancingDinosaur has covered previous cost-saving and efficiency initiatives from each of these companies here and here.

Driving this collaboration is the incessant growth of new mainframe workloads, which will likely accelerate with the new z13. Such workload growth is continually driving up the Monthly License Charge (MLC) for IBM mainframe software, which for sub-capacity environments is generally driven by the highest rolling four-hour average (R4HA) of mainframe utilization across all applications on each LPAR, as measured in MSUs. IBM is helping with discounts for mobile workloads and its new ICAP and country multiplex pricing, which DancingDinosaur covered here, but more is needed.
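For readers who have never worked through an R4HA by hand, here is a minimal Python sketch of the mechanic: each hour’s value is the average of the trailing four hours of utilization, and the bill keys off the peak of those averages, not the instantaneous spike. The hourly granularity and MSU figures are simplifications for illustration (real sub-capacity reporting works from finer-grained interval data).

```python
# Minimal sketch of a rolling four-hour average (R4HA) over hourly MSU samples.
# Numbers are invented.

def r4ha(msu_samples, window=4):
    averages = []
    for i in range(len(msu_samples)):
        lo = max(0, i - window + 1)
        win = msu_samples[lo:i + 1]
        averages.append(sum(win) / len(win))
    return averages

samples = [100, 300, 500, 700, 200, 100]   # hourly MSU utilization
averages = r4ha(samples)

# The instantaneous peak is 700 MSU, but the chargeable peak R4HA is lower
# because the average smooths the spike.
print(max(samples), max(averages))   # 700 425.0
```

That smoothing is exactly why a short, sharp spike is cheaper than a sustained plateau, and why the tuning tools discussed below aim at sustained consumption.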

The trick requires continually managing those workloads. In effect, IT can most effectively reduce its sizable IBM z Systems software costs by both 1) tuning each application to minimize its individual consumption of mainframe resources and 2) orchestrating application workloads to minimize the LPAR utilization peaks they generate collectively at any given time.  Good idea but not easy to implement in practice. You need automated tools.
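The two levers can be seen in a toy example: tuning reduces each workload’s own consumption, while orchestration moves movable work (typically batch) away from the online peak so the collective hourly maximum drops. The workloads and MSU figures below are invented.

```python
# Toy illustration of orchestrating workloads to flatten the collective peak.
# Hourly MSU profiles; all numbers are hypothetical.

online = [400, 500, 600, 500, 300, 200]   # fixed by business hours
batch = [300, 300, 0, 0, 0, 0]            # movable batch work, run early
batch_shifted = [0, 0, 0, 0, 300, 300]    # same batch work, deferred

def combined_peak(a, b):
    """Highest combined hourly MSU value across the two workloads."""
    return max(x + y for x, y in zip(a, b))

print(combined_peak(online, batch))          # 800: batch lands on the online ramp
print(combined_peak(online, batch_shifted))  # 600: same work, lower peak
```

Nothing was tuned and no work was eliminated; simply rescheduling the batch window cut the peak that the sub-capacity charge keys off. That, in miniature, is what the automated tools sell.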

According to Frank DeSalvo, former research director at Gartner: “The partnership between BMC and Compuware launches an integrated opportunity for mainframe customers to manage workload inefficiencies in a manner that has not been achievable to-date.”   The partnership, he added, “helps organizations leverage their IT budgets by enabling them to continuously optimize their mainframe workloads, resulting in cost effective decisions for both current and future spending,” as DeSalvo was quoted in the initial announcement.

Specifically, the Compuware-BMC collaboration brings together three products: BMC Cost Analyzer, BMC MainView, and Compuware Strobe.

  • BMC Cost Analyzer for zEnterprise brings a financially intelligent workload management tool that enables z data centers to identify MLC cost drivers and take appropriate measures to reduce those costs.
  • BMC MainView provides real-time identification of application performance issues, enabling customers to quickly eliminate wasteful MSU consumption.
  • Compuware Strobe delivers deep, granular and highly actionable insight into the behavior of application code in the z systems environment.

The partners integrated the products so they actually work together. One integration, for instance, allows BMC Cost Analyzer to call Compuware Strobe for a detailed analysis of the specific application component driving peak MLC periods, enabling customers to proactively tune applications that have the greatest impact on their monthly software licensing costs. A second integration with BMC MainView allows customers to either automatically or manually invoke Strobe performance analysis, empowering mainframe staffs to perform cost-saving tuning tasks more quickly, efficiently, and consistently.

(Courtesy of Compuware, click to enlarge)

BTW, at the same time Compuware introduced the latest version of Strobe, v 5.2. It promises deep insight into how application code, including DB2, COBOL 5.1, IMS, and MQ processes, consumes resources in z environments. By providing these insights, and by making it easy for multi-discipline mainframe ops teams to collaborate around them, Strobe 5.2 enables IT to further drive down mainframe costs. At the same time it improves application responsiveness.

Besides the software licensing savings that can result, the organization also benefits from performance gains for these applications. These too can be valuable since they positively impact end-user productivity and, more importantly, customer experience.

DancingDinosaur feels that any technology you can use to automate and streamline your systems operations will benefit you because people are always more expensive and less efficient than technology.

Alan Radding is DancingDinosaur. Follow this blog on Twitter, @mainframeblog. View my other IT writing at Technologywriter.com and here.

Rocket z/SQL Accesses Non-SQL Mainframe Data

August 2, 2013

Rocket Software’s z/SQL enables access to non-SQL mainframe data using standard SQL commands and queries.  The company is offering a z/SQL free trial; you can install it at no charge and get full access for as many users as you want. The only caveat: the free version is limited to three files. You can download the free trial here.

z/SQL will run SQL queries against any data source that speaks ANSI 92. “The tool won’t even know it is running relational data,” explained Gregg Willhoit, managing director of the Rocket Data Lab. That means you can run it against VSAM, IMS, Adabas, DB2 for z/OS, and physical sequential files.  In addition, you can use z/SQL to make real-time SQL queries directly to mainframe programs, including CICS TS, IMS TM, CA IDMS, and Natural.

By diverting up to 99% of processing-intensive data mapping and transformation from the mainframe’s CPU to the zIIP, z/SQL lowers MIPS capacity usage and its associated costs, effectively reducing TCO. And, it opens up the zIIP to extend programs and systems of record data to the full range of environments noted above.

z/SQL’s ability to automatically detect the presence of the z’s zIIP assist processor allows it to apply its patent-pending technology to further boost the zIIP’s performance advantages.  The key attributes of the zIIP processor (low cost, speeds often greater than those of the general mainframe engines under a sub-capacity mainframe license, and typically low utilization) are fully exploited by z/SQL, lowering a mainframe shop’s TCO while providing for an accelerated ROI.
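Why zIIP offload translates into lower charges is simple arithmetic: work running on a zIIP does not count toward the general-purpose MSUs that sub-capacity charges are based on. The 99% figure is Rocket’s claim for data mapping and transformation; the workload split below is invented to show the shape of the savings.

```python
# Back-of-envelope sketch of zIIP offload. Only the 99% offload figure comes
# from Rocket's claim; the workload numbers are hypothetical.

gp_msu_before = 100.0    # general-purpose MSUs the workload consumes today
mapping_share = 0.40     # hypothetical fraction spent on data mapping/transform
ziip_offload = 0.99      # portion of that mapping divertible to the zIIP

gp_msu_after = gp_msu_before * (1 - mapping_share * ziip_offload)
print(round(gp_msu_after, 1))   # 60.4: chargeable MSUs drop by nearly 40%
```

The savings scale with how mapping-heavy the workload is, which is why data-intensive access patterns like these are the ones worth routing through a zIIP-exploiting engine.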

Rocket z/SQL is built on Metal C, a z/OS compiler option that provides C-language extensions allowing you to specify assembly statements that call system services directly. The DRDA support and the ANSI 92 SQL engine have been developed using what amounts to a new language that allows even more of z/SQL’s work to continue to run on the zIIP.  One of the key features in Metal C is allowing z/SQL to optimize its code paths for the hardware that it’s running on.  So, no matter if you’re running on older z9 or z10 or the latest zEC12 and zBC12 processors, z/SQL chooses the code path most optimized for your hardware.

With z/SQL you can expand your System z analytics effort and push a wider range of mainframe data analytics to near real time.  Plus, the usual ETL and all of its associated disadvantages are no longer a factor.  As such z/SQL promises to be a disruptive technology that eliminates the need for ETL while pushing the analytics to where the data resides as opposed to ETL, which must bring the data to the analytics.  The latter, noted Willhoit, is fraught with performance and data currency issues.

It’s not that you couldn’t access non-SQL data before z/SQL, but it was more cumbersome and slower.  You would have to replicate data, often via FTP to something like Excel. Rocket, instead, relies on assembler to generate an optimized SQL engine for the z9, z10, z196, zEC12, and now the zBC12.  With z/SQL the process is remarkably simple: no replication, no rewriting of code, just recompile. It generates the optimized assembler (so no assembler work required on your part).

Query performance, reportedly, is quite good.  This is due in part to z/SQL being written in assembler, but also to its taking advantage of the z’s multi-threading. It reads the non-relational data source with one thread and uses a second thread to process the network I/O.  This parallel I/O architecture promises game-changing performance, especially for big data, through significant parallelism of network and database I/O.  It also takes full advantage of the System z hardware by using buffer pools and large frames, essentially eliminating dynamic address translation.
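The two-thread pattern described above is a classic producer/consumer arrangement, sketched here in Python with stand-ins for the real parts: the list of records stands in for the VSAM/IMS reader and the `sent` list stands in for the network send. The point is that reading and sending overlap instead of alternating.

```python
# Sketch of the reader-thread / network-thread pattern: one thread reads the
# data source, the other drains a queue and does the network I/O.
import queue
import threading

records = [f"record-{i}" for i in range(5)]   # stand-in for the data source
q = queue.Queue()
sent = []

def reader():
    for rec in records:    # thread 1: read the (non-relational) data source
        q.put(rec)
    q.put(None)            # sentinel: no more data

def sender():
    while True:            # thread 2: process the network I/O
        rec = q.get()
        if rec is None:
            break
        sent.append(rec)   # stand-in for a network send

t1 = threading.Thread(target=reader)
t2 = threading.Thread(target=sender)
t1.start(); t2.start()
t1.join(); t2.join()
print(len(sent))   # 5: everything read was sent, with the two stages overlapped
```

With real I/O on both ends, the reader keeps the queue full while the sender is blocked on the network, which is where the parallelism pays off.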

z/SQL brings its own diagnostic capabilities, providing a real-time view into transaction threads with comprehensive trace/browse capabilities for diagnostics.  It enables a single, integrated approach to identifying, diagnosing and correcting data connectivity issues between distributed ODBC, ADO.NET, and JDBC client drivers and mainframes. Similarly z/SQL provides dynamic load balancing and a virtual connection facility that reduces the possibility of application failures, improves application availability and performance, as well as supports virtually unlimited concurrent users and transaction rates, according to the company. Finally, it integrates with mainframe RACF, CA-TopSecret, and CA-ACF2 as well as SSL and client-side, certificate-based authentication on distributed platforms. z/SQL fully participates in the choreography of SSL between the application platform and the mainframe.

By accessing mainframe programs and data stored in an array of relational and non-relational formats z/SQL lets you leave mainframe data in place, on the z where it belongs, and avoids the cost and risk of replication or migration. z/SQL becomes another way to turn the z into an enterprise analytics server for both SQL and non-SQL data.

Rocket calls z/SQL the world’s most advanced mainframe access and integration software. A pretty bold statement that begs to be proven through data center experience. Test it in your data center for free.  As noted above, you can download the free trial here. If you do, please let me know how it works out. (Promise it won’t be publicized here.)

Next Generation zEnterprise Developers

April 19, 2013

Mainframe development keeps getting more complicated.  The latest complication can be seen in Doug Balog’s reference to mobile and social business on the zEnterprise, reported by DancingDinosaur here a few weeks ago. That is what the next generation of z developers face.

Forget talk about shortages of System z talent due to the retirement of mainframe veterans.  The bigger complication comes from the need for non-traditional mainframe development skills required to take advantage of mobile and social business as well as other recent areas of interest such as big data and analytics. These areas entail combining new skills like JSON, Atom, REST, Hadoop, Java, SOA, Linux, and hybrid computing with traditional mainframe development skills like CICS, COBOL, z/VM, SQL, VSAM, and IMS. This combination is next to impossible to find in one individual. Even assembling a coherent team encompassing all those skills presents a serious challenge.

The mainframe industry has been scrambling to address this in various ways.  CA Technologies added GUIs to its various tools and BMC has similarly modernized its various management and DB2 tools. IBM, of course, has been steadily bolstering the Rational RDz tool set. RDz is an Eclipse-based IDE for z/OS.  RDz streamlines and refactors z/OS development processes into structured analysis, editing, and testing operations with modern GUI tools, wizards, and menus that, IBM notes, are perfect for new-to-the-mainframe twenty- and thirty-something developers, the next generation of z developers.

Compuware brings its mainframe workbench, described as a modernized interactive developer environment that introduces a new graphical user interface for managing mainframe application development activities.  The interactive toolset addresses every phase of the application lifecycle.

Most recently, Micro Focus announced the release of its new Enterprise Developer for IBM zEnterprise.  The product enables customers to optimize all aspects of mainframe application delivery and promises to drive down costs, increase productivity, and accelerate innovation. Specifically, it enables both on- and off-mainframe development, the latter without consuming mainframe resources, to provide a flexible approach to the delivery of new business functions. In addition, it allows full and flexible customization of the IDE to support unique development processes and provides deep integration into mainframe configuration management and tooling for a more comprehensive development environment. It also boasts improved application quality with measurable improvement in delivery times.  These capabilities together promise faster developer adoption.

Said Greg Lotko, Vice President and Business Line Executive, IBM System z, about the new Micro Focus offering: “We are continually working with our technology partners to help our clients maximize the value in their IBM mainframes, and this latest innovation from Micro Focus is a great example of that commitment.”

Behind all of this development innovation is an industry effort to cultivate the next generation of mainframe developers. Using a combination of trusted technology (COBOL and mainframe) and new innovation (zEnterprise, hybrid computing, expert systems, and Eclipse), these new developers, having been raised on GUIs, mobile, and social, can leverage what they learned growing up to build the multi-platform, multi-device mainframe applications that organizations will need going forward.

As these people come on board as mainframe-enabled developers, organizations will have more confidence in continuing to invest in their mainframe software assets, which currently amount to an estimated 200-300 billion lines of source code and may even be growing as mainframes are added in developing markets, considered a growth market by IBM.  It only makes sense to leverage this proven code base rather than try to replace it.

This was confirmed in a CA Technologies survey of mainframe users a year ago, which found that 1) the mainframe is playing an increasingly strategic role in managing the evolving needs of the enterprise; 2) the machine is viewed as an enabler of innovation as big data and cloud computing transform the face of enterprise IT—now add mobile; and 3) companies are seeking candidates with cross-disciplinary skill sets to fill critical mainframe workforce needs in the new enterprise IT thinking.

Similarly, a recent study by the Standish Group showed that 70 percent of CIOs saw their organizations’ mainframes as having a central and strategic role in their overall business success.  Using the new tools noted above organizations can maximize the value of the mainframe asset and cultivate the next generation mainframe developers.

IBM Big Data Innovations Heading to System z

April 4, 2013

Earlier this week IBM announced new technologies intended to help companies and governments tackle Big Data by making it simpler, faster and more economical to analyze massive amounts of data. Its latest innovations, IBM suggested, would drive reporting and analytics results as much as 25 times faster.

The biggest of IBM’s innovations is BLU Acceleration, targeted initially for DB2. It combines a number of techniques to dramatically improve analytical performance and simplify administration. A second innovation, referred to as the enhanced Big Data Platform, improves the use and performance of the InfoSphere BigInsights and InfoSphere Streams products. Finally, it announced the new IBM PureData System for Hadoop, designed to make it easier and faster to deploy Hadoop in the enterprise.

BLU Acceleration is the most innovative of the announcements, probably a bona fide industry first, although others, notably Oracle, are scrambling to do something similar. BLU Acceleration enables much faster access to information by extending the capabilities of in-memory systems. It allows data to be loaded into RAM instead of residing on hard disks for faster performance, and dynamically moves unused data to storage.  It even works, according to IBM, when data sets exceed the size of the memory.

Another innovation included in BLU Acceleration is data skipping, which allows the system to skip over irrelevant data that doesn’t need to be analyzed, such as duplicate information. Other innovations include the ability to analyze data in parallel across different processors; the ability to analyze data transparently to the application, without the need to develop a separate layer of data modeling; and actionable compression, where data no longer has to be decompressed before analysis because the data order is preserved. Finally, it leverages parallel vector processing, which exploits multi-core and SIMD (Single Instruction Multiple Data) parallelism.
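To make the data-skipping idea concrete, here is a minimal sketch of the general technique: keep a small min/max synopsis per block of column values, and skip any block the synopsis rules out for a given predicate. This is an illustration of the concept only, not IBM’s implementation; the block size and data structures are assumptions.

```python
# Illustrative sketch of data skipping with per-block min/max synopses.
# Not IBM's BLU implementation; block size and structures are assumed.

BLOCK_SIZE = 4

def build_synopsis(column, block_size=BLOCK_SIZE):
    """Record (start_index, min, max) for each block of column values."""
    synopsis = []
    for i in range(0, len(column), block_size):
        block = column[i:i + block_size]
        synopsis.append((i, min(block), max(block)))
    return synopsis

def scan_with_skipping(column, synopsis, lo, hi, block_size=BLOCK_SIZE):
    """Return values in [lo, hi], skipping blocks the synopsis rules out."""
    hits = []
    for start, bmin, bmax in synopsis:
        if bmax < lo or bmin > hi:
            continue  # whole block is irrelevant -- never read it
        for v in column[start:start + block_size]:
            if lo <= v <= hi:
                hits.append(v)
    return hits

sales = [10, 12, 11, 9, 250, 260, 255, 270, 15, 14, 13, 12]
syn = build_synopsis(sales)
print(scan_with_skipping(sales, syn, 200, 300))  # only the middle block is scanned
```

For the range query above, the synopsis eliminates two of the three blocks without touching their data, which is the source of the speedups data skipping delivers on large scans.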

During testing, IBM reported, some queries in a typical analytics workload ran more than 1000x faster when using the combined innovations of BLU Acceleration. It also produced 10x storage space savings during beta tests. BLU Acceleration will be used first in DB2 10.5 and Informix 12.1 TimeSeries for reporting and analytics, and will be extended to other data workloads and other products in the future.

BLU Acceleration promises to be as easy to use as load-and-go.  BLU tables coexist with traditional row tables, using the same schema, storage, and memory. You can query any combination of row and BLU (columnar) tables, and IBM assures easy conversion of conventional tables to BLU tables.

DancingDinosaur likes seeing the System z included as an integral part of the BLU Acceleration program.  The z has been a DB2 workhorse and apparently will continue to be as organizations move into the emerging era of big data analytics. On top of its vast processing power and capacity, the z brings its unmatched quality of service.

Specifically, IBM has called out the z for:

  • InfoSphere BigInsights via the zEnterprise zBX for data exploration and online archiving
  • IDAA (in-memory Netezza technology) for reporting and analytics as well as operational analytics
  • DB2 for SQL and NoSQL transactions with enhanced Hadoop integration in DB2 11 (beta)
  • IMS for highest performance transactions with enhanced Hadoop integration in IMS 13 (beta)

Of course, the zEnterprise is a full player in hybrid computing through the zBX, so zEnterprise shops have a few options to tap when they want to leverage BLU Acceleration and IBM’s other big data innovations.

Finally, IBM announced the new IBM PureData System for Hadoop, which should simplify and streamline the deployment of Hadoop in the enterprise. Hadoop has become the de facto open systems approach to organizing and analyzing vast amounts of unstructured as well as structured data, such as posts to social media sites, digital pictures and videos, online transaction records, and cell phone location data. The problem with Hadoop is that it is not intuitive for conventional relational DBMS staff and IT. Vendors everywhere are scrambling to overlay a familiar SQL approach on Hadoop’s map/reduce method.
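To show why map/reduce feels foreign to relational DBMS staff, here is a toy word count in the map/reduce style, the computation model that the SQL-on-Hadoop overlays mentioned above try to hide. The function names are illustrative, not part of any Hadoop API.

```python
# A toy word count in the map/reduce style of computation.
from collections import defaultdict

def map_phase(records):
    """Mapper: emit a (word, 1) pair for every word in every record."""
    for record in records:
        for word in record.split():
            yield (word, 1)

def reduce_phase(pairs):
    """Reducer: sum the counts emitted for each word."""
    totals = defaultdict(int)
    for word, count in pairs:
        totals[word] += count
    return dict(totals)

posts = ["big data big analytics", "data on z"]
print(reduce_phase(map_phase(posts)))
```

A SQL practitioner would express the same result as a one-line `GROUP BY`; the gap between those two framings is exactly what vendors are scrambling to bridge.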

The new IBM PureData System for Hadoop promises to reduce from weeks to minutes the ramp-up time organizations need to adopt enterprise-class Hadoop technology with powerful, easy-to-use analytic tools and visualization for both business analysts and data scientists. It also provides enhanced big data tools for management, monitoring, development, and integration with many more enterprise systems.  The product represents the next step forward in IBM’s overall strategy to deliver a family of systems with built-in expertise that leverages its decades of experience in reducing the cost and complexity associated with information technology.

Updated Software for IBM zEC12

October 11, 2012

Everyone gets excited by a new piece of hardware, but it is the software that enables the new machine to work its magic. This certainly is the case with the zEC12. On Oct. 3 IBM announced upgrades to zEnterprise workhorse software like CICS, Omegamon, Cognos, and zSecure intended to better tap the capabilities of the zEC12. Even IMS and Sterling are getting a refresh.

Also getting increased attention is Netezza, which has emerged as a key component of IBM’s data analytics approach. Netezza enables IBM to counter Oracle’s Exalytics, another in-memory data analytics appliance. In fact, IBM’s announcement of the newest PureSystems, the PureData System, earlier this week gives IBM another counter punch.

For the zEnterprise, IBM adds a flexible storage capability that provides the performance of the IDAA while removing the cost of storage from the z. Netezza will work with whatever IBM storage the organization prefers. A new incremental update capability propagates data changes as they occur, making it possible to analyze activity almost immediately. This resolves the problem of data currency, in effect providing as close to real-time analytics as most organizations will get or need.

CICS, which already had become a mainframe workhorse through SOA and web services, now adds rich cloud capabilities too. CICS v5.1 brings new web app capabilities built on the WAS Liberty Profile. New PaaS capabilities enable it to host SaaS apps based on CICS applications. It also employs a new lightweight Java web container that combines Java Servlets and JSPs with fast local access to CICS applications.  IBM reports the enhanced CICS v5.1 delivers a 25% performance gain.

Various online discussion groups are buzzing about the zEC12 software enhancements.  A sampling:

  • IBM provides DB2 10 performance enhancements for z/OS. Just as important for mixed platform (hybrid) shops, DB2 10 LUW (Linux, UNIX, Windows) will provide similar performance improvements.
  • There is added support for Oracle’s PL/SQL for DB2 10 for stored procedures and Oracle application interfaces for Java, Pro*C, Pro*COBOL, and Forms.
  • IBM also announced significant transactional performance improvements when running WebSphere on the zEC12.
  • IBM has started a Beta Testing Program for the new CICS Transaction Server 5.1 release that has a significant number of enhancements to support Web Applications and CICS application modernization, mainly through IBM’s Rational HATS.
  • IBM has also improved performance of the C/C++ V1.13 compiler, the Metal C feature of the IBM z/OS XL C/C++ compiler, and the PL/1 V4.3 compiler for the zEC12.

Maybe less of a buzz generator, but IBM Sterling gets a boost with the Sterling B2B Integrator V5.2.4 and Sterling File Gateway V2.2.4 for integration and file-based exchanges. IBM’s zSecure suite V1.13.1 brings new integration with QRadar, expanded integration points with DB2, enhanced RACF database cleanup capabilities, and support for the new enhanced CICS Transaction Server.

IBM also used the announcement to promote the relaunch of the zEnterprise Analytics System 9710 (previously called the IBM Smart Analytics System 9710), a combined data and decision system for analytics. It joins high performance data warehouse management with System z availability and recoverability using the z114. When the IDAA is added, the result is a hybrid of MPP and SMP technologies that combines mixed workloads—both transactional and high speed analytical applications—on a single platform tuned for operational business analytics.

Independent Assessment, publisher of DancingDinosaur, has finally released its newest white paper, zEnterprise BladeCenter Extension (zBX): the Case for Adopting Hybrid Computing. It is the most updated look at the zBX yet, including details on the zEC12. Available for free. Click here.

IMS on System z gets faster and cheaper

July 5, 2010

IMS is IBM’s decades-old hierarchical transaction database technology. It sits behind many of the big transaction processing systems used around the world, from airlines to telcos to banks. IBM has been teaching it new tricks—SOA, Web 2.0—but others have taught it tricks too, including BMC and NEON.

NEON, which is battling IBM over its zPrime technology, apparently never misses an opportunity to get into IBM’s face. Last week NEON announced zPrime for IMS at a cost of just $1!  The catch: organizations must make a two-year commitment at $1 per year and install a new version of NEON zPrime in production by December 31, 2010. Between the $1 price tag and the considerable mainframe software licensing cost savings zPrime enables, this can be an incredible bargain for a gutsy organization looking to reduce costs in a big way.

Why do you need to be gutsy? Because IBM has unleashed a barrage of FUD (fear, uncertainty, doubt) against NEON, zPrime, and those organizations willing to try it. Give NEON credit; this is a brazen strategy to help zPrime gain traction while lawsuits and counter-suits fly. Initial jury selection has been scheduled, but not until March 2012. DancingDinosaur covered this here just a couple of weeks ago.

To recap: zPrime enables organizations to run traditional z/OS workloads on zIIP and zAAP specialty processors. The zIIP will be the target processor for IMS applications, no doubt. By doing so, the organization avoids the hefty software licensing charges entailed when running on z/OS.

BMC’s latest IMS announcement, by comparison, seems mundane since no big legal fireworks are involved. Still, for those organizations that rely on IMS for mission-critical work, the announcement of Fast Path Online Restructure/EP should be interesting. Basically, it allows organizations to implement database restructure changes with minimal downtime, as little as 10 minutes and perhaps less, according to Nick Griffin, BMC’s IMS product manager.

The process is straightforward. The tool takes a mirror copy of the IMS database offline and captures copies of ongoing changes to the primary database, which continues working as usual while the admins do whatever restructuring they need to do offline. When they are done, the tool synchronizes the changes it has captured while the mirrored copy was offline. Then it flips the mirrored copy, making it the production copy, which is now restructured and up to date. By comparison, a conventional IMS restructuring without FastPath can take many hours, during which the IMS database is unavailable.
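The capture-and-flip sequence described above can be sketched in a few lines. This is a simplified illustration of the general mirror-restructure pattern, not BMC’s actual tooling; the class and method names are invented for the example.

```python
# Simplified sketch of an online-restructure workflow:
# restructure an offline mirror while production keeps running,
# capture production changes, replay them, then flip the mirror in.
# Illustrative only -- not BMC's Fast Path Online Restructure/EP.

class OnlineRestructure:
    def __init__(self, production_db):
        self.production = production_db        # stays online throughout
        self.mirror = dict(production_db)      # offline copy to restructure
        self.change_log = []                   # changes captured meanwhile

    def apply_production_change(self, key, value):
        """Normal work continues against production; changes are captured."""
        self.production[key] = value
        self.change_log.append((key, value))

    def restructure_mirror(self, transform):
        """Admins restructure the offline mirror at their leisure."""
        self.mirror = transform(self.mirror)

    def flip(self):
        """Replay captured changes onto the mirror, then make it production."""
        for key, value in self.change_log:
            self.mirror[key] = value
        self.production = self.mirror
        return self.production

job = OnlineRestructure({"acct1": 100})
job.apply_production_change("acct2", 50)   # production keeps taking updates
job.restructure_mirror(lambda m: {k.upper(): v for k, v in m.items()})
result = job.flip()
print(result)
```

The only downtime in this scheme is the final flip, which is why the restructure itself can take as long as the admins need.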

How many organizations can actually use this capability is not clear. BMC reports 27 banks currently run FastPath and that 80% of all banks use IMS. Griffin thinks maybe there are 50 likely candidates for the product. “We didn’t get into this for the money,” he notes.

Maybe or maybe not; IBM is a little vague about how many active IMS shops there are. It could be a few thousand, especially if you count telcos, airlines, and other high volume IMS transaction processing shops. Over the last few years, BMC has introduced five new IMS products. They’re not doing it just for fun, or even for a buck. IMS may be a niche product, but you can expect it to stick around for a long time to come.
