Posts Tagged ‘UNIX’

Syncsort’s 2015 State of the Mainframe: Little Has Changed

November 30, 2015

Syncsort’s annual survey of almost 200 mainframe shops found that 83 percent of respondents cited security and availability as key strengths of the mainframe. Are you surprised? You can view the detailed results here for yourself.

Syncsort survey chart: the mainframe's role in the Big Data ecosystem

Courtesy: Syncsort

Security and availability have been hallmarks of the z for decades. Even Syncsort’s top mainframe executive, Harvey Tessler, could point to little unexpected in the latest results: “Nothing surprising. At least no big surprises. Expect the usual reliability, security,” he noted. BTW, in mid-November Clearlake Capital Group, L.P. (Clearlake) announced that it had completed the acquisition of Syncsort Incorporated. Apparently no immediate changes are being planned.

The 2015 study also confirmed a few more recent trends that DancingDinosaur has long suspected. More than two-thirds (67 percent) of respondents cited integration with other standalone computing platforms such as Linux, UNIX, or Windows as a key strength of the mainframe.

Similarly, the majority (79 percent) analyze real-time transactional data from the mainframe with a tool that resides directly on the mainframe. That, in fact, may be the most surprising response. Mainframe shops (or more likely the line-of-business managers they work with) are notorious for moving data off the mainframe for analytics, usually to distributed x86 platforms. The study showed respondents are also turning to platforms such as Splunk (11.8 percent), Hadoop (8.6 percent), and Spark (1.6 percent) to supplement their real-time data analysis.

Many of the respondents no doubt will continue to do so, but it makes little sense in 2015 with a modern z System running a current configuration. In truth, it makes little sense from either a performance or a cost standpoint to move data off the z to perform analytics elsewhere. The z runs Hadoop and Spark natively. With your data and key analytics apps already on the z, why bother incurring both the high overhead and high latency entailed in moving data back and forth to run on what is probably a slower platform anyway?

The only possible reason might be that the mainframe shop doesn’t run Linux on the mainframe at all. That can be easily remedied, however, especially now with the introduction of Ubuntu Linux for the z. C’mon, it’s late 2015; modernize your z for the cloud-mobile-analytics world and stop wasting time and resources jumping back and forth to distributed systems that will run natively on the z today.

More encouraging is the interest of the respondents in big data and analytics. “The survey demonstrates that many big companies are using the mainframe as the back-end transaction hub for their Big Data strategies, grappling with the same data, cost, and management challenges they used it to tackle before, but applying it to more complex use cases with more and dauntingly large and diverse amounts of data,” said Denny Yost, associate publisher and editor-in-chief for Enterprise Systems Media, which partnered with Syncsort on the survey. The results also show growing interest among respondents in the mainframe’s ability to serve as a hub for emerging big data analytics platforms.

On other issues, almost one-quarter of respondents ranked as very important the ability of the mainframe to run other computing platforms such as Linux on an LPAR or z/VM virtual machines as a key strength of the mainframe at their company. Over one-third of respondents ranked as very important the ability of the mainframe to integrate with other standalone computing platforms such as Linux, UNIX, or Windows as a key strength of the mainframe at their company.

Maybe more surprising: only 70% of the respondents ranked as very important their organization’s use of the mainframe for performing large-scale transaction processing or for hosting mission-critical applications. Given that the respondents appeared to come from large, traditional mainframe shops, you might have expected those numbers to be closer to 85-90%. Go figure.

When asked to rank their organization’s use of the mainframe to supplement or replace non-mainframe servers (i.e. RISC or x86-based servers) just 10% of the respondents considered it important. Clearly the hybrid mainframe-based data center is not a priority with these respondents.

So, what are they looking to improve in the next 12 months? The respondents’ top three initiatives are:

  1. Meeting Security and Compliance Requirements
  2. Reducing CPU usage and related costs
  3. Meeting Service Level Agreements (SLAs)

These aren’t the most ambitious goals DancingDinosaur has ever encountered but they should be quite achievable in 2016.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

Syncsort Brings z System Integration Software to Open Source Tools

October 13, 2015

In a series of announcements last month, Syncsort integrated its DMX-h data integration software with Apache Kafka, an open distributed messaging system. This will enable mainframe shops to tap DMX-h’s easy-to-use GUI to subscribe, transform, enrich, and distribute enterprise-wide data for real-time Kafka messaging.

Spark graphic

Courtesy of IBM

Syncsort also delivered an open source contribution of an IBM z Systems mainframe connector that makes mainframe data available to the Apache Spark open-source analytics platform. Not stopping there, Syncsort is integrating the Intelligent Execution capabilities of its DMX data integration product suite with Apache Spark too. Intelligent Execution allows users to visually design data transformations once and then run them anywhere – across Hadoop, MapReduce, Spark, Linux, Windows, or Unix, on premises or in the cloud.

Said Tendü Yoğurtçu, General Manager of Syncsort’s big data business, in the latest announcement: “We are seeing increased demand for real-time analytics in industries such as healthcare, financial services, retail, and telecommunications.” With these announcements, Syncsort sees itself delivering the next generation streaming ETL and Internet of Things data integration platform.

Of course, the Syncsort offering should be unnecessary for most z System users except those that are long-term Syncsort shops or are enamored of Syncsort’s GUI. IBM already offers Spark natively on z/OS and Linux on z, so there is no additional cost. BTW, Syncsort itself was just acquired; what happens with its various products remains to be seen.

Still, IBM has been on a 12-year journey to expand mainframe workloads—from Linux to Hadoop and Spark and beyond—and the company has been urging mainframe shops to become fully engaged in big data, open source, and more as fast as possible. The Syncsort announcements come at a propitious time: mainframe data centers can more easily participate in the hottest use cases—real-time data analytics, streaming analytics across diverse data sources, and more—just when the need for such analytics is increasing.

Apache Spark and some of these other technologies should already be a bit familiar to z System data centers; Apache Kafka will be less familiar. DancingDinosaur noted Spark and others here, when LinuxOne was introduced.

To refresh, Apache Spark consists of a fast engine for large-scale data processing that provides over 80 high-level operators to make it easy to build parallel apps or use it interactively from the Scala, Python, and R shells. It also offers a stack of libraries including SQL and DataFrames, MLlib for machine learning, GraphX, and Spark Streaming. You can combine these libraries seamlessly in the same application. As noted above, Syncsort offers an open source version of the IBM z Systems mainframe connector that makes mainframe data available to the Apache Spark open-source analytics platform.

Spark already has emerged as one of the most active big data open source projects, initially as a fast memory-optimized processing engine for machine learning and now as the single compute platform for all types of workloads including real-time data processing, interactive queries, social graph analysis, and others. Given Spark’s success, there is a growing need to securely access data from a diverse set of sources, including mainframes, and to transform the data into a format that is easily understandable by Spark.

Apache Kafka, essentially an enterprise service bus, is less widely known. Kafka provides a fast, scalable, durable, and fault-tolerant publish-subscribe messaging system. It is often used in place of traditional message brokers built on JMS or AMQP because of its higher throughput, reliability, and replication. Syncsort has integrated its data integration software with Kafka’s distributed messaging system so users can leverage DMX-h’s GUI to subscribe, transform, enrich, and distribute enterprise-wide data for real-time Kafka messaging.
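The publish-subscribe pattern at the heart of Kafka can be sketched in a few lines. The toy in-memory bus below illustrates only the semantics (topics, subscribers, published messages); real Kafka adds durable, partitioned, replicated logs and a distributed broker cluster, none of which this sketch attempts.

```python
from collections import defaultdict

class ToyBus:
    """Toy in-memory publish-subscribe bus. Illustrates the pattern
    Kafka implements at scale; it is NOT Kafka's actual API."""

    def __init__(self):
        # topic name -> list of subscriber callbacks
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, message):
        # deliver the message to every subscriber of the topic
        for handler in self.subscribers[topic]:
            handler(message)

# Usage: one subscriber on a hypothetical "mainframe.tx" topic
bus = ToyBus()
received = []
bus.subscribe("mainframe.tx", received.append)
bus.publish("mainframe.tx", {"acct": "123", "amount": 42.5})
print(received)  # [{'acct': '123', 'amount': 42.5}]
```

Producers and consumers never reference each other directly, only the topic, which is what lets a tool like DMX-h subscribe to and enrich enterprise-wide data streams without coupling to their sources.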

According to Matei Zaharia, creator of Apache Spark and co-founder & CTO of Databricks: “Organizations look to Spark to enable a variety of use cases, including streaming data analytics across diverse data sources.” He continues: “Syncsort has recognized the importance of Spark in the big data ecosystem for real-time streaming applications and is focused on making it easy to bring diverse data sets into Spark.” IBM certainly recognizes this too, and the z System is the right platform for making all of this happen.


Lessons from IBM Eagle zEnterprise TCO Analyses

March 18, 2013


A company running an obsolete z890 2-way machine with what amounted to 0.88 processors (332 MIPS) planned a migration to a distributed system consisting of 36 UNIX servers. The production workload included applications, database, testing, development, security, and more. Five years later, the company was running the same workload in the 36-server multi-core (41x more cores than the z890) distributed environment, only its 4-year TCO had gone from $4.9 million to $17.9 million, based on an IBM Eagle study. The lesson, the Eagle team notes: cores drive platform costs in distributed systems.

Then there is the case of a 3500 MIPS shop that budgeted $10 million for a 1-year migration to a distributed environment. Eighteen months into the project, already 6 months late, the company had spent $25 million and only managed to offload 350 MIPS. In addition, it had to increase staff to cover the overrun, implement steps to replace mainframe automation, acquire additional distributed capacity beyond the initial prediction (to support only 10% of total MIPS offloaded), and extend the dual-running period at even more cost due to the schedule overrun. Not surprisingly, the executive sponsor is gone.

If the goal of a migration to the distributed environment is cost savings, the IBM Eagle team has concluded after 3 years of doing such analyses, most migrations are a failure. Read the Eagle FAQ here.

The Eagle TCO team was formed in 2007 and since then reports completing over 300 user studies. Often its studies are used to determine the best platform among IBM’s various choices for a given set of workloads, usually as part of a Fit for Purpose analysis. In other cases, the Eagle analysis is aimed at enabling a System z shop to avoid a migration to a distributed platform. It also can be used to secure a new opportunity for the z. Since 2007, the team reports that its TCO studies have secured wins amounting to over $1.6 billion in revenue.

Along the way, the Eagle team has learned a few lessons.  For example:  re-hosting projects tend to be larger than anticipated. The typical one-year projection will likely turn into a two- or three-year project.

The Eagle team also offers the following tips, which can help existing z shops that aren’t necessarily looking to migrate but just want to minimize costs:

  • Update hardware and software; for example, one bank upgraded from z/OS 1.6 to 1.8 and reduced each LPAR’s MIPS by 5% (monthly software cost savings paid for the upgrade almost immediately)
  • Take advantage of sub-capacity pricing, which may produce free workloads
  • Consolidate Linux on System z, which invariably saves money; many IT people don’t realize how many Linux virtual servers can run on a z core. (A debate raging on LinkedIn focused on how many virtual instances can run on an IFL, with quite a few suggesting a max of 20. The official IBM figure: consolidate up to 60 distributed cores or more on a single System z core, thousands on a single footprint; a single System z core = an IFL.)
  • Consider how changing the database can impact capacity requirements and therefore costs
  • Shift workloads amenable to specialty processors, like the IFL, zIIP, and zAAP, which reduce mainframe costs through lower cost/MIPS and fewer general processor cycles
  • Consider the System z Solution Edition (DancingDinosaur has long viewed the Solution Edition program as the best System z deal going, although you absolutely must be able to operate within the strict usage constraints the deal imposes.)
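The consolidation ratio IBM cites translates into simple back-of-envelope arithmetic; in the sketch below, the distributed core count is an invented example, not a figure from any Eagle study.

```python
import math

# Back-of-envelope sketch of the consolidation figure cited above:
# up to 60 distributed cores on a single System z core (one IFL).
CORES_PER_IFL = 60        # IBM's cited consolidation ratio
distributed_cores = 300   # invented example workload

# IFLs come in whole units, so round up
ifls_needed = math.ceil(distributed_cores / CORES_PER_IFL)
print(ifls_needed)  # 5
```

Even at a fraction of the claimed 60:1 ratio, the core count (and therefore the per-core licensing and administration cost) collapses dramatically, which is the point the Eagle team is making.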

The Eagle team also suggests other things to consider, especially when the initial cost of a distributed platform looks attractive to management. To begin, the System z responds flexibly to unforeseen business events; a distributed system may have to be augmented or the deployment re-architected, both of which drive up cost and slow responsiveness. Also, the cost of adding incremental workloads to System z is less than linear. Similarly, the cost of administrative labor is lower on System z, and the System z cost per unit of work is much lower than with distributed systems.

DancingDinosaur generally is skeptical of TCO analyses from vendors. To be useful the analysis needs context, technical details (components, release levels, and prices), and specific verifiable quantitative results.  In addition, there are soft costs that must be considered.  In the end, the lowest acquisition cost or even the lowest TCO isn’t necessarily the best platform choice for a given situation or workload. Determining the right platform requires both quantifiable analysis and judgment.

HP-UX and AIX: The Difference is POWER7

July 10, 2012

HP’s enterprise-class UNIX operating system, HP-UX, faces a stark future compared to IBM’s AIX. The difference comes down to the vitality of the underlying platforms. IBM runs AIX on the POWER platform, now at POWER7 and evolving to POWER8 and even POWER9 (although the naming may change)—a dynamic platform if ever there was one. Meanwhile, HP-UX has been effectively stranded on the withering Itanium platform. Oracle has stopped development for Itanium, and Intel, HP’s partner in Itanium, has been, at best, lackluster in its support.

It’s not clear whether HP-UX is a better UNIX than AIX, but in an industry driven by ever-increasing demands for speed, throughput, cost-efficiency, and energy efficiency, the underlying platform matters. HP-UX customers surely will outgrow their Itanium-based systems without a platform boost.

“There’s no question that [our] Business Critical Server business has been hurt by this,” said HP CEO Meg Whitman in the transcript of an interview with the Wall Street Journal’s All Things D column. The business, which had been growing 10% a year before Oracle spurned further support of Itanium, now is declining by 20-30% a year (ouch!). So Whitman is counting on two things: 1) winning its lawsuit against Oracle, which is still making its way through the courts, and 2) porting HP-UX to an advanced x86 platform, namely Xeon. “Ultimately we’ve got to build UNIX on a Xeon chip, and so we will do that,” she told All Things D. All spring long there had been hints that this was imminent, but an official HP announcement never materialized.

Of course, Oracle wants the HP customers running Oracle on HP-UX with Itanium to jump to its Sun platform. IBM, however, has been wooing and winning those same customers to its System z or POWER platforms. Oracle runs on both the z and POWER platforms. Running Oracle on Linux on System z yields substantial savings on Oracle licensing. But IBM wants to do even better by migrating the Oracle shops to DB2 as well, with incentives and tools to ease the transition.

What HP customers also get when they move to POWER or to the z is a platform with a real future, unlike either Itanium or Sun’s server platforms. DancingDinosaur has long extolled the zEnterprise and hybrid computing, but POWER is dynamic in its own right, and when you look at the role it now plays in IBM’s new PureSystems, another IBM hybrid platform, POWER becomes all the more attractive.

From the start HP with HP-UX and Itanium was bound to have to settle for compromises given the different parties—HP, Intel, Oracle—involved. With POWER7, IBM system developers got exactly what they wanted, no compromises. “We gave the silicon designers a bunch of requirements and they gave us our wish list,” says Ian Robinson, IBM’s PowerVM virtualization and cloud product line manager. As a result POWER7, which runs AIX, Linux, and System i on the same box, got a slew of capabilities, including more memory bandwidth and better ways to divide cores.

POWER7, which amazed the IT world with its stunning Watson victory on Jeopardy, also is turning out to be an ideal virtualization and cloud machine. The rate of virtualization and cloud adoption by POWER7 shops is running something north of 90%, notes Robinson. The adoption of PowerVM, the POWER7 hypervisor built in at both the motherboard and firmware levels, is close to 100%. And now POWER7 is a key component of IBM’s PureFlex initiative, a major IBM strategic direction.

Meanwhile, Whitman is fighting a costly court battle in the hope of coercing grudging support for the Itanium platform from Oracle. The trial began in June and mud has been flying ever since. Even if HP wins the case, don’t expect the story to end soon. Using appeals and delay tactics Oracle could put off the final outcome so long that Itanium will have shriveled to nothing while POWER7 continues along IBM’s ambitious roadmap.

