Posts Tagged ‘hadoop’

IBM LinuxONE Can Uberize x86-Based IT

November 13, 2015

Uberization—industry disruption caused by an unlikely competitor—emerged as a dominant concern of C-suite executives in a recently announced IBM Institute for Business Value study. According to the study, the percentage of C-suite leaders who expect to contend with competition from outside their industry increased from 43% in 2013 to 54% today.

IBM C-suite Study: competition data

These competitors, future Ubers, aren’t just new permutations of old industries; they also are digital invaders with totally different business models. Consider IBM LinuxONE, a powerful open source Linux z13 mainframe supported by two open communities, the Open Mainframe Project and the Linux Foundation. For the typical mass market Linux shop, usually an x86-based data center, LinuxONE can deliver a standard Linux distribution with both KVM and Ubuntu as part of a new pricing model that offers a pay-per-use option in the form of a fixed monthly payment with costs scaling up or down based on usage. It also offers per-core pricing with software licenses for designated cores.

Talk about disruptive; plus it brings the scalability, reliability, high performance, and rock-solid security of the latest mainframe. LinuxONE can handle 8,000 virtual servers in a single system and tens of thousands of containers. Try doing that with an x86 machine or even a dozen.

Customers of traditional taxi companies or guests at conventional hotels have had to rethink their transportation or accommodation options in the face of Uberization and the arrival of other disruptive alternatives like Airbnb. So too, x86 platform shops will have to rethink their technology platform options. On either a per-workload basis or a total cost of ownership (TCO) basis, the mainframe has been cost competitive for years. Now with the Uberization of the Linux platform by LinuxONE and IBM’s latest pricing options for it, the time to rethink an x86 platform strategy clearly has arrived. Many long-held misconceptions about the mainframe will have to be dropped or, at least, updated.

The biggest risk to businesses used to come from a new rival with a better or cheaper offering, making it relatively simple to alter strategies. Today, entrenched players are being threatened by new entrants with completely different business models, as well as smaller, more agile players unencumbered by legacy infrastructure. Except for the part of being smaller, IBM’s LinuxONE definitely meets the criteria as a threatening disruptive entrant in the Linux platform space.

IBM even is bringing new business models to the effort, including hybrid cloud and a services-driven approach as well as its new pricing. How about renting a LinuxONE mainframe short term? You can with one of IBM’s new pricing options: just rent a LinuxONE machine monthly with no upfront payment. At the end of the 36-month rental (you can return the hardware after one year) you choose to return, buy, or replace. Try that with enterprise-class x86 machines.

The introduction of support for both KVM and Ubuntu on the z platform opens even more possibilities. With the LinuxONE announcement Ubuntu has been elevated to a fully z-supported Linux distribution. Together IBM and Canonical are bringing a distribution of Linux incorporating Ubuntu’s scale-out and cloud expertise to the IBM z Systems platform, further expanding the reach of both. Ubuntu combined with KVM should make LinuxONE very attractive for OpenStack-based hybrid cloud computing that may involve thousands of VMs and containers. And don’t forget the broader range of open source and industry tools and software now supported, including Apache Spark, Node.js, MongoDB, MariaDB, PostgreSQL, Chef, and Docker.

Deon Newman, VP of Marketing for IBM z Systems, can recite the LinuxONE scalability stats off the top of his head: The entry-level, single-frame LinuxONE server, named Rockhopper, starts at 80 virtual Linux machines and hundreds of containers, while the high-end double-frame server, Emperor, features six IFLs that support up to 350 virtual machines and can scale all the way to 8,000 virtual machines. On the Emperor server, you can literally have hundreds of thousands of containers on a single platform. Newman deliberately emphasizes that LinuxONE machines are servers. x86 server users take note. LinuxONE definitely is not your father’s mainframe.

In the latest C-suite study all C-suite executives—regardless of role—identified for the first time technology as the most important external force impacting their enterprise. These executives believe cloud computing, mobile solutions, the Internet of Things, and cognitive computing are the technologies most likely to revolutionize or Uberize their business.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing here.






Syncsort Brings z System Integration Software to Open Source Tools

October 13, 2015

In a series of announcements last month, Syncsort integrated its DMX-h data integration software with Apache Kafka, an open distributed messaging system. This will enable mainframe shops to tap DMX-h’s easy-to-use GUI to subscribe, transform, enrich, and distribute enterprise-wide data for real-time Kafka messaging.

Spark graphic, courtesy of IBM

Syncsort also delivered an open source contribution of an IBM z Systems mainframe connector that makes mainframe data available to the Apache Spark open-source analytics platform. Not stopping there, Syncsort is integrating the Intelligent Execution capabilities of its DMX data integration product suite with Apache Spark too. Intelligent Execution allows users to visually design data transformations once and then run them anywhere – across Hadoop, MapReduce, Spark, Linux, Windows, or Unix, on premise or in the cloud.

Said Tendü Yoğurtçu, General Manager of Syncsort’s big data business, in the latest announcement: “We are seeing increased demand for real-time analytics in industries such as healthcare, financial services, retail, and telecommunications.” With these announcements, Syncsort sees itself delivering the next generation streaming ETL and Internet of Things data integration platform.

Of course, the Syncsort offer should be unnecessary for most z System users except those that are long-term Syncsort shops or are enamored of Syncsort’s GUI. IBM already offers Spark native on z/OS and Linux on z, so there is no additional cost. BTW, Syncsort itself was just acquired. What happens with its various products remains to be seen.

Still, IBM has been on a 12-year journey to expand mainframe workloads, from Linux to Hadoop and Spark and beyond, and the company has been urging mainframe shops to become fully engaged in big data, open source, and more as fast as possible. The Syncsort announcements come at a propitious time; mainframe data centers can more easily participate in the hottest use cases (real-time data analytics, streaming data analytics across diverse data sources, and more) just when the need for such analytics is increasing.

Apache Spark and some of these other technologies should already be a bit familiar to z System data centers; Apache Kafka will be less familiar. DancingDinosaur noted Spark and others here, when LinuxONE was introduced.

To refresh, Apache Spark consists of a fast engine for large-scale data processing that provides over 80 high-level operators to make it easy to build parallel apps or use it interactively from the Scala, Python, and R shells. It also offers a stack of libraries including SQL and DataFrames, MLlib for machine learning, GraphX, and Spark Streaming. You can combine these libraries seamlessly in the same application. As noted above, Syncsort offers an open source version of the IBM z Systems mainframe connector that makes mainframe data available to the Apache Spark open-source analytics platform.
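
To make that concrete, here is a minimal PySpark sketch of what combining the libraries looks like; it is DancingDinosaur’s illustration, not anything from IBM or Syncsort, the data is made up, and it uses the SparkSession entry point from later Spark releases (the Spark 1.x releases current when this was written used SQLContext instead).

    from pyspark.sql import SparkSession

    # Illustration only: a tiny in-memory DataFrame stands in for data that
    # would normally arrive from DB2, VSAM, or another mainframe source via
    # a connector such as Syncsort's.
    spark = SparkSession.builder.appName("spark-libraries-sketch").getOrCreate()

    trades = spark.createDataFrame(
        [("ACME", 105.0), ("ACME", 98.5), ("GLOBEX", 42.0)],
        ["symbol", "price"],
    )

    # The same data is visible to two of the stacked libraries: the DataFrame API...
    trades.groupBy("symbol").avg("price").show()

    # ...and Spark SQL over a temporary view, all in one application.
    trades.createOrReplaceTempView("trades")
    spark.sql("SELECT symbol, COUNT(*) AS n FROM trades GROUP BY symbol").show()

    spark.stop()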

Spark already has emerged as one of the most active big data open source projects, initially as a fast memory-optimized processing engine for machine learning and now as the single compute platform for all types of workloads including real-time data processing, interactive queries, social graph analysis, and others. Given Spark’s success, there is a growing need to securely access data from a diverse set of sources, including mainframes, and to transform the data into a format that is easily understandable by Spark.

Apache Kafka, essentially an enterprise service bus, is less widely known. It brings a fast, scalable, durable, and fault-tolerant publish-subscribe messaging system. Kafka is often used in place of traditional JMS and AMQP message brokers because of its higher throughput, reliability, and replication. Syncsort has integrated its data integration software with Kafka’s distributed messaging system so users can leverage DMX-h’s GUI to subscribe, transform, enrich, and distribute enterprise-wide data for real-time Kafka messaging.
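
For readers who have never touched Kafka, the minimal publish/subscribe round trip below shows the basic pattern. It is a generic sketch using the third-party kafka-python package, not Syncsort’s DMX-h integration, and the broker address and topic name are placeholders.

    from kafka import KafkaProducer, KafkaConsumer  # pip install kafka-python

    BROKER = "localhost:9092"      # placeholder broker address
    TOPIC = "enterprise-events"    # placeholder topic name

    # Publish: a producer appends messages to the topic.
    producer = KafkaProducer(bootstrap_servers=BROKER)
    producer.send(TOPIC, b'{"account": "12345", "amount": 99.50}')
    producer.flush()

    # Subscribe: any number of consumers can independently read the same stream.
    consumer = KafkaConsumer(
        TOPIC,
        bootstrap_servers=BROKER,
        auto_offset_reset="earliest",
        consumer_timeout_ms=5000,  # stop iterating after 5 seconds of silence
    )
    for message in consumer:
        print(message.topic, message.offset, message.value)

That decoupling of producers from consumers is what makes Kafka attractive as the backbone for streaming ETL.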

According to Matei Zaharia, creator of Apache Spark and co-founder & CTO of Databricks: “Organizations look to Spark to enable a variety of use cases, including streaming data analytics across diverse data sources”.  He continues: “Syncsort has recognized the importance of Spark in the big data ecosystem for real-time streaming applications and is focused on making it easy to bring diverse data sets into Spark.” IBM certainly recognizes this too, and the z System is the right platform for making all of this happen.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing here.

IBM z System After Moore’s Law

October 2, 2015

The last z System that conformed to the expectations of Moore’s Law was the zEC12. IBM could boast that it had the fastest commercial processor available. The subsequent z13 didn’t match it in processor speed. The z13 chip runs a 22 nm core at 5 GHz, one-half a GHz slower than the zEC12, which ran its 32 nm core at 5.5 GHz. Did you even notice?

In 2007, an IBM scientist holds a 3-D integrated stacked chip

In 2015, the z13 delivers about a 10 percent performance bump per core thanks to the latest tweaks in the core design, such as better branch prediction and better pipelining. But even one-half a GHz slower, the z13 was the first system to process 2.5 billion transactions a day. Even more importantly for enterprise data centers, z13 transactions are persistent, protected, and auditable from end-to-end, adding assurance as mobile transactions grow to an estimated 40 trillion per day by 2025.

IBM clearly isn’t bemoaning the decline of Moore’s Law. In fact, it has been looking beyond silicon for the processing of the future.  This week it announced a major engineering breakthrough that could accelerate carbon nanotubes for the replacement of silicon transistors to power future computing. The breakthrough allows a new way to shrink transistor contacts without reducing the performance of carbon nanotube devices, essentially opening a path to dramatically faster, smaller, and more powerful computer chips beyond the capabilities of traditional semiconductors. Guess we can stop worrying about Moore’s Law.

Without Moore’s Law, IBM optimized just about everything on the z13 that could be optimized. It provides 320 separate channels dedicated to drive I/O throughput as well as such performance goodies as simultaneous multithreading (SMT), symmetric multiprocessing (SMP), and single instruction, multiple data (SIMD). Overall about 600 processors (in addition to your configurable cores) speed and streamline processes throughout the machine. Moore’s Law, in effect, has been bypassed. As much as the industry enjoyed the annual doubling of capacity and corresponding lower price/performance, it doesn’t need Moore’s Law to meet today’s insatiable demand for processing power.

The company will be doing similar things with the POWER processor. Today we have the POWER8. Coming is the POWER9 followed by the POWER10. The POWER9 reportedly will arrive in 2017 at 14nm, feature a new micro-architecture, and be optimized with CAPI and NVLINK. POWER10, reportedly, arrives around 2020 optimized for extreme analytics.

As IBM explains its latest breakthrough, carbon nanotubes represent a new class of semiconductor materials that consist of single atomic sheets of carbon rolled up into a tube. The carbon nanotubes form the core of a transistor device whose superior electrical properties promise several generations of technology scaling beyond the physical limits of silicon.

The new processor technology, IBM reports, overcomes a major hurdle that silicon and any other semiconductor transistor technologies face when scaling down. In the transistor, two things scale: the channel and its two contacts. As devices become smaller, the increased contact resistance of carbon nanotubes hindered performance gains. The latest development could overcome contact resistance all the way to the 1.8 nanometer node – four technology generations away.

Carbon nanotube chips could greatly improve the capabilities of high performance computers, enabling, for example, big data to be analyzed faster, increasing the power and battery life of mobile devices, and allowing cloud data centers to deliver services more efficiently and economically. Even cognitive computing and Internet of Things can benefit.

Until now, vendors have been able to shrink the silicon transistors, but they are approaching a point of physical limitation, which is why Moore’s Law is running out of steam. Previously, IBM demonstrated that carbon nanotube transistors can operate as effective switches at channel dimensions of less than ten nanometers. IBM’s new contact approach overcomes the contact resistance by incorporating carbon nanotubes into semiconductor devices, which could result in smaller chips with greater performance and lower power consumption.

As transistors shrink in size, electrical resistance within the contacts increases, which limits performance. To overcome this resistance, IBM researchers gave up traditional contact schemes and created a metallurgical process akin to microscopic welding that chemically binds the metal atoms to the carbon atoms at the ends of nanotubes. This end-bonded contact scheme allows the contacts to be shrunken below 10 nanometers without impacting performance. This brings the industry a step closer to the goal of a carbon nanotube technology within the decade, says IBM.

Let’s hope this works as expected. If not, IBM has other possibilities already in its research labs. DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing here.

IBM Continues to Bolster Bluemix PaaS

September 10, 2015

In the last 10 years the industry, led by IBM, has gotten remarkably better at enabling nearly coding-free development. This is important given how critical app development has become. Today it is impossible to launch any product without sufficient app dev support.  At a minimum you need a mobile app and maybe a few micro-services. To that end, since May IBM has spent the summer introducing a series of Bluemix enhancements. Find them here and here and here and here.  DancingDinosaur, at best a mediocre programmer, hasn’t written any code for decades but in this new coding environment he has started to get the urge to participate in a hack-a-thon. Doesn’t that (below) look like fun?

IBM’s Bluemix Garage in Toronto

The essential role of software today cannot be overstated. Even companies introducing non-technical products have to support them with apps and digital services that must be continually refreshed. When IoT really starts to ramp up, bits and pieces of code will be needed everywhere to handle the disparate pieces, get everything to interoperate, collect the data, and then use it or analyze it and initiate the next action.

Bluemix, a cloud-based PaaS product, comes as close to an all-in-one Swiss army knife development and deployment platform for today’s kind of applications as you will find. Having only played around with a demo, DancingDinosaur found it about as intuitive as an enterprise-class product can get.

The most recent of IBM’s summer Bluemix announcements promises more flexibility to integrate Java-based resources into Bluemix. It offers a set of services to more seamlessly integrate Java-based resources into cloud-based applications. For instance, according to IBM, it is now possible to test and run applications in Bluemix with Java 8. Additionally, among other improvements, the jsp-2.3, el-3.0, and jdbc-4.1 Liberty features, previously in beta, are now available as production-ready. Plus, Eclipse Tools for Bluemix now includes JavaScript Debug, support for Node.js applications, Java 8 Liberty for Java integration, and support for the latest Eclipse Mars release, as well as an improved trust self-signed certificates capability. Incremental publish support for JEE applications also has been expanded to handle web fragment projects.

In mid-August IBM announced the use of streaming analytics and data warehouse services on Bluemix. This should enable developers to expand the capabilities of their applications to give users a more robust cloud experience by facilitating the integration of data analytics and visualization seamlessly in their apps. Specifically, according to IBM, a new streaming analytics capability was put into open beta; the service provides the capability to instantaneously analyze data while scaling to thousands of sources on the cloud. IBM also added MPP (massively parallel processing) capabilities to enable faster query processing and overall scalability. The announcement also introduces built-in Netezza analytics libraries integrated with Watson Analytics, and more.

Earlier in August, IBM announced the Bluemix Garage opening in Toronto (pictured above). Toronto is just the latest in a series of coding workspaces IBM intends to open worldwide. Next up appear to be Nice, France and Melbourne, Australia later this year. According to IBM, Bluemix Garages create a bridge between the scale of enterprises and the culture of startups by establishing physical collaboration spaces housed in the heart of thriving entrepreneurial communities around the world. Toronto marks the third Bluemix Garage. The Toronto Bluemix Garage is located at the DMZ at Ryerson University, described as the top-ranked university-based incubator in Canada. Experts there will mentor the rising numbers of developers and startups in the region to create the next generation of cloud apps and services using IBM’s Bluemix.

Members of the Toronto Bluemix Garage include Tangerine, a bank based in Canada that is using Bluemix to implement its mobile strategy. Through the IBM Mobile Quality Assurance for Bluemix service, Tangerine gathers customer feedback and actionable insight on its mobile banking app, effectively streamlining its implementation and development processes.

Finally, back in May IBM introduced new Bluemix Services to help developers create analytics-driven cloud applications. Bluemix, according to IBM, is now the largest Cloud Foundry deployment in the world. And the services the company announced promise to make it easier for developers to create cloud applications for mobile, IoT, supply chain analytics, and intelligent infrastructure solutions. The new capabilities will be added to over 100 services already available in the Bluemix catalog.

At the May announcement, IBM reported bringing more of its own technology into Bluemix, including:

  • Bluemix API Management, which allows developers to rapidly create, deploy, and share large-scale APIs and provides a simple and consumable way of controlling critical APIs not possible with simpler connector services
  • New mobile capabilities available on Bluemix for the IBM MobileFirst Platform, which provide the ability to develop location-based mobile apps that connect insights from digital engagement and physical presence

It also announced a handful of ecosystem and third-party services being added into Bluemix, including several that will facilitate working with .NET capabilities. In short, it will enable Bluemix developers to take advantage of Microsoft development approaches, which should make it easier to integrate multiple mixed-platform cloud workloads.

Finally, as a surprise note at the end of the May announcement IBM added that the company’s total cloud revenue—covering public, private and hybrid engagements—was $7.7 billion over the previous 12 months as of the end of March 2015, growing more than 60% in first quarter 2015.  Hope you’ve noticed that IBM is serious about putting its efforts into the cloud and openness. And it’s starting to pay off.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing here.


IBM LinuxONE and Open Mainframe Project Expand the z System

August 20, 2015

Meet the new IBM z System, called LinuxONE Emperor (named after the emperor penguin). It is a z13 running only Linux. Check out the full announcement here.

Courtesy of IBM, LinuxONE Emperor, the newest z System

DancingDinosaur is excited by several aspects of this announcement:  IBM is establishing, in conjunction with the Linux Foundation, an Open Mainframe Project; the company is breaking with its traditional mainframe pricing model; it also is putting KVM and Ubuntu on the machine; and it is offering a smorgasbord of app-dev options, including some of the sexiest in the industry today. DancingDinosaur never believed it would refer to a mainframe as sexy (must be time to retire).

Along with LinuxONE Emperor, IBM announced an entry-level dedicated Linux machine, the LinuxONE Rockhopper. (BTW, notice the new playfulness in IBM’s product naming.) Rockhopper appears to be very similar to what IBM used to call a Business Class z, although IBM has stepped away from that designation. The closest you may get to a z13 business class machine may be LinuxONE Rockhopper. Rockhopper, according to IBM, is designed for clients and emerging markets seeking the speed, security, and availability of the mainframe but in a smaller package.

The biggest long term potential impact from the announcement may come out of the Open Mainframe Project. Like many of IBM’s community project initiatives, IBM is starting by seeding the open community with z code, in effect creating the beginning of an open z System machine.  IBM describes this as the largest single contribution of mainframe code from IBM to the open source community. A key part of the mainframe code contributions will be the z’s IT predictive analytics that constantly monitor for unusual system behavior and help prevent issues from turning into failures. In effect, IBM is handing over zAware to the open source community. It had already announced intentions to port zAware to Linux on z early this year so it might as well make it fully open. The code, notes IBM, can be used by developers to build similar sense-and-respond resiliency capabilities for other systems.

The Open Mainframe Project, being formed with the Linux Foundation, will involve a collaboration of nearly a dozen organizations across academia, government, and corporate sectors to advance development and adoption of Linux on the mainframe. It appears that most of the big mainframe ISVs have already signed on. DancingDinosaur, however, expressed concern that this approach brings the possibility of branching the underlying functionality between z and Linux versions. IBM insists that won’t happen since the innovations would be implemented at the software level, safely insulated from the hardware. And furthermore, should there emerge an innovation that makes sense for the z System, maybe some innovation around the zAware capabilities, the company is prepared to bring it back to the core z.

The newly announced pricing should also present an interesting opportunity for shops running Linux on z. As IBM notes: new financing models for the LinuxONE portfolio provide flexibility in pricing and resources that allow enterprises to pay for what they use and scale up quickly when their business grows. Specifically, for IBM hardware and software, the company is offering a pay-per-use option in the form of a fixed monthly payment with costs scaling up or down based on usage. It also offers per-core pricing with software licenses for designated cores. In that case you can order what you need and decrease licenses or cancel on 30 days’ notice. Or, you can rent a LinuxONE machine monthly with no upfront payment. At the end of the 36-month rental (you can return the hardware after one year) you choose to return, buy, or replace. Having spent hours attending mainframe pricing sessions at numerous IBM conferences, DancingDinosaur finds this refreshingly straightforward. IBM has not yet provided any prices to analysts, so whether this actually is a bargain remains to be seen. But at least you have pricing option flexibility you never had before.

The introduction of support for both KVM and Ubuntu on the z platform opens intriguing possibilities. Full disclosure: DancingDinosaur was an early Fedora adopter because he could get it to run on a memory-challenged antiquated laptop. With the LinuxONE announcement Ubuntu has been elevated to a fully z-supported Linux distribution. Together IBM and Canonical are bringing a distribution of Linux incorporating Ubuntu’s scale-out and cloud expertise to the IBM z Systems platform, further expanding the reach of both. Ubuntu combined with KVM should make either LinuxONE machine very attractive for OpenStack-based hybrid cloud computing that may involve thousands of VMs. Depending on how IBM ultimately prices things, this could turn into an unexpected bargain for Linux on z data centers that want to save money by consolidating x86 Linux servers, thereby reducing the data center footprint and cutting energy costs. LinuxONE Emperor can handle 8,000 virtual servers in a single system and tens of thousands of containers.

Finally, LinuxONE can run the sexiest app-dev tools using any of the hottest open technologies, specifically:

  • Distributions: Red Hat, SuSE and Ubuntu
  • Hypervisors: PR/SM, z/VM, and KVM
  • Languages: Python, Perl, Ruby, Rails, Erlang, Java, Node.js
  • Management: WAVE, IBM Cloud Manager, UrbanCode, OpenStack, Docker, Chef, Puppet, VMware vRealize Automation
  • Database: Oracle, DB2LUW, MariaDB, MongoDB, PostgreSQL
  • Analytics: Hadoop, BigInsights, DB2 BLU, and Spark

And run the results however you want: single platform, multi-platform, on-prem and off-prem, or multiple mixed cloud environments with a common toolset. Could a combination of LinuxONE alongside a conventional z13 be the mainframe data center you really want going forward?

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing here.

IBM POWER8 Tops STAC-A2 Benchmark in Win for OpenPOWER

June 25, 2015

In mid-March the Securities Technology Analysis Center (STAC) released the first audited STAC-A2 Benchmark results for a server using the IBM POWER8 architecture. STAC provides technology research and testing tools based on community-source standards. The March benchmark results showed that an IBM POWER8-based server can deliver more than twice the performance of the best x86 server when running standard financial industry workloads.

IBM Power System S824

This is not IBM just blowing its own horn. The STAC Benchmark Council consists of a group of over 200 major financial firms and other algorithmic-driven enterprises as well as more than 50 leading technology vendors. Their mission is to explore technical challenges and solutions in financial services and develop technology benchmark standards that are useful to financial organizations.

The POWER8 system not only delivered more than twice the performance of the nearest x86 system but also set four new performance records for financial workloads, two of which apparently were new public records. This marked the first time the IBM POWER8 architecture had gone through STAC-A2 testing.

The community-developed STAC-A2 benchmark set represents a class of financial risk analytics workloads characterized by Monte Carlo simulation and Greeks computations. Greeks computations cover theta, rho, delta, gamma, cross-gamma, model vega, and correlation vega. Together they are referred to as the Greeks. Quality is assessed for single assets by comparing the Greeks obtained from the Monte Carlo with Greeks obtained from a Heston closed-form formula for vanilla puts and calls. Suffice it to say, this is an extremely CPU-intensive set of computations. For more detail, click here.
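
To give a feel for why the workload is so CPU-hungry, here is a toy Python sketch of the basic pattern: price an option by Monte Carlo, then estimate one Greek (delta) by re-pricing with a bumped spot. It uses a simple Black-Scholes model rather than the Heston model STAC-A2 actually specifies, and the parameters are arbitrary; a real run repeats this over many assets, many Greeks, and far more paths.

    import numpy as np

    def mc_call_price_and_delta(S0, K, r, sigma, T, n_paths=1_000_000,
                                bump=0.01, seed=42):
        """Toy Black-Scholes Monte Carlo: price a European call and estimate
        delta by a central finite difference with common random numbers."""
        rng = np.random.default_rng(seed)
        z = rng.standard_normal(n_paths)

        def price(spot):
            # Terminal prices under geometric Brownian motion, then the
            # discounted average payoff.
            s_t = spot * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
            return np.exp(-r * T) * np.maximum(s_t - K, 0.0).mean()

        p = price(S0)
        delta = (price(S0 + bump) - price(S0 - bump)) / (2 * bump)
        return p, delta

    p, d = mc_call_price_and_delta(S0=100.0, K=100.0, r=0.01, sigma=0.2, T=1.0)
    print(f"MC price ~ {p:.3f}, MC delta ~ {d:.3f}")

Every additional Greek multiplies the number of re-pricings, which is why the benchmark rewards machines with many fast threads and high memory bandwidth.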

In this case, results were compared to other publicly-released results of warm runs on the Greeks benchmark (STAC-A2.β2.GREEKS.TIME.WARM). The two-socket Power8 server, outfitted with two 12-core 3.52 GHz Power8 processor cards, achieved:

  • 2.3x performance over the comparable x86 setup, an Intel white box with two Xeon E5-2699 v3 (Haswell EP) @ 2.30GHz.
  • 1.7x the performance of the best-performing x86 solution, an Intel white box with two Intel Xeon E5-2699 v3 processors (Haswell EP) @ 2.30GHz and one Intel Xeon Phi 7120A coprocessor.
  • Only 10% less performance than the best-performing solution, a Supermicro server with two 10-core Intel Xeon E5-2690 v2 @ 3.0GHz (Ivy Bridge) and one NVIDIA K80 GPU accelerator.

The Power server also set new records for path scaling (STAC-A2.β2.GREEKS.MAX_PATHS) and asset capacity (STAC-A2.β2.GREEKS.MAX_ASSETS). Compared to the best four-socket x86-based solution — a server comprised of four Xeon E7-4890 v2 (Ivy Bridge EX) parts running at 2.80 GHz — the Power8 server delivered:

  • Double the throughput.
  • 16 percent increase for asset capacity.

The STAC test system consisted of an IBM Power System S824 server with two 12-core 3.52 GHz POWER8 processor cards, equipped with 1TB of DRAM and running Red Hat Enterprise Linux version 7. The solution stack included the IBM-authored STAC-A2 Pack for Linux on Power Systems (Rev A), which used IBM XL, a suite for C/C++ developers that includes the C++ Compiler and the Mathematical Acceleration Subsystem libraries (MASS), and the Engineering and Scientific Subroutine Library (ESSL).

POWER8 processors are based on high performance, multi-threaded cores with each core of the Power System S824 server running up to eight simultaneous threads at 3.5 GHz. With POWER8 IBM also is able to tap the innovations of the OpenPOWER Foundation including CAPI and a variety of accelerators that have started to ship.

The S824 also brings a very high bandwidth memory interface that runs at 192 GB/s per socket which is almost three times the speed of a typical x86 processor. These factors along with a balanced system structure including a large internal 8MB per core L3 cache are the primary reasons why financial computing workloads run significantly faster on POWER8-based systems than alternatives, according to IBM.

Sumit Gupta, vice president of HPC and OpenPOWER operations at IBM, reports STAC-A2 gives a much more accurate view of the expected performance as compared to micro benchmarks or simple code loops. This is especially important when the challenge is big data.

In his blog on the topic, Gupta elaborated on the big data challenge in the financial industry and the POWER8 advantages. STAC-A2 is a set of standard benchmarks that help estimate the relative performance of full systems running complete financial applications. This enables clients in the financial industry to evaluate how systems will perform on real applications. “Those are the kind of results that matter—real results for real client challenges,” Gupta wrote.

Gupta went on to note that the S824 also has a very high bandwidth memory interface. Combined with the large L3 cache noted above it can run financial applications noticeably faster than alternatives.  Combine the STAC results with data recently published by Cabot Partners and you have convincing proof that IBM POWER8-based systems have taken the performance lead in the financial services space (and elsewhere). The Cabot Partners report evaluates functionality, performance, and price/performance across several industries, including life sciences, financial services, oil and gas, and analytics while referencing standard benchmarks as well as application-oriented benchmark data.

Having sat through numerous briefings on POWER8 performance, DancingDinosaur felt reassured, but he doesn’t have to actually run these workloads. It is encouraging, however, to see proof in the form of 3rd party benchmarks like STAC and reports from Cabot Partners. Check out Cabot’s OpenPOWER report here.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing here.

IBM Continues Open Source Commitment with Apache Spark

June 18, 2015

If anyone believes IBM’s commitment to open source is a passing fad, forget it. IBM has invested billions in Linux, open Power through the OpenPOWER Foundation, and more. Its latest move is the announcement of a major commitment to Apache Spark, a fast, general-purpose open source cluster computing system for big data.

Courtesy of IBM: developers work with Spark at Galvanize Hackathon

As IBM sees it, Spark brings essential advances to large-scale data processing. Specifically, it dramatically improves the performance of data-dependent apps and is expected to play a big role in the Internet of Things (IoT). In addition, it radically simplifies the process of developing intelligent apps, which are fueled by data. It does so by providing high-level APIs in Scala, Java, and Python, and an optimized engine that supports general computation graphs for data analysis. It also supports a rich set of higher-level tools including Spark SQL for SQL and DataFrames, MLlib for machine learning, GraphX for graph processing, and Spark Streaming for stream processing.
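
As a small taste of those high-level APIs, the sketch below trains a classifier with Spark’s ML pipeline support on a toy DataFrame. It is generic Spark in Python, not IBM SystemML, and the data and column names are made up for illustration.

    from pyspark.sql import SparkSession
    from pyspark.ml import Pipeline
    from pyspark.ml.feature import VectorAssembler
    from pyspark.ml.classification import LogisticRegression

    spark = SparkSession.builder.appName("ml-pipeline-sketch").getOrCreate()

    # Toy labeled data standing in for the data that fuels an intelligent app.
    train = spark.createDataFrame(
        [(0.0, 1.2, 0.7), (1.0, 3.1, 2.2), (0.0, 0.4, 0.1), (1.0, 2.8, 1.9)],
        ["label", "f1", "f2"],
    )

    # Assemble raw columns into a feature vector, then fit a classifier.
    pipeline = Pipeline(stages=[
        VectorAssembler(inputCols=["f1", "f2"], outputCol="features"),
        LogisticRegression(maxIter=10),
    ])
    model = pipeline.fit(train)
    model.transform(train).select("label", "prediction").show()

    spark.stop()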

IBM is contributing its breakthrough IBM SystemML machine learning technology to the Spark open source ecosystem. Spark brings essential advances to large-scale data processing, such as improvements in the performance of data-dependent apps. It also radically simplifies the process of developing intelligent apps, which are fueled by data. But maybe the biggest advantage is that it can handle data coming from multiple, disparate sources.

What IBM likes in Spark is that it’s agile, fast, and easy to use. It also likes it being open source, which ensures it is improved continuously by a worldwide community. Those are also some of the main reasons mainframe and Power Systems data centers should pay attention to Spark. Spark will make it easier to connect applications to data residing in your data center. If you haven’t yet noticed an uptick in mobile transactions coming into your data center, they will be coming. These benefit from Spark. And if you look out just a year or two, expect to see IoT applications adding to and needing to combine all sorts of data, much of it ending up on the mainframe or Power System in one form or another. So make sure Spark is on your radar screen.

Over the course of the next few months, IBM scientists and engineers will work with the Apache Spark open community to accelerate access to advanced machine learning capabilities and help drive speed-to-innovation in the development of smart business apps. By contributing SystemML, IBM hopes data scientists will iterate faster to address the changing needs of business and to enable a growing ecosystem of app developers who will apply deep intelligence to everything.

To ensure that happens, IBM will commit more than 3,500 researchers and developers to work on Spark-related projects at more than a dozen labs worldwide, and open a Spark Technology Center in San Francisco for the Data Science and Developer community to foster design-led innovation in intelligent applications. IBM also aims to educate more than 1 million data scientists and data engineers on Spark through extensive partnerships with AMPLab, DataCamp, MetiStream, Galvanize, and Big Data University MOOC (Massive Open Online Course).

Of course, Spark isn’t going to be the end of tools to expedite the latest app dev. With IoT just beginning to gain widespread interest expect a flood of tools to expedite developing IoT data-intensive applications and more tools to facilitate connecting all these coming connected devices, estimated to number in the tens of billions within a few years.

DancingDinosaur applauds IBM’s decade-plus commitment to open source and its willingness to put real money and real code behind it. That means the IBM z System mainframe, the POWER platform, Linux, and the rest will be around for some time. That’s good; DancingDinosaur is not quite ready to retire.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing here.

Real Time Analytics on the IBM z13

June 4, 2015

For years organizations have been putting their analytics on distributed platforms, thinking that was the only way to get fast, real-time, and predictive analytics. Maybe that was true once, but not anymore. Turns out the IBM z System, especially the z13, is not only ideal for real-time, predictive analytics but preferable for it.

IBM today is so bullish on analytics, especially predictive analytics, that last month it introduced 20 pre-built industry-specific predictive analytics solutions. To build these solutions IBM tapped its own experience working on 50,000 engagements but also an array of outside organizations with success in predictive analytics, including Urban Outfitters, National Grid, Deloitte, Bolsa de Santiago, Interactive Data Managed Solutions, and Bendigo and Adelaide Bank, among others.


Courtesy of IBM

The truth of the matter is that without efficient real-time, predictive analytics, managers get it wrong most of the time when it comes to making operational decisions, said Paul DiMarzio, IBM z Systems Big Data and Analytics Worldwide Portfolio Marketing Manager. He spoke at IBM Edge2015 in a session titled When Milliseconds Matter: Architecting Real-Time Analytics into Operational Systems. His key point: you can do this completely within the IBM z System.

The old notion of sending data to distributed systems someplace else for analytics now appears ridiculous, especially with the introduction of systems like the z13 that can handle operations and perform real time analytics concurrently. It performs analytics fast enough that you can make decisions when the action is still going on. Now the only question is whether we have the right business rules and scoring models. The data already are there and the tools are ready and waiting on the z13.

You start with the IBM SPSS Modeler with Scoring Adapter for zEnterprise. The real time predictive analytics capability delivers better, more profitable decisions at the point of customer impact. For business rules just turn to the IBM Operational Decision Manager for z/OS, which codifies business policies, practices, and regulations.

IBM SPSS improves accuracy by scoring directly within the transactional application against the latest committed data. As such it delivers the performance needed to meet operations SLAs and avoid data governance and security issues, effectively saving network bandwidth, data copying latency, and disk storage.

In addition to SPSS and the Operational Decision Manager, the z13 brings many capabilities, some of them new with this machine. For starters, the z13 excels as a custodian of the data model, providing an accurate, secure, single copy of information that, according to IBM, ensures the veracity of the data necessary for reliable analytics and provides centralized control over decision information.

Specifically, the machine brings SIMD (single instruction multiple data) and the MASS (mathematical acceleration subsystem) and ATLAS (automatically tuned linear algebra software) libraries for z/OS and Linux on z. SIMD enables the same operation to be performed on several data elements at the same time rather than sequentially. MASS and ATLAS help programmers create better and more complex analytic models.
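
The idea behind SIMD is easy to see even without z-specific code. The sketch below, which runs on any machine with Python and NumPy and says nothing about z13 performance, contrasts an element-at-a-time loop with a vectorized call that applies one operation across many data elements at once (and lets the library use the CPU’s SIMD instructions underneath).

    import time
    import numpy as np

    x = np.random.rand(5_000_000)
    y = np.random.rand(5_000_000)

    # Scalar style: one multiply-add per loop iteration.
    t0 = time.perf_counter()
    total = 0.0
    for i in range(len(x)):
        total += x[i] * y[i]
    loop_secs = time.perf_counter() - t0

    # Vectorized style: the same operation applied across whole arrays at once.
    t0 = time.perf_counter()
    total_vec = float(np.dot(x, y))
    vec_secs = time.perf_counter() - t0

    print(f"loop:       {loop_secs:.2f}s  result={total:.2f}")
    print(f"vectorized: {vec_secs:.4f}s  result={total_vec:.2f}")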

In addition, increases in memory to as much as 10 TB, faster I/O, and simultaneous multi-threading (SMT) generally boost overall throughput of the z13, which will surely benefit any analytics being run on the machine, especially real time, predictive analytics.  In addition, analytics on the z13 gains from deep integration with core systems, the integrated architecture, and its single pane management view.

The latest IBM Redbook on analytics on the z13 sums it up this way: z Systems analytics enables organizations to improve performance and lower cost by bringing the analytic processing to where the data resides. Organizations can therefore maximize their current IT investments while adding functionality and improved price and performance with the z13. And with the new z13 features, applications can gain increased throughput for operational business intelligence (operational BI) and DB2 query workloads, which saves money (hardware, software, labor).

The Redbook suggests the following example: a user with a mobile application signs on and initiates a transaction flow through an IBM MobileFirst Platform Server running on Linux on z. The event goes to an LDAP server on z/OS to validate the user’s sign-on credentials. After successful validation, the transaction then proceeds through the z/OS transaction environment where all of the data resides in DB2 z/OS. IBM CICS transactions also are processed in the same z environment and all of the analysis is performed without moving any data, resulting in extremely fast performance. Sweet.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing here.

POWER Systems for Cloud & Linux at IBM Edge2015

April 23, 2015

In October, IBM introduced a new range of POWER systems capable of handling massive amounts of computational data faster at nearly 20 percent better price/performance than comparable Intel Xeon v3 processor-based systems, delivering to clients a superior alternative to closed, commodity-based data center servers. DancingDinosaur covered it last October here. Expect this theme to play out big at IBM Edge2015 in Las Vegas, May 10-15. Just a sampling of a few of the many POWER sessions makes that clear:

Courtesy of Studio Stence, Power S824L

(lCV1655) Linux on Power and Linux on Intel: Side By Side, IT Economics Positioning; presenter Susan Proietti Conti

Based on real cases studied by the IBM Eagle team for many customers in different industries and geographies, this session explains where and when Linux on Power provides a competitive alternative to Linux on Intel. The session also highlights the IT economic value of architecture choices provided by the Linux/KVM/Power stack, based on open technologies brought by POWER8 and managed through OpenStack. DancingDinosaur periodically covers studies like these here and here.

(lCV1653) Power IT Economics Advantages for Cloud Service Providers and Private Cloud Deployment; presenter Susan Proietti Conti

Since the announcement of POWER8 and building momentum of the OpenPOWER consortium, there are new reasons for cloud service providers to look at Power technology to support their offerings. As an alternative open-based technology to traditional proprietary technologies, Power offers many competitive advantages that can be leveraged for cloud service providers to deliver IaaS services and other types of service delivery. This session illustrates what Power offers by highlighting client examples and the results of IT economics studies performed for different cloud service providers.

(lSY2653) Why POWER8 Is the Platform of Choice for Linux; presenter Gary Andrews

Linux is the platform of choice for running next generation workloads. With POWER8, IBM is investing heavily into Linux and is adding major enhancements to the Power platform to make it the server of choice for running Linux workloads. This session discusses the new features and how they can help run business faster and at lower costs on the Power platform. Andrews also points out many advanced features of Linux on Power that you can’t do with Linux on x86. He shows how competitive comparisons and performance tests demonstrate that POWER8 increases its lead over the latest x86 processor family. In short, attend this session to understand the competitive advantages that POWER8 on Linux can deliver compared to Linux on x86.

(pBA1244) POWER8: Built for Big Data; presenter William Starke

Starke explains how IBM technologies from semiconductors through micro-architecture, system design, system software, and database and analytic software culminate in the POWER8 family of products optimized around big data analytics workloads. He shows how the optimization across these technologies delivers order-of-magnitude improvements via several example scenarios.

 (pPE1350) Best Practices Guide to Get Maximum Performance from IBM POWER8; presenter Archana Ravindar

This session presents a set of best practices that have been tried and tested in various application domains to get the maximum performance from an application on a POWER8 processor. Performance improvement can be gained at various levels: the system level, where system parameters can be tuned; the application level, where some parameters can be tuned since there is no one-size-fits-all scenario; and the compiler level, where options for every kind of application have been shown to improve performance. Some options are unique to IBM and give an edge over competition in gaming applications. In cases where applications are still under development, Ravindar presents guidelines to ensure the code runs fastest on Power.

DancingDinosaur supports strategies that enable data centers to reuse existing resources like this one:

(pCV2276) Developing a POWERful Cloud Strategy; presenter Susan Schreitmueller

Here you get to examine decision points for how and when to use an existing Power infrastructure in a cloud environment. This session covers on-premises and off-premises, single vs. multi-tenant hosting, and security concerns. You also review IaaS, PaaS, and hybrid cloud solutions incorporating existing assets into a cloud infrastructure. Discover provisioning techniques to go from months to days and then to hours for new instances.

One session DancingDinosaur hasn’t found yet is whether it is less costly for an enterprise to virtualize a couple of thousand Linux virtual machines on one of the new IBM Power servers pictured above or on the z13 as an Enterprise Linux server purchased under the System z Solution Edition Program. Hmm, will have to ask around about that. But either way you’d end up with very low cost VMs compared to x86.

Of course, save time for the free evening entertainment. In addition to Penn & Teller, a pair of magicians, and rocker Grace Potter (see here), there will be a weird but terrific group, 2Cellos, as well.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing here. Please join DancingDinosaur at IBM Edge2015. You will find me hanging out wherever people gather around available power outlets to recharge mobile devices.

