
IBM LinuxONE Can Uberize x86-Based IT

November 13, 2015

Uberization—industry disruption caused by an unlikely competitor—emerged as a dominant concern of C-suite executives in a recently announced IBM Institute for Business Value study. According to the study, the percentage of C-suite leaders who expect to contend with competition from outside their industry increased from 43% in 2013 to 54% today.


These competitors, the future Ubers, aren’t just new permutations of old industries; they also are digital invaders with totally different business models. Consider IBM LinuxONE, a powerful z13 mainframe dedicated to open source Linux and supported by two open communities, the Open Mainframe Project and the Linux Foundation. For the typical mass market Linux shop, usually an x86-based data center, LinuxONE can deliver a standard Linux distribution with support for both the KVM hypervisor and Ubuntu as part of a new pricing model that offers a pay-per-use option in the form of a fixed monthly payment with costs scaling up or down based on usage. It also offers per-core pricing with software licenses for designated cores.

Talk about disruptive: it also brings the scalability, reliability, high performance, and rock-solid security of the latest mainframe. LinuxONE can handle 8,000 virtual servers in a single system, tens of thousands of containers. Try doing that with an x86 machine, or even a dozen of them.

Customers of traditional taxi companies or guests at conventional hotels have had to rethink their transportation or accommodation options in the face of Uberization and the arrival of other disruptive alternatives like Airbnb. So too, x86 platform shops will have to rethink their technology platform options. On either a per-workload basis or a total cost of ownership (TCO) basis, the mainframe has been cost competitive for years. Now with the Uberization of the Linux platform by LinuxONE and IBM’s latest pricing options for it, the time to rethink an x86 platform strategy clearly has arrived. Many long-held misconceptions about the mainframe will have to be dropped or, at least, updated.

The biggest risk to businesses used to come from a new rival with a better or cheaper offering, making it relatively simple to alter strategies. Today, entrenched players are being threatened by new entrants with completely different business models, as well as smaller, more agile players unencumbered by legacy infrastructure. Except for the part of being smaller, IBM’s LinuxONE definitely meets the criteria as a threatening disruptive entrant in the Linux platform space.

IBM even is bringing new business models to the effort, including hybrid cloud and a services-driven approach, as well as its new pricing. How about renting a LinuxONE mainframe short term? You can with one of IBM’s new pricing options: just rent a LinuxONE machine monthly with no upfront payment. At the end of the 36-month rental (you can return the hardware after one year) you choose to return, buy, or replace the machine. Try that with enterprise-class x86 machines.

The introduction of support for both KVM and Ubuntu on the z platform opens even more possibilities. With the LinuxONE announcement Ubuntu has been elevated to a fully z-supported Linux distribution. Together IBM and Canonical are bringing a distribution of Linux that incorporates Ubuntu’s scale-out and cloud expertise to the IBM z Systems platform, further expanding the reach of both. Ubuntu combined with KVM should make LinuxONE very attractive for OpenStack-based hybrid cloud computing that may involve thousands of VMs and containers. And don’t forget a broader range of tools: an expanded set of open source and industry software including Apache Spark, Node.js, MongoDB, MariaDB, PostgreSQL, Chef, and Docker.

Deon Newman, VP of Marketing for IBM z Systems, can recite the LinuxONE scalability stats off the top of his head: the entry-level, single-frame LinuxONE server, named Rockhopper, starts at 80 virtual Linux machines and hundreds of containers, while the high-end double-frame server, Emperor, starts at six IFLs supporting up to 350 virtual machines and can scale all the way to 8,000 virtual machines. On the Emperor server you can literally run hundreds of thousands of containers on a single platform. Newman deliberately emphasizes that LinuxONE machines are servers; x86 server users take note. LinuxONE definitely is not your father’s mainframe.

In the latest C-suite study all C-suite executives—regardless of role—identified technology, for the first time, as the most important external force impacting their enterprise. These executives believe cloud computing, mobile solutions, the Internet of Things, and cognitive computing are the technologies most likely to revolutionize or Uberize their business.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing here.






IBM Acquires Weather Company to Supercharge Cloud Data Analytics

November 6, 2015

Last week IBM announced it would acquire The Weather Company’s B2B, mobile, and cloud-based web properties, including WSI, Weather Underground, and The Weather Company brand in a move intended to boost its data analytics capabilities. The company has big plans for the acquisition, especially for Watson, but is probably not thinking of streaming weather images onto the z System.


IBM to acquire the Weather Company

Maybe it should. How many of the z Systems logistics, supply chain management, scheduling, and reservation systems rely on weather? Might be nice to get access to a couple of Weather Company APIs to pop weather data and analytics into z production systems.

Instead most of the weather goodies will go to Watson as IBM aims to improve the precision of weather forecasts by further deepening Watson’s IoT capabilities through the integration of global atmosphere and weather insights with enterprise information (hello, z System) to create disruptive industry solutions that optimize decision-making. For instance, IBM reports that airlines can save millions of dollars annually by tapping multiple real-time and historical data sources to optimize fuel consumption, reduce delays and airport congestion, and improve passenger safety during disruptive conditions.

In short, the planned acquisition would bring together IBM’s powerful cognitive and analytics platform and The Weather Company’s dynamic cloud data platform, which powers the fourth most-used mobile app daily in the United States and handles 26 billion inquiries (more than its fair share from DancingDinosaur) to its cloud-based services each day. The plan calls for integrating real-time weather insights into business to improve operational performance and decision-making.

A few days earlier, IBM announced what it describes as a transformational approach to making the most of data, with the introduction of IBM Insight Cloud Services. Through collaboration with Twitter and The Weather Company, as well as the use of open data sets and business-owned data, IBM believes it can help clients cut through the noise of unstructured data, help turn streaming data into insights, and change critical business outcomes across industries such as retail, insurance, and media/entertainment.

As part of that announcement, IBM identified three specific actions it is taking:

  1. Provide four new APIs that developers can access from IBM Bluemix, IBM’s cloud platform, to incorporate historical and forecasted weather data from The Weather Company into web and mobile apps, and two APIs that allow developers to incorporate Twitter content enriched with sentiment insights (see the sketch following this list).
  2. Introduce new bundled data sets from IBM and The Weather Company customized for key industries and available on the IBM Cloud. The data packages can help insurers use weather data to alert policyholders ahead of hail storms that may cause property damage, help utilities forecast demand and identify likely service outages, help local governments develop detailed emergency plans in advance of severe weather, and enable many industries, such as retail, to use data to optimize their operations, reduce costs, and uncover revenue opportunities ahead of changes in weather.
  3. Offer a set of pre-built solutions that leverage IBM Insight Cloud Services cognitive techniques to help enable business users to tackle very specific industry challenges. This expands a set of industry solutions IBM introduced in May 2015 that provide businesses with the ability to generate new types of insights based on customer behavior.
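To make the first of those items concrete, here is a minimal sketch of what pulling forecast data into an app might look like. It is a hypothetical illustration: the endpoint URL, credentials, query parameters, and response field names are assumptions, not the documented Bluemix service.

```python
# Hypothetical sketch: fetching a daily forecast from a weather REST API
# of the kind IBM exposes through Bluemix. The URL, credentials, query
# parameters, and response field names are invented for illustration.
import requests

resp = requests.get(
    "https://weather.example.com/api/v2/forecast/daily",
    params={"geocode": "42.36,-71.06", "units": "m", "language": "en-US"},
    auth=("service-username", "service-password"),  # Bluemix-style service credentials
    timeout=10,
)
resp.raise_for_status()

for day in resp.json().get("forecasts", []):  # assumed response shape
    print(day.get("valid_local"), day.get("narrative"))
```

An insurer, for example, could run a job like this daily and trigger policyholder alerts whenever the forecast narrative indicates hail.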

With the pending acquisition of the Weather Company properties IBM is able to further advance its capabilities in big data, analytics, cloud computing, and cognitive computing. These encompass what the company refers to as its strategic imperatives and, alongside the z, they delivered the only bright spot in IBM’s 3Q15 financials. As reported by DancingDinosaur here a few weeks ago, strategic imperatives revenue was up 27 percent year-to-year and cloud revenue was up more than 65 percent year-to-date. Total cloud revenue hit $9.4 billion over the trailing 12 months. Cloud delivered as a service had an annual run rate of $4.5 billion vs. $3.1 billion in third-quarter 2014. Business analytics revenue was up 19 percent year-to-date. With its plans for the Weather Company expect the numbers to grow in upcoming quarters. The Weather Company can also show IBM a thing or two about mobile, another top priority.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing here.

IBM Enhances the DS8000 Storage Family for New Challenges

October 30, 2015

Earlier this month IBM introduced a family of business-critical hybrid data storage systems that span a wide range of price points. The family is powered by the next generation of IBM’s proven DS8000 storage platform and delivers critical application acceleration, six-nines (99.9999%) availability, and industry-leading capabilities like integrated high performance flash. And coming along in November and December will be new tape storage products.


DS8880, courtesy of IBM (click to enlarge)

The company sees demand for the new storage being driven by cloud, mobile, analytics, and security. As IBM continues to encourage data centers to expand into new workloads, it is introducing a new family of business-critical hybrid flash data systems primarily to support the latest requirements of z System- and Power-based data centers. If your shop hasn’t started to experience a ramp up of new workloads it likely will soon enough.

The new storage family, all based on POWER8 and the DS8000 software stack, currently consists of three models:

  1. The entry model, the DS8884, delivers fast hybrid flash starting at under $50K. It offers up to 12 cores, 256 GB total system memory, 64 16Gb FCP/FICON ports, and 768 HDDs/SSDs + 120 flash cards in a 19”, 40U rack.
  2. The DS8886 brings a 2x performance boost, with up to 48 cores, 2 TB total system memory, 128 16Gb FCP/FICON ports, and 1,536 HDDs/SSDs + 240 flash cards packed into a 19”, 46U rack.
  3. The high-end DS8888, according to IBM, is the industry’s fastest Tier 1 subsystem. It is all-flash, with up to 96 cores, 2 TB total system memory, 128 16Gb FCP/FICON ports, and 480 flash cards packed into a 19”, 40U rack. It won’t be available until spring 2016.

Being built on the DS8000 software stack, the new storage brings unparalleled integration with the IBM z System. The systems are especially tuned for insight and cloud environments. They also deliver top efficiency and maximum utilization of resources: better staff productivity, better space utilization, lower cost through streamlined operations, and a 30% reduction in footprint vs. 33″-34” racks.

The DS8880 family comes with two license options. The base function license provides logical configuration support for FB, the Operating Environment License (OEL), IBM Database Protection, Thin Provisioning, Encryption Authorization, Easy Tier, and I/O Priority Manager. The z Synergy Service function license brings PAV and HyperPAV, FICON and High Performance FICON (zHPF), IBM z/OS Distributed Data Backup, and a range of Copy Services functions including FlashCopy, Metro Mirror, Global Mirror, Metro/Global Mirror, z/OS Global Mirror and z/OS Global Mirror Resync, and Multi-Target PPRC.

The DS8880 family also provides 99.9999% uptime, an increase over the typical industry uptime benchmark of 99.999%. That extra nine cuts the allowable downtime from a bit over five minutes per year to roughly half a minute. Even the most mission-critical application can probably live with that.
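The arithmetic behind those nines is simple enough to check for yourself; a quick back-of-the-envelope calculation:

```python
# Back-of-the-envelope downtime math for "five nines" vs. "six nines".
SECONDS_PER_YEAR = 365.2425 * 24 * 3600  # average Gregorian year

for label, availability in [("99.999%", 0.99999), ("99.9999%", 0.999999)]:
    downtime = (1 - availability) * SECONDS_PER_YEAR
    print(f"{label} uptime leaves {downtime:.1f} seconds of downtime per year")

# 99.999%  -> ~315.6 seconds (a bit over 5 minutes) per year
# 99.9999% -> ~31.6 seconds per year
```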

The High-Performance Flash Enclosure for the DS8880 family redefines what IBM considers a true enterprise hybrid flash data system should be, especially in terms of performance for critical applications. Usually, hybrid systems combine flash and traditional spinning drives to be deployed among a variety of mixed workloads in private or public clouds, while reserving more costly all-flash storage for delivering the most extreme performance to only those applications that require it. Now IBM recommends hybrid configurations for consolidating virtually all workloads. The DS8880 preserves the flexibility to deliver flash performance exactly where and when it is needed automatically through Easy Tier, which optimizes application performance dynamically across any DS8880 configuration without requiring administrators to manually tune and retune applications and storage.

The DS8880 also supports a wide variety of enterprise server and virtual server platforms, but not all are created equal. It includes special integration with z Systems and IBM Power Systems. This is due to the advanced microcode that has been developed and enhanced in lockstep with the mainframe’s I/O architecture over the past several decades. For Power shops the DS8880 copy services are tightly integrated with IBM PowerHA SystemMirror for AIX and IBM i, which add another level of assurance for users who need 24×7 business continuity for their critical Power systems.

For shops dealing with VMware, the DS8880 includes interoperability with VMware vStorage APIs for Array Integration, VMware vCenter Site Recovery Manager, and a VMware vCenter plug-in that allows users to offload storage management operations in VMware environments to the DS8880. Should you prefer to go the other direction, the DS8880 supports IBM Storage Management Console for VMware vCenter to help VMware administrators independently monitor and control their storage resources from the VMware vSphere Client GUI.

If you didn’t notice, there has been a series of interesting announcements coming out of IBM Insight, which wrapped up yesterday in Las Vegas. DancingDinosaur intends to recap the most interesting ones in case you missed them.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing here.

IBM z System After Moore’s Law

October 2, 2015

The last z System that conformed to the expectations of Moore’s Law was the zEC12. IBM could boast that it had the fastest commercial processor available. The subsequent z13 didn’t match it in processor speed. The z13 chip runs its 22nm core at 5 GHz, one-half GHz slower than the zEC12, which ran its 32nm core at 5.5 GHz. Did you even notice?


In 2007 an IBM scientist holds a 3-D integrated stacked chip

In 2015, the z13 delivers about a 10 percent performance bump per core thanks to the latest tweaks in the core design, such as better branch prediction and better pipelining. But even one-half GHz slower, the z13 was the first system to process 2.5 billion transactions a day. Even more importantly for enterprise data centers, z13 transactions are persistent, protected, and auditable from end to end, adding assurance as mobile transactions grow to an estimated 40 trillion per day by 2025.

IBM clearly isn’t bemoaning the decline of Moore’s Law. In fact, it has been looking beyond silicon for the processing of the future. This week it announced a major engineering breakthrough that could accelerate the adoption of carbon nanotubes as a replacement for silicon transistors in future computing. The breakthrough allows a new way to shrink transistor contacts without reducing the performance of carbon nanotube devices, essentially opening a path to dramatically faster, smaller, and more powerful computer chips beyond the capabilities of traditional semiconductors. Guess we can stop worrying about Moore’s Law.

Without Moore’s Law, IBM optimized just about everything on the z13 that could be optimized. It provides 320 separate channels dedicated to drive I/O throughput as well as such performance goodies as simultaneous multithreading (SMT), symmetric multiprocessing (SMP), and single instruction, multiple data (SIMD). Overall about 600 processors (in addition to your configurable cores) speed and streamline processes throughout the machine. Moore’s Law, in effect, has been bypassed. As much as the industry enjoyed the annual doubling of capacity and corresponding lower price/performance it doesn’t need Moore’s Law to meet today’s insatiable demand for processing power.

The company will be doing similar things with the POWER processor. Today we have POWER8. Coming is POWER9, followed by POWER10. POWER9 reportedly will arrive in 2017 at 14nm, feature a new microarchitecture, and be optimized with CAPI and NVLink. POWER10 reportedly arrives around 2020, optimized for extreme analytics.

As IBM explains its latest breakthrough, carbon nanotubes represent a new class of semiconductor materials that consist of single atomic sheets of carbon rolled up into a tube. The carbon nanotubes form the core of a transistor device whose superior electrical properties promise several generations of technology scaling beyond the physical limits of silicon.

The new processor technology, IBM reports, overcomes a major hurdle that silicon and any other semiconductor transistor technologies face when scaling down. In the transistor, two things scale: the channel and its two contacts. As devices become smaller, the increased contact resistance of carbon nanotubes has hindered performance gains. The latest development could overcome contact resistance all the way to the 1.8 nanometer node – four technology generations away.

Carbon nanotube chips could greatly improve the capabilities of high performance computers, enabling, for example, big data to be analyzed faster, increasing the power and battery life of mobile devices, and allowing cloud data centers to deliver services more efficiently and economically. Even cognitive computing and Internet of Things can benefit.

Until now, vendors have been able to shrink silicon transistors, but they are approaching a point of physical limitation, which is why Moore’s Law is running out of steam. Previously, IBM demonstrated that carbon nanotube transistors can operate as effective switches at channel dimensions of less than ten nanometers. IBM’s new contact approach overcomes the contact resistance by incorporating carbon nanotubes into semiconductor devices, which could result in smaller chips with greater performance and lower power consumption.

As transistors shrink in size, electrical resistance within the contacts increases, which limits performance. To overcome this resistance, IBM researchers gave up traditional contact schemes and created a metallurgical process akin to microscopic welding that chemically binds the metal atoms to the carbon atoms at the ends of nanotubes. This end-bonded contact scheme allows the contacts to be shrunken below 10 nanometers without impacting performance. This brings the industry a step closer to the goal of a carbon nanotube technology within the decade, says IBM.

Let’s hope this works as expected. If not, IBM has other possibilities already in its research labs. DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing here.

z Systems and Cloud Lead IBM 2Q Results

July 24, 2015

DancingDinosaur generally steers clear of writing about reported quarterly revenue. Given the general focus of this blog on enterprise and cloud computing, however, IBM’s recent 2Q15 report can’t be ignored. Although it continued IBM’s recent string of negative quarterly results, the z and cloud proved to be bright spots.


Strong IBM cloud performance, Q2 2015 (click to enlarge)

As IBM reported on Monday: Revenues from z Systems mainframe server products increased 9 percent compared with the year-ago period (up 15 percent adjusting for currency).  Total delivery of z Systems computing power, as measured in MIPS, increased 24 percent.  Revenues from Power Systems were down 1 percent compared with the 2014 period (up 5 percent adjusting for currency).

It’s not clear when and how Power Systems will come back. IBM has opened up the Power platform through the OpenPOWER Foundation. A good move in theory, which DancingDinosaur applauds. Still, much depends on the Foundation gaining increased momentum and individual members rolling out successful Power-based products. The roadmap for POWER8, POWER9, and beyond looks promising, but how fast products will arrive is unclear. There also is potential for the commoditization of the Power platform, a welcome development in many quarters, but commoditization’s impact on future revenue also is not clear.

Cloud revenue was up more than 70 percent, adjusting for currency and divested businesses; up more than 50 percent as reported, according to IBM. Given that cloud, along with mobile and analytics, has been designated strategic by IBM this is an encouraging development. The company’s cloud strategy is starting to bear fruit.

The big question hanging over every vendor’s cloud strategy is how to make money at it. One of the appealing aspects of the cloud in terms of cost and pricing for IT-using organizations is what amounts to a race to the bottom. With pricing immediately apparent and lower pricing just a click away, it has become a feast for the bottom grazers to whom the lowest price is all that matters. For companies like IBM and Oracle, which also has declared cloud a strategic initiative, and other large legacy enterprise platform providers, the challenge is to be competitive on price while differentiating their offerings in other ways. Clearly IBM has some unique cloud offerings in Watson, Bluemix, and others, but can they deliver enough revenue fast enough to offset the reduction in legacy platform revenue? Remember, x86 is off IBM’s menu.

Timothy Prickett Morgan, who writes frequently about IBM technology, also had plenty to say about IBM’s 2Q15 announcement, as did a zillion other financial and industry analysts. To begin he noted the irony of IBM promoting cloud computing, primarily an x86 phenomenon, while trying to convince people that Power-based systems are cost competitive—which they can be—and will do a better job for many of those workloads, correct again.

Morgan also makes an interesting point in regard to the z: “IBM doesn’t have to push the System z mainframe so much as keep it on a Moore’s Law curve of its own and keep the price/performance improving to keep those customers in the mainframe fold.” That’s harder than it may seem; DancingDinosaur addressed the Moore’s Law issue last week here. As Morgan notes, with well over $1 trillion in software assets running on the mainframe, the 6,000 or so enterprises that use mainframes are unlikely to move off the platform because of the cost, disruption, and risk such a move would entail. Just ask Union Pacific Railroad, which seems to be doing a slow-motion platform migration off the mainframe that seemingly may never actually end. Morgan concludes: “IBM can count on a certain level of money from the System z line that it just cannot with the Power Systems line.”

As noted above, how much revenue Power can generate for IBM depends on how fast OpenPOWER Foundation members introduce products that expand the market and how many Power processors SoftLayer can absorb as the business unit expands its global footprint. There also is the question of how many POWER8 servers Rackspace, a much larger cloud provider than SoftLayer, will take and whether the Rackspace initiative will catch on elsewhere.

In any event, IBM’s 2Q15 report showed enough positive momentum to encourage IT platform enthusiasts. For its part, DancingDinosaur is expecting a business class z13 in the coming months and more.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing here.

IBM Extends Moore’s Law with First 7nm Test Chip

July 17, 2015

In an announcement last week, IBM effectively extended Moore’s Law for at least another generation of chips, maybe two.  This contradicts what leading vendors, including IBM, have been saying for years about the imminent diminishing returns of Moore’s Law, which postulated that chips would double in capacity every 18-24 months. Moore’s Law drove the price/performance curve the industry has been experiencing for the past several decades.


Click to enlarge, courtesy of IBM

The announcement, ironically, coincides with IBM’s completion of the sale of its semiconductor fabrication business to GLOBALFOUNDRIES, which IBM paid to take the costly facilities off its hands. To pull off the 7nm achievement IBM ended up partnering with a handful of players, including a public-private partnership with New York State and a joint development alliance with GLOBALFOUNDRIES, Samsung, and equipment suppliers. The team is based at SUNY Poly’s NanoTech Complex in Albany.

To achieve the higher performance, lower power, and scaling benefits promised by 7nm technology, the IBM researchers turned to two main innovations: the use of Silicon Germanium (SiGe) channel transistors and Extreme Ultraviolet (EUV) lithography integration at multiple levels, in effect bypassing conventional semiconductor manufacturing approaches.

Don’t expect to see new systems featuring these 7nm chips very soon. The announcement made no mention of any timetable for producing commercial products based on this technology. As Timothy Prickett Morgan, who writes extensively on IBM POWER Systems technology, observed: the use of silicon germanium for portions of the transistors cuts back on power consumption for the very fast switching necessary for improving circuit performance, and the circuits are etched using extreme ultraviolet (EUV) lithography. These technologies may be difficult and expensive to put into production.

In the meantime, IBM notes that microprocessors utilizing 22nm and 14nm technology run today’s servers, cloud data centers, and mobile devices, and 10nm technology already is well on the way to becoming mature. The 7nm chips promise even more: at least a 50% power/performance improvement for the next generations of mainframe and POWER systems that will fuel the big data, cloud, and mobile era, and soon the Internet of Things too.

The z13 delivers unbeatable performance today. With the zEC12 IBM boasted of the fastest commercial chip in the industry, 5.5 GHz on a 32 nm wafer. It did not make that boast with the z13. Instead the z13 runs on a 22 nm core at 5 GHz but still delivers a 40% total capacity improvement over the zEC12.

It does this by optimizing the stack top to bottom with 600 processors and 320 separate channels dedicated just to drive I/O throughput. The reason for not cranking up the clock speed on the z13, according to IBM, was the plateauing of Moore’s Law. The company couldn’t get enough boost for the tradeoffs it would have had to make. Nobody seems to be complaining about giving up that one-half GHz. Today the machine can process 2.5 billion transactions a day.

The ride up the Moore’s Law curve has been very enjoyable for all. Companies took the additional processing power to build onto the chip more capabilities that otherwise would have required additional processors.  The result: more performance and more capabilities at lower cost. But all good things come to an end.

This 7nm breakthrough doesn’t necessarily restore Moore’s Law. At this point, the best we can guess is that it temporarily moves the price/performance curve to a new plane. Until we know the economics of mass fabrication in the 7nm silicon germanium world we can’t tell whether we’ll see a doubling as before, or maybe just a half or a quarter of that, or maybe it could triple. We just don’t know.

For the past decade, Morgan reports, depending on the architecture, the thermal limits of systems imposed a clock speed limit on processors, and aside from some nominal instructions per clock (IPC) improvements with each recent microarchitecture change, clock speeds and performance for a processor stayed more or less flat. This is why vendors went parallel with their CPU architectures, in effect adding cores to expand throughput rather than increasing clock speed to boost performance on a lower number of cores. Some, like IBM, also learned to optimize at every level of the stack. As the z13 demonstrates, lots of little improvements do add up.

Things won’t stop here. As Morgan observes, IBM Research and the Microelectronics Division were working with GLOBALFOUNDRIES, Samsung, and the chip-making equipment suppliers who collaborate through the SUNY Polytechnic Institute’s Colleges of Nanoscale Science and Engineering in nearby Albany to get a path to 10nm and then 7nm processes, even as the sale to GLOBALFOUNDRIES was being finalized.

The next step, he suggests, could possibly be at 4 nm but no one is sure if this can be done in a way that is economically feasible. If it can’t, IBM already has previewed the possibility of other materials that show promise.

Moore’s Law has been a wonderful ride for the entire industry. Let’s wish them the best as they aim for ever more powerful processors.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing here.

IBM Continues Open Source Commitment with Apache Spark

June 18, 2015

If anyone believes IBM’s commitment to open source is a passing fad, forget it. IBM has invested billions in Linux, in open POWER through the OpenPOWER Foundation, and more. Its latest is the announcement of a major commitment to Apache Spark, a fast, general-purpose open source cluster computing system for big data.


Courtesy of IBM: developers work with Spark at Galvanize Hackathon

As IBM sees it, Spark brings essential advances to large-scale data processing. Specifically, it dramatically improves the performance of data-dependent apps and is expected to play a big role in the Internet of Things (IoT). In addition, it radically simplifies the process of developing intelligent apps, which are fueled by data. It does so by providing high-level APIs in Scala, Java, and Python, and an optimized engine that supports general computation graphs for data analysis. It also supports a rich set of higher-level tools including Spark SQL for SQL and DataFrames, MLlib for machine learning, GraphX for graph processing, and Spark Streaming for stream processing.
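To give a sense of how compact Spark code can be, here is a minimal word-count sketch in the Python API (PySpark) using the core RDD interface; the sample data is invented for illustration.

```python
# Minimal PySpark sketch: the classic word count on the core RDD API.
from pyspark import SparkContext

sc = SparkContext("local[*]", "wordcount")

lines = sc.parallelize([
    "spark on z systems",
    "spark radically simplifies intelligent apps",
])

counts = (lines.flatMap(lambda line: line.split())  # split each line into words
               .map(lambda word: (word, 1))         # pair each word with a count of 1
               .reduceByKey(lambda a, b: a + b))    # sum the counts per word

print(counts.collect())
sc.stop()
```

The same few transformations scale unchanged from a laptop to a cluster, which is much of Spark’s appeal.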

IBM is contributing its breakthrough SystemML machine learning technology to the Spark open source ecosystem. Beyond the performance and developer advantages already noted, maybe Spark’s biggest advantage is that it can handle data coming from multiple, disparate sources.

What IBM likes in Spark is that it’s agile, fast, and easy to use. It also likes that Spark is open source, which ensures it is improved continuously by a worldwide community. Those also are some of the main reasons mainframe and Power Systems data centers should pay attention to Spark. Spark will make it easier to connect applications to data residing in your data center. If you haven’t yet noticed an uptick in mobile transactions coming into your data center, they will be coming, and these benefit from Spark. And if you look out just a year or two, expect to see IoT applications adding to and needing to combine all sorts of data, much of it ending up on the mainframe or a Power System in one form or another. So make sure Spark is on your radar screen.

Over the course of the next few months, IBM scientists and engineers will work with the Apache Spark open community to accelerate access to advanced machine learning capabilities and help drive speed-to-innovation in the development of smart business apps. By contributing SystemML, IBM hopes data scientists will iterate faster to address the changing needs of business and to enable a growing ecosystem of app developers who will apply deep intelligence to everything.

To ensure that happens, IBM will commit more than 3,500 researchers and developers to work on Spark-related projects at more than a dozen labs worldwide, and open a Spark Technology Center in San Francisco for the Data Science and Developer community to foster design-led innovation in intelligent applications. IBM also aims to educate more than 1 million data scientists and data engineers on Spark through extensive partnerships with AMPLab, DataCamp, MetiStream, Galvanize, and Big Data University MOOC (Massive Open Online Course).

Of course, Spark isn’t going to be the end of tools to expedite the latest app dev. With IoT just beginning to gain widespread interest, expect a flood of tools to expedite developing IoT data-intensive applications and more tools to facilitate connecting all these coming connected devices, estimated to number in the tens of billions within a few years.

DancingDinosaur applauds IBM’s decade-plus commitment to open source and its willingness to put real money and real code behind it. That means the IBM z System mainframe, the POWER platform, Linux, and the rest will be around for some time. That’s good; DancingDinosaur is not quite ready to retire.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing here.

API Economy Comes to the IBM z System

June 11, 2015

What comes to mind when you hear (or read) about a RESTful IBM z System? Hint: it is not a mainframe that is loafing. To the contrary, a RESTful mainframe probably is busier than it has ever been, now running a slew of new apps, most likely mobile or social apps with REST APIs connecting to z/OS-based web services plus its usual workloads. Remember web services when SOA first came to the mainframe? They continue today behind the new mobile, cloud, social, and analytical workloads that are putting the spotlight on the mainframe.


Courtesy of IBM: travel fuels mobile activity (click to enlarge)

A variety of Edge2015 sessions, given by Asit Dan, chief architect, z Service API Management, and Glenn Anderson, IBM Lab Services and Training, put what the industry refers to as the emerging API economy in perspective. The z, it should come as no surprise, lies at the heart of this burgeoning API economy, not only handling transactions but also providing governance and management to the API phenomenon that is exploding. Check out IBM’s APIs for Dummies.

The difference between first generation SOA and today’s API economy lies in the new workloads—especially mobile and cloud—fueling the surging interest. The mobile device certainly is the fastest growing platform and will likely become the largest platform soon if it is not already, surpassing desktop and laptop systems.

SOA efforts initially focused on the capabilities of the providers of services, noted Dan, particularly the development, run-time invocation, and management of services. The API economy, on the other hand, focuses on the consumption of these services. It really aims to facilitate the efforts of application developers (internal developers and external business partners) who must code their apps for access to existing and new API-enabled services.

One goal of an enterprise API effort is to access already deployed services, such as z-based CICS services or those of a partner. Maybe a more important goal, especially where the z is involved, is to drive use of mainframe software assets by customers, particularly mobile customers. The API effort not only improves customer service and satisfaction but could also drive added revenue. (Have you ever fantasized about the z as a direct revenue generator?)

This calls, however, for a new set of interfaces. As Dan notes in a recent piece, APIs for accessing these assets, defined using well known standards such as web services and Representational State Transfer (REST) with JSON (JavaScript Object Notation), and published via an easily accessible catalog, make it efficient to subscribe to APIs for obtaining permissions and building new applications. Access to the APIs now can be controlled and tracked during run-time invocations (and even metered where revenue generation is the goal).
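From the consuming developer’s side, invoking such a published API is deliberately simple. Below is a hypothetical sketch of an app calling a z-hosted REST service that fronts an existing CICS transaction; the host, path, API key, and JSON field names are invented for illustration.

```python
# Hypothetical sketch: an app invoking a z/OS-hosted REST service that
# fronts an existing CICS transaction. The host, path, API key, and
# JSON field names are invented for illustration.
import requests

resp = requests.post(
    "https://zhost.example.com/banking/v1/transfers",
    json={"fromAccount": "12345", "toAccount": "67890", "amount": 250.00},
    headers={"X-API-Key": "subscriber-key", "Accept": "application/json"},
    timeout=10,
)
resp.raise_for_status()
print(resp.json().get("confirmationId"))  # assumed response field
```

The subscriber key in the header is what lets the provider control and track each run-time invocation, and meter it where revenue generation is the goal.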

Now the API economy can morph into a commercial exchange of business functions, capabilities, and competencies as services using web APIs, noted Glenn Anderson at Edge2015. In-house business functions running on the z can evolve into an API as-a-service delivery vehicle, which amounts to another revenue stream for the mainframe data center.

The API economy often is associated with the concept of containers. Container technology provides a simplified way to make applications more mobile in a hybrid cloud, Anderson explained, and brings some distinct advantages. Specifically, containers are much smaller in size than virtual machines and provide more freedom in the placement of workloads in a cloud (private, public, hybrid) environment. Container technology is being integrated into OpenStack, which is supported on the z through IBM Cloud Manager. Docker is the best known container technology and it works with Linux on z.

With the combination of SOA, web services, REST, JSON, OpenStack, and Docker all z capable, a mainframe data center can fully participate in the mobile, apps, cloud API economy. BTW, POWER servers can play the API, OpenStack, Docker game too. Even Watson can participate in the API economy through IBM’s early March acquisition of AlchemyAPI, a provider of scalable cognitive computing API services. The acquisition will drive the API economy into cognitive computing too. Welcome to the mainframe API economy.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing here.

Real Time Analytics on the IBM z13

June 4, 2015

For years organizations have been putting their analytics on distributed platforms, thinking that was the only way to get fast, real-time and predictive analytics. Maybe once, but not anymore. Turns out the IBM z System, especially the z13, is not only suitable for real time, predictive analytics but preferable for it.

IBM today is so bullish on analytics, especially predictive analytics, that last month it introduced 20 pre-built industry-specific predictive analytics solutions. To build these solutions IBM tapped its own experience working on 50,000 engagements but also an array of outside organizations with success in predictive analytics, including Urban Outfitters, National Grid, Deloitte, Bolsa de Santiago, Interactive Data Managed Solutions, and Bendigo and Adelaide Bank, among others.


Courtesy of IBM (click to enlarge)

The truth of the matter is that without efficient real time, predictive analytics managers get it wrong most of the time when it comes to making operational decisions, said Paul DiMarzio, IBM z Systems Big Data and Analytics Worldwide Portfolio Marketing Manager. He spoke at IBM Edge2015 in a session titled “When Milliseconds Matter: Architecting Real-Time Analytics into Operational Systems.” His key point: you can do this completely within the IBM z System.

The old notion of sending data to distributed systems someplace else for analytics now appears ridiculous, especially with the introduction of systems like the z13 that can handle operations and perform real time analytics concurrently. It performs analytics fast enough that you can make decisions when the action is still going on. Now the only question is whether we have the right business rules and scoring models. The data already are there and the tools are ready and waiting on the z13.

You start with the IBM SPSS Modeler with Scoring Adapter for zEnterprise. The real time predictive analytics capability delivers better, more profitable decisions at the point of customer impact. For business rules just turn to the IBM Operational Decision Manager for z/OS, which codifies business policies, practices, and regulations.

IBM SPSS improves accuracy by scoring directly within the transactional application against the latest committed data. As such it delivers the performance needed to meet operations SLAs and avoid data governance and security issues, effectively saving network bandwidth, data copying latency, and disk storage.

In addition to SPSS and the Operational Decision Manager, the z13 brings many capabilities, some of them new with the z13. For starters, the z13 excels as a custodian of the data model, providing an accurate, secure, single copy of information that, according to IBM, ensures the veracity of the data necessary for reliable analytics and provides centralized control over decision information.

Specifically, the machine brings SIMD (single instruction, multiple data) along with the MASS (mathematical acceleration subsystem) and ATLAS (automatically tuned linear algebra software) libraries for z/OS and Linux on z. SIMD enables the same operation to be performed on several data elements at the same time rather than sequentially. MASS and ATLAS help programmers create better and more complex analytic models.
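SIMD is the same idea that makes vectorized array libraries fast on commodity hardware. Here is a conceptual illustration in Python with NumPy; it is an analogy for the principle, not z13-specific code.

```python
# Conceptual SIMD illustration: apply one operation across many data
# elements at once instead of element by element.
import numpy as np

scores = np.array([0.82, 0.40, 0.97, 0.15, 0.66])
weights = np.array([1.0, 0.5, 2.0, 1.5, 1.0])

# Scalar style: one multiply at a time in a Python loop.
weighted_scalar = [s * w for s, w in zip(scores, weights)]

# Vectorized style: a single expression over whole arrays, which NumPy
# (like SIMD hardware) can execute on multiple elements per instruction.
weighted_vector = scores * weights

print(weighted_scalar)
print(weighted_vector.tolist())
```

The payoff grows with the data: the more elements each instruction covers, the more an analytic model gains without any increase in clock speed.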

In addition, increases in memory to as much as 10 TB, faster I/O, and simultaneous multithreading (SMT) generally boost the overall throughput of the z13, which will surely benefit any analytics being run on the machine, especially real time, predictive analytics. Analytics on the z13 also gains from deep integration with core systems, the integrated architecture, and its single pane management view.

The latest IBM Red Book on analytics on the z13 sums it up as such: z Systems analytics enables organizations to improve performance and lower cost by bringing the analytic processing to where the data resides. Organizations can therefore maximize their current IT investments while adding functionality and improved price and performance with the z13. And with the new z13 features, applications can gain increased throughput for operational business intelligence (operational BI) and DB2 query workloads, which saves money (hardware, software, labor).

The Red Book suggests the following example: a user with a mobile application signs on and initiates a transaction flow through an IBM MobileFirst Platform Server running on Linux on z. The event goes to an LDAP server on z/OS to validate the user’s sign-on credentials. After successful validation, the transaction then proceeds through the z/OS transaction environment where all of the data resides in DB2 z/OS. IBM CICS transactions also are processed in the same z environment and all of the analysis is performed without moving any data, resulting in extremely fast performance. Sweet.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing here.

IBM zSystem for Social—Far From Forgotten at Edge2015

May 28, 2015

Dexter Doyle and Chris Gamin (z System Middleware) titled their session at Edge2015 “IBM z Systems: The Forgotten Platform in Your Social Business.” They were only half joking. As systems of engagement play bigger roles in the enterprise the z is not quite as forgotten as it may once have been. In fact, at IBM the z runs the company’s own deployment of IBM Connections, its flagship social business product.

Doyle used the graphic below (copyright John Atkinson, Wrong Hands) to make the point that new tools replace familiar conventional tools in a social business world.


 (copyright John Atkinson, Wrong Hands, click to enlarge)

Looks almost familiar, huh? Social business is not so radical. The elements of social business have been with us all along. It’s not exactly a one-to-one mapping, but Twitter and Pinterest stand in for post-it notes, LinkedIn replaces the rolodex, Instagram takes the place of photos on your desk, and more. Social business done right with the appropriate tools enables efficiency, Doyle observed. You don’t see the z in this picture, but it is there, connecting all the dots in the social sphere.

Many traditional mainframe data centers are struggling to come to grips with social business even as mobile and social workloads increasingly flow through the z. “The biggest thing with social is the change in culture,” said Doyle in his Forgotten Platform session. You end up using different tools to do business in a more social way. Even email appears antiquated in social business.

For data centers still balking at the notion of social business, Doyle noted that by 2016, 50% of large organizations will have internal Facebook-like social networks, a widely reported Gartner finding, and 30% of these will be considered as essential as email and telephones are today. The message: social business is real and z data centers should be a big part of it.

So what parts of social business will engage with the z? Doyle suggested five to start:

  1. Social media analytics
  2. Customer sentiment
  3. Customer and new market opportunity identification
  4. Identification of illegal or suspicious activities
  5. Employee and customer experiences

And the z System’s role? Same as it has always been:

  • Build an agile approach to deliver applications
  • Make every transaction secure
  • Use analytics to improve outcomes at every moment

These are things every z data center should be good at. To get started with social business on z visit the IBM Connections webpage here. There happens to be an offer for a 60-day free trial (it’s a cloud app) here. Easy and free; at the least it should be worth a try.

IBM Connections delivers a handful of social business capabilities. The main components are home, profiles, communities, and social analytics. Other capabilities include blogs, wikis, bookmarks, and forums for idea generation and sharing. You can use the activities capability to organize your work and that of a team, and another lets you vote on ideas. Finally, it brings a media library, content management capabilities, and file management.

Along with Connections you also might want to deploy WebSphere and Java, if you haven’t already. Then, if you are serious about building out a social business around the z you’ll want to check out Bluemix and MobileFirst. Already there is an IBM Red Book out for mobile app dev on the z13. The idea, of course, is to create engaging mobile and social business apps with the z as the back end.

The biggest payoff from social business on the z comes when you add analytics, especially real-time analytics. DancingDinosaur attended a session on that topic at Edge2015 and will be taking it up in a coming post.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing here.

