IBM z System After Moore’s Law

October 2, 2015

The last z System that conformed to the expectations of Moore’s Law was the zEC12. IBM could boast that it had the fastest commercial processor available. The subsequent z13 didn’t match it in processor speed. The z13 chip runs a 22 nm core at 5 GHz, half a GHz slower than the zEC12, which ran its 32 nm core at 5.5 GHz. Did you even notice?

In 2007 an IBM scientist holds a 3-D integrated stacked chip

In 2015, the z13 delivers about a 10 percent performance bump per core thanks to the latest tweaks in the core design, such as better branch prediction and better pipelining. But even half a GHz slower, the z13 was the first system to process 2.5 billion transactions a day. Even more importantly for enterprise data centers, z13 transactions are persistent, protected, and auditable from end to end, adding assurance as mobile transactions grow to an estimated 40 trillion per day by 2025.

IBM clearly isn’t bemoaning the decline of Moore’s Law. In fact, it has been looking beyond silicon for the processing of the future. This week it announced a major engineering breakthrough that could accelerate the replacement of silicon transistors with carbon nanotubes to power future computing. The breakthrough provides a new way to shrink transistor contacts without reducing the performance of carbon nanotube devices, essentially opening a path to dramatically faster, smaller, and more powerful computer chips beyond the capabilities of traditional semiconductors. Guess we can stop worrying about Moore’s Law.

Without Moore’s Law, IBM optimized just about everything on the z13 that could be optimized. It provides 320 separate channels dedicated to driving I/O throughput as well as such performance goodies as simultaneous multithreading (SMT), symmetric multiprocessing (SMP), and single instruction, multiple data (SIMD). Overall, about 600 processors (in addition to your configurable cores) speed and streamline processes throughout the machine. Moore’s Law, in effect, has been bypassed. As much as the industry enjoyed the regular doubling of capacity and the corresponding price/performance gains, it doesn’t need Moore’s Law to meet today’s insatiable demand for processing power.

The company will be doing similar things with the POWER processor. Today we have the POWER8. Coming is the POWER9, followed by the POWER10. The POWER9 reportedly will arrive in 2017 at 14nm, feature a new micro-architecture, and be optimized with CAPI and NVLink. POWER10 reportedly arrives around 2020, optimized for extreme analytics.

As IBM explains its latest breakthrough, carbon nanotubes represent a new class of semiconductor materials that consist of single atomic sheets of carbon rolled up into a tube. The carbon nanotubes form the core of a transistor device whose superior electrical properties promise several generations of technology scaling beyond the physical limits of silicon.

The new processor technology, IBM reports, overcomes a major hurdle that silicon and any other semiconductor transistor technologies face when scaling down. In the transistor, two things scale: the channel and its two contacts. As devices have become smaller, increased contact resistance has hindered the performance gains of carbon nanotube devices. The latest development could overcome contact resistance all the way down to the 1.8 nanometer node – four technology generations away.

Carbon nanotube chips could greatly improve the capabilities of high performance computers, enabling, for example, big data to be analyzed faster, increasing the power and battery life of mobile devices, and allowing cloud data centers to deliver services more efficiently and economically. Even cognitive computing and Internet of Things can benefit.

Until now, vendors have been able to keep shrinking silicon transistors, but they are approaching a point of physical limitation, which is why Moore’s Law is running out of steam. Previously, IBM demonstrated that carbon nanotube transistors can operate as effective switches at channel dimensions of less than ten nanometers. IBM’s new contact approach overcomes the contact resistance by incorporating carbon nanotubes into semiconductor devices, which could result in smaller chips with greater performance and lower power consumption.

As transistors shrink in size, electrical resistance within the contacts increases, which limits performance. To overcome this resistance, IBM researchers gave up traditional contact schemes and created a metallurgical process akin to microscopic welding that chemically binds the metal atoms to the carbon atoms at the ends of nanotubes. This end-bonded contact scheme allows the contacts to be shrunk below 10 nanometers without impacting performance. This brings the industry a step closer to the goal of a carbon nanotube technology within the decade, says IBM.

Let’s hope this works as expected. If not, IBM has other possibilities already in its research labs. DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at and here.

IBM Makes a Big Play for the API Economy with StrongLoop

September 25, 2015

APIs have become essential in connecting systems of engagement with the systems of record typically found on the IBM z System. That’s one reason why IBM earlier this month acquired StrongLoop, Inc., a software provider that helps developers connect enterprise applications to mobile, Internet of Things (IoT), and web applications in the cloud, mainly through rapidly proliferating and changing APIs. Take this as a key signal that IBM intends to be a force in the emerging API economy. Its goal is to connect existing enterprise apps, data, and SOA services to new channels via APIs.

API economy, courtesy of IBM (click to enlarge)

Key to the acquisition is StrongLoop’s position as a leading provider of tools for Node.js, the server-side JavaScript runtime that has become a favorite among developers needing to build applications using APIs. According to IBM, it intends to integrate Node.js capabilities from StrongLoop with its own software portfolio, which already includes MobileFirst and WebSphere, to help organizations better use enterprise data and conduct transactions whether in the cloud or on-premises.

These new capabilities, IBM continues, will enable organizations and developers to build scalable APIs and more easily connect existing back-end enterprise processes with front-end mobile, IoT, and web apps in an open hybrid cloud. Node.js is one of the fastest growing development frameworks for creating and delivering APIs, in part because it is built on JavaScript, which shortens the learning curve.
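
For a sense of why the learning curve is short, here is a minimal sketch (not IBM or StrongLoop code) of a Node.js API written in TypeScript with the widely used Express framework; the route, port, and payload are hypothetical.

```typescript
// Minimal sketch of a Node.js API endpoint in TypeScript using Express.
// The route, port, and payload are hypothetical illustrations only.
import express, { Request, Response } from "express";

const app = express();
app.use(express.json());

// A simple read-only endpoint a mobile or web front end could call.
app.get("/api/accounts/:id", (req: Request, res: Response) => {
  // In a real deployment this handler would call a back-end system of record.
  res.json({ id: req.params.id, status: "active", balance: 1234.56 });
});

app.listen(3000, () => console.log("API listening on port 3000"));
```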

Although Node.js is emerging as the standard for APIs and micro-services, APIs still present challenges. These include the lack of an architected approach, limited scalability, multiple languages and point products, limited data connectors, and large, fragile monolithic applications.

Mainframe data centers, in particular, are sitting on proven software assets that beg to be broken out as micro-services to be combined and recombined to create new apps for use in mobile and web contexts. As IoT ramps up, the demand for these APIs will only skyrocket. And the mainframe data center will sit at the center of all this, possibly even becoming a revenue generator.

In response, StrongLoop brings API creation and lifecycle support along with back-end data connectors. It also will integrate with IBM’s API management, creating an API platform that can enable polyglot runtimes, integration, and API performance monitoring. It also will integrate with IBM’s MobileFirst Platform, WebSphere, and other products, such as Bluemix, to enable Node across the product portfolio. StrongLoop also brings Arc and its LoopBack framework, which handle everything from visual API modeling to a process manager for scaling APIs, plus a security gateway. Together, StrongLoop Arc and IBM’s API Management can deliver the full API lifecycle. IBM also will incorporate select capabilities from StrongLoop into its IoT Foundation, a topic DancingDinosaur expects to take up in the future.
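
As a rough illustration of the LoopBack pattern described above, the sketch below defines a model and lets the framework expose it over REST. The method names (loopback(), app.dataSource, createModel, app.model, loopback.rest()) are recalled from the LoopBack 3 documentation and should be verified; the Claim model and in-memory data source are hypothetical stand-ins for a real back-end connector.

```typescript
// Hedged sketch: expose a data model as a REST API, LoopBack-3 style.
// API names are recalled from LoopBack 3 docs and should be verified;
// the model and in-memory data source are hypothetical stand-ins.
import loopback from "loopback";

const app = loopback();

// An in-memory data source stands in for a real back-end connector.
const ds = app.dataSource("mem", { connector: "memory" });

// Define a claim record; LoopBack generates CRUD REST endpoints for it.
const Claim = ds.createModel("Claim", {
  claimId: { type: String, id: true },
  insured: String,
  amount: Number,
});
app.model(Claim);

// Mount the generated REST API and start the server.
app.use("/api", loopback.rest());
app.listen(3000, () => console.log("Claims API available under /api/Claims"));
```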

At the initial StrongLoop acquisition announcement Marie Wieck, general manager, Middleware, IBM Systems, alluded to the data center possibilities, as noted above: “Enterprises are focused on digital transformation to reach new channels, tap new business models, and personalize their engagement with clients. APIs are a critical ingredient.” The fast adoption of Node.js for rapidly creating APIs combined with IBM’s strength in Java and API management on the IBM cloud platform promises a winning strategy.

To make this even more accessible, IBM is adding Node.js to Bluemix, following a summer of enhancements to Bluemix covered here by DancingDinosaur just a few weeks ago. Java remains the leading language for web applications and transaction systems. Combining StrongLoop’s Node.js tools and services with IBM’s WebSphere and Java capabilities will help organizations bridge Java and Node.js development platforms, enabling enterprises to extract greater value from their application investments. Throw in integration on IBM Bluemix and the Java and Node.js communities will gain access to many other IBM and third-party services including access to mobile services, data analytics, and Watson, IBM’s crown cognitive computing jewel.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at and here.

IBM Expands Spectrum Storage in the Cloud with Spectrum Protect

September 18, 2015

IBM is targeting storage for hybrid clouds with Spectrum Protect. Specifically, it brings new cloud backup and a new management dashboard aimed at helping businesses back up data to on-premises object storage or the cloud without the expense of cloud-gateway appliances. It also enables advanced data placement across all storage types to maximize performance, availability, and cost efficiency. Spectrum Protect represents the latest part of the IBM Spectrum storage family, which provides advanced software defined storage (SDS) capabilities and flexible deployment as software, an appliance, or a cloud service. IBM announced Spectrum Protect at the end of August.

Courtesy IBM: Spectrum Protect dashboard (click to enlarge)

Introduced early this year, IBM Spectrum brings a family of optimized SDS solutions designed to work together. It offers SDS file, object, and block storage with common management and a consistent user and administrator experience. Although it is based on IBM’s existing storage hardware products like XIV, Storwize, IBM FlashSystem, and SVC, you can deploy it as software on some non-IBM hardware too. It also offers support for VMware environments and includes VMware API support for VASA, VAAI, and VMware SRM. With Spectrum, IBM appears to have come up with a winner; over the last six months, IBM reports, more than 1,000 new clients have chosen products from the IBM Spectrum Storage portfolio.

Specifically, IBM Spectrum Protect supports IBM Cloud infrastructure today, with plans to expand to other public clouds in the future. IBM Spectrum Accelerate (XIV block storage) also can be accessed as a service by IBM Cloud customers via the SoftLayer cloud infrastructure. There it allows companies to deploy block storage on SoftLayer without having to buy new storage hardware or manage an appliance farm.

In competitive analysis, IBM found that a single IBM Spectrum Protect server performs the work of up to 15 CommVault servers. This means that large enterprises can consolidate backup servers to reduce cost and complexity while managing data growth from mobile, social, and Internet of Things environments.  Furthermore, SMBs can eliminate the need for a slew of infrastructure devices, including additional backup servers, media servers, and deduplication appliances, thereby reducing complexity and cost. Cost analysis with several beta customers, reports IBM, indicates that the enhanced IBM Spectrum Protect software can help clients reduce backup infrastructure costs on average by up to 53 percent.

IBM reports that the Spectrum Storage portfolio can centrally manage more than 300 different storage devices and yottabytes (1 yottabyte = 10^24 bytes) of data.  Its device interoperability is the broadest in the industry – incorporating both IBM and non-IBM hardware and tape systems.  IBM Spectrum Storage can help reduce storage costs up to 90 percent in certain environments by automatically moving data onto the most economical storage device – either from IBM or non-IBM flash, disk, and tape systems.

IBM Spectrum Storage portfolio packages key storage software from conventional IBM storage products. These include IBM Spectrum Accelerate (IBM XIV), Spectrum Virtualize (IBM SAN Volume Controller along with IBM Storwize), Spectrum Scale (IBM General Parallel File System or GPFS technology, previously referred to as Elastic Storage), Spectrum Control (IBM Virtual Storage Center and IBM Storage Insights), Spectrum Protect (Tivoli Storage Manager family) and Spectrum Archive (various IBM tape backup products).

The portfolio is presented as a software-only product and, presumably, you can run it on IBM and some non-IBM storage hardware if you choose. You will have to compare the cost of the software license with the cost of the IBM and non-IBM hardware to decide which gets you the best deal. It may turn out that running Spectrum Accelerate (XIV) on low-cost generic disks, rather than buying a rack of XIV disk to go with it, is the lowest-price option. But keep in mind that the lowest cost generic disk may not meet your performance or reliability specifications.
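
As a back-of-the-envelope illustration of that comparison, a few lines of arithmetic are all it takes; every figure below is hypothetical and the cost categories are assumptions, not IBM pricing.

```typescript
// Back-of-the-envelope SDS cost comparison. All figures are hypothetical
// and the cost categories are assumptions, not actual IBM pricing.
interface StorageOption {
  name: string;
  softwareLicense: number; // annual software license cost (USD)
  hardware: number;        // amortized annual hardware cost (USD)
  support: number;         // annual support and maintenance (USD)
}

const options: StorageOption[] = [
  { name: "SDS software on XIV rack", softwareLicense: 0, hardware: 180000, support: 25000 },
  { name: "SDS software on generic disk", softwareLicense: 60000, hardware: 90000, support: 15000 },
];

for (const o of options) {
  const total = o.softwareLicense + o.hardware + o.support;
  console.log(`${o.name}: $${total.toLocaleString()} per year`);
}
```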

IBM reports it also is enhancing the software-only version of IBM Spectrum Accelerate to reduce costs by consolidating storage and compute resources on the same servers. In effect, IBM is making XIV software available with portable licensing across XIV systems, on-premises servers, and cloud environments to offer greater operational flexibility. Bottom line: possibly a good deal, but be prepared to do some detailed comparative cost analysis to identify the best mix of SDS, cloud storage, and hardware at the best price for your particular needs.

In general, however, DancingDinosaur favors almost anything that increases data center configuration and pricing flexibility. With that in mind consider the IBM Spectrum options the next time you plan storage changes. (BTW, DancingDinosaur also does storage and server cost assessments should you want help.)

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at and here.



IBM Continues to Bolster Bluemix PaaS

September 10, 2015

In the last 10 years the industry, led by IBM, has gotten remarkably better at enabling nearly coding-free development. This is important given how critical app development has become. Today it is impossible to launch any product without sufficient app dev support.  At a minimum you need a mobile app and maybe a few micro-services. To that end, since May IBM has spent the summer introducing a series of Bluemix enhancements. Find them here and here and here and here.  DancingDinosaur, at best a mediocre programmer, hasn’t written any code for decades but in this new coding environment he has started to get the urge to participate in a hack-a-thon. Doesn’t that (below) look like fun?

IBM’s Bluemix Garage in Toronto (click to enlarge)

The essential role of software today cannot be overstated. Even companies introducing non-technical products have to support them with apps and digital services that must be continually refreshed. When IoT really starts to ramp up, bits and pieces of code will be needed everywhere to handle the disparate pieces, get everything to interoperate, collect the data, and then use it or analyze it and initiate the next action.

Bluemix, a cloud-based PaaS product, comes as close to an all-in-one Swiss army knife development and deployment platform for today’s kind of applications as you will find. Having only played around with a demo, DancingDinosaur found it about as intuitive as an enterprise-class product can get.

The most recent of IBM’s summer Bluemix announcements promises more flexibility to integrate Java-based resources into Bluemix. It offers a set of services to more seamlessly integrate Java-based resources into cloud-based applications. For instance, according to IBM, it is now possible to test and run applications in Bluemix with Java 8. Additionally, among other improvements, the jsp-2.3, el-3.0, and jdbc-4.1 Liberty features, previously in beta, are now production-ready. Plus, Eclipse Tools for Bluemix now includes JavaScript Debug, support for Node.js applications, Java 8 Liberty for Java integration, support for the latest Eclipse Mars release, and an improved ability to trust self-signed certificates. Incremental publish support for JEE applications also has been expanded to handle web fragment projects.

In mid-August IBM announced the use of streaming analytics and data warehouse services on Bluemix. This should enable developers to expand the capabilities of their applications to give users a more robust cloud experience by facilitating the integration of data analytics and visualization seamlessly in their apps. Specifically, according to IBM, a new streaming analytics capability was put into open beta; the service provides the capability to instantaneously analyze data while scaling to thousands of sources on the cloud. IBM also added MPP (massively parallel processing) capabilities to enable faster query processing and overall scalability. The announcement also introduces built-in Netezza analytics libraries integrated with Watson Analytics, and more.

Earlier in August, IBM announced the Bluemix Garage opening in Toronto (pictured above). Toronto is just the latest in a series of coding workspaces IBM intends to open worldwide. Next up appear to be Nice, France and Melbourne, Australia later this year. According to IBM, Bluemix Garages create a bridge between the scale of enterprises and the culture of startups by establishing physical collaboration spaces housed in the heart of thriving entrepreneurial communities around the world. Toronto marks the third Bluemix Garage. The Toronto Bluemix Garage is located at the DMZ at Ryerson University, described as the top-ranked university-based incubator in Canada. Experts there will mentor the rising numbers of developers and startups in the region to create the next generation of cloud apps and services using IBM’s Bluemix.

Members of the Toronto Bluemix Garage include Tangerine, a bank based in Canada that is using Bluemix to implement its mobile strategy. Through the IBM Mobile Quality Assurance for Bluemix service, Tangerine gathers customer feedback and actionable insight on its mobile banking app, effectively streamlining its implementation and development processes.

Finally, back in May IBM introduced new Bluemix Services to help developers create analytics-driven cloud applications. Bluemix, according to IBM, is now the largest Cloud Foundry deployment in the world. And the services the company announced promise to make it easier for developers to create cloud applications for mobile, IoT, supply chain analytics, and intelligent infrastructure solutions. The new capabilities will be added to over 100 services already available in the Bluemix catalog.

At the May announcement, IBM reported bringing more of its own technology into Bluemix, including:

  • Bluemix API Management, which allows developers to rapidly create, deploy, and share large-scale APIs and provides a simple and consumable way of controlling critical APIs not possible with simpler connector services
  • New mobile capabilities available on Bluemix for the IBM MobileFirst Platform, which provide the ability to develop location-based mobile apps that connect insights from digital engagement and physical presence

It also announced a handful of ecosystem and third-party services being added into Bluemix, including several that will facilitate working with .NET capabilities. In short, it will enable Bluemix developers to take advantage of Microsoft development approaches, which should make it easier to integrate multiple mixed-platform cloud workloads.

Finally, as a surprise note at the end of the May announcement IBM added that the company’s total cloud revenue—covering public, private and hybrid engagements—was $7.7 billion over the previous 12 months as of the end of March 2015, growing more than 60% in first quarter 2015.  Hope you’ve noticed that IBM is serious about putting its efforts into the cloud and openness. And it’s starting to pay off.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at and here.


IBM LinuxONE and Open Mainframe Project Expand the z System

August 20, 2015

Meet the new IBM z System, called LinuxONE Emperor (named after the emperor penguin). It is a z13 running only Linux. Check out the full announcement here.

Courtesy of IBM, LinuxONE Emperor, the newest z System

DancingDinosaur is excited by several aspects of this announcement:  IBM is establishing, in conjunction with the Linux Foundation, an Open Mainframe Project; the company is breaking with its traditional mainframe pricing model; it also is putting KVM and Ubuntu on the machine; and it is offering a smorgasbord of app-dev options, including some of the sexiest in the industry today. DancingDinosaur never believed it would refer to a mainframe as sexy (must be time to retire).

Along with LinuxONE Emperor, IBM announced an entry-level dedicated Linux machine, the LinuxONE Rockhopper. (BTW, notice the new playfulness in IBM’s product naming.) Rockhopper appears to be very similar to what IBM used to call a Business Class z, although IBM has stepped away from that designation. The closest you may get to a z13 business class machine may be the LinuxONE Rockhopper, which, according to IBM, is designed for clients and emerging markets seeking the speed, security, and availability of the mainframe but in a smaller package.

The biggest long term potential impact from the announcement may come out of the Open Mainframe Project. Like many of IBM’s community project initiatives, IBM is starting by seeding the open community with z code, in effect creating the beginning of an open z System machine.  IBM describes this as the largest single contribution of mainframe code from IBM to the open source community. A key part of the mainframe code contributions will be the z’s IT predictive analytics that constantly monitor for unusual system behavior and help prevent issues from turning into failures. In effect, IBM is handing over zAware to the open source community. It had already announced intentions to port zAware to Linux on z early this year so it might as well make it fully open. The code, notes IBM, can be used by developers to build similar sense-and-respond resiliency capabilities for other systems.

The Open Mainframe Project, being formed with the Linux Foundation, will involve a collaboration of nearly a dozen organizations across academia, government, and corporate sectors to advance development and adoption of Linux on the mainframe. It appears that most of the big mainframe ISVs have already signed on. DancingDinosaur, however, expressed concern that this approach brings the possibility of branching the underlying functionality between z and Linux versions. IBM insists that won’t happen since the innovations would be implemented at the software level, safely insulated from the hardware. And furthermore, should there emerge an innovation that makes sense for the z System, maybe some innovation around the zAware capabilities, the company is prepared to bring it back to the core z.

The newly announced pricing should also present an interesting opportunity for shops running Linux on z. As IBM notes: new financing models for the LinuxONE portfolio provide flexibility in pricing and resources that allow enterprises to pay for what they use and scale up quickly when their business grows. Specifically, for IBM hardware and software, the company is offering a pay-per-use option in the form of a fixed monthly payment with costs scaling up or down based on usage. It also offers per-core pricing with software licenses for designated cores. In that case you can order what you need and decrease licenses or cancel on 30 days’ notice. Or, you can rent a LinuxONE machine monthly with no upfront payment. At the end of the 36-month rental (you can return the hardware after one year) you choose to return, buy, or replace. Having spent hours attending mainframe pricing sessions at numerous IBM conferences, this seems refreshingly straightforward. IBM has not yet provided any prices to analysts, so whether this actually is a bargain remains to be seen. But at least you have pricing option flexibility you never had before.

The introduction of support for both KVM and Ubuntu on the z platform opens intriguing possibilities. Full disclosure: DancingDinosaur was an early Fedora adopter because he could get it to run on a memory-challenged, antiquated laptop. With the LinuxONE announcement Ubuntu has been elevated to a fully z-supported Linux distribution. Together IBM and Canonical are bringing a distribution of Linux incorporating Ubuntu’s scale-out and cloud expertise to the IBM z Systems platform, further expanding the reach of both. Ubuntu combined with KVM should make either LinuxONE machine very attractive for OpenStack-based hybrid cloud computing that may involve thousands of VMs. Depending on how IBM ultimately prices things, this could turn into an unexpected bargain for Linux on z data centers that want to save money by consolidating x86 Linux servers, thereby reducing the data center footprint and cutting energy costs. LinuxONE Emperor can handle 8,000 virtual servers in a single system, or tens of thousands of containers.

Finally, LinuxONE can run the sexiest app-dev tools using any of the hottest open technologies, specifically:

  • Distributions: Red Hat, SUSE, and Ubuntu
  • Hypervisors: PR/SM, z/VM, and KVM
  • Languages: Python, Perl, Ruby, Rails, Erlang, Java, Node.js
  • Management: WAVE, IBM Cloud Manager, UrbanCode, OpenStack, Docker, Chef, Puppet, VMware vRealize Automation
  • Database: Oracle, DB2 LUW, MariaDB, MongoDB, PostgreSQL
  • Analytics: Hadoop, BigInsights, DB2 BLU, and Spark

And run the results however you want: single platform, multi-platform, on-prem and off-prem, or multiple mixed cloud environments with a common toolset. Could a combination of LinuxONE alongside a conventional z13 be the mainframe data center you really want going forward?

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at and here.

IBM Ranked #1 in Midrange Servers and Enterprise Network Storage

August 13, 2015

Although the financial markets may be beating up IBM, the technology world continues to acclaim IBM technology and products. Most recently, IBM ranked at the top in the CRN Annual Report Card (ARC) Survey recognizing the best-in-class vendors in the categories of partnership, support, and product innovation. But the accolades don’t stop there.

Mobile security infographic, courtesy of IBM (click to enlarge)

IBM was named a leader in four key cloud services categories—hosting, overall cloud professional services, cloud consulting services, and systems integration—by the independent technology market research firm Technology Business Research, Inc. (TBR).  This summer Gartner also named IBM as a leader in Security Information and Event Management (SIEM) in the latest Gartner Magic Quadrant for SIEM, this for the seventh consecutive year. Gartner also named IBM as a Leader in the 2015 Magic Quadrant for Mobile Application Development Platforms, specifically calling out the IBM MobileFirst Platform.

The CRN award addresses the technology channel. According to IBM, the company and its business partners are engaging with clients in new ways to work, building the infrastructure, and deploying innovative solutions for the digital era. This should come as no surprise to anyone reading this blog; the z13 was designed expressly to be a digital platform for the cloud, mobile, and big data era. IBM’s z and Power Systems servers and storage solutions specifically were designed to address the challenges these areas present.

Along the same lines, IBM’s commitment to open alliances has continued this year unabated, starting with its focus on innovation platforms designed for big data and superior cloud economics, which continue to be the cornerstone of IBM Power Systems. The company also plays a leading role in the Open Power Foundation and the Linux Foundation, as well as ramping up communities around the Internet of Things (developerWorks Recipes) and the open cloud (developerWorks Open). The last two were topics DancingDinosaur tackled recently, here and here.

The TBR report, entitled Hosted Private & Professional Services Cloud Benchmark, provides a market synopsis and growth estimates for 29 cloud providers in the first quarter of 2015. In that report, TBR cited IBM as:

  • The undisputed growth leader in overall professional cloud services
  • The leader in hosted private cloud and managed cloud services
  • A leader in OpenStack vendor acquisitions and OpenStack cloud initiatives
  • A growth leader in cloud consulting services, bridging the gap between technology and strategy consulting
  • A growth leader in cloud systems integration services

According to the report: IBM’s leading position across all categories remains unchallenged as the company’s established SoftLayer and Bluemix portfolios, coupled with in-house cloud and solutions integration expertise, provide enterprises with end-to-end solutions.

Wall Street analysts and pundits clearly look at IBM differently than IT analysts.  The folks who look at IBM’s technology, strategy, and services, like those at Gartner, TBR, and the CRN report card, tell a different story. Who do you think has it right?

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at and here.

IBM Simplifies Internet of Things with developerWorks Recipes

August 6, 2015

IBM has a penchant for working through communities, going back as far as Eclipse and probably before. Last week DancingDinosaur looked at the developerWorks Open community. Now let’s look at IBM’s developerWorks Recipes community, intended to address the Internet of Things (IoT).

TI SensorTag

The Recipes community will try to help developers – from novice to experienced – quickly and easily learn how to connect IoT devices to the cloud and how to use data coming from those connected devices. For example, one recipe walks you through connecting the TI SimpleLink SensorTag (pictured above) to the IBM IoT Foundation service in a few simple steps. By following these steps a developer, according to IBM, should be able to connect the SensorTag to the IBM quickstart cloud service in less than 3 minutes. Think of recipes as simplified development patterns—so simple that almost anyone could follow them. (Wanted to try it myself but didn’t have a tag. Still, it looked straightforward enough.)
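
For a sense of what such a recipe boils down to, here is a hedged TypeScript sketch using the open source mqtt package to publish SensorTag-style readings to the IBM quickstart service. The broker host, client-ID format, and topic follow the quickstart conventions IBM documented at the time and should be treated as assumptions to verify; the device ID and readings are made up.

```typescript
// Hedged sketch: publish sensor readings to the IBM IoT Foundation
// quickstart service over MQTT using the open source mqtt package.
// Broker host, client-ID format, and topic follow IBM's documented
// quickstart conventions as recalled here; verify before relying on them.
import mqtt from "mqtt";

const deviceId = "my-sensortag-001"; // hypothetical device identifier

const client = mqtt.connect(
  "mqtt://quickstart.messaging.internetofthings.ibmcloud.com:1883",
  { clientId: `d:quickstart:sensortag:${deviceId}` }
);

client.on("connect", () => {
  // Publish a JSON event every five seconds, mimicking SensorTag readings.
  setInterval(() => {
    const event = { d: { temperature: 21.5 + Math.random(), humidity: 40 + Math.random() * 5 } };
    client.publish("iot-2/evt/status/fmt/json", JSON.stringify(event));
  }, 5000);
});
```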

IoT is growing fast. Gartner forecasts 4.9 billion connected things in use in 2015, up 30% from 2014, and expects that number to reach 25 billion by 2020. In terms of revenue, this is huge. IDC predicts the worldwide IoT market will grow from $655.8 billion in 2014 to $1.7 trillion in 2020, a compound annual growth rate (CAGR) of 16.9%. For IT people who figure out how to do this, the opportunity will be boundless. Every organization will want to connect its devices to other devices via IoT. The developerWorks Recipes community seems like a perfect way to get started.

IoT isn’t exactly new. Manufacturers have cobbled together machine-to-machine (M2M) networks; banks and retailers have assembled networks of ATMs and POS terminals. DancingDinosaur has been writing about IoT for mainframe shops for several years. Now developerWorks Recipes promises a way for just about anyone to set up their own IoT easily and quickly while leveraging the cloud in the process. There is only a handful of recipes now, but the community provides a mechanism to add more, so expect the catalog of recipes to increase steadily. And developers are certain to take existing recipes and improvise on them.

IBM has been trying to simplify development for cloud, mobile, and IoT starting with the launch of Bluemix last year. By helping users connect their IoT devices to IBM Bluemix, which today boasts more than 100 open-source tools and services, users can then run advanced analytics, utilize machine learning, and tap into additional Bluemix services to accelerate the adoption of IoT and more.

As easy as IBM makes IoT development sound, this is a nascent effort industry-wide. There is a crying need for standards at every level to facilitate the interoperability and data exchange among the many and disparate devices, networks, and applications that will make up IoT. Multiple organizations have initiated standards efforts, but it will take some time to sort it all out.

And then there is the question of security. In a widely reported experiment by Wired Magazine, hackers were able to gain control of a popular smart vehicle. Given that cars are expected to be a major medium for IoT and every manufacturer is rushing to jam as much smart componentry into their vehicles as possible, you can only hope every automaker is scrambling for security solutions.

Home appliances represent another fat, lucrative market target for manufacturers that want to embed intelligent devices and IoT into all their products. What if hackers access your automatic garage door opener? Or worse yet, what if they turn off your coffee maker and water heater? Could you start the day without a hot shower and cup of freshly brewed coffee and still function?

Running IoT through secure clouds like the IBM Cloud is part of the solution. And industry-specific clouds intended for IoT already are being announced, much like the Internet exchanges of a decade or two ago. Still, more work needs to be done on security and interoperability standards if IoT is to work seamlessly and broadly to achieve the trillions of dollars of economic value projected for it.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at and here.

developerWorks Open Reinforces IBM’s Commitment to Open Source Cloud

July 30, 2015

Maybe IBM’s decades-long legacy of proprietary platforms and systems makes people a little hesitant to fully embrace its open source initiatives. Still, IBM has been supporting Linux on the z System for over a decade, the Eclipse initiative for as long or longer, and gives no sign of getting ready to pull the plug on any of its open source initiatives.

Rise of Open Source Linux and OpenStack, courtesy of IBM (click to enlarge)

Or take Bluemix, an implementation of IBM’s Open Cloud Architecture based on Cloud Foundry, an open source Platform as a Service (PaaS) initiative. And the company only gets more open source by the day. Just last week IBM continued to pour more open source components into Bluemix. It announced developerWorks Open, a cloud-based environment for developers to not only download open sourced IBM code but also have access to blogs, videos, tools and techniques to accelerate their own efforts on behalf of clients.

The current model of development in the open source community, according to IBM, lacks a strategic focus on business requirements. To address this IBM is launching a set of projects in industries like healthcare, mobile, retail, insurance, and banking that ensure a strategic business focus and address real-world business challenges.

The creation of developerWorks Open, notes IBM, comes at an important time for cloud developers. Organizations are wrestling with getting the most out of their multiple cloud environments. For instance, a developer building an insurance accident claim application on Bluemix likely will need to store videos and repair photos for that application. The developer may have difficulty making a storage choice and then implementing it. But developerWorks Open helps simplify that choice and provides the reassurance that the choice has industry support.

To that end IBM is offering a broad range of technologies for developerWorks Open that aim to help bridge this perceived development gap and remove the obstacles that inhibit developers from turning open source code into sustainable, enterprise-caliber applications that solve real business issues. IBM will also offer these services on Bluemix, its programming platform for cloud software development.

For that reason the company will open source a number of apps from its MobileFirst portfolio (MobileFirst was recently cited by Gartner as a leader in its Magic Quadrant mobile application development segment) that will assist developers in the following markets:

  • IBM Ready App for Healthcare tracks patient progress for at-home physical therapy programs via mobile device.
  • IBM Ready App for Retail personalizes and reshapes the specialty retail store shopping experience through a direct line of communication.
  • IBM Ready App for Insurance improves the relationship between homeowners and insurers and uses Internet of Things sensors to sync the home with utilities.
  • IBM Ready App for Banking helps financial institutions address the mobile needs of business owners and attract prospects.

Additionally, IBM is open sourcing several analytics technologies including:

  • Activity Streams provides developers with a standard model and encoding format for describing how users engage with both the application and with one another (a minimal example follows this list).
  • Agentless System Crawler offers a unified cloud monitoring and analytics framework that enables visibility into all types of cloud platforms and runtimes.
  • IBM Analytics for Apache Spark puts the full analytics power and capabilities of Spark at the developer’s fingertips. (Beta now available on Bluemix.)
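
Here is the promised minimal sketch of an Activity Streams-style event. The actor/verb/object shape follows the general pattern of the Activity Streams specification; the field names and values shown are illustrative, not IBM's implementation.

```typescript
// Minimal sketch of an Activity Streams-style event record. The
// actor/verb/object shape follows the general pattern of the Activity
// Streams specification; values are illustrative, not IBM's implementation.
interface Activity {
  published: string;                                              // ISO 8601 timestamp
  actor: { objectType: string; id: string; displayName: string };
  verb: string;                                                   // e.g. "post", "like", "share"
  object: { objectType: string; id: string; content?: string };
}

const example: Activity = {
  published: "2015-07-30T14:05:00Z",
  actor: { objectType: "person", id: "urn:example:user:42", displayName: "Sample User" },
  verb: "post",
  object: { objectType: "comment", id: "urn:example:comment:314", content: "Trying out Bluemix." },
};

console.log(JSON.stringify(example, null, 2));
```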

IBM will also continue to open source cloud data services, including IBM Object Storage on Bluemix Service Broker, which can be used to integrate OpenStack Swift with Cloud Foundry to enable fast access to cloud data without needing to know where the data is stored.
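
To illustrate what fast access to cloud data looks like at the protocol level, here is a hedged sketch of reading an object through the OpenStack Swift REST API; the storage URL, container, object name, and auth token are placeholders, and obtaining the token (normally via the Cloud Foundry service credentials) is assumed to have happened already.

```typescript
// Hedged sketch: read an object via the OpenStack Swift REST API.
// Storage URL, container, object name, and token are placeholders;
// obtaining the auth token is assumed to have happened already.
// Requires Node 18+ for the built-in fetch API.
async function readObject(
  storageUrl: string,
  container: string,
  objectName: string,
  token: string
): Promise<ArrayBuffer> {
  const resp = await fetch(`${storageUrl}/${container}/${encodeURIComponent(objectName)}`, {
    headers: { "X-Auth-Token": token },
  });
  if (!resp.ok) throw new Error(`Swift GET failed: ${resp.status}`);
  return resp.arrayBuffer();
}

// Usage with placeholder values only.
readObject(
  "https://example-objectstore.example.com/v1/AUTH_account",
  "claims-photos",
  "claim-314.jpg",
  "PLACEHOLDER_TOKEN"
).then((buf) => console.log(`fetched ${buf.byteLength} bytes`));
```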

The introduction of developerWorks Open comes at a time when organizations are starting to realize that their software and apps increasingly are their products, especially cloud, mobile, and collaboration apps, and they need a fast and efficient way to build and update them. In other cases, IBM notes, organizations are wrestling with getting the most out of their multiple clouds environments.

IBM is committed to open source; there is no going back. Company executives see it as the foundation of innovative application development in the cloud.  “With developerWorks Open we are open sourcing additional IBM innovations that we feel have the potential to grow the community and ecosystem and eventually become established technologies,” declared IBM Vice President of Cloud Architecture and Technology Dr. Angel Diaz recently.

Currently, IBM participates in and contributes to more than 150 open source projects. These projects include Spark, OpenStack, Cloud Foundry, the Open Container Project, Node.js, CouchDB, Linux, and Eclipse, plus an already established relationship with Apache. As IBM notes, open source projects increase the skills and knowledge base around the company’s software product set. developerWorks Open is the next step in IBM’s strategy to help businesses create, use, and innovate around cloud computing systems. Coming right behind is a similar community initiative for IoT development. Stay tuned.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at and here.

z Systems and Cloud Lead IBM 2Q Results

July 24, 2015

DancingDinosaur generally steers clear of writing about reported quarterly revenue. Given the general focus of this blog on enterprise and cloud computing, however, IBM’s recent 2Q15 report can’t be ignored. Although it continued IBM’s recent string of negative quarterly results, the z and cloud proved to be bright spots.

Strong IBM cloud performance, Q2 2015 (click to enlarge)

As IBM reported on Monday: Revenues from z Systems mainframe server products increased 9 percent compared with the year-ago period (up 15 percent adjusting for currency).  Total delivery of z Systems computing power, as measured in MIPS, increased 24 percent.  Revenues from Power Systems were down 1 percent compared with the 2014 period (up 5 percent adjusting for currency).

It’s not clear when and how Power Systems will come back. IBM has opened up the Power platform through the Open Power Foundation. A good move in theory, which DancingDinosaur applauds. Still, much depends on the Foundation gaining increased momentum and individual members rolling out successful Power-based products. The roadmap for POWER8, POWER9, and beyond looks promising but how fast products will arrive is unclear. There also is potential for the commoditization of the Power platform, a welcome development in many quarters, but commoditization’s impact on future revenue also is not clear.

Cloud revenue was up more than 70 percent, adjusting for currency and divested businesses; up more than 50 percent as reported, according to IBM. Given that cloud, along with mobile and analytics, has been designated strategic by IBM this is an encouraging development. The company’s cloud strategy is starting to bear fruit.

The big question hanging over every vendor’s cloud strategy is how to make money at it. One of the appealing aspects of the cloud in terms of cost and pricing for IT-using organizations is what amounts to a race to the bottom. With pricing immediately apparent and lower pricing just a click away, it has become a feast for the bottom grazers to whom the lowest price is all that matters. For companies like IBM and Oracle, which also has declared cloud a strategic initiative, and other large legacy enterprise platform providers, the challenge is to be competitive on price while differentiating their offerings in other ways. Clearly IBM has some unique cloud offerings in Watson, Bluemix, and others, but can they deliver enough revenue fast enough to offset the reduction in legacy platform revenue? Remember, x86 is off IBM’s menu.

Timothy Prickett Morgan, who writes frequently about IBM technology, also had plenty to say about IBM’s 2Q15 announcement, as did a zillion other financial and industry analysts. To begin he noted the irony of IBM promoting cloud computing, primarily an x86 phenomenon, while trying to convince people that Power-based systems are cost competitive—which they can be—and will do a better job for many of those workloads, correct again.

Morgan also makes an interesting point in regard to the z: “IBM doesn’t have to push the System z mainframe so much as keep it on a Moore’s Law curve of its own and keep the price/performance improving to keep those customers in the mainframe fold.” That’s harder than it may seem; DancingDinosaur addressed the Moore’s Law issue last week here. As Morgan notes, with well over $1 trillion in software assets running on the mainframe, the 6,000 or so enterprises that use mainframes are unlikely to move off the platform because of the cost, disruption, and risk such a move would entail. Just ask Union Pacific Railroad, which seems to be doing a slow-motion platform migration off the mainframe that seemingly may never actually end. Morgan concludes: “IBM can count on a certain level of money from the System z line that it just cannot with the Power Systems line.”

As noted above, how much revenue Power can generate for IBM depends on how fast the Open Power Foundation members introduce products that expand the market and how many Power processors SoftLayer can absorb as the business unit expands its global footprint.  There also is the question of how many POWER8 servers Rackspace, a much larger cloud provider than SoftLayer, will take and whether the Rackspace initiative will catch on elsewhere.

In any event, IBM’s 2Q15 report showed enough positive momentum to encourage IT platform enthusiasts. For its part, DancingDinosaur is expecting a business class z13 in the coming months and more.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at and here.

IBM Extends Moore’s Law with First 7nm Test Chip

July 17, 2015

In an announcement last week, IBM effectively extended Moore’s Law for at least another generation of chips, maybe two.  This contradicts what leading vendors, including IBM, have been saying for years about the imminent diminishing returns of Moore’s Law, which postulated that chips would double in capacity every 18-24 months. Moore’s Law drove the price/performance curve the industry has been experiencing for the past several decades.


Click to enlarge, courtesy of IBM

The announcement, ironically, coincides with IBM’s completion of the sale of its semiconductor fabrication business to GLOBALFOUNDRIES, which IBM paid to take the costly facilities off its hands. To pull off the 7nm achievement IBM ended up partnering with a handful of players, including a public-private partnership with New York State and a joint development alliance with GLOBALFOUNDRIES, Samsung, and equipment suppliers. The team is based at SUNY Poly’s NanoTech Complex in Albany.

To achieve the higher performance, lower power, and scaling benefits promised by 7nm technology, the IBM researchers turned to two main innovations: the use of Silicon Germanium (SiGe) channel transistors and Extreme Ultraviolet (EUV) lithography integration at multiple levels, in effect bypassing conventional semiconductor manufacturing approaches.

Don’t expect to see new systems featuring these 7nm chips very soon. The announcement made no mention of any timetable for producing commercial products based on this technology. As Timothy Prickett Morgan, who writes extensively on IBM POWER Systems technology observed: the use of silicon germanium for portions of the transistors cuts back on power consumption for the very fast switching necessary for improving circuit performance, and the circuits are etched using extreme ultraviolet (EUV) lithography. These technologies may be difficult and expensive to put into production.

In the meantime, IBM notes that microprocessors utilizing 22nm and 14nm technology run today’s servers, cloud data centers, and mobile devices; and 10nm technology is already well on the way to becoming mature. The 7nm chips promise even more: at least a 50% power/performance improvement for the next generation of mainframe and POWER systems that will fuel the big data, cloud, and mobile era, and soon you can add the Internet of Things too.

The z13 delivers unbeatable performance today. With the zEC12 IBM boasted of the fastest commercial chip in the industry, 5.5 GHz on a 32 nm wafer. It did not make that boast with the z13. Instead the z13 runs on a 22 nm core at 5 GHz but still delivers a 40% total capacity improvement over the zEC12.

It does this by optimizing the stack top to bottom with 600 processors and 320 separate channels dedicated just to drive I/O throughput. The reason for not cranking up the clock speed on the z13, according to IBM, was the plateauing of Moore’s Law. The company couldn’t get enough boost for the tradeoffs it would have had to make. Nobody seems to be complaining about giving up that one-half GHz. Today the machine can process 2.5 billion transactions a day.

The ride up the Moore’s Law curve has been very enjoyable for all. Companies took the additional processing power to build onto the chip more capabilities that otherwise would have required additional processors.  The result: more performance and more capabilities at lower cost. But all good things come to an end.

This 7nm breakthrough doesn’t necessarily restore Moore’s Law. At this point, the best we can guess is that it temporarily moves the price/performance curve to a new plane. Until we know the economics of mass fabrication in the 7nm silicon germanium world, we can’t tell whether we’ll see a doubling as before, or maybe just a half or a quarter, or maybe it could even triple. We just don’t know.

For the past decade, Morgan reports, depending on the architecture, the thermal limits of systems imposed a clock speed limit on processors, and aside from some nominal instruction per clock (IPC) improvements with each  recent microarchitecture change, clock speeds and performance for a processor stayed more or less flat. This is why vendors went parallel with their CPU architectures, in effect adding cores to expand throughput rather than increasing clock speed to boost performance on a lower number of cores. Some, like IBM, also learned to optimize at every level of the stack. As the z13 demonstrates, lots of little improvements do add up.

Things won’t stop here. As Morgan observes, IBM Research and the Microelectronics Division were working with GLOBALFOUNDRIES and Samsung and chip-making equipment suppliers who collaborate through the SUNY Polytechnic Institute’s Colleges of Nanoscale Science and Engineering in nearby Albany to get a path to 10 nm and then 7 nm processes even as the sale of GLOBALFOUNDRIES was being finalized.

The next step, he suggests, could possibly be at 4 nm but no one is sure if this can be done in a way that is economically feasible. If it can’t, IBM already has previewed the possibility of other materials that show promise.

Moore’s Law has been a wonderful ride for the entire industry. Let’s wish them the best as they aim for ever more powerful processors.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at and here.

