Posts Tagged ‘Watson’

IBM Launches New IoT Collaborative Initiative

February 23, 2017

Collaboration partners can pull hundreds of millions of dollars in new revenue from IoT, according to IBM’s recent IoT announcement. Having reached what it describes as a tipping point in IoT innovation, the company now boasts over 6,000 clients and partners around the world, many of whom want to join its new global Watson IoT center to co-innovate. Already Avnet, BNP Paribas, Capgemini, and Tech Mahindra will collocate development teams at the IBM Munich center to work on IoT collaborations.


IBM Opens New Global Center for Watson IoT

The IBM center also will act as an innovation space for the European IoT standards organization EEBus. The plan, according to Harriet Green, General Manager, IBM Watson IoT, Cognitive Engagement and Education, calls for building a new global IoT innovation ecosystem that will explore how cognitive and IoT technologies will transform industries and our daily lives.

IoT and more recently cognitive are naturals for the z System, and POWER Systems have been the platform for natural language processing and cognitive computing since Watson won Jeopardy six years ago. With the latest enhancements IBM has brought to the z in the form of on-premises cognitive and machine learning, the z should assume an important role as it collects, stores, and processes IoT data for cognitive analysis. DancingDinosaur first reported on this late in 2014 and again just last week. As IoT and cognitive workloads ramp up on z, don’t be surprised to see monthly workload charges rise.

Late last year IBM announced that car maker BMW will collocate part of its research and development operations at IBM’s new Watson IoT center to help reimagine the driving experience. Now, IBM is announcing four more companies that have signed up to join its special industry “collaboratories” where clients and partners work together with 1,000 Munich-based IBM IoT experts to tap into the latest design thinking and push the boundaries of the possible with IoT.

Let’s look at the four newest participants, starting with Avnet. According to IBM, Avnet, an IT distributor and global IBM partner, will open a new joint IoT Lab within IBM’s Watson IoT HQ to develop, build, demonstrate, and sell IoT solutions powered by IBM Watson. Working closely with IBM’s leading technologists and IoT experts, Avnet also plans to enhance its IoT technical expertise through hands-on training and on-the-job learning. Avnet’s team of IoT and analytics experts will also partner with IBM on joint business development opportunities across multiple industries, including smart buildings, smart homes, industry, transportation, medical, and consumer.

As reported by BNP Paribas, Consorsbank, its retail digital bank in Germany, will partner with IBM’s new Watson IoT Center. The company will collocate a team of solution architects, developers, and business development personnel at the Watson facility. Together with IBM’s experts, they will explore how IoT and cognitive technologies can drive transformation in the banking industry and help innovate new financial products and services, such as investment advice.

Similarly, global IT consulting and technology services provider Capgemini will collocate a team of cognitive IoT experts at the Watson center. Together they will help customers maximize the potential of Industry 4.0 and develop and take to market sector-specific cognitive IoT solutions. Capgemini plans a close link between its Munich Applied Innovation Exchange and IBM’s new Customer Experience zones to collaborate with clients in an interactive environment.

Finally, Tech Mahindra, the Indian multinational provider of enterprise and communications IT and networking technology, is one of IBM’s Global System Integrators, with over 3,000 specialists focused on IBM technology around the world. The company will locate a team of six developers and engineers within the Watson IoT HQ to help deliver on Tech Mahindra’s vision of generating substantial new revenue based on IBM’s Watson IoT platform. Tech Mahindra will use the center to co-create and showcase new solutions based on the platform for Industry 4.0 and manufacturing, precision farming, healthcare, insurance and banking, and automotive.

To facilitate connecting the z to IoT, IBM offers a simple recipe. It requires four basic ingredients and four steps: Texas Instruments’ SensorTag, a Bluemix account, IBM z/OS Connect Enterprise Edition, and a back-end service like CICS. Start by exposing an existing z Systems application as a RESTful API; this is where z/OS Connect Enterprise Edition comes in. Then connect your SensorTag device to Watson IoT Quick Start. From there, connect the cloud to your on-premises hybrid cloud. Finally, enable the published IoT data to trigger a RESTful API. Sounds pretty straightforward but—full disclosure—DancingDinosaur has not tried it for lack of the necessary pieces. If you try it, please tell DancingDinosaur how it works (info@radding.net). Good luck.
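
The data-flow side of that recipe can be sketched in a few lines. The sketch below is purely illustrative and untested against the actual services: it wraps a SensorTag reading in the `{"d": {...}}` JSON envelope the Watson IoT Quick Start service expects and decides when an event should trigger the RESTful API that z/OS Connect EE exposes for a back-end CICS service. The field names and threshold are assumptions, not IBM’s API.

```python
import json

def quickstart_payload(device_id: str, temp_c: float, humidity: float) -> str:
    """Format a SensorTag reading for a publish to Watson IoT Quick Start."""
    # Quick Start expects sensor data wrapped in a "d" (data) object.
    return json.dumps({"d": {"deviceId": device_id,
                             "temperature": temp_c,
                             "humidity": humidity}})

def should_invoke_z_api(reading: dict, temp_threshold: float = 30.0) -> bool:
    """Trigger the z/OS Connect REST endpoint only on out-of-range events."""
    # Threshold is a made-up example; a real setup would encode business rules.
    return reading["d"]["temperature"] > temp_threshold

payload = quickstart_payload("sensortag-01", 31.2, 48.0)
event = json.loads(payload)
print(should_invoke_z_api(event))  # True: this event would POST to the z API
```

In a live setup the payload would go out over MQTT to the Watson IoT platform, and the trigger would issue an HTTP request against the z/OS Connect endpoint rather than just returning a boolean.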

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

Meet the POWER9 Chip Family

September 2, 2016

When you looked at a chip in the past you were primarily concerned with two things: the speed of the chip, usually expressed in GHz, and how much power it consumed. Today the IBM engineers preparing the newest POWER chip, the 14nm POWER9, are tweaking the chip for the different workloads it might run, such as cognitive or cloud; for different deployment options, such as scale-up or scale-out; and for a host of other attributes. EE Times described it in late August from the Hot Chips conference, where it was publicly unveiled.


IBM POWER9 chip

IBM describes it as a chip family, but maybe it’s best described as the product of an entire chip community, the OpenPOWER Foundation. Innovations include CAPI 2.0, New CAPI, Nvidia’s NVLink 2.0, PCIe Gen4, and more. It spans a range of acceleration options from HSDC clusters to extreme virtualization capabilities for the cloud. POWER9 is not just about high speed transaction processing; IBM wants the chip to interpret and reason, ingest and analyze.

POWER has gone far beyond the POWER chips that enabled Watson to (barely) beat the human Jeopardy champions. Going forward, IBM is counting on POWER9 and Watson to excel at cognitive computing, a combination of high speed analytics and self-learning. POWER9 systems should not only be lightning fast but get smarter with each new transaction.

For z System shops, POWER9 offers a glimpse into the design thinking IBM might follow with the next mainframe, probably the z14, which will need comparable performance and flexibility. IBM already has set up the Open Mainframe Project, which hasn’t delivered much yet but is still young. It took the OpenPOWER group a couple of years to deliver meaningful innovations. Stay tuned.

The POWER9 chip is incredibly dense (below). You can deploy it in either a scale-up or a scale-out architecture, with a choice of one version for two-socket servers with 8 DDR4 ports and another for multiple chips per server with buffered DIMMs.


IBM POWER9 silicon layout

IBM describes the POWER9 as a premier acceleration platform. That means it offers extreme processor/accelerator bandwidth and reduced latency; coherent memory and virtual addressing capability for all accelerators; and robust accelerated compute options through the OpenPOWER community.

It includes State-of-the-Art I/O and Acceleration Attachment Signaling:

  • PCIe Gen 4 x 48 lanes – 192 GB/s duplex bandwidth
  • 25G Link x 48 lanes – 300 GB/s duplex bandwidth
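
A quick back-of-the-envelope check shows where these duplex figures come from: 48 lanes at the published per-lane signaling rate, counted in both directions (encoding overhead is ignored, as IBM’s round numbers suggest).

```python
# Sanity-check the duplex bandwidth figures in the bullets above.

def duplex_gb_per_s(lanes: int, gbit_per_lane: float) -> float:
    per_direction = lanes * gbit_per_lane / 8  # bits -> bytes, one direction
    return per_direction * 2                   # duplex: count both directions

print(duplex_gb_per_s(48, 16))  # PCIe Gen 4 at 16 GT/s -> 192.0 GB/s
print(duplex_gb_per_s(48, 25))  # 25G link -> 300.0 GB/s
```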

And robust accelerated compute options based on open standards, including:

  • On-Chip Acceleration—Gzip x1, 842 Compression x2, AES/SHA x2
  • CAPI 2.0—4x bandwidth of POWER8 using PCIe Gen 4
  • NVLink 2.0—next generation of GPU/CPU bandwidth and integration using 25G Link
  • New CAPI—high bandwidth, low latency and open interface using 25G Link

In scale-out mode it employs direct attached memory through 8 direct DDR4 ports, which deliver:

  • Up to 120 GB/s of sustained bandwidth
  • Low latency access
  • Commodity packaging form factor
  • Adaptive 64B / 128B reads

In scale-up mode it uses buffered memory through 8 buffered channels to provide:

  • Up to 230GB/s of sustained bandwidth
  • Extreme capacity – up to 8TB / socket
  • Superior RAS with chip kill and lane sparing
  • Compatible with POWER8 system memory
  • Agnostic interface for alternate memory innovations

POWER9 was publicly introduced at the Hot Chips conference in late August. Commentators writing in EE Times noted that POWER9 could become a breakout chip, seeding new OEM and accelerator partners and rejuvenating IBM’s efforts against Intel in high-end servers. To achieve that kind of performance IBM deploys large chunks of memory, including a 120 Mbyte embedded DRAM in shared L3 cache, while riding a 7 Tbit/second on-chip fabric. POWER9 should deliver as much as 2x the performance of the POWER8 or more when the new chip arrives next year, according to Brian Thompto, a lead architect for the chip, in published reports.

As noted above, IBM will release four versions of POWER9. Two will use eight threads per core and 12 cores per chip, geared for IBM’s PowerVM virtualization environment; two will use four threads per core and 24 cores per chip, targeting Linux. Each will come in two versions: one for two-socket servers with 8 DDR4 ports and another for multiple chips per server with buffered DIMMs.
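
One detail worth noting about the published configurations: both core variants expose the same total number of hardware threads per chip, so the choice is about thread granularity, not raw thread count. A quick check:

```python
# Both POWER9 core configurations net 96 hardware threads per chip.
# (Variant labels are descriptive shorthand, not IBM part names.)

variants = {
    "SMT8, PowerVM-oriented": (8, 12),   # (threads/core, cores/chip)
    "SMT4, Linux-oriented":   (4, 24),
}
for name, (threads, cores) in variants.items():
    print(name, "->", threads * cores, "threads per chip")  # 96 in both cases
```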

The diversity of choices, according to Hot Chips observers, could help attract OEMs. IBM has been trying to encourage others to build POWER systems through its OpenPOWER group, which now sports more than 200 members. So far, it has gained the most interest from China, where one partner plans to make its own POWER chips. The use of standard DDR4 DIMMs on some parts will lower barriers for OEMs by enabling commodity packaging and lower costs.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.


IBM Taps PCM to Advance Neuron-based Cognitive Computing

August 19, 2016

Just a couple of months ago DancingDinosaur reported a significant IBM advance in phase change memory (PCM). Then earlier this month IBM announced success in creating randomly spiking neurons using phase-change materials to store and process data. According to IBM, this represents a significant step toward achieving energy-efficient, ultra-dense, integrated neuromorphic technologies for application in cognitive computing.


Phase Change Neurons

This also represents a big step toward a cognitive computer. According to IBM, scientists have theorized for decades that it should be possible to imitate the versatile computational capabilities of large populations of neurons, as the human brain does. With PCM it appears to be happening sooner than the scientists expected. “We have been researching phase-change materials for memory applications for over a decade, and our progress in the past 24 months has been remarkable,” said IBM Fellow Evangelos Eleftheriou.

As the IBM researchers explain: Phase-change neurons consist of a chip with large arrays of phase-change devices that store the state of artificial neuronal populations in their atomic configuration. In the graphic above individual devices are accessed by means of an array of probes to allow for precise characterization, modeling and interrogation. The tiny squares are contact pads that are used to access the nanometer-scale, phase-change cells (not visible). The sharp probes touch the contact pads to change the phase configuration stored in the cells in response to the neuronal input. Each set of probes can access a population of 100 cells. The chip hosts only the phase-change devices that are the heart of the neurons. There are thousands to millions of these cells on one chip that can be accessed (in this particular graphic) by means of the sharp needles of the probe card.

Not coincidentally, this seems to be dovetailing with IBM’s sudden rush to cognitive computing overall, one of the company’s recent strategic initiatives that has lately moved to the forefront.  Just earlier this week IBM was updating industry analysts on the latest with Watson and IoT and, sure enough, cognitive computing plays a prominent role.

As IBM explains it, the artificial neurons designed by IBM scientists in Zurich consist of phase-change materials, including germanium antimony telluride, which exhibit two stable states, an amorphous one (without a clearly defined structure) and a crystalline one (with structure). These artificial neurons do not store digital information; they are analog, just like the synapses and neurons in our biological brain, which is what makes them so tempting for cognitive computing.

In the published demonstration, the team applied a series of electrical pulses to the artificial neurons, which resulted in the progressive crystallization of the phase-change material, ultimately causing the neurons to fire. In neuroscience, this function is known as the integrate-and-fire property of biological neurons. This forms the foundation for event-based computation and, in principle, is similar to how our brain triggers a response when we touch something hot.
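
The integrate-and-fire behavior can be sketched with a toy model. This is an illustrative simulation, not IBM’s device physics: each pulse nudges an accumulated state upward, and crossing a threshold fires and resets the neuron, mimicking progressive crystallization followed by a melt-back reset. The threshold and increment values are arbitrary.

```python
class PhaseChangeNeuron:
    """Toy integrate-and-fire neuron inspired by PCM progressive crystallization."""

    def __init__(self, threshold: float = 1.0, increment: float = 0.22):
        self.state = 0.0            # accumulated state (0 = fully amorphous)
        self.threshold = threshold
        self.increment = increment  # illustrative per-pulse contribution

    def pulse(self) -> bool:
        """Apply one electrical pulse; return True if the neuron fires."""
        self.state += self.increment
        if self.state >= self.threshold:
            self.state = 0.0        # reset: melt back to the amorphous phase
            return True
        return False

neuron = PhaseChangeNeuron()
fired_at = [i for i in range(1, 11) if neuron.pulse()]
print(fired_at)  # → [5, 10]: fires on every fifth pulse with these values
```

Real phase-change neurons add an important twist the toy model omits: the crystallization process is stochastic, which is what gives IBM’s devices their randomly spiking character.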

Even a single neuron can exploit this integrate-and-fire property to detect patterns and discover correlations in real-time streams of event-based data. To that end, IBM scientists have organized hundreds of artificial neurons into populations and used them to represent fast and complex signals. Moreover, the artificial neurons have been shown to sustain billions of switching cycles, which would correspond to multiple years of operation at an update frequency of 100 Hz. The energy required for each neuron update was less than five picojoules and the average power less than 120 microwatts (for comparison, 60 million microwatts power a 60-watt lightbulb).

The examples the researchers have provided so far seem pretty conventional.  For example, IoT sensors can collect and analyze volumes of weather data collected at the network edge for faster forecasts. Artificial neurons could be used to detect patterns in financial transactions that identify discrepancies. Even data from social media can be used to discover new cultural trends in real time. To make this work, large populations of these high-speed, low-energy nano-scale neurons would most likely be used in neuromorphic coprocessors with co-located memory and processing units, effectively mixing neuron-based cognitive computing with conventional digital computing.

Makes one wonder if IBM might regret spending millions to dump its chip fabrication capabilities. According to published reports, Samsung is very interested in this chip technology and wants to put the new processing power to work fast. The processor, reportedly dubbed TrueNorth by IBM, uses 4,096 separate processing cores to form one standard chip. Each can operate independently, and all are designed for low power consumption. Samsung hopes the chip can help with visual pattern recognition for use in autonomous cars, which might be just a few years away. So, how is IBM going to make any money from this with its chip fab gone and commercial cognitive computers still off in the future?

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM Puts Blockchain on the z System for a Disruptive Edge

April 22, 2016

Get ready for Blockchain to alter your z-based transaction environment. Blockchain brings a new class of distributed ledger applications. Bitcoin, the first Blockchain system to grab mainstream data center attention, is rudimentary compared to what the Linux Foundation’s open Hyperledger Project will deliver.


As reported in CIO Magazine, Blockchain enables a distributed ledger technology with ability to settle transactions in seconds or minutes automatically via computers. This is a faster, potentially more secure settlement process than is used today among financial institutions, where clearing houses and other third-party intermediaries validate accounts and identities over a few days. Financial services, as well as other industries, are exploring blockchain for conducting transactions as diverse as trading stock, buying diamonds, and streaming music.
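
The tamper-evidence that underpins this faster settlement model can be illustrated with a toy hash-chained ledger. The sketch below shows only the chaining idea; a real Hyperledger network adds the consensus, membership, and smart-contract layers that this sketch deliberately omits.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministic SHA-256 digest of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, tx: str) -> None:
    """Add a transaction block that commits to its predecessor's hash."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"tx": tx, "prev": prev})

def chain_valid(chain: list) -> bool:
    """Verify every block still points at the true hash of the one before it."""
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

ledger: list = []
for tx in ["pay A->B 100", "trade stock X", "stream song Y"]:
    append_block(ledger, tx)
print(chain_valid(ledger))        # True
ledger[0]["tx"] = "pay A->B 999"  # tamper with a settled transaction
print(chain_valid(ledger))        # False: the hash link to block 0 breaks
```

Because every block commits to its predecessor, altering any settled transaction invalidates all later links, which is why parties can settle directly without a clearing house vouching for the record.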

IBM, in conjunction with the Linux Foundation’s Hyperledger Project, expects the creation and management of Blockchain network services to power a new class of distributed ledger applications. With Hyperledger and Blockchain, developers could create digital assets and accompanying business logic to more securely and privately transfer assets among members of a permissioned Blockchain network running on IBM LinuxONE or Linux on z.

In addition, IBM will introduce fully integrated DevOps tools for creating, deploying, running, and monitoring Blockchain applications on the IBM Cloud and enable applications to be deployed on IBM z Systems. Furthermore, by using Watson as part of an IoT platform, IBM intends to make it possible for information from devices such as RFID-based locations, barcode-scan events, or device-recorded data to be used with IBM Blockchain apps. Clearly, IBM is looking at Blockchain for more than just electronic currency. In fact, Blockchain will enable a wide range of secure transactions between parties without the use of intermediaries, which should speed transaction flow. For starters, the company brought to the effort 44,000 lines of code as a founding member of the Linux Foundation’s Hyperledger Project.

The z, with its rock solid reputation for no-fail, extreme high volume and performance, and secure processing, is a natural for Blockchain applications and systems. In the process it brings the advanced cryptography, security, and reliability of the z platform. No longer content just to handle traditional backend systems-of-record processing, IBM is pushing to bring the z into new areas that leverage the strength and flexibility of today’s mainframe. As IoT ramps up, expect the z to handle escalating volumes of IoT traffic, mobile traffic, and now blockchain distributed ledger traffic. Says IBM: “We intend to support clients looking to deploy this disruptive technology at scale, with performance, availability and security.” That statement has z written all over it.

Further advancing the z into new areas, IBM reemphasized its advantages through built-in hardware accelerators for hashing and digital signatures, tamper-proof security cards, unlimited random keys to encode transactions, and integration to existing business data with Smart Contract APIs. IBM believes the z could take blockchain performance to new levels with the world’s fastest commercial processor, which is further optimized through the use of hundreds of internal processors. The highly scalable I/O system can handle massive amounts of transactions and the optimized network between virtual systems in a z Systems cloud can speed up blockchain peer communications.
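
The hash-and-verify pattern those accelerators speed up looks like this in plain software. This is a hedged sketch using Python’s standard library rather than the z’s crypto hardware, and the key below is a placeholder; in practice keys would live in the tamper-proof security cards the paragraph above describes.

```python
import hashlib
import hmac

# Placeholder shared secret for illustration only; real keys stay in hardware.
KEY = b"shared-secret-key"

def sign(tx: bytes) -> str:
    """Produce a keyed SHA-256 tag over a transaction record."""
    return hmac.new(KEY, tx, hashlib.sha256).hexdigest()

def verify(tx: bytes, tag: str) -> bool:
    """Constant-time check that a transaction matches its tag."""
    return hmac.compare_digest(sign(tx), tag)

tag = sign(b"settle trade 42")
print(verify(b"settle trade 42", tag))  # True
print(verify(b"settle trade 43", tag))  # False: a one-byte change is caught
```

On the z this kind of operation runs in hardware at line rate, which is the performance case IBM is making for blockchain on the platform.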

An IBM Blockchain DevOps service will also enable blockchain applications to be deployed on the z, ensuring an additional level of security, availability and performance for handling sensitive and regulated data. Blockchain applications can access existing transactions on distributed servers and z through APIs to support new payment, settlement, supply chain, and business processes.

Use Blockchain on the z to create and manage Blockchain networks to power the emerging new classes of distributed ledger applications.  According to IBM, developers can create digital assets and the accompanying business logic to more securely and privately transfer assets among members of a permissioned Blockchain network. Using fully integrated DevOps tools for creating, deploying, running, and monitoring Blockchain applications on IBM Cloud, data centers can enable applications to be deployed on the z. Through the Watson IoT Platform, IBM will make it possible for information from devices such as RFID-based locations, barcode scans, or device-recorded data to be used with IBM Blockchain.

However, Blockchain remains nascent technology. Although the main use cases already are being developed and deployed, many more ideas for blockchain systems and applications are only just being articulated. Nobody, not even the Linux Foundation, knows what ultimately will shake out. Blockchain enables developers to easily build secure distributed ledgers that can be used to exchange most anything of value, fast and securely. Now is the time for data center managers at z shops to think about what they might want to do with such extremely secure transactions on their z.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM InterCONNECT 2016 as Cloud Fest for App Dev

February 29, 2016

IBM spent the last week of February announcing a constant stream of cloud deals that focused mostly on various aspects of app dev. All IBM software is now enabled for private, public, and hybrid cloud. It announced expansion of Bluemix public, dedicated, and local services; IoT and the Weather Company; a growing suite of cognitive APIs for Watson; and hybrid object storage. These should be no surprise to DancingDinosaur readers, who have seen a steady trickle of IBM Cloud announcements for months. Let’s sample just a few:

IBM senior VP Robert LeBlanc and VMware COO Carl Eschenbach (Alan M Rosenberg/Feature Photo Service for IBM)

For DancingDinosaur, this announcement, IBM and VMware Announce Strategic Partnership to Accelerate Enterprise Hybrid Cloud Adoption, was the most eyebrow-raising. IBM and VMware have jointly designed an architecture and cloud offering that will enable customers to automatically provision pre-configured VMware SDDC environments, consisting of VMware vSphere, NSX, and Virtual SAN on the IBM Cloud. With this SDDC environment in place, customers will be able to deploy workloads in this hybrid cloud environment without modification, due to common security and networking models based on VMware. This appears intended to encompass SoftLayer too, as just another new application environment.

Apple’s Swift development language accounts for more developer news: IBM to Bring Swift to the Cloud to Radically Simplify End-to-End Development of Apps. IBM has become the first cloud provider to enable the development of applications in native Swift, unlocking its full potential in radically simplifying the development of end-to-end apps on the IBM Cloud. This announcement is the next phase of its roadmap to bring Swift to the Cloud, with a preview of a Swift runtime and a Swift Package Catalog to help developers create apps for the enterprise. DancingDinosaur, a former wannabe developer, is a fan of Swift as well as node.js and Go. Where were all these nifty tools when I was younger?

Watson is another longtime favorite of DancingDinosaur: IBM Announces New and Advanced Watson APIs on the Cloud. New and expanded cognitive APIs for developers that enhance Watson’s emotional and visual senses will further extend the capabilities of the industry’s largest and most diverse set of cognitive technologies and tools.  IBM is also adding tooling capabilities and enhancing its SDKs (Node, Java, Python, and the newly introduced iOS Swift and Unity) across the Watson portfolio and adding Application Starter Kits to make it easy for developers to customize and build with Watson. All APIs are available through the IBM Watson Developer Cloud on Bluemix.

And just in case you thought these weren’t enterprise-class announcements: IBM and GitHub Form Strategic Partnership to Offer First GitHub Enterprise Service in Dedicated and Local Hybrid. IBM and GitHub plan to deliver GitHub Enterprise as a dedicated service on Bluemix to customers across private and hybrid cloud environments. By working with IBM Cloud, developers can expect to learn, code, and work with GitHub’s collaborative development tools in a private environment with robust security capabilities. GitHub and IBM, through this strategic partnership, aim to advance the development of next-generation cloud applications for enterprise customers.

IBM WebSphere Blockchain Connect, a new service available to all WebSphere clients, is designed to provide a safe and encrypted passage from their blockchain cloud to their enterprise. Starting immediately, enterprises currently using IBM’s on-premises software can tap these new offerings as an on-ramp to hybrid cloud, realizing immediate benefits and new value from their existing investments. Blockchain is just one part of a series of tools intended to make it easier for developers to unlock valuable data, knowledge, and transaction systems. Also coming are fully integrated DevOps tools for creating, deploying, running, and monitoring Blockchain applications on IBM Cloud that enable the applications to be deployed on IBM z Systems.

Blockchain still may be unfamiliar to many. Recognized mostly as the technology behind bitcoin, it should prove particularly valuable for IoT systems by providing a mechanism to securely track any of the various things. It enables what amount to trustless transactions by eliminating the need for an intermediary between buyers and sellers or things and things. For those who want open, trustworthy IoT communications without relying on intermediaries, blockchain could provide the answer, facilitating the kind of IoT exchanges people have barely begun to imagine could be possible.

Finally, IBM Unveils Fast, Open Alternative to Event-Driven Programming through the Bluemix OpenWhisk platform, which enables developers to quickly build and link microservices that execute software code in response to events such as mouse clicks or receipt of sensor data from an IoT device. Developers won’t need to worry about things like pre-provisioning infrastructure or operations. Instead, they can simply focus on code, dramatically speeding the process.
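
An OpenWhisk action reduces to a single function the platform invokes with an event’s parameters. The sketch below follows the OpenWhisk Python convention of a `main(params)` entry point that returns a dict; the parameter names and threshold are illustrative, not part of any real deployment.

```python
def main(params: dict) -> dict:
    """React to a sensor event; flag an alert when a reading is out of range.

    In OpenWhisk the platform calls main() on each triggering event (a click,
    an IoT sensor reading); the developer never provisions any servers.
    """
    temp = params.get("temperature", 0)  # "temperature" is an assumed field
    if temp > 30:
        return {"alert": True, "message": f"high temperature: {temp}C"}
    return {"alert": False}

print(main({"temperature": 35}))  # {'alert': True, 'message': 'high temperature: 35C'}
print(main({"temperature": 20}))  # {'alert': False}
```

Deployed with the OpenWhisk CLI and bound to a trigger, a function like this would run on demand per event, which is the "focus on code" model the announcement describes.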

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

IBM z System Shines in 3Q15 Quarterly Report

October 23, 2015

IBM posted another down quarter this past Monday, maybe the thirteenth in a row; it’s easy to lose track. But yet again the IBM z System provided a bright spot, a 15 percent revenue increase compared with the year-ago period. Last quarter the z also came up a winner. Still, the investment analysts went crazy, the stock tumbled, and wild scenarios, inspired no doubt by Dell’s acquisition of EMC, began circulating.


IBM z13

However, don’t expect IBM to be going away anytime soon. DancingDinosaur is a technology analyst and writer, absolutely not a financial analyst (his wife handles the checkbook). If you look at what has been going on in the past two years with the z System and POWER from a technology standpoint, these platforms are here for the long haul. Most of the top 100 companies rely on a mainframe. Linux on z has become a factor in roughly 70 percent of the leading shops. When DancingDinosaur last ran the numbers there were still about 5,000 to 6,000 active mainframe shops, and the numbers aren’t dropping nearly as fast as some pundits would have you believe.


IBM LinuxONE

The z13 and LinuxONE are very powerful mainframes, the most powerful in the industry by any number of measures. And they are a dramatically different breed of enterprise platform, capable of concurrently running mixed workloads—OLTP, mobile, cloud, analytics—with top performance, scalability, and rock solid security. The Open Mainframe Project, in conjunction with the Linux Foundation, means that IBM no longer is going it alone with the mainframe. A similar joint effort with the OpenPOWER Foundation began delivering results within a year.

The Dell-EMC comparison is not a valid one. EMC’s primary business was storage and the business at the enterprise level has changed dramatically. It has changed for IBM too; the company’s revenues from System Storage decreased 19 percent. But storage was never as important to the company as the z, which had long been its cash cow, now diminished for sure but still worth the investment. The dozens and dozens of acquisitions EMC made never brought it much in terms of synergy. IBM, at least, has its strategic imperatives plan that is making measurable progress.

IBM’s strategic imperatives, in fact, were the only business doing as well as the z. Strategic imperatives revenue was up 27 percent year-to-year; cloud revenue was up more than 65 percent year-to-date. Total cloud revenue hit $9.4 billion over the trailing 12 months. Cloud delivered as a service had an annual run rate of $4.5 billion vs. $3.1 billion in third-quarter 2014. Business analytics revenue was up 19 percent year-to-date. It will be interesting to see what cognitive computing and Watson can produce.

Besides storage, the other dim spot in the IBM platform story is Power Systems. Revenues from Power Systems were down 3 percent compared with the 2014 period. DancingDinosaur, long a fan of Power Systems, anticipates the platform will turn positive next quarter or in the first quarter of 2016, as some of the new technology and products coming, in part, from the OpenPOWER Foundation begin to attract new customers and ring up sales. The new Power Systems LC server family should attract interest for hybrid cloud, hyperscale data centers, and open solutions, hopefully bringing new customers. With online pricing starting around $6,600, the LC machines should be quite competitive against x86 boxes of comparable capabilities.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

z Systems and Cloud Lead IBM 2Q Results

July 24, 2015

DancingDinosaur generally steers clear of writing about reported quarterly revenue. Given the general focus of this blog on enterprise and cloud computing, however, IBM’s recent 2Q15 report can’t be ignored. Although it continued IBM’s recent string of negative quarterly results, the z and cloud proved to be bright spots.


Strong IBM cloud performance, Q2 2015 (click to enlarge)

As IBM reported on Monday: Revenues from z Systems mainframe server products increased 9 percent compared with the year-ago period (up 15 percent adjusting for currency).  Total delivery of z Systems computing power, as measured in MIPS, increased 24 percent.  Revenues from Power Systems were down 1 percent compared with the 2014 period (up 5 percent adjusting for currency).

It’s not clear when and how Power Systems will come back. IBM has opened up the Power platform through the OpenPOWER Foundation, a good move in theory, which DancingDinosaur applauds. Still, much depends on the Foundation gaining momentum and individual members rolling out successful Power-based products. The roadmap for POWER8, POWER9, and beyond looks promising, but how fast products will arrive is unclear. There also is potential for commoditization of the Power platform, a welcome development in many quarters, but commoditization’s impact on future revenue also is not clear.

Cloud revenue was up more than 70 percent, adjusting for currency and divested businesses; up more than 50 percent as reported, according to IBM. Given that cloud, along with mobile and analytics, has been designated strategic by IBM this is an encouraging development. The company’s cloud strategy is starting to bear fruit.

The big question hanging over every vendor’s cloud strategy is how to make money at it. One of the appealing aspects of the cloud for IT-using organizations is what amounts to a race to the bottom on price. With pricing immediately apparent and lower pricing just a click away, the cloud has become a feast for bottom grazers to whom the lowest price is all that matters. For IBM, for Oracle (which also has declared cloud a strategic initiative), and for other large legacy enterprise platform providers, the challenge is to be competitive on price while differentiating their offerings in other ways. Clearly IBM has some unique cloud offerings, Watson and Bluemix among them, but can they deliver enough revenue fast enough to offset the decline in legacy platform revenue? Remember, x86 is off IBM’s menu.

Timothy Prickett Morgan, who writes frequently about IBM technology, also had plenty to say about IBM’s 2Q15 announcement, as did a zillion other financial and industry analysts. To begin, he noted the irony of IBM promoting cloud computing, primarily an x86 phenomenon, while trying to convince people that Power-based systems are cost competitive (which they can be) and will do a better job for many of those workloads (correct again).

Morgan also makes an interesting point in regard to the z: “IBM doesn’t have to push the System z mainframe so much as keep it on a Moore’s Law curve of its own and keep the price/performance improving to keep those customers in the mainframe fold.” That’s harder than it may seem; DancingDinosaur addressed the Moore’s Law issue last week here. As Morgan notes, with well over $1 trillion in software assets running on the mainframe, the 6,000 or so enterprises that use mainframes are unlikely to move off the platform because of the cost, disruption, and risk such a move would entail. Just ask Union Pacific Railroad, which seems to be engaged in a slow-motion migration off the mainframe that may never actually end. Morgan concludes: “IBM can count on a certain level of money from the System z line that it just cannot with the Power Systems line.”

As noted above, how much revenue Power can generate for IBM depends on how fast the Open Power Foundation members introduce products that expand the market and how many Power processors SoftLayer can absorb as the business unit expands its global footprint.  There also is the question of how many POWER8 servers Rackspace, a much larger cloud provider than SoftLayer, will take and whether the Rackspace initiative will catch on elsewhere.

In any event, IBM’s 2Q15 report showed enough positive momentum to encourage IT platform enthusiasts. For its part, DancingDinosaur expects a business-class z13 in the coming months, and more.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

API Economy Comes to the IBM z System

June 11, 2015

What comes to mind when you hear (or read) about a RESTful IBM z System? Hint: it is not a mainframe that is loafing. To the contrary, a RESTful mainframe probably is busier than it has ever been, now running a slew of new apps, most likely mobile or social apps with REST APIs connecting to z/OS-based web services plus its usual workloads. Remember web services when SOA first came to the mainframe? They continue today behind the new mobile, cloud, social, and analytical workloads that are putting the spotlight on the mainframe.
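The mobile-to-mainframe flow described above boils down to ordinary REST plumbing: a mobile app issues an HTTPS request and the z/OS-based web service answers in JSON. A minimal sketch follows; the host, path, and bearer-token scheme are invented for illustration, since an actual z/OS web-services deployment defines its own endpoints and authentication.

```python
from urllib.parse import urljoin

def build_rest_request(base_url, resource, token):
    """Assemble the URL and headers for a call to a (hypothetical)
    z/OS-hosted REST service. Endpoint and auth scheme are assumptions."""
    url = urljoin(base_url, resource)
    headers = {
        "Accept": "application/json",          # REST services typically return JSON
        "Authorization": "Bearer " + token,    # illustrative token-based auth
    }
    return url, headers

# A mobile app asking an imagined CICS-backed service for an account record:
url, headers = build_rest_request("https://zos.example.com/", "api/accounts/1234", "token123")
```

An actual call would hand `url` and `headers` to any HTTP client; the point is that nothing mainframe-specific leaks into the mobile app.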

Travel and Transportation - Passenger Care

Courtesy of IBM: travel fuels mobile activity (click to enlarge)

A variety of Edge2015 sessions, given by Asit Dan, chief architect of z Service API Management, and Glenn Anderson of IBM Lab Services and Training, put what the industry refers to as the emerging API economy in perspective. The z, it should come as no surprise, lies at the heart of this burgeoning API economy, not only handling transactions but also providing governance and management to the API phenomenon that is exploding. Check out IBM’s APIs for Dummies.

The difference between first generation SOA and today’s API economy lies in the new workloads—especially mobile and cloud—fueling the surging interest. The mobile device certainly is the fastest growing platform and will likely become the largest platform soon if it is not already, surpassing desktop and laptop systems.

SOA efforts initially focused on the capabilities of the providers of services, noted Dan, particularly the development, run-time invocation, and management of services. The API economy, on the other hand, focuses on the consumption of these services. It really aims to facilitate the efforts of application developers (internal developers and external business partners) who must code their apps for access to existing and new API-enabled services.

One goal of an enterprise API effort is to access already deployed services, such as z-based CICS services or those of a partner. Maybe a more important goal, especially where the z is involved, is to drive use of mainframe software assets by customers, particularly mobile customers.  The API effort not only improves customer service and satisfaction but could also drive added revenue. (Have you ever fantasized about the z as a direct revenue generator?)

This calls, however, for a new set of interfaces. As Dan notes in a recent piece, APIs for accessing these assets are defined using well-known standards such as web services and Representational State Transfer (REST) with JSON (JavaScript Object Notation), and published via an easily accessible catalog, making it efficient to subscribe to APIs to obtain permissions and build new applications. Access to the APIs now can be controlled and tracked during run-time invocations (and even metered where revenue generation is the goal).
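The subscribe-then-invoke model Dan describes, with metering for possible billing, can be sketched in a few lines. Everything below, class and method names included, is invented for illustration; a real API management layer also handles authentication, quotas, and billing.

```python
from collections import Counter

class ApiCatalog:
    """Toy sketch of catalog subscription plus metered run-time invocation."""

    def __init__(self):
        self.subscriptions = {}   # api_key -> set of subscribed API names
        self.meter = Counter()    # (api_key, api_name) -> invocation count

    def subscribe(self, api_key, api_name):
        """Record that a developer's key may call the named API."""
        self.subscriptions.setdefault(api_key, set()).add(api_name)

    def invoke(self, api_key, api_name):
        """Check permission, count the call for metering, return a payload."""
        if api_name not in self.subscriptions.get(api_key, set()):
            raise PermissionError(f"{api_key} not subscribed to {api_name}")
        self.meter[(api_key, api_name)] += 1   # metered where revenue is the goal
        return {"api": api_name, "status": "ok"}   # stand-in for the real JSON
```

The meter is what turns an API from a convenience into a revenue stream: each counted invocation can be rated and billed.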

Now the API economy can morph into a commercial exchange of business functions, capabilities, and competencies as services using web APIs, noted Glenn Anderson at Edge2015. In-house business functions running on the z can evolve into an API as-a-service delivery vehicle, which amounts to another revenue stream for the mainframe data center.

The API economy often is associated with the concept of containers. Container technology provides a simplified way to make applications more mobile in a hybrid cloud, Anderson explained, and brings some distinct advantages. Specifically, containers are much smaller in size than virtual machines and provide more freedom in the placement of workloads in a cloud (private, public, hybrid) environment. Container technology is being integrated into OpenStack, which is supported on the z through IBM Cloud Manager. Docker is the best known container technology and it works with Linux on z.
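On the Docker point, launching a container on Linux on z looks just like it does anywhere else, except that the image must be built for the s390x architecture. A small hedged sketch, which only assembles the command line rather than executing it (the image tag is illustrative):

```python
def docker_run_command(image, name, detach=True):
    """Assemble a `docker run` invocation as an argument list.

    The s390x image tag used below is an example; Docker on Linux on z
    pulls images built for the s390x architecture."""
    cmd = ["docker", "run"]
    if detach:
        cmd.append("-d")           # run the container in the background
    cmd += ["--name", name, image]
    return cmd

cmd = docker_run_command("s390x/ubuntu:16.04", "zlinux-test")
# subprocess.run(cmd) would then launch the container on a Linux on z host
```

Keeping the command assembly separate from execution makes the workload-placement logic easy to test without a Docker daemon present.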

With SOA, web services, REST, JSON, OpenStack, and Docker all z capable, a mainframe data center can fully participate in the mobile, apps, and cloud API economy. BTW, POWER servers can play the API, OpenStack, and Docker game too. Even Watson can participate in the API economy through IBM’s early March acquisition of AlchemyAPI, a provider of scalable cognitive computing API services. The acquisition will drive the API economy into cognitive computing as well. Welcome to the mainframe API economy.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing on Technologywriter.com and here.

IBM Commits $1B to Drive Watson into the Mainstream

January 10, 2014

IBM is ready to propel Watson beyond Jeopardy, its initial proof-of-concept, and into mainstream enterprise computing. To that end, it announced plans to spend more than $1 billion on the recently formed Watson business unit, an amount that includes $100 million in venture investments to build an ecosystem of entrepreneurs developing Watson-powered apps.

In addition, companies won’t need racks of Power servers to run Watson. With a series of announcements yesterday IBM unveiled plans to deliver Watson capabilities as business-ready cloud services. The announcement focused on three Watson services: 1)  Watson Discovery Advisor for research and development projects in industries such as pharmaceutical, publishing and biotechnology; 2) Watson Analytics to deliver visualized big data insights based on questions posed in natural language by any business user; and 3) IBM Watson Explorer to more easily uncover and share data-driven insights across the enterprise.

DancingDinosaur has been following Watson since its Jeopardy days. Having long since gotten over the disappointment that Watson didn’t run on the Power side of a hybrid zEnterprise, it turns out that IBM has managed to shrink Watson considerably. Today Watson runs 24x faster, a 2,400 percent performance improvement, and is 90% smaller. IBM has shrunk Watson from the size of a master bedroom to three stacked pizza boxes, and you don’t even need to locate it in your data center; you can run it in the cloud.

Following the introduction of Watson, IBM was slow to build on that achievement. It focused on healthcare and financial services, use cases that appeared to be no-brainers. Eventually it experienced success, particularly in healthcare, but the initial customers came slowly and the implementations appeared cumbersome.

Watson, at least initially, wasn’t going to be a simple deployment. It needed a ton of Power processors. It also needed massive amounts of data; in healthcare IBM collected what amounted to the entire library of the world’s medical research and knowledge. And it needed applications that took advantage of Watson’s formidable yet unusual capabilities.

The recent announcements of delivering Watson via the cloud and committing to underwrite application developers definitely should help. And yesterday’s announcement of what amounts to three packaged Watson services should speed deployment.

For example, Watson Analytics, according to IBM, removes common impediments in the data discovery process, enabling business users to quickly and independently uncover new insights in their data. Using sophisticated analytics and aided by Watson’s natural language interface, Watson Analytics automatically prepares the data, finds the most important relationships, and presents the results in an easy to interpret interactive visual format. As a result, business users are no longer limited to predefined views or static data models. Better yet, they can feel empowered to apply their own knowledge of the business to ask and answer new questions as they emerge. They also will be able to quickly understand and make decisions based on Watson Analytics’ data-driven visualizations.

Behind the new Watson services lies IBM Watson Foundations, described as a comprehensive, integrated set of big data and analytics capabilities that enable enterprises to find and capitalize on actionable insights. Basically, it amounts to a set of user tools and capabilities to tap into all relevant data – regardless of source or type – and run analytics to gain fresh insights in real-time. And it does so securely across any part of an enterprise, including revenue generation, marketing, finance, risk, and operations.  Watson Foundations also includes business analytics with predictive and decision management capabilities, information management with in-memory and stream computing, and enterprise content management packaged into modular offerings. As such it enables organizations of any size to address immediate needs for decision support, gain sustainable value from their initial investments, and grow from there.

This apparently sounded good to Singapore’s DBS Bank, which will deploy Watson cognitive computing capabilities to deliver a next-generation client experience. For starters, DBS intends to apply Watson to its wealth management business to improve the advice and experience delivered to affluent customers. The bank is counting on cloud-based Watson to process enormous amounts of information and to understand and learn from each interaction at unprecedented speed. This should greatly increase the bank’s ability to quickly analyze, understand, and respond to the vast amounts of data it is accumulating.

Specifically, DBS will deploy IBM’s cloud-based Watson Engagement Advisor solution, to be rolled out in the second half of the year. From there the bank reportedly plans to progressively deploy these capabilities to its other businesses over time.

For fans of cognitive computing and Watson, the announcements represent a much-awaited evolution in IBM’s strategy. It promises to make cognitive computing and the natural language power of Watson usable for mainstream enterprises. How excited fans should get, however, depends on the specifics of IBM’s pricing and packaging for these offerings. Still, faced with having to recoup a $1 billion investment, don’t expect loss-leader pricing from IBM.

Follow DancingDinosaur on Twitter: @mainframeblog

Open KVM Adds Kimchi to Speed Ramp Up

November 15, 2013

The Linux Foundation, the group trying to drive the growth of Linux and collaborative development, recently brought the Open Virtualization Alliance (OVA) under its umbrella as a Linux Foundation Collaborative Project. The change should help KVM take better advantage of the marketing and administrative capabilities of the Linux Foundation and enable tighter affinity with the Linux community at large.

The immediate upshot of the Oct. 21 announcement was increased exposure for open KVM. Over 150 media stories appeared, Facebook hits jumped 33%, and the OVA website saw a big surge of traffic, 82% of it from first-time visitors. First up on the agenda should be tapping the expansive ecosystem of the Linux Foundation in service of Kimchi, OVA’s new easy-to-deploy and easy-to-use administrative tool for KVM. Mike Day, an IBM Distinguished Engineer and Chief Virtualization Architect for Open Systems Development, described Kimchi as the “fastest on-ramp to using KVM.”

Kimchi is about as lightweight as a management tool can get. It offers stateless installation (no management server), brings a graphical and mobile interface, and comes bundled with KVM for Power, but it does not require HMC, IBM’s primary tool for planning, deploying, and managing Power Systems servers. It also is based on open, standard components, including a RESTful API, and it is part of the OpenStack community.
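Because Kimchi exposes a RESTful API, basic operations such as creating a guest amount to plain HTTPS requests carrying JSON. Here is a sketch of what such a call might look like; the port, path, and JSON fields are assumptions patterned on Kimchi's documented style, so check the API docs for the version actually deployed.

```python
import json

def kimchi_create_vm_request(host, template, vm_name):
    """Build a VM-creation request against Kimchi's RESTful API.

    The /vms path, port 8001, and the body's field names are assumptions
    for illustration; verify them against the installed Kimchi's docs."""
    url = f"https://{host}:8001/vms"
    headers = {"Content-Type": "application/json", "Accept": "application/json"}
    body = json.dumps({"name": vm_name, "template": f"/templates/{template}"})
    return url, headers, body

# Create a guest named vm1 from an assumed centos7 template:
url, headers, body = kimchi_create_vm_request("kvmhost.example.com", "centos7", "vm1")
```

Handing `url`, `headers`, and `body` to any HTTP client would issue the call; no Linux skills or management server are required on the admin's side, which is exactly Kimchi's pitch.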

Kimchi provides a mobile- and Windows-friendly virtualization manager for KVM. It delivers point-to-point management, thereby avoiding the need to invest in yet more management server hardware, training, or installation. Promised to be simple to use, it was designed to appeal to a VMware administrator.

So what can you actually do with Kimchi? At the moment, only the basics. You can use it to manage all KVM guests, although it does have special support for some Linux guests at this point. Also, you can use it without Linux skills.

To figure out the path forward, the OVA and the Linux Foundation are seeking community participation and feedback. Among the Kimchi options under consideration:

  • Federation versus export to OpenStack
  • Further storage and networking configurations; how advanced does it need to get?
  • Automation and tuning – how far should it go?
  • RESTful API development and usage
  • Addition of knobs and dials or keep sparse

Today Kimchi supports most basic networking and configurations. There is as yet no VLAN or clustering support in Kimchi.

Kimchi is poised to fill a central position in the KVM environment, one from which it can speed adoption. What is most needed, however, is an active ecosystem of developers who can build out this sparse but elegant open source tool. To do that, IBM will need to give Kimchi enough attention that it doesn’t get overlooked or lost in the slew of its sister open source initiatives like OpenStack, Linux itself, and even Eclipse. OpenStack, it appears, will be most critical, and it is a good sign that it already sits at the top of the Kimchi to-do list.

And speaking of IBM opening up development, in an announcement earlier this week IBM said it will make its IBM Watson technology available as a development platform in the cloud to enable a worldwide community of software application providers who might build a new generation of apps infused with Watson’s cognitive computing intelligence.  Watson badly needed this; until now Watson has been an impressive toy for a very small club.

The move, according to IBM, aims to spur innovation and fuel a new ecosystem of entrepreneurial software application providers – ranging from start-ups and emerging, venture capital backed businesses to established players. To make this work IBM will be launching the IBM Watson Developers Cloud, a cloud-hosted marketplace where application providers of all sizes and industries will be able to tap into resources for developing Watson-powered apps. This will include a developer toolkit, educational materials, and access to Watson’s application programming interface (API). And they should do the same with Kimchi.

