Posts Tagged ‘Cloud’

IBM Leverages Strategic Imperatives to Win in Cloud

March 16, 2018

Some people may have been ready to count out IBM in the cloud. The company, however, is clawing its way back into contention faster than many imagined. In a recent Forbes Magazine piece, IBM credits 16,000 AI engagements, 400 blockchain engagements, and a couple of quantum computing pilots as driving its return as a serious cloud player.

IBM uses blockchain to win the cloud

According to Forbes, IBM has jumped up to third in cloud revenue with $17 billion, ranking behind Microsoft with $18.6 billion and Amazon with $17.5 billion. Among other big players, Google comes in seventh with $3 billion.

In the esoteric world of quantum computing, IBM is touting live projects underway with JPMorgan Chase, Daimler, and others. Bob Evans, a respected technology writer and now the principal of Evans Strategic Communications, notes that the latest numbers “underscore not only IBM’s aggressive moves into enterprise IT’s highest-potential markets,” but also the legitimacy of the company’s claims that it has joined the top ranks of the competitive cloud-computing marketplace alongside Microsoft and Amazon.

As reported in the Forbes piece, CEO Ginni Rometty, speaking at a quarterly analyst briefing, explained that while IBM has a considerable presence in the public-cloud IaaS market because many of its clients require or desire that, it intends to greatly differentiate itself from the big IaaS providers via higher-value technologies such as AI, blockchain, cybersecurity, and analytics. These are the areas that Evans sees as driving IBM into the cloud’s top tier.

Rometty continued: “I think you know that for us the cloud has never been about having Infrastructure-as-a-Service only as a public cloud, or a low-volume commodity cloud. Frankly, Infrastructure-as-a-Service is almost just a dial tone. For us, it’s always been about a cloud that is going to be enterprise-strong and of which IaaS is only a component.”

In the Forbes piece she then laid out four strategic differentiators for the IBM Cloud, which in 2017 accounted for 22% of IBM’s revenue:

  1. The IBM Cloud is built for “data and applications anywhere,” Rometty said. “When we say you can do data and apps anywhere, it means you have a public cloud, you have private clouds, you have on-prem environments, and then you have the ability to connect not just those but also to other clouds. That is what we have done—all of those components.”
  2. The IBM Cloud is “infused with AI,” she continued, alluding to how most of the 16,000 AI engagements also involve the cloud. She cited four of the most popular ways in which customers are using AI: customer service, enhancing white-collar work, risk and compliance, and HR.
  3. For securing the cloud, IBM has opened more than 50 cybersecurity centers around the world to ensure “the IBM Cloud is secure to the core,” Rometty noted.
  4. “And perhaps this is the most important differentiator—you have to be able to extend your cloud into everything that’s going to come down the road, and that could well be more cyber analytics, but it is definitely blockchain, and it is definitely quantum because that’s where a lot of new value is going to reside.”

You have to give Rometty credit: She bet big that IBM’s strategic imperatives, especially blockchain and, riskiest of all, quantum computing, would eventually pay off. The company had long realized it couldn’t compete in high-volume, low-margin businesses. She made her bet on what IBM does best—advanced research—and stuck with it. During those 22 consecutive quarters of revenue declines she stayed the course and didn’t publicly question the decision.

As Forbes observed, in quantum IBM is leveraging its first-mover status and has moved far beyond theoretical proposals. “We are the only company with a 50-qubit system that is actually working—we’re not publishing pictures or photos of what it might look like, or writings that say if there is quantum, we can do it—rather, we are scaling rapidly and we are the only one working with clients in development working on our quantum,” Rometty said.

IBM’s initial forays into commercial quantum computing are just getting started: JPMorgan Chase is working on risk optimization and portfolio optimization using IBM quantum computing; Daimler is using IBM’s quantum technology to explore new approaches to logistics and self-driving car routes; and JSR is doing computational chemistry to create entirely new materials. For none of these does payback look right around the corner. As DancingDinosaur wrote just last week, progress with quantum has been astounding, but much remains to be done to put a functioning commercial ecosystem in place to support quantum computing for business at scale.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work here.

Dinosaurs Strike Back in IBM Business Value Survey

March 2, 2018

IBM’s Institute for Business Value (IBV) recently completed a massive study based on 12,000 interviews with c-suite executives at legacy companies. Not just the CEO and CIO but the COO, CFO, CMO, and more, including the CHO, the Chief Happiness Officer. Not sure what a CHO actually does, but if one had been around when DancingDinosaur was looking for a corporate job, he might have stayed on the corporate track instead of pursuing the independent analyst/writer dream.

(unattributed IBM graphic)

IBV actually titled the study “Incumbents strike back,” the incumbents being the legacy businesses the c-suite members represent. In a previous IBV c-suite study two years ago, the respondents expressed concern about being overwhelmed and overrun by upstart companies, the born-on-the-web newcomers. In many ways the execs at that time felt they were under attack.

Spurred by that fear, the execs in many cases turned to a new strategy built on what has always been their source of strength, even if they long lacked the ways and means to exploit it: the huge amounts of data they have gathered and stored, for decades in some cases. With new cognitive systems now able to extract and analyze this legacy data and combine it with new data, they could actually beat some of the upstarts. Finally, they could respond like nimble, agile operations, not the lumbering dinosaurs they were so often portrayed as.

“Incumbents have become smarter about leveraging valuable data, honing their employees’ skills, and in some cases, acquired possible disruptors to compete in today’s digital age,” the study finds, according to CIO Magazine, which published excerpts from the study here. The report reveals 72 percent of surveyed CxOs claimed the next wave of disruptive innovation will be led by incumbents, who pose a significant competitive threat to new entrants and digital players. By comparison, the survey found only 22 percent of respondents believe smaller companies and start-ups are leading disruptive change. This represents a dramatic reversal from a similar but smaller IBV survey two years ago.

Making this reversal possible is not only growing awareness among c-level execs of the value of their organizations’ data and the need to use it to counter the upstarts, but also new technologies and approaches: DevOps, easier-to-use dev tools, the increasing adoption of Linux, and mainframes like the z13, z14, and LinuxONE, which have been optimized for hybrid and cloud computing. Also driving this is the emergence of platform options as a business strategy.

The platform option may be the most interesting decision right now. To paraphrase Hamlet: to be (a platform for your industry) or not to be. That indeed is a question many legacy businesses will need to confront. When you look at platform business models, what is right for your organization? Will you create a platform for your industry or piggyback on another company’s platform? To decide, you first need to understand the dynamics of building and operating a platform.

The IBV survey team explored that question and found the respondents pretty evenly divided, with 54% reporting they won’t build a platform while the rest expect to build and operate one. This is not a question you can ruminate over endlessly like Hamlet. The advantage goes to those who get there first in their industry segment, and IBV noted that only a few will survive in any one segment. It may come down to how finely you can segment the market for your platform and still maintain a distinct advantage. As CIO reported, the IBV survey found 57 percent of disruptive organizations are adopting a platform business model.

Also rising in importance is the people-talent-skills issue. C-level execs have always given lip service to the importance of people, as in the cliché that people are our greatest asset. Based on the latest survey, it turns out skills are necessary but not sufficient. Skills must be accompanied by the right culture; as the survey found, companies that have the right culture in place are more successful. In that case, the skills are just an added adrenaline shot. Still, the execs put people skills in their top three priorities. The IBV analysts conclude that people and talent are coming back. Guess we’re not all going to be replaced soon with AI or cognitive computing, at least not yet.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work here.

IBM Jumps into the Next Gen Server Party with POWER9

February 15, 2018

IBM expanded its POWER9 lineup of servers this week, starting with 2-socket and 4-socket systems, with more variations coming in the months ahead as IBM, along with the rest of the IT vendor community, grapples with how to address changing data center needs. The first POWER9 machine, the AC922, arrived last fall; DancingDinosaur covered it here. More models, the S922/S914/S924 and H922/H924/L922, are promised later this quarter.

The workloads organizations are running these days are changing, often dramatically and quickly. One processor, no matter how capable, flexible, or efficient, is unlikely to do the job going forward. It will take an entire family of chips. That’s as true for Intel, AMD, and the other chip players as it is for IBM.

In some ways, IBM’s challenge is even quirkier. Its chips will need to support not only Linux but also IBMi and AIX. IBM simply cannot abandon its IBMi and AIX customer bases, so chips supporting IBMi and AIX are being built into the POWER9 family.

For IBMi the company is promising POWER9 exploitation for:

  • Expanded secure-ability for IBMi with TLS, secure APIs, and logs for SIEM solutions
  • Expanded install options, including an installation process using USB 3.0 media
  • Encryption and compression for cloud storage
  • Increased productivity for developers and administrators

This may sound trivial to those who have focused on the Linux world and work with x86 systems too, but it is not for organizations still mired in productive yet aging IBMi systems.

IBM also is promising POWER9 goodies for AIX, its legacy Unix OS, including:

  • AIX Security: PowerSC and PowerSC MFA updates for malware intrusion prevention and strong authentication
  • New workload acceleration with shared memory communications over RDMA (SMC-R)
  • Improved availability: AIX Live Update enhancements; GDR 1.2; PowerHA 7.2
  • Improved cloud management: IBM Cloud PowerVC Manager for SDI; Import/Export
  • AIX 7.2 native support for POWER9 – e.g. enabling NVMe

Again, if you have been running Linux on z or LinuxONE this may sound antiquated, but AIX has not been considered state-of-the-art for years. NVMe alone gives it a big boost.

But despite all the nice things IBM is doing for IBMi and AIX, DancingDinosaur believes the company clearly is betting POWER9 will cut into Intel x86 sales. But that is not a given. Intel is rolling out its own family of advanced x86 Xeon machines under the Skylake code name. Different versions will be packaged and tuned to different workloads. They are rumored, at the fully configured high end, to be quite expensive. Just don’t expect POWER9 systems to be cheap either.

And the chip market is getting more crowded. As Timothy Prickett Morgan, analyst at The Next Platform, noted, various ARM chips—especially ThunderX2 from Cavium and Centriq 2400 from Qualcomm—can boost non-x86 numbers and divert sales from IBM’s POWER9 family. Also, AMD’s Epyc x86 processors have a good chance of stealing some market share from Intel’s Skylake. So POWER9 will have to fight for every sale IBM wants.

Morgan went on: IBM differentiated the hardware and the pricing with its NVLink versions, depending on the workload and the competition, with its most aggressive pricing and a leaner, cheaper microcode and hypervisor stack reserved for the Linux workloads the company is chasing. IBM very much wants to sell its Power-Linux combo against Intel’s Xeon-Linux and also keep AMD’s Epyc-Linux at bay. Where the POWER8 chip had an advantage over Intel’s Haswell and Broadwell Xeon E5 processors in memory capacity and memory bandwidth per socket, and could meet or beat the Xeons on some workloads, that advantage is not yet apparent with the POWER9.

With the POWER9, however, IBM will likely charge a little less for companies buying its Linux-only variants, observes Morgan, effectively enabling IBM to win Linux deals, particularly where data analytics and open source databases drive the customer’s use case. Similarly, some traditional simulation and modeling workloads in the HPC and machine learning areas are ripe for POWER9.

POWER9 is not just one chip. Packed into it are next-generation NVIDIA NVLink and OpenCAPI to provide significantly faster performance for attached GPUs, and its PCI-Express 4.0 interconnect will run twice as fast as PCI-Express 3.0. The open POWER9 architecture also allows companies to mix a wide range of accelerators to meet various needs. Meanwhile, OpenCAPI can unlock coherent FPGAs to support varied accelerated storage, compute, and networking workloads. IBM also is counting on the 300+ members of the OpenPOWER Foundation and OpenCAPI Consortium to deliver innovations for POWER9. Much is happening: stay tuned to DancingDinosaur.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work here.

IBM Boosts DevOps with ADDI on Z

February 9, 2018

IBM’s Application Discovery and Delivery Intelligence (ADDI) is an analytical platform for application modernization. It uses cognitive technologies to analyze mainframe applications so you can quickly discover and understand interdependencies and the impacts of change. You can use this intelligence to transform and renew these applications faster than ever, capitalize on time-tested mainframe code to engage the API economy, and accelerate application transformation of your IBM Z hybrid cloud environment.

Formerly, ADDI was known as EZSource. Back then, EZSource was designed to expedite digital transformation by unlocking core business logic and apps. Specifically, it enabled the IT team to pinpoint specific mainframe code in preparation for leveraging it through a hybrid cloud strategy. In effect, it enabled understanding of business-critical assets in preparation for deployment of a Z-centered hybrid cloud. This in turn enabled enterprise DevOps, which was necessary to keep up with the pace of change overtaking existing business processes.

This wasn’t easy when EZSource initially arrived, and it still isn’t, although the intelligence built into ADDI makes it easier now. Originally it was intended to help the mainframe data center team to:

  • Identify API candidates to play in the API economy
  • Embrace microservices to deliver versatile apps fast
  • Identify code quality concerns, including dead code, to improve reliability and maintainability
  • Mitigate the risk of change through understanding code, data, and schedule interdependencies
  • Aid in sizing the change effort
  • Automate documentation to improve understanding
  • Reduce the learning curve as new people are onboarded
  • Add application understanding to DevOps lifecycle information to identify opportunities for work optimization

Today, IBM describes Application Discovery and Delivery Intelligence (ADDI), its follow-up to EZSource, as an analytical platform for application modernization. It uses cognitive technologies to analyze mainframe applications so your team can quickly discover and understand interdependencies and impacts of any change. In theory you should be able to use this intelligence to transform and renew these applications more efficiently and productively. In short, it should allow you to leverage time-tested mainframe code to engage with the API economy and accelerate the application transformation on your IBM Z and hybrid cloud environment.

More specifically, it promises to enable your team to analyze a broad range of IBM and non-IBM programming languages, databases, workload schedulers, and environments. Enterprise application portfolios were built over decades using an ever-evolving set of technologies, so you need a tool with broad support, such as ADDI, to truly understand the relationships between application components and accurately determine the impacts of potential changes.

In practice, it integrates with mainframe environments and tools via a z/OS agent to automatically synchronize application changes. Without keeping your application analysis synchronized with the latest changes that your developers made, according to IBM, your analysis can get out of date and you risk missing critical changes.

In addition, it provides visual analysis integrated with leading IDEs. Data center managers are petrified of changing applications that still work, fearing they will inadvertently break them or slow performance. When modifying complex applications, you need to be able to quickly navigate the dependencies between application components and drill down to see relevant details. Once you understand the code, you can modify it at much lower risk. The integration between ADDI and IBM Developer for z (IDz) combines the leading mainframe IDE with the application understanding and analytics capabilities you need to safely and efficiently modify the code.

It also, IBM continues, cognitively optimizes your test suites. When you have a large code base to maintain and many tests to run, you must run those tests as efficiently as possible. ADDI correlates code coverage data and code changes with test execution records to identify which regression tests are the most critical, allowing you to optimize time and resources while reducing risk. It exposes poorly tested or complex code and empowers test teams with cognitive insights that turn awareness of trends into mitigation of future risks.
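IBM hasn’t published ADDI’s selection algorithm, but the underlying idea (intersect what changed with what each test covers) is easy to sketch. Here is a minimal, hypothetical Python illustration; the program and test names are invented:

```python
# Hypothetical sketch of coverage-based test selection, not ADDI's actual logic.
# Each test maps to the set of code artifacts (programs, copybooks) it exercises.
coverage = {
    "test_payments_batch": {"PAYPGM01", "RATECALC", "CUSTCPY"},
    "test_customer_update": {"CUSTPGM", "CUSTCPY"},
    "test_daily_report": {"RPTPGM"},
}

def select_tests(changed: set) -> list:
    """Rank tests by how many changed artifacts they cover; drop tests covering none."""
    scored = [
        (len(artifacts & changed), name)
        for name, artifacts in coverage.items()
        if artifacts & changed
    ]
    return [name for _, name in sorted(scored, reverse=True)]

# A change to the shared copybook CUSTCPY flags both tests that exercise it,
# while the unrelated report test drops out of the critical regression set.
print(select_tests({"CUSTCPY"}))
# ['test_payments_batch', 'test_customer_update']
```

The real product folds in complexity metrics and historical test results as well; the point is simply that coverage data turns “run everything” into a ranked, risk-driven subset.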

Finally, ADDI intelligently identifies performance degradations before they hit production. It correlates runtime performance data with application discovery data and test data to quickly pinpoint performance degradation and narrow down the code artifacts to those that are relevant to the cause of bad performance. This enables early detection of performance issues and speeds resolution.

What’s the biggest benefit of ADDI on the Z? It enables your data center to play a central role in digital transformation, a phrase spoken by every c-level executive today as a holy mantra. But more importantly, it will keep your mainframe relevant.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing here.

IBM Halts Losing Quarterly Slide

January 25, 2018

With all respect to Casey at the Bat author Ernest Thayer, joy may have returned to Mudville. IBM finally broke its streak of 22 consecutive quarters of declining revenue and posted positive results in 4Q17. Fourth-quarter revenue came in at $22.5 billion, up 4 percent, but that was just the start.

Watson and Weather Co. track flu

IBM is counting on its strategic imperatives to come through big, and they did in 2017. Full-year strategic imperatives revenue reached $36.5 billion, up 11 percent, and now represents 46 percent of IBM revenue. Similarly, IBM is making gains in the highly competitive cloud business, where it is fighting to position itself among the top ranks of formidable cloud players—Google, Amazon, and Microsoft. IBM did quite respectably, posting $17 billion in cloud revenue, up 24 percent year to year.

DancingDinosaur readers will be interested to know that some of IBM’s business segments, which had been a steady drain on IBM revenue, turned things around in the 4th quarter. For example, Systems (systems hardware and operating systems software) saw revenues of $3.3 billion, up 32 percent, driven by growth in IBM Z, Power Systems, and storage. That’s important to readers charged with planning their organization’s future with the Z or Power machines. They now can be more confident that IBM won’t sell off the business tomorrow as it did with its x86 systems.

So where might IBM go in the future? “Our strategic imperatives revenue again grew at a double-digit rate and now represents 46 percent of our total revenue, and we are pleased with our overall revenue growth in the quarter,” said Ginni Rometty, IBM chairman, president, and CEO. She then continued: “During 2017, we established IBM as the blockchain leader for business. Looking ahead, we are uniquely positioned to help clients use data and AI to build smarter businesses.”

Added James Kavanaugh, IBM CFO: “Over the past several years we have invested aggressively in technology and our people to reposition IBM. 2018 will be all about reinforcing IBM’s leadership position,” he continued, “in key high-value segments of the IT industry, including cloud, AI, security and blockchain.”

IBM has done well in some business and technology segments. Specifically, the company reported revenue gains in analytics, up 9 percent; mobile, up 23 percent; and security, up a whopping 132 percent.

Other segments have not done as well. Technology Services & Cloud Platforms (includes infrastructure services, technical support services, and integration software) continue to lose money. A number of investment analysts are happy with IBM’s financials but are not optimistic about what they portend for IBM’s future.

For instance, Bert Hochfeld, a long/short equity, growth, and event-driven research analyst, writes in Seeking Alpha: “The real reason why strategic imperatives and cloud showed relatively robust growth last quarter has nothing to do with IBM’s pivots and everything to do with the success of IBM’s mainframe cycle. IBM’s Z system achieved 71% growth last quarter compared to 62% in the prior quarter. New Z Systems are being delivered with pervasive encryption, they are being used to support hybrid cloud architectures, and they are being used to support Blockchain solutions… Right now, the mainframe performance is above the prior cycle (z13) and consistent with the z12 cycle a few years ago. And IBM has enjoyed some reasonable success with its all-flash arrays in the storage business. Further, the company’s superscalar offering, Power9, is having success and, as many of its workloads are used for AI, its revenues get counted as part of strategic initiatives. But should investors count on a mainframe cycle and a high-performance computer cycle in making a long-term investment decision regarding IBM shares?”

He continued: “IBM management has suggested that some of the innovations in the current product range including blockchain, cryptography, security and reliability will make this cycle different, and perhaps longer, than other cycles. The length of the mainframe cycle is a crucial component in management’s earnings estimate. It needs to continue at elevated levels at least for another couple of quarters. While that is probably more likely, is it really prudent to base an investment judgement on the length of a mainframe cycle?”

Of course, many DancingDinosaur readers are basing their career and employment decisions on the mainframe or Power Systems. Let’s hope this quarter’s success encourages them; it sure beats 22 consecutive quarters of revenue declines.

Do you remember how Thayer’s poem ends? With the hopes and dreams of Mudville riding on him, it is the bottom of the 9th; Casey takes a mighty swing and… strikes out! Let’s hope this isn’t IBM.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing here.

Syncsort Survey Unveils 5 Ways Z Users Are Saving Money

January 9, 2018

Syncsort Inc. recently completed its year-end 2017 State-of-the-Mainframe annual survey of IT professionals. Over the past year, the organizations surveyed increased their spending for mainframe capacity, new mainframe applications, and mainframe data analytics. The IBM z/OS mainframe remains an important focus in organizations, with the majority of respondents reporting that the mainframe serves as the hub for business-critical applications by providing high-volume transaction and database processing.

More interestingly, Syncsort notes that a high number of respondents indicated they’ll use the mainframe to run revenue-generating services over the next 12 months, another clear indication that the mainframe remains integral to the business.

However, the survey also reflects concerns over the high cost of the mainframe. In effect, mainframe optimization, cost reduction, and spending remain at the forefront, with many organizations looking to leverage zIIP engines to offload general processor cycles, which maximizes resources, delays or avoids hardware upgrades, and lowers monthly software charges.
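The economics behind zIIP offload are easy to sketch: work redirected to zIIPs stops counting toward the general-processor peak that drives monthly license charges. The figures below are purely illustrative assumptions, not survey numbers:

```python
# Illustrative arithmetic only; the MSU peak, price, and offload fraction
# are hypothetical, not Syncsort or IBM figures.
peak_msu = 1_000          # monthly peak MSU on general processors
cost_per_msu = 300.0      # assumed monthly software cost per MSU ($)
ziip_eligible = 0.15      # fraction of peak work eligible for zIIP offload

before = peak_msu * cost_per_msu
after = peak_msu * (1 - ziip_eligible) * cost_per_msu
print(f"Monthly charge before offload: ${before:,.0f}")  # $300,000
print(f"Monthly charge after offload:  ${after:,.0f}")   # $255,000
print(f"Annual savings: ${(before - after) * 12:,.0f}")  # $540,000
```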

At the same time, some organizations are looking at mainframe optimization to fund strategic projects, such as enhanced mainframe data analytics to support better business decisions for meeting SLAs as well as security and compliance initiatives. All of this may relieve pressure to jump to a lower-cost platform (x86) in the hope of reducing spending.

But apparently it is not enough in a number of cases. Despite the focus on optimization, the survey notes, nearly 20% of respondents plan to move off the mainframe completely in 2018. DancingDinosaur, however, has spent decades watching mainframe-is-dead efforts, and moving off the platform invariably takes longer and costs more, often much more, than expected, and sometimes is never fully achieved. Building a no-fail, scalable, and secure business platform to replace it has proven extremely difficult and costly.

However costly the mainframe is, you can get it up and running dependably for less than you will end up paying to cobble together bare-metal x86 boxes. But if you try, please let me know, and I will check back with you next year to publicize your success. One exception might be if you opt for a 100% cloud solution; again, let me know if it works and how much you save, and I’ll make you a hero.

In the meantime, here are five ways respondents expect to save money by streamlining operations through mainframe-based optimization:

  1. This year organizations aim to redirect budget dollars to strategic projects such as mainframe data analytics. Optimization will primarily focus on general processor usage by leveraging zIIP engines and using MSU optimization tools. Some organizations will take it a step further and target candidate workloads to move off the mainframe (possibly to a hybrid cloud) to ensure sufficient capacity remains for business-critical applications.
  2. Big data analytics for operational intelligence, security, and compliance will continue to grow and emerge as a critical effort in ensuring that IT services are delivered effectively to meet SLAs. Mainframe data sources will be critical in helping to address these challenges.
  3. Integration of mainframe data with modern analytics tools will become pervasive and critically important as organizations look to exploit this abundance of information for enhanced visibility. Integrating mainframe machine data will not only provide enhanced visualization but will enable correlation with data sources from other platforms. Additionally, new analytics technologies, like Splunk, will make mainframe application data more readily available to business analysts who typically aren’t mainframe experts, while addressing the diminishing pool of mainframe talent by putting rich, easy tools into the hands of newer staff (see the sketch after this list).
  4. SMF and z/OS log data will play an increased role in addressing security exposures, fulfilling audit requirements, and addressing compliance mandates, a key initiative for IT executives and IT organizations. Here think pervasive encryption on Z. Overall, organizations are looking at leveraging analytics platforms for security and compliance. Along with SMF and other z/OS log data, they will look to Splunk, Elastic, and Hadoop.
  5. Data movement across the variety of platforms in distributed enterprises presents important challenges; it must be secured, monitored, and performed efficiently. With over half of mainframe organizations still lacking full visibility, this must become a priority.
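To make point 3 concrete, here is a minimal sketch of forwarding one parsed mainframe record to Splunk through its HTTP Event Collector (HEC). The endpoint host, token, and record fields are all placeholders:

```python
# Minimal sketch: push one parsed mainframe record into Splunk via HEC.
# URL, token, and record contents are hypothetical placeholders.
import json
import requests

HEC_URL = "https://splunk.example.com:8088/services/collector/event"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"  # placeholder token

record = {
    "system": "PRODPLEX",    # invented fields standing in for parsed
    "jobname": "PAYROLL1",   # SMF or batch log data
    "cpu_seconds": 12.4,
    "return_code": 0,
}

resp = requests.post(
    HEC_URL,
    headers={"Authorization": f"Splunk {HEC_TOKEN}"},
    data=json.dumps({"event": record, "sourcetype": "mainframe:batch"}),
)
resp.raise_for_status()  # fail loudly if Splunk rejects the event
```

In practice, purpose-built forwarders (Syncsort’s Ironstream among them) handle the SMF parsing and delivery; the sketch just shows how low the barrier is once the data is off the host.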

Over the years, DancingDinosaur has written up every opportunity to lower mainframe costs or optimize operations. Find some of these here, here, and here.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing here.

IBM’s POWER9 Races to AI

December 7, 2017

IBM is betting the future of its Power Systems on artificial intelligence (AI). The company introduced its newly designed POWER9 processor publicly this past Tuesday. The new machine, according to IBM, is capable of shortening the training of deep learning frameworks by nearly 4x, allowing enterprises to build more accurate AI applications, faster.

IBM engineer tests the POWER9

Designed for the post-CPU era, the core POWER9 building block is the IBM Power Systems AC922. The AC922, notes IBM, is the first to embed PCI-Express 4.0, next-generation NVIDIA NVLink, and OpenCAPI—three interface accelerators—which together can accelerate data movement 9.5x over PCIe 3.0-based x86 systems. The AC922 is designed to drive demonstrable performance improvements across popular AI frameworks such as Chainer, TensorFlow, and Caffe, as well as accelerated databases such as Kinetica.

More than a CPU under the AC922 cover

Depending on your sense of market timing, POWER9 may be coming at the best or worst time for IBM. Notes industry observer Timothy Prickett Morgan of The Next Platform: “The server market is booming as 2017 comes to a close, and IBM is looking to try to catch the tailwind and lift its Power Systems business.”

As Morgan puts it, citing IDC 3Q17 server revenue figures, HPE and Dell are jockeying for the lead in the server space, and for the moment, HPE (including its H3C partnership in China) has the lead with $3.32 billion in revenues, compared to Dell’s $3.07 billion, while Dell was the shipment leader, with 503,000 machines sold in Q3 2017 versus HPE’s 501,400 machines shipped. IBM does not rank in the top five shippers, but thanks in part to the Z and big POWER8 boxes, IBM still holds the number three spot in server revenue, with $1.09 billion in sales for the third quarter, according to IDC. The Z accounted for $673 million of that, up 63.8 percent year-on-year due mainly to the new machine. If you do the math, Morgan continued, the Power Systems line accounted for $420.7 million in the period, down 7.2 percent from Q3 2016. This is not surprising given that customers held back knowing POWER9 systems were coming.

To get Power Systems back to where it used to be, Morgan continued, IBM must increase revenues by a factor of three or so. The good news is that, thanks to the popularity of hybrid CPU-GPU systems, which cost around $65,000 per node from IBM, this isn’t impossible. It should take fewer machines to rack up the revenue, even if it comes from a relatively modest number of footprints rather than a huge number of POWER9 processors. More than 90 percent of the compute in these systems comes from GPU accelerators, but due to bookkeeping magic, it all accrues to Power Systems when these machines are sold. Plus, IBM reportedly will be installing over 10,000 such nodes for the US Department of Energy’s Summit and Sierra supercomputers in the coming two quarters, which should provide a nice bump. And once IBM gets the commercial POWER9 systems into the field, sales should pick up again, Morgan expects.

IBM clearly is hoping POWER9 will cut into Intel x86 sales. But that may not happen as anticipated. Intel is bringing out its own advanced x86 Xeon machine, code-named Skylake, rumored to be quite expensive. Don’t expect POWER9 systems to be cheap either. And the field is getting more crowded. Morgan noted various ARM chips—especially ThunderX2 from Cavium and Centriq 2400 from Qualcomm—can boost non-x86 numbers and divert sales from IBM’s POWER9 systems. Also, AMD’s Epyc x86 processors have a good chance of stealing some market share from Intel’s Skylake. So POWER9 will have to fight for every sale IBM wants and take nothing for granted.

No doubt POWER9 presents a good case and has a strong backer in Google, but even that might not be enough. Still, POWER9 sits at the heart of what are expected to be the most powerful data-intensive supercomputers in the world, Summit and Sierra, expected to knock off the world’s current fastest supercomputers, which are from China.

Said Bart Sano, VP of Google Platforms: “Google is excited about IBM’s progress in the development of the latest POWER technology,” adding, “The POWER9 OpenCAPI bus and large memory capabilities allow further opportunities for innovation in Google data centers.”

This really is about deep learning, one of the hottest buzzwords today. Deep learning has emerged as a fast-growing machine learning method that extracts information by crunching through massive volumes of data to detect and rank the most important aspects of that data. IBM designed the POWER9 chip to manage free-flowing data, streaming sensors, and algorithms for data-intensive AI and deep learning workloads on Linux. Are your people ready to take advantage of POWER9?
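For shops wondering where to start, confirming that a framework actually sees a node’s GPUs takes only a few lines. A minimal sketch, assuming TensorFlow 2.x on Linux; nothing in it is AC922-specific:

```python
# Minimal TensorFlow 2.x check: list visible GPUs, then run one tiny
# training pass, which lands on a GPU automatically if one is visible.
import tensorflow as tf

print("GPUs visible:", tf.config.list_physical_devices("GPU"))

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(32,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

x = tf.random.normal((1024, 32))  # synthetic stand-in data
y = tf.random.normal((1024, 1))
model.fit(x, y, epochs=1, batch_size=128)
```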

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing here.

Under the Covers of Z Container Pricing

December 1, 2017

Along with the announcement of the z14 (or now just Z) last July, IBM also introduced container pricing as an upcoming capability of the machine, intended to make it both flexible and price-competitive. This is expected to arrive by the end of this year.

A peek inside the IBM z14

Container pricing implied overall cost savings and also simplified deployment. At the announcement IBM suggested competitive economics too, especially when benchmarked against public clouds and on-premises x86 environments.

By now you should realize that IBM has difficulty talking about price. The company has lots of excuses relating to its global footprint and such. Funny, other systems vendors that sell globally don’t seem to have that problem. After two decades of covering IBM and the mainframe as a reporter, analyst, and blogger, I’ve finally figured out the reason for the reticence: the company’s pricing is almost always high, overpriced compared to the competition.

If you haven’t realized it yet, the only way IBM will talk price is around a 3-year TCO cost analysis. (Full disclosure: as an analyst, I have developed such TCO analyses and am quite familiar with how to manipulate them.) And even then you will have to swallow a number of assumptions and caveats to get the numbers to work.

For example, there is no doubt that IBM is targeting the x86 (Intel) platform with its LinuxONE lineup, especially its newest machine, the Emperor II. IBM reports it can scale a single MongoDB database to 17TB on the Emperor II while running it at scale with less than 1ms response time, which it says will save up to 37% compared to x86 on a 3-year TCO analysis. The TCO analysis gets even better when you look at per-core pricing for data-serving infrastructures: IBM reports it can consolidate thousands of x86 cores on a single LinuxONE server and reduce costs by up to 40%.
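For what it’s worth, the skeleton of such a 3-year TCO comparison is simple arithmetic. Every input below is a hypothetical placeholder, not an IBM figure; real models add migration, software licensing detail, staffing, floor space, and power:

```python
# Skeleton of a 3-year TCO comparison; all inputs are invented placeholders.
YEARS = 3

def tco(hardware, annual_software, annual_ops):
    """Acquisition cost plus recurring software and operations over the term."""
    return hardware + YEARS * (annual_software + annual_ops)

x86_farm = tco(hardware=2_000_000, annual_software=1_500_000, annual_ops=800_000)
linuxone = tco(hardware=3_000_000, annual_software=700_000, annual_ops=400_000)

print(f"x86 3-year TCO:      ${x86_farm:,}")      # $8,900,000
print(f"LinuxONE 3-year TCO: ${linuxone:,}")      # $6,300,000
print(f"Savings: {1 - linuxone / x86_farm:.0%}")  # ~29% with these inputs
```

As the disclosure above suggests, the answer swings entirely on the assumptions, which is why you should rebuild any vendor TCO with your own numbers.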

So, let’s see what the Z’s container pricing can do for you. Container pricing is being introduced to allow new workloads to be added to z/OS in a way that doesn’t impact an organization’s rolling four-hour average, while supporting the deployment options that make the most sense for an organization’s architecture and facilitating competitive pricing at an attractive price point relative to that workload.

For example, one of the initial use cases for container pricing revolves around payments workloads, particularly instant payments. That workload will be charged not against any capacity marker but against the number of payments processed. The payment workload pricing grid promises to be highly competitive, with the price per payment starting at $0.0021 and dropping to $0.001 with volume. “That’s a very predictable, very aggressive price,” says Ray Jones, vice president, IBM Z Software and Hybrid Cloud. You can do the math and decide how competitive this is for your organization.
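Doing that math is easy to sketch. Only the $0.0021 and $0.001 endpoints come from IBM; the volume breakpoints in between are invented for illustration:

```python
# Hypothetical tiered price-per-payment schedule. The $0.0021 and $0.001
# endpoints are IBM's stated figures; the breakpoints are invented.
TIERS = [
    (10_000_000, 0.0021),   # first 10M payments per month
    (50_000_000, 0.0015),   # next 40M (assumed middle tier)
    (float("inf"), 0.001),  # everything above 50M
]

def monthly_cost(payments: int) -> float:
    """Sum the cost of each tranche of payments at its tier price."""
    cost, prior_cap = 0.0, 0
    for cap, price in TIERS:
        in_tier = max(0, min(payments, cap) - prior_cap)
        cost += in_tier * price
        prior_cap = cap
    return cost

print(f"${monthly_cost(5_000_000):,.2f}")   # $10,500.00
print(f"${monthly_cost(80_000_000):,.2f}")  # $111,000.00
```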

Container pricing applies to various deployment options—including co-located workloads in an existing LPAR—that present line-of-sight pricing to a solution. The new pricing promises simplified software pricing for qualified solutions. It even offers the possibility, IBM adds, of different pricing metrics within the same LPAR.

Container pricing, however, requires the use of IBM’s software for payments, Financial Transaction Manager (FTM). FTM counts the number of payments processed, which drives the billing from IBM.

To understand container pricing you must realize IBM is not talking about Docker containers. A container to IBM simply is an address space, or group of address spaces, in support of a particular workload. An organization can have multiple containers in an LPAR, have as many containers as it wants, and change the size of containers as needed. This is where the flexibility comes in.

The fundamental advantage of IBM’s container pricing comes from the co-location of workloads to get improved performance and lower latency. The new pricing eliminates what goes on in containers from consideration in the MLC calculations.

To get container pricing, however, you have to qualify. The company is setting up pricing agents around the world. Present your container plans and an agent will determine if you qualify and at what price. IBM isn’t saying anything about how you should present your container plans to qualify for the best deal. Just be prepared to negotiate as hard as you would with any IBM deal.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing here.

BMC’s 12th Annual Mainframe Survey Shows Z Staying Power

November 17, 2017

ARM processors are invading HPC and supercomputer segments. The Power9 is getting closer and closer to general commercial availability. IBM unveiled not one but two new quantum computers. Meanwhile, the Z continues to roll right along without skipping a beat, according to BMC’s 12th mainframe survey.

There is no doubt that the computing landscape is changing dramatically and will continue to change. Yet mainframe shops appear to be taking it all in stride. As Mark Wilson reported from the recently completed SHARE Europe conference in the UK, citing the keynote delivered by Compuware CEO Chris O’Malley: “By design, the post-modern mainframe is the most future ready platform in the world: the most reliable, securable, scalable, and cost efficient. Unsurprisingly, the mainframe remains the dominant, growing, and vital backbone for the worldwide economy. However, outdated processes and tools ensnared in an apathetic culture doggedly resistant to change, prevent far too many enterprises from unleashing its unique technical virtues and business value.” If you doubt we are entering the post-modern mainframe era, just look at the LinuxONE Emperor II or the z14.

Earlier this month BMC released its 12th annual mainframe survey. Titled 5 Myths Busted, the report can be found here. The five myths appear right below:

  • Myth 1: Organizations have fully optimized mainframe availability
  • Myth 2: The mainframe is in maintenance mode; no one is modernizing
  • Myth 3: Executives are planning to replace their mainframes
  • Myth 4: Younger IT professionals are pessimistic about mainframe careers
  • Myth 5: People working on the mainframe today are all older

Everyone from prominent executives like O’Malley to a small army of IBMers to lowly bloggers and analysts like DancingDinosaur has been pounding away at these myths for years. And this isn’t the first survey to thoroughly discredit mainframe skeptics.

The mainframe is growing: 48% of respondents saw MIPS growth in the last 12 months, over 50% of respondents forecast MIPS growth in the next 12 months, and 71% of large shops (10,000 MIPS or more) experienced MIPS growth in the last year. Better yet, these same shops forecast more growth in the next 12 months.

OK, the top four priorities of respondents remained the same this year. But the idea that mainframe shops are fully optimized and just cruising is dead wrong. Survey respondents still have a to-do list of priorities:

  1. Cost reduction/optimization
  2. Data privacy/compliance
  3. Availability
  4. Application modernization

Maybe my favorite myth is that younger people have given up on the mainframe. BMC found that 53% of respondents are under age 50, and this group (age 30-49, with under 10 years of experience) overwhelmingly reports a very positive view of the mainframe’s future. The majority went so far as to say they see the workload of their mainframe growing and view the mainframe as having a strong position of growth in the industry overall. This is reinforced by the growth of IBM’s Master the Mainframe competition, which attracts young people in droves, over 85,000 to date, to work with the so-called obsolete mainframe.

And the mainframe, both the Z and the LinuxONE, is packed with technology that will continue to attract young people: Linux, Docker, Kubernetes, Java, Spark, and support for a wide range of both relational databases like DB2 and NoSQL databases like MongoDB. They use this technology to do mobile, IoT, blockchain, and more. Granted most mainframe shops are not ready yet to run these kinds of workloads. IBM, however, even introduced new container pricing for the new Z to encourage such workloads.

John McKenny, BMC’s VP of Strategy, has noticed growing interest in new workloads. “Yes, they continue to be mainly transactional applications, but they are aimed to support new digital workloads too, such as doing business with mobile devices,” he noted. Mobility and analytics, he added, are used increasingly to improve operations, and just about every mainframe shop has some form of cloud computing, often multiple clouds.

The adoption of Linux on the mainframe a decade ago immediately put an end to the threat posed by x86. Since then, IBM has become a poster child for open source and a slew of new technologies, from Java to Hadoop to Spark to whatever comes next. Although traditional mainframe data centers have been slow to adopt these new technologies, some are starting, and that, along with innovative machines like the z14 and LinuxONE Emperor II, is what ultimately will keep the mainframe young and competitive.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing here.

IBM Introduces Cloud Private to Hybrid Clouds

November 10, 2017

When you have enough technologies lying around your basement, sometimes you can cobble a few pieces together, mix them with some sexy new stuff, and, bingo, you have something that meets a serious need of a number of disparate customers. That’s essentially what IBM did with Cloud Private, which it announced Nov. 1.

IBM staff test Cloud Private automation software

IBM intended Cloud Private to enable companies to create on-premises cloud capabilities similar to public clouds to accelerate app dev. Don’t think of it as just old stuff; the new platform is built on the open source Kubernetes-based container architecture and supports both Docker containers and Cloud Foundry. This facilitates integration and portability of workloads, enabling them to evolve to almost any cloud environment, including—especially—the public IBM Cloud.
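Because Cloud Private builds on standard Kubernetes, workloads can be described and deployed with ordinary Kubernetes objects. Here is a minimal sketch using the official Kubernetes Python client; the image, names, and replica count are placeholders rather than anything IBM-specific:

```python
# Minimal sketch: deploy a containerized app to any standard Kubernetes
# cluster, Cloud Private included. All names and the image are placeholders.
from kubernetes import client, config

config.load_kube_config()  # reads cluster credentials from kubeconfig

container = client.V1Container(
    name="liberty",
    image="websphere-liberty:latest",  # placeholder image tag
    ports=[client.V1ContainerPort(container_port=9080)],
)
template = client.V1PodTemplateSpec(
    metadata=client.V1ObjectMeta(labels={"app": "liberty"}),
    spec=client.V1PodSpec(containers=[container]),
)
deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="liberty-demo"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "liberty"}),
        template=template,
    ),
)
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```

The same description, unchanged, targets an on-premises Cloud Private cluster or the public IBM Cloud, which is the portability argument in a nutshell.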

IBM also announced container-optimized versions of core enterprise software, including IBM WebSphere Liberty, Db2, and MQ, which are widely used to run and manage the world’s most business-critical applications and data. This makes it easier to share data and evolve applications as needed across the IBM Cloud, private clouds, public clouds, and other cloud environments with a consistent developer, administrator, and user experience.

Cloud Private amounts to a new software platform, which relies on open source container technology to unlock billions of dollars in core data and applications incorporating legacy software like WebSphere and Db2. The purpose is to extend cloud-native tools across public and private clouds. For z data centers that have tons of valuable, reliable working systems years away from being retired, if ever, Cloud Private may be just what they need.

Almost all enterprise systems vendors are trying to do the same hybrid cloud enablement: HPE, Microsoft, Cisco (which is partnering with Google on this), and more. This is a clear indication that the cloud, and especially the hybrid cloud, is crossing the proverbial chasm. In years past, IT managers and C-level executives didn’t want anything to do with the cloud; the IT folks saw it as a threat to their on-premises data center, and the C-suite was scared witless about security.

Those issues haven’t gone away, although the advent of hybrid clouds has mitigated some of the fears among both groups. Similarly, the natural evolution of the cloud and advances in hybrid cloud computing make this more practical.

The private cloud too is growing. According to IBM, while public cloud adoption continues to grow at a rapid pace, organizations, especially in the regulated industries of finance and health care, are continuing to leverage private clouds as part of their journey to public cloud environments to quickly launch and update applications. This also is what is driving hybrid clouds. IBM projects that companies will spend more than $50 billion globally, starting in 2017, to create and evolve private clouds, with growth rates of 15 to 20 percent a year through 2020.

The problem facing IBM and the other enterprise systems vendors scrambling for hybrid clouds is how to transition legacy systems into cloud-native systems. The hybrid cloud in effect acts as facilitating middleware. “Innovation and adoption of public cloud services has been constrained by the challenge of transitioning complex enterprise systems and applications into a true cloud-native environment,” said Arvind Krishna, Senior Vice President for IBM Hybrid Cloud and Director of IBM Research. IBM’s response is Cloud Private, which brings rapid application development and modernization to existing IT infrastructure while combining it with the service of a public cloud platform.

Hertz adopted this approach. “Private cloud is a must for many enterprises such as ours working to reduce or eliminate their dependence on internal data centers,” said Tyler Best, Hertz Chief Information Officer. A strategy consisting of public, private, and hybrid cloud is essential for large enterprises to effectively make the transition from legacy systems to cloud.

IBM is serious about cloud as a strategic initiative. Although IBM is not as large as Microsoft Azure or Amazon Web Services (AWS) in the public cloud, a recent report by Synergy Research found that it is a major provider of private cloud services, making it the third-largest overall cloud provider.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing here.
