Posts Tagged ‘technology’

IBM Leverages Strategic Imperatives to Win in Cloud

March 16, 2018

Some people may have been ready to count out IBM in the cloud. The company, however, is clawing its way back into contention faster than many imagined. In a recent Forbes Magazine piece, IBM credits 16,000 AI engagements, 400 blockchain engagements, and a couple of quantum computing pilots as driving its return as a serious cloud player.

IBM uses blockchain to win the cloud

According to Forbes, IBM has jumped to third in cloud revenue with $17 billion, behind Microsoft with $18.6 billion and Amazon with $17.5 billion. Among the other big players, Google comes in seventh with $3 billion.

In the esoteric world of quantum computing, IBM is touting live projects underway with JPMorgan Chase, Daimler, and others. Bob Evans, a respected technology writer and now the principal of Evans Strategic Communications, notes that the latest numbers “underscore not only IBM’s aggressive moves into enterprise IT’s highest-potential markets,” but also the legitimacy of the company’s claims that it has joined the top ranks of the competitive cloud-computing marketplace alongside Microsoft and Amazon.

As reported in the Forbes piece, CEO Ginni Rometty, speaking at a quarterly analyst briefing, made the case that while IBM has a considerable presence in the public-cloud IaaS market because many of its clients require or desire it, the company intends to greatly differentiate itself from the big IaaS providers via higher-value technologies such as AI, blockchain, cybersecurity, and analytics. These are the areas that Evans sees driving IBM into the cloud’s top tier.

Rometty continued: “I think you know that for us the cloud has never been about having Infrastructure-as-a-Service-only as a public cloud, or a low-volume commodity cloud. Frankly, Infrastructure-as-a-Service is almost just a dial tone. For us, it’s always been about a cloud that is going to be enterprise-strong and of which IaaS is only a component.”

In the Forbes piece she then laid out four strategic differentiators for the IBM Cloud, which in 2017 accounted for 22% of IBM’s revenue:

  1. The IBM Cloud is built for “data and applications anywhere,” Rometty said. “When we say you can do data and apps anywhere, it means you have a public cloud, you have private clouds, you have on-prem environments, and then you have the ability to connect not just those but also to other clouds. That is what we have done—all of those components.”
  2. The IBM Cloud is “infused with AI,” she continued, alluding to how most of the 16,000 AI engagements also involve the cloud. She cited four of the most-popular ways in which customers are using AI: customer service, enhancing white-collar work, risk and compliance, and HR.
  3. For securing the cloud IBM opened more than 50 cybersecurity centers around the world to ensure “the IBM Cloud is secure to the core,” Rometty noted.
  4. “And perhaps this is the most important differentiator—you have to be able to extend your cloud into everything that’s going to come down the road, and that could well be more cyber analytics, but it is definitely blockchain, and it is definitely quantum because that’s where a lot of new value is going to reside.”

You have to give Rometty credit: She bet big that IBM’s strategic imperatives, especially blockchain and, riskiest of all, quantum computing, would eventually pay off. The company had long realized it couldn’t compete in high-volume, low-margin businesses. She made her bet on what IBM does best—advanced research—and stuck with it. During those 22 consecutive quarters of declining revenue she stayed the course and didn’t publicly question the decision.

As Forbes observed: In quantum, IBM is leveraging its first-mover status and has moved far beyond theoretical proposals. “We are the only company with a 50-qubit system that is actually working—we’re not publishing pictures of what it might look like, or writings that say if there is quantum, we can do it—rather, we are scaling rapidly and we are the only one working with clients in development working on our quantum,” Rometty said.

IBM’s initial forays into commercial quantum computing are just getting started: JPMorgan Chase is working on risk and portfolio optimization using IBM quantum computing; Daimler is using IBM’s quantum technology to explore new approaches to logistics and self-driving car routes; and JSR is doing computational chemistry to create entirely new materials. For none of these does payback look right around the corner. As DancingDinosaur wrote just last week, progress with quantum has been astounding, but much remains to be done to put a functioning commercial ecosystem in place to support large-scale commercialization of quantum computing for business.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog.

The Rush to Quantum Computing

March 9, 2018

Are you excited about quantum computing? Are you taking steps to get ready for it? Do you have an idea of what you would like to do with quantum computing, or a plan for how to do it? Except for the most science-driven organizations or those with incomprehensibly complex challenges to solve, DancingDinosaur can’t imagine this is the most pressing IT issue you face today.

Yet leading IT vendors are making astounding gains, moving quantum computing forward further and faster than the industry was projecting even a few months ago. This past November IBM announced a 50-qubit system. Earlier this month Google announced Bristlecone, a 72-qubit system that, for now, tops IBM’s. However, qubit count may not be the most important metric to focus on.

Never heard of quantum supremacy? You are going to hear a lot about it in the coming weeks, months, and even years as the vendors battle for the quantum supremacy title. Here is how Wikipedia defines it: Quantum supremacy is the potential ability of quantum computing devices to solve problems that classical computers cannot. In computational complexity-theoretic terms, this generally means providing a super-polynomial speedup over the best known or possible classical algorithm. If this doesn’t send you racing to dig out your old college math book, you were a better student than DancingDinosaur. In short, supremacy means beating the best conventional algorithms, and you can’t just beat them; you have to do it using less energy, or faster, or in some other way that demonstrates your approach’s advantage.
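For a concrete, if modest, taste of quantum speedup: Grover’s search algorithm offers a quadratic advantage for unstructured search (weaker than the super-polynomial speedups supremacy refers to, but easy to quantify), and the query counts are simple to compare:

```python
import math

def classical_queries(n_items: int) -> int:
    # Unstructured search on classical hardware: worst case checks every item.
    return n_items

def grover_queries(n_items: int) -> int:
    # Grover's algorithm needs about (pi/4) * sqrt(N) oracle queries.
    return math.ceil((math.pi / 4) * math.sqrt(n_items))

for n in (10**3, 10**6, 10**9):
    print(f"N={n:>13,}  classical={classical_queries(n):>13,}  grover={grover_queries(n):>7,}")
```

At a billion items, the quantum machine needs roughly 25,000 queries where the classical one needs a billion; supremacy claims rest on even more dramatic, super-polynomial gaps.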

The issue revolves around the instability of qubits; the hardware needs to be sturdy to run them. Industry sources note that quantum computers must keep their processors extremely cold (near absolute zero) and protect them from external shocks. Even accidental sounds can cause the computer to make mistakes. To operate in even remotely real-world settings, quantum processors also need an error rate of less than 0.5 percent for every two qubits. Google’s best came in at 0.6 percent using its much smaller 9-qubit hardware. Its latest blog post didn’t state Bristlecone’s error rate, but Google promised to improve on its previous results. To drop the error rate for any qubit processor, engineers must figure out how the software, control electronics, and the processor itself can work alongside one another without causing errors.

50 qubits is currently considered the minimum for serious business work. IBM’s November announcement, however, was quick to point out that this “does not mean quantum computing is ready for common use.” The system IBM developed remains extremely finicky and challenging to use, as are those being built by others. In its 50-qubit system, the quantum state is preserved for 90 microseconds, a record for the industry but still an extremely short period of time.

Nonetheless, 50 qubits have emerged as the minimum number for a (relatively) stable system to perform practical quantum computing. According to IBM, a 50-qubit machine can do things that are extremely difficult to simulate without quantum technology.
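IBM’s point about simulation difficulty can be made concrete. A classical state-vector simulation must store 2^n complex amplitudes, so the memory needed doubles with every added qubit; a quick back-of-envelope sketch:

```python
def statevector_bytes(n_qubits: int) -> int:
    # A full simulation stores 2**n complex amplitudes at 16 bytes each (complex128).
    return (2 ** n_qubits) * 16

for n in (30, 40, 50):
    print(f"{n} qubits -> {statevector_bytes(n) / 2**30:,.0f} GiB")
```

At 30 qubits the state vector fits in 16 GiB; at 50 it would need roughly 16 million GiB (16 PiB), far beyond any single machine’s memory, which is why ~50 qubits is so often cited as the edge of classical simulability.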

The problem touches on one of the defining attributes of quantum systems. As IBM explains, where normal computers store information as either a 1 or a 0, quantum computers exploit two phenomena—entanglement and superposition—to process information differently. Conventional computers store numbers as sequences of 0s and 1s in memory and process them using only the simplest mathematical operations, such as add and subtract.

Quantum computers can digest 0 and 1 too, but they have a broader array of tricks. That’s where entanglement and superposition come in. For example, contradictory things can exist concurrently. Quantum geeks often cite the riddle dubbed Schrödinger’s cat, in which the cat can be alive and dead at the same time because quantum systems can handle multiple, contradictory states. That can be very helpful if you are trying to solve huge data- and compute-intensive problems like a Monte Carlo simulation. After decades of work on quantum computing, the new 50-qubit system finally gives IBM something to offer businesses facing complex challenges that can benefit from quantum’s superposition capabilities.
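The superposition-plus-entanglement idea can be sketched in a few lines of linear algebra. This toy example (plain numpy, not an actual quantum device) builds the two-qubit Bell state, in which both qubits are simultaneously 0 and 1 until measured:

```python
import numpy as np

# Single-qubit |0> state and the gates we need.
zero = np.array([1, 0], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard gate
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# Put qubit 0 into superposition, then entangle it with qubit 1.
state = np.kron(H @ zero, zero)   # (|00> + |10>) / sqrt(2)
bell = CNOT @ state               # (|00> + |11>) / sqrt(2)

probs = np.abs(bell) ** 2
for label, p in zip(("00", "01", "10", "11"), probs):
    print(f"P(|{label}>) = {p:.2f}")
```

Measuring yields 00 or 11 with equal probability and never 01 or 10: each qubit alone is in a contradictory both-states-at-once condition, yet the two are perfectly correlated.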

Still, don’t bet on using quantum computing to solve serious business challenges very soon. An entire ecosystem of programmers, vendors, programming models, methodologies, useful tools, and a host of other things has to fall into place first. IBM, Google, and others are making stunningly rapid progress. Maybe DancingDinosaur will actually be alive to see quantum computing become just another tool in a business’s problem-solving toolkit.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog.

Dinosaurs Strike Back in IBM Business Value Survey

March 2, 2018

IBM’s Institute for Business Value (IBV) recently completed a massive study based on 12,000 interviews with executives in the c-suites of legacy companies. Not just the CEO and CIO but the COO, CFO, CMO, and more, including the CHO, the Chief Happiness Officer. Not sure what a CHO actually does, but if one had been around when DancingDinosaur was looking for a corporate job he might have stayed on the corporate track instead of pursuing the independent analyst/writer dream.

(unattributed IBM graphic)

IBV actually titled the study “Incumbents strike back,” the incumbents being the legacy businesses the c-suite members represent. In a previous IBV c-suite study two years ago, the respondents expressed concern about being overwhelmed and overrun by upstart, born-on-the-web newcomers. In many ways the execs at that time felt they were under attack.

Spurred by that fear, many of these execs turned to a new strategy that takes advantage of what has always been their source of strength, even if they often lacked the ways and means to exploit it: the huge amounts of data they have gathered and stored, in some cases for decades. With new cognitive systems now able to extract and analyze this legacy data and combine it with new data, they could actually beat some of the upstarts. Finally, they could respond like nimble, agile operations, not the lumbering dinosaurs they were so often portrayed as.

“Incumbents have become smarter about leveraging valuable data, honing their employees’ skills, and in some cases, acquiring possible disruptors to compete in today’s digital age,” the study finds, according to CIO Magazine, which published excerpts from the study here. The report reveals that 72 percent of surveyed CxOs claim the next wave of disruptive innovation will be led by incumbents, who pose a significant competitive threat to new entrants and digital players. By comparison, the survey found only 22 percent of respondents believe smaller companies and start-ups are leading disruptive change. This represents a dramatic reversal from a similar but smaller IBV survey two years ago.

Making possible this reversal is not only growing awareness among c-level execs of the value of their organizations’ data and the need to use it to counter the upstarts, but new technologies, approaches like DevOps, easier-to-use dev tools, the increasing adoption of Linux, and mainframes like the z13, z14, and LinuxONE, which have been optimized for hybrid and cloud computing.  Also driving this is the emergence of platform options as a business strategy.

The platform option may be the most interesting decision right now. To paraphrase Hamlet: to be (a platform for your industry) or not to be. That indeed is a question many legacy businesses will need to confront. When you look at platform business models, what is right for your organization? Will you create a platform for your industry or piggyback on another company’s platform? To decide, you first need to understand the dynamics of building and operating a platform.

The IBV survey team explored that question and found the respondents fairly evenly divided, with 54% reporting they won’t build and operate a platform while the rest expect to. This is not a question you can ruminate over endlessly like Hamlet. The advantage goes to those who get there first in their industry segment; IBV noted that only a few will survive in any one segment. It may come down to how finely you can segment the market for your platform and still maintain a distinct advantage. As CIO reported, the IBV survey found 57 percent of disruptive organizations are adopting a platform business model.

Also rising in importance is the people-talent-skills issue. C-level execs have always given lip service to the importance of people, as in the cliché “people are our greatest asset.” Based on the latest survey, it turns out skills are necessary but not sufficient: they must be accompanied by the right culture. As the survey found, companies that have the right culture in place are more successful; in that case, the skills are just an added adrenaline shot. Still, the execs put people skills in their top three. The IBV analysts conclude that people and talent are coming back. Guess we’re not all going to be replaced by AI or cognitive computing, at least not yet.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog.

IBM Brings NVMe to Revamped Storage

February 23, 2018

The past year has been good for IBM storage, and not only because the company rang up four consecutive quarters of positive storage revenue. Over that period, and starting somewhat earlier, the company embarked on a thorough revamp of its storage lineup, adding all the hot goodies from flash to software-defined storage (Spectrum) to NVMe (Non-Volatile Memory Express) in 2018. NVMe represents a culmination of sorts, allowing the revamped storage products to actually deliver on the low-latency and parallelism promises of the latest technology.

Hyper-Scale Manager for IBM FlashSystem (Jared Lazarus/Feature Photo Service for IBM)

The revamp follows changes in the way organizations deploy technology. They now are wrestling with exponential data growth and the need to quickly modernize their traditional IT infrastructures to take advantage of multi-cloud, analytics, and cognitive/AI workloads going forward.

This is not just a revamp of existing products. IBM has added innovations and enhancements across the storage portfolio to expand the range of data types supported, deliver new function, and enable new technology deployment.

This week, IBM Storage—the #2 storage software vendor by revenue market share, according to IDC—announced a wide-ranging set of innovations across its software-defined storage (SDS), data protection, and storage systems portfolio. Continuing IBM’s investments in its SDS (Spectrum), data protection, and storage systems capabilities, these announcements demonstrate its commitment to IBM storage solutions as the foundation for multi-cloud and cognitive/AI applications and workloads.

With these enhancements, IBM aims to transform on-premises infrastructure to meet these new business imperatives. For example, IBM Spectrum NAS delivers enterprise capabilities and SDS simplicity with cost benefits for common file workloads, including support for Microsoft environments. Meanwhile, IBM Spectrum Protect still addresses data security concerns but has just added General Data Protection Regulation (GDPR) support and automated detection and alerting of ransomware.

Along the same lines, IBM Spectrum Storage Suite brings a complete solution for software-defined storage needs while gaining expanded range and value through the inclusion of IBM Spectrum Protect Plus at no additional charge. Similarly, IBM Spectrum Virtualize promises lower data storage costs through new and better performing data reduction technologies for the IBM Storwize family, IBM SVC, and IBM FlashSystem V9000, as well as for over 440 non-IBM vendor storage systems.

Finally, IBM Spectrum Connect simplifies management of complex server environments by providing a consistent experience when provisioning, monitoring, automating, and orchestrating IBM storage in containerized VMware and Microsoft PowerShell environments. Orchestration is critical in increasingly complex container environments.

The newest part of the IBM storage announcements is NVM Express (NVMe). This is an open logical device interface specification for accessing non-volatile storage media attached via a PCIe bus. The non-volatile memory referred to is flash memory, typically in the form of solid-state drives (SSDs). NVMe provides a logical device interface designed from the ground up to capitalize on the low latency and internal parallelism of flash-based storage devices, essentially mirroring the parallelism of modern CPUs, platforms and applications.

By its design, NVMe allows host hardware and software to fully exploit the levels of parallelism possible in modern SSDs. As a result, NVMe reduces I/O overhead and brings various performance improvements relative to previous logical-device interfaces, including multiple long command queues and reduced latency. (The previous interface protocols were developed for far slower hard disk drives (HDDs), where a lengthy delay between a request and the corresponding data receipt was expected because the drives were so much slower than RAM.)
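The parallelism gap is stark even on paper. Comparing the specification maximums of the legacy AHCI/SATA interface with NVMe:

```python
# Outstanding-command parallelism: legacy AHCI/SATA vs NVMe spec maximums.
ahci_queues, ahci_depth = 1, 32            # AHCI: one queue, 32 commands deep
nvme_queues, nvme_depth = 65_535, 65_536   # NVMe: up to 64K queues, 64K deep

ahci_slots = ahci_queues * ahci_depth
nvme_slots = nvme_queues * nvme_depth
print(f"AHCI: {ahci_slots:,} outstanding commands")
print(f"NVMe: {nvme_slots:,} outstanding commands")
```

Those billions of in-flight command slots are what let NVMe mirror the parallelism of modern multi-core CPUs rather than serialize everything through a single shallow queue.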

NVMe devices exist both as standard PCIe expansion cards and as 2.5-inch form-factor devices that provide a four-lane PCIe interface through the U.2 connector (formerly known as SFF-8639). SATA Express storage devices and the M.2 specification for internally mounted computer expansion cards also support NVMe as the logical device interface.

Maybe NVMe sounds like overkill now, but it won’t be the next time you upgrade your IT infrastructure. Don’t plan on buying more HDDs or going back to IPv3. With IoT, cognitive computing, blockchain, and more, your users will have no tolerance for a slow infrastructure.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog.

IBM Jumps into the Next Gen Server Party with POWER9

February 15, 2018

IBM re-introduced its POWER9 lineup of servers this week, starting with 2-socket and 4-socket systems, with more variations coming in the months ahead as IBM, along with the rest of the IT vendor community, grapples with how to address changing data center needs. The first, the AC922, arrived last fall; DancingDinosaur covered it here. More models, the S922/S914/S924 and H922/H924/L922, are promised later this quarter.

The workloads organizations run these days are changing, often dramatically and quickly. One processor, no matter how capable, flexible, or efficient, is unlikely to do the job going forward. It will take an entire family of chips. That’s as true for Intel, AMD, and the other chip players as it is for IBM.

In some ways, IBM’s challenge is even quirkier. Its chips will need to support not only Linux but also IBMi and AIX. IBM simply cannot abandon its IBMi and AIX customer bases, so chips supporting IBMi and AIX are being built into the POWER9 family.

For IBMi the company is promising POWER9 exploitation for:

  • Expanded security for IBMi with TLS, secure APIs, and logs for SIEM solutions
  • Expanded install options, including an installation process using USB 3.0 media
  • Encryption and compression for cloud storage
  • Increasing the productivity of developers and administrators

This may sound trivial to those who have focused on the Linux world and work with x86 systems too, but it is not for a company still mired in productive yet aging IBMi systems.

IBM also is promising POWER9 goodies for AIX, its legacy Unix OS, including:

  • AIX Security: PowerSC and PowerSC MFA updates for malware intrusion prevention and strong authentication
  • New workload acceleration with shared memory communications over RDMA (SMC-R)
  • Improved availability: AIX Live Update enhancements; GDR 1.2; PowerHA 7.2
  • Improved cloud management: IBM Cloud PowerVC Manager for SDI; import/export
  • AIX 7.2 native support for POWER9 – e.g. enabling NVMe

Again, if you have been running Linux on z or LinuxONE this may sound antiquated, but AIX has not been considered state-of-the-art for years. NVMe alone gives it a big boost.

But despite all the nice things IBM is doing for IBMi and AIX, DancingDinosaur believes the company clearly is betting that POWER9 will cut into Intel x86 sales. That, however, is not a given. Intel is rolling out its own family of advanced Xeon processors under the Skylake code name. Different versions will be packaged and tuned for different workloads. At the fully configured high end they are rumored to be quite expensive. Just don’t expect POWER9 systems to be cheap either.

And the chip market is getting more crowded. As Timothy Prickett Morgan, analyst at The Next Platform, noted, various ARM chips—especially ThunderX2 from Cavium and Centriq 2400 from Qualcomm—can boost non-x86 numbers and divert sales from IBM’s POWER9 family. Also, AMD’s Epyc x86 processors have a good chance of stealing some market share from Intel’s Skylake. So POWER9 will have to fight for every sale IBM wants.

Morgan went on: IBM differentiated the hardware and the pricing with its NVLink versions, depending on the workload and the competition, with its most aggressive pricing and a leaner, cheaper microcode and hypervisor stack reserved for the Linux workloads the company is chasing. IBM very much wants to sell its Power-Linux combo against Intel’s Xeon-Linux and also keep AMD’s Epyc-Linux at bay. Whereas the POWER8 chip had the advantage over Intel’s Haswell and Broadwell Xeon E5 processors in memory capacity and memory bandwidth per socket, and could meet or beat the Xeons on some workloads, that advantage is not yet apparent with the POWER9.

With the POWER9, however, IBM will likely charge a little less for companies buying its Linux-only variants, observes Morgan, effectively enabling IBM to win Linux deals, particularly where data analytics and open source databases drive the customer’s use case. Similarly, some traditional simulation and modeling workloads in the HPC and machine learning areas are ripe for POWER9.

POWER9 is not just one chip. Packed into it are next-generation NVIDIA NVLink and OpenCAPI, which provide significantly faster performance for attached GPUs. The PCI-Express 4.0 interconnect runs at twice the speed of PCI-Express 3.0. The open POWER9 architecture also allows companies to mix a wide range of accelerators to meet various needs. Meanwhile, OpenCAPI can unlock coherent FPGAs to support varied accelerated storage, compute, and networking workloads. IBM also is counting on the 300+ members of the OpenPOWER Foundation and OpenCAPI Consortium to deliver innovations for POWER9. Much is happening: stay tuned to DancingDinosaur.
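The PCIe 3.0-to-4.0 doubling is easy to verify from the published per-lane transfer rates (8 vs. 16 GT/s, both generations using 128b/130b encoding):

```python
def pcie_lane_gb_s(transfer_gt_s: float) -> float:
    # Usable per-lane bandwidth: raw rate x 128b/130b encoding efficiency,
    # divided by 8 bits per byte.
    return transfer_gt_s * (128 / 130) / 8

gen3 = pcie_lane_gb_s(8.0)    # PCIe 3.0: 8 GT/s per lane
gen4 = pcie_lane_gb_s(16.0)   # PCIe 4.0: 16 GT/s per lane
print(f"PCIe 3.0 x16: {gen3 * 16:.1f} GB/s")
print(f"PCIe 4.0 x16: {gen4 * 16:.1f} GB/s")
```

An x16 slot goes from roughly 15.8 GB/s to 31.5 GB/s, which matters most for bandwidth-hungry attached accelerators like GPUs and FPGAs.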

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog.

IBM Boosts DevOps with ADDI on Z

February 9, 2018

IBM’s Application Discovery and Delivery Intelligence (ADDI) is an analytical platform for application modernization. It uses cognitive technologies to analyze mainframe applications so you can quickly discover and understand interdependencies and the impacts of change. You can use this intelligence to transform and renew these applications faster than ever, capitalize on time-tested mainframe code to engage the API economy, accelerate application transformation of your IBM Z hybrid cloud environment, and more.

Formerly, ADDI was known as EZSource. Back then, EZSource was designed to expedite digital transformations by unlocking core business logic and apps. Specifically, it enabled the IT team to pinpoint specific mainframe code in preparation for leveraging it through a hybrid cloud strategy. In effect, it enabled understanding of business-critical assets in preparation for deployment of a z-centered hybrid cloud. This in turn enabled enterprise DevOps, which was necessary to keep up with the pace of change overtaking existing business processes.

This wasn’t easy when EZSource initially arrived, and it still isn’t, although the intelligence built into ADDI makes it easier now. Originally it was intended to help the mainframe data center team to:

  • Identify API candidates to play in the API economy
  • Embrace micro services to deliver versatile apps fast
  • Identify code quality concerns, including dead code, to improve reliability and maintainability
  • Mitigate risk of change through understanding code, data, and schedule interdependencies
  • Aid in sizing the change effort
  • Automate documentation to improve understanding
  • Reduce the learning curve as new people are onboarded
  • Add application understanding to DevOps lifecycle information to identify opportunities for work optimization

Today, IBM describes Application Discovery and Delivery Intelligence (ADDI), its follow-up to EZSource, as an analytical platform for application modernization. It uses cognitive technologies to analyze mainframe applications so your team can quickly discover and understand interdependencies and impacts of any change. In theory you should be able to use this intelligence to transform and renew these applications more efficiently and productively. In short, it should allow you to leverage time-tested mainframe code to engage with the API economy and accelerate the application transformation on your IBM Z and hybrid cloud environment.

More specifically, it promises to enable your team to analyze a broad range of IBM and non-IBM programing languages, databases, workload schedulers, and environments. Enterprise application portfolios were built over decades using an ever-evolving set of technologies, so you need a tool with broad support, such as ADDI, to truly understand the relationships between application components and accurately determine the impacts of potential changes.

In practice, it integrates with mainframe environments and tools via a z/OS agent to automatically synchronize application changes. Without keeping your application analysis synchronized with the latest changes that your developers made, according to IBM, your analysis can get out of date and you risk missing critical changes.

In addition, it provides visual analysis integrated with leading IDEs. Data center managers are petrified of changing applications that still work, fearing they will inadvertently break them or slow performance. When modifying complex applications, you need to be able to quickly navigate the dependencies between application components and drill down to see relevant details. Once you understand the code, you can modify it at much lower risk. The integration between ADDI and IBM Developer for z (IDz) combines the leading mainframe IDE with the application understanding and analytics capabilities you need to safely and efficiently modify the code.

It also, IBM continues, cognitively optimizes your test suites. When you have a large code base to maintain and many tests to run, you must run the tests as efficiently as possible. ADDI correlates code coverage data and code changes with test execution records so you can identify which regression tests are the most critical, allowing you to optimize time and resources while reducing risk. It exposes poorly tested or complex code and empowers test teams with cognitive insights that turn awareness of trends into mitigation of future risks.
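The core idea behind coverage-based test selection can be sketched in a few lines. This is a simplified illustration of the kind of correlation IBM describes, not ADDI’s actual logic, and all module and test names are invented:

```python
# Map each regression test to the code artifacts its last run covered.
coverage = {
    "test_payments": {"billing.cbl", "ledger.cbl"},
    "test_reports":  {"reports.cbl"},
    "test_intake":   {"intake.cbl", "ledger.cbl"},
}
changed = {"ledger.cbl"}   # artifacts touched by the latest change set

# Re-run only the tests whose covered artifacts intersect the changed code.
selected = sorted(t for t, covered in coverage.items() if covered & changed)
print(selected)
```

Here only `test_payments` and `test_intake` need to be re-run, while `test_reports` can be safely skipped for this change; at the scale of a decades-old mainframe portfolio, that pruning is where the time and resource savings come from.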

Finally, ADDI intelligently identifies performance degradations before they hit production. It correlates runtime performance data with application discovery data and test data to quickly pinpoint performance degradation and narrow down the code artifacts to those that are relevant to the cause of bad performance. This enables early detection of performance issues and speeds resolution.

What’s the biggest benefit of ADDI on the Z? It enables your data center to play a central role in digital transformation, a phrase spoken by every c-level executive today as a holy mantra. But more importantly, it will keep your mainframe relevant.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog.

Value and Power of LinuxONE Emperor II

February 4, 2018

There is much value in the mainframe, but it doesn’t become clear until you do a full TCO analysis. When you talk to an IBMer about the cost of a mainframe, the conversation immediately shifts to TCO, usually in the form of how many x86 systems you would have to deploy to handle a comparable workload with similar quality of service. The LinuxONE Emperor II, introduced in September, can win those comparisons.

LinuxONE Emperor II

Proponents of x86 boast about the low acquisition cost of x86 systems. They are right if you are only thinking about a low initial acquisition cost. But you also have to think about the cost of software for each low-cost core you purchase, and for many enterprise workloads you will need to acquire a lot of cores. This is where costs can mount quickly.

As a result, software will likely become the highest TCO item because many software products are priced per core. Often the amount charged is determined by the server’s maximum number of physical cores, regardless of whether they are actually activated. In addition, some architectures require more cores per workload. Ouch! An inexpensive device suddenly becomes a pricy machine when all those cores are tallied and priced.

Finally, x86-to-IBM Z core ratios differ per workload, but x86 almost invariably requires more cores than a z-based workload; remember, any LinuxONE is a Z system. For example, the same WebSphere workload that requires 10 to 12 x86 cores may require only one IFL on the Z. The lesson here: whether you’re talking about system software or middleware, you have to consider the impact of software pricing on TCO.

The Emperor II delivers stunning specs. The machine can be packed with up to 170 cores, as much as 32 TB of memory, and 160 PCIe slots. And it is flexible: use this capacity, for instance, to add more system resources, cores or memory, to service an existing Linux instance or to clone more Linux instances. Think of it as scale-out capability on steroids, taking you far beyond what you can achieve in the x86 world, and with just a few keystrokes. As IBM puts it, you might:

  • Dynamically add cores, memory, I/O adapters, devices, and network cards without disruption.
  • Grow horizontally by adding Linux instances or grow vertically by adding resources (memory, cores, slots) to existing Linux guests.
  • Provision for peak utilization.
  • After the peak subsides, automatically return unused resources to the resource pool for reallocation to another workload.
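The elastic behavior those bullets describe can be modeled as a simple core pool: provision a guest for peak, then return unused cores for another workload to claim. This toy Python sketch is purely illustrative; the guest names and core counts are invented.

```python
class CorePool:
    """Toy model of elastic core allocation on a shared machine (illustrative only)."""

    def __init__(self, total_cores: int):
        self.free = total_cores
        self.allocated = {}  # guest name -> cores currently held

    def provision(self, guest: str, cores: int) -> None:
        """Give a Linux guest more cores, if the pool has them."""
        if cores > self.free:
            raise RuntimeError("not enough free cores in the pool")
        self.free -= cores
        self.allocated[guest] = self.allocated.get(guest, 0) + cores

    def release(self, guest: str, cores: int) -> None:
        """Return cores to the pool so another workload can use them."""
        held = self.allocated.get(guest, 0)
        cores = min(cores, held)
        self.allocated[guest] = held - cores
        self.free += cores

# Provision a guest for peak demand, then hand back what it no longer needs.
pool = CorePool(total_cores=170)     # the Emperor II's maximum core count
pool.provision("linux-guest-1", 40)  # peak-time allocation
pool.release("linux-guest-1", 30)    # peak subsides; 30 cores go back to the pool
print(pool.free, pool.allocated)
```

The point of the model: capacity moves between guests without anyone racking new servers.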

So, what does this mean for the typical enterprise Linux data center? IBM often cites a large insurance firm as an example. The insurer needed fast and flexible provisioning for its database workloads, and its approach had been to deploy more x86 servers to address growth. Unfortunately, managing the software for all those cores became time consuming and costly. The company ended up with 32 x86 servers and 768 cores running 384 licenses of a competitor’s database.

By leveraging elastic pricing on the Emperor II, it needed only one machine running 63 IFLs and 64 licenses of the competitor’s database. It estimated savings of $15.6 million over 5 years just by eliminating charges for unused cores. (Full disclosure: these figures are provided by IBM; DancingDinosaur did not interview the insurer to verify them.) Also note there are many variables at play here: workloads and architecture, usage patterns, labor costs, and more. As IBM warns: your results may vary.
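As a sanity check on those IBM-supplied figures, a few lines of arithmetic show what annual cost per eliminated license the claimed savings imply. This is a back-of-envelope calculation, not data from the insurer.

```python
# Back-of-envelope check on the figures cited above (IBM-supplied, unverified).
x86_licenses = 384
ifl_licenses = 64
savings_total = 15_600_000  # dollars, over five years
years = 5

licenses_eliminated = x86_licenses - ifl_licenses
implied_annual_cost_per_license = savings_total / years / licenses_eliminated

print(f"{licenses_eliminated} licenses eliminated; implied annual cost "
      f"per license: ${implied_annual_cost_per_license:,.0f}")
```

An implied cost of roughly $9,750 per database license per year is at least in a plausible range for enterprise database pricing, which is all a back-of-envelope check can tell you.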

And then there is security. Since the Emperor II is a Z it delivers all the security of the newest z14, although in a slightly different form. Specifically, it provides:

  • Ultimate workload isolation and pervasive encryption through Secure Service Containers
  • Encryption of data at rest without application change and with better performance than x86
  • Protection of data in flight over the network with full end-to-end network security
  • Use of Protected Keys to secure data without giving up performance
  • Industry-leading secure Java performance via TLS (2-3x faster than Intel)

BTW, the Emperor II also anchors IBM’s Blockchain cloud service, which calls for security to the max. In the end, the Emperor II is unlike any x86 Linux system:

  • EAL 5+ isolation, best in class crypto key protection, and Secure Service Containers
  • 640 Power cores in its I/O channels (not included in the core count)
  • Leading I/O capacity and performance in the industry
  • IBM’s shared memory vertical scale architecture with a better architecture for stateful workloads like databases and systems of record
  • Hardware designed to deliver good response time even at 100% utilization, which simplifies the solution and avoids the extra cost of the utilization safety margin x86 users are accustomed to keeping.

This goes far beyond TCO.  Just remember all the things the Emperor II brings: scalability, reliability, container-based security and flexibility, and more.

…and Go Pats!

DancingDinosaur is Alan Radding, a Boston-based veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing here.

IBM Halts Losing Quarterly Slide

January 25, 2018

With all due respect to Casey at the Bat author Ernest Thayer, joy may have returned to Mudville. IBM finally broke its streak of 22 consecutive losing quarters and posted positive results in 4Q17: fourth-quarter revenue of $22.5 billion, up 4 percent. And that was just the start.

Watson and Weather Co. track flu

IBM is counting on its strategic imperatives to come through big, and in 2017 they did. Full-year strategic imperatives revenue hit $36.5 billion, up 11 percent, and now represents 46 percent of IBM revenue. Similarly, IBM is making gains in the highly competitive cloud business, where it is fighting to position itself among the top ranks of formidable cloud players: Google, Amazon, and Microsoft. IBM did quite respectably there, posting $17 billion in cloud revenue, up 24 percent year over year.

DancingDinosaur readers will be interested to know that some of IBM’s business segments, which had been a steady drain on IBM revenue, turned things around in the 4th quarter. For example, Systems (systems hardware and operating systems software) saw revenues of $3.3 billion, up 32 percent, driven by growth in IBM Z, Power Systems, and storage. That’s important to readers charged with planning their organization’s future around the Z or Power machines. They now can be more confident that IBM won’t sell off the business tomorrow, as it did with its x86 systems.

So where might IBM go in the future? “Our strategic imperatives revenue again grew at a double-digit rate and now represents 46 percent of our total revenue, and we are pleased with our overall revenue growth in the quarter,” said Ginni Rometty, IBM chairman, president, and CEO. She continued: “During 2017, we established IBM as the blockchain leader for business. Looking ahead, we are uniquely positioned to help clients use data and AI to build smarter businesses.”

Added James Kavanaugh, IBM CFO: “Over the past several years we have invested aggressively in technology and our people to reposition IBM.  2018 will be all about reinforcing IBM’s leadership position,” he continued, “in key high-value segments of the IT industry, including cloud, AI, security and blockchain.”

IBM has done well in some business and technology segments. Specifically, the company reported gains in revenues from analytics, up 9 percent, mobile, up 23 percent, and security, up a whopping 132 percent.

Other segments have not done as well. Technology Services & Cloud Platforms (which includes infrastructure services, technical support services, and integration software) continues to lose money. A number of investment analysts are happy with IBM’s financials but are not optimistic about what they portend for IBM’s future.

For instance, Bert Hochfeld, a long/short equity, growth, and event-driven research analyst, writes in Seeking Alpha: “The real reason why strategic imperatives and cloud showed relatively robust growth last quarter has nothing to do with IBM’s pivots and everything to do with the success of IBM’s mainframe cycle. IBM’s Z system achieved 71% growth last quarter compared to 62% in the prior quarter. New Z Systems are being delivered with pervasive encryption, they are being used to support hybrid cloud architectures, and they are being used to support Blockchain solutions… Right now, the mainframe performance is above the prior cycle (z13) and consistent with the z12 cycle a few years ago. And IBM has enjoyed some reasonable success with its all-flash arrays in the storage business. Further, the company’s superscalar offering, Power9, is having success and, as many of its workloads are used for AI, its revenues get counted as part of strategic initiatives. But should investors count on a mainframe cycle and a high-performance computer cycle in making a long-term investment decision regarding IBM shares?”

He continued: “IBM management has suggested that some of the innovations in the current product range, including blockchain, cryptography, security, and reliability, will make this cycle different, and perhaps longer, than other cycles. The length of the mainframe cycle is a crucial component in management’s earnings estimate. It needs to continue at elevated levels at least for another couple of quarters. While that is probably more likely, is it really prudent to base an investment judgement on the length of a mainframe cycle?”

Of course, many DancingDinosaur readers are basing their career and employment decisions on the mainframe or Power Systems. Let’s hope this quarter’s success encourages them; it sure beats 22 consecutive quarters of revenue declines.

Do you remember how Thayer’s poem ends? With the hopes and dreams of Mudville riding on him, it is the bottom of the 9th; Casey takes a mighty swing and… strikes out! Let’s hope this isn’t IBM.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing here.

Compuware Brings Multi-Platform DevOps to the Z

January 19, 2018

The rush to DevOps for Z has started. IBM jumped on the bandwagon with an updated release of IBM Developer for z Systems (IDz) V14.1.1, which delivers new capabilities and product maintenance to users sooner than IBM’s traditional release model did.

Even more recently, Compuware, which describes DevOps and the mainframe as the ultimate win-win, announced a program to advance DevOps on the mainframe with integrated COBOL code coverage metrics for multi-platform DevOps. This will make it possible for all developers in the organization to fluidly handle multi-platform code, including mainframe code, in a fast-delivery DevOps approach.

SonarSource-Compuware DevOps Dashboard

The new Compuware-SonarSource integrations are expected to make it easier for enterprise DevOps teams to track and validate code coverage of COBOL application testing, with the same ease and the same processes they use for Java and other more mainstream code. This ability to automate code coverage tracking across platforms is yet another example of empowering enterprise IT to apply the same proven and essential Agile, DevOps, and Continuous Integration/Continuous Delivery (CI/CD) disciplines to core systems-of-record (mainframe) as well as systems-of-engagement (mostly distributed systems).

Code coverage metrics provide insight into the degree to which source code is executed during a test: which lines of code have been executed, and what percentage of an application has been tested. These measurements allow IT teams to understand the scope and effectiveness of their testing as code moves toward production.
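At its simplest, line coverage is just the share of executable lines a test run actually touches, which a few lines of Python can illustrate (the line numbers here are invented):

```python
# Minimal illustration of line-level code coverage: the percentage of
# executable lines actually hit during a test run.

def line_coverage(executable_lines: set, executed_lines: set) -> float:
    """Return the percentage of executable lines that the tests executed."""
    hit = executable_lines & executed_lines
    return 100.0 * len(hit) / len(executable_lines)

executable = {10, 11, 12, 20, 21, 30, 31, 32}  # lines that can run (hypothetical)
executed = {10, 11, 12, 20, 21}                # lines the test actually hit

pct = line_coverage(executable, executed)
print(f"coverage: {pct:.1f}%")  # 5 of 8 lines were exercised
```

Real coverage tools add branch and path metrics on top of this, but the line-level percentage is the number dashboards such as SonarQube’s most commonly display.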

DevOps has become increasingly critical to mainframe shops that risk becoming irrelevant and even replaceable if they cannot turn around code improvements fast enough. The mainframe continues to be valued as the secure repository of the organization’s critical data but that won’t hold off those who feel the mainframe is a costly extravagance, especially when mainframe shops can’t turn out code updates and enhancements as fast as systems regarded as more inherently agile.

As Compuware puts it, the latest integrations automatically feed code coverage results captured by its Topaz for Total Test into SonarSource’s SonarQube. This gives DevOps teams an accurate, unified view of quality metrics and milestones across platforms enterprise-wide.

For z shops specifically, such continuous code quality management across platforms promises high value to large enterprises, enabling them to bring new digital deliverables to market, which increasingly is contingent on simultaneously updating code across both back-end mainframe systems-of-record and front-end mobile/web and distributed systems-of-engagement.

Specifically, notes Compuware, integration between Topaz for Total Test and SonarQube enables DevOps teams to:

  • Gain insight into the coverage of code being promoted for all application components across all platforms
  • Improve the rigor of digital governance with strong enforcement of mainframe QA policies for coding errors, data leakage, credential vulnerabilities, and more
  • Shorten feedback loops to speed time-to-benefit and more promptly address shortfalls in COBOL skills and bottlenecks in mainframe DevOps processes

Topaz for Total Test captures code coverage metrics directly from the source code itself, rather than from a source listing, as outdated mainframe tools do. This direct capture is more accurate and eliminates extra development effort, Compuware reported.

The new integration actually encompasses a range of tools and capabilities. For instance:

From within a Compuware Xpediter debug session, a developer can kick off a Compuware Topaz for Total Test automated unit test and set it up to collect code coverage data as it runs. The code coverage metrics then can be fed automatically into SonarSource’s SonarQube, where they can be displayed in a dashboard along with other quality metrics, such as lines going to subprograms.

It also integrates with Jenkins as a Continuous Integration (CI) platform, which acts as a process orchestrator and interacts with an SCM tool such as Compuware ISPW, which automates software quality checks and pushes metrics to SonarQube, among other things. ISPW also is where code gets promoted through the various stages of the lifecycle and ultimately deployed. Finally, Topaz is Compuware’s Eclipse-based IDE from which developers drive all these activities.

The Compuware announcement further delivers on Compuware’s promise to mainstream the mainframe; that is, to provide a familiar, modern, and intuitive multi-platform mainframe development environment, integrated with state-of-the-art DevOps tools, for veteran mainframe developers and, more importantly, for those newly arrived from the distributed world. In short, this is how you keep your Z relevant and invaluable going forward.

** Special note regarding last week’s DancingDinosaur reporting on the chip problems: don’t count on an immediate solution from the vendors anytime soon; not Google, IBM, Intel, AMD, ARM, or others. The word among chip geeks is that the dependencies are too complex to be fully fixed with a patch. A proper fix probably requires new chip designs and fabrication. DancingDinosaur will keep you posted.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing here.

Meltdown and Spectre Attacks Require IBM Mitigation

January 12, 2018

The chip security threats dubbed Meltdown and Spectre, revealed last month, apparently will require IBM threat mitigation in the form of code and patching. IBM has been reluctant to make a major public announcement, but word finally is starting to percolate publicly.

Courtesy: Preparis Inc.

Ever since January 4, one day after researchers disclosed the Meltdown and Spectre attack methods against Intel, AMD, and ARM processors, the Internet has been buzzing. On Wednesday, Jan. 10, Eduard Kovacs wrote that IBM had informed customers it had started analyzing the impact on its own products. The day before, IBM revealed that its POWER processors are affected.

A January 11 report by Virendra Soni from the Consumer Electronics Show (CES) 2018 in Las Vegas described how Nvidia CEO Jensen Huang revealed the scramble among technology leaders to find patches for the Spectre and Meltdown attacks. These attacks enable hackers to steal private information from systems running processors from Intel, AMD, and ARM.

For DancingDinosaur readers, that puts the latest POWER chips and systems at risk. At this point, it is not clear how far beyond POWER systems the problem reaches. “We believe our GPU hardware is immune. As for our driver software, we are providing updates to help mitigate the CPU security issue,” Nvidia wrote in its security bulletin.

Nvidia also reported releasing updates for its software drivers that interact with vulnerable CPUs and operating systems. The vulnerabilities come in three variants: Variant 1, Variant 2, and Variant 3. Nvidia has released driver updates for Variants 1 and 2 and notes that none of its software is vulnerable to Variant 3. The company reported providing security updates for these products: GeForce, Quadro, NVS driver software, Tesla driver software, and GRID driver software.

IBM has made no public comment on which of its systems are affected, but Red Hat has. According to Soni: “Red Hat last week reported that IBM’s System Z and POWER platforms are exploited by Spectre and Meltdown.”

So what is a data center manager with a major investment in these systems to do? Meltdown and Spectre “obviously are a very big problem,” reports Timothy Prickett Morgan, a leading analyst at The Next Platform, an authoritative website following the server industry. “Chip suppliers and operating system and hypervisor makers have known about these exploits since last June, and have been working behind the scenes to provide corrective countermeasures to block them… but rumors about the speculative execution threats forced the hands of the industry, and last week Google put out a notice about the bugs and then followed up with details about how it has fixed them in its own code.”

Chipmakers AMD and ARM put out statements saying they are affected only by Variant 1 of the speculative execution exploits (a Spectre variant known as bounds check bypass) and Variant 2 (also a Spectre exploit, known as branch target injection). AMD, reports Morgan, also emphasized that it has absolutely no vulnerability to Variant 3, a speculative execution exploit called rogue data cache load and known colloquially as Meltdown. This is due, he noted, to architectural differences between Intel’s X86 processors and AMD’s clones.

As for IBM, Morgan noted that its Power chips are affected, at least back to the Power7 from 2010 and continuing forward to the brand new Power9. In its statement, IBM said it would have firmware patches out for Power machines using Power7+, Power8, Power8+, and Power9 chips on January 9 (a date that has since passed), along with Linux patches for those machines; patches for the company’s own AIX Unix and proprietary IBM i operating systems will not be available until February 12. The System z mainframe processors also use speculative execution, so they should, in theory, be susceptible to Spectre but maybe not Meltdown.

That still leaves a question about the vulnerability of the IBM LinuxONE and the processors spread throughout the z systems. Ask your IBM rep when you can expect mitigation for those too.

Merely patching these costly systems, however, should not leave you satisfied. There is a performance price that data centers will pay. Google noted a negligible impact on performance after it deployed one fix across its millions of Linux systems, said Morgan. There has been speculation, Google continued, that deploying KPTI (a mitigation fix) causes significant performance slowdowns. As far as is known, there is no fix for Spectre Variant 1 attacks, which have to be fixed on a binary-by-binary basis, according to Google.

Red Hat went further and actually ran benchmarks. The company tested its Enterprise Linux 7 release on servers using Intel’s “Haswell” Xeon E5 v3, “Broadwell” Xeon E5 v4, and “Skylake” (the upcoming Xeon SP) processors and measured impacts ranging from 1 to 19 percent. You can demand these impacts be reflected in reduced system prices.
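One rough way to frame that demand, assuming the slowdown translates one-for-one into lost capacity: convert Red Hat’s 1-19 percent range into a dollar haircut on a server price. The price here is hypothetical; only the percentage range comes from the benchmarks cited above.

```python
# Convert a benchmarked slowdown into an equivalent capacity (and price) haircut.
# The 1-19 percent range is Red Hat's published result; the price is made up.
server_price = 50_000  # hypothetical x86 server price, dollars

for slowdown in (0.01, 0.19):  # Red Hat's reported impact range
    lost_capacity_value = server_price * slowdown
    print(f"{slowdown:.0%} slowdown ≈ ${lost_capacity_value:,.0f} of lost capacity")
```

It is a crude model; real workloads saw different impacts depending on how syscall-heavy they were, so benchmark your own before negotiating.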

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing here.

