Posts Tagged ‘IBM’

IBM Jumps into the Next Gen Server Party with POWER9

February 15, 2018

IBM expanded its POWER9 lineup of servers this week, starting with 2-socket and 4-socket systems, with more variations coming in the months ahead as IBM, along with the rest of the IT vendor community, grapples with how to address changing data center needs. The first POWER9 machine, the AC922, arrived last fall; DancingDinosaur covered it here. More models, the S922/S914/S924 and H922/H924/L922, are promised later this quarter.

The workloads organizations are running these days are changing, often dramatically and quickly. One processor, no matter how capable, flexible, or efficient, is unlikely to do the job going forward. It will take an entire family of chips. That’s as true for Intel, AMD, and the other chip players as it is for IBM.

In some ways, IBM’s challenge is even quirkier. Its chips will need to support not only Linux but also IBMi and AIX. IBM simply cannot abandon its IBMi and AIX customer bases, so support for IBMi and AIX is being built into the POWER9 family.

For IBMi the company is promising POWER9 exploitation for:

  • Expanded security for IBMi with TLS, secure APIs, and logs for SIEM solutions
  • Expanded install options, including an installation process using USB 3.0 media
  • Encryption and compression for cloud storage
  • Increased productivity for developers and administrators

This may sound trivial to those who have focused on the Linux world and work with x86 systems too, but it is not for organizations still mired in productive yet aging IBMi systems.

IBM also is promising POWER9 goodies for AIX, its legacy Unix OS, including:

  • AIX Security: PowerSC and PowerSC MFA updates for malware intrusion prevention and strong authentication
  • New workload acceleration with shared memory communications over RDMA (SMC-R)
  • Improved availability: AIX Live Update enhancements; GDR 1.2; PowerHA 7.2
  • Improved cloud management: IBM Cloud PowerVC Manager for SDI; Import/Export
  • AIX 7.2 native support for POWER9, e.g., enabling NVMe

Again, if you have been running Linux on z or LinuxONE this may sound antiquated, but AIX has not been considered state-of-the-art for years. NVMe support alone gives it a big boost.

But despite all the nice things IBM is doing for IBMi and AIX, DancingDinosaur believes the company clearly is betting POWER9 will cut into Intel x86 sales. But that is not a given. Intel is rolling out its own family of advanced x86 Xeon machines under the Skylake code name. Different versions will be packaged and tuned to different workloads. They are rumored, at the fully configured high end, to be quite expensive. Just don’t expect POWER9 systems to be cheap either.

And the chip market is getting more crowded. As Timothy Prickett Morgan, analyst at The Next Platform, noted, various ARM chips, especially ThunderX2 from Cavium and Centriq 2400 from Qualcomm, can boost non-x86 numbers and divert sales from IBM’s POWER9 family. Also, AMD’s Epyc x86 processors have a good chance of stealing some market share from Intel’s Skylake. So POWER9 will have to fight for every sale IBM wants.

Morgan went on: IBM differentiated the hardware and the pricing with its NVLink versions, depending on the workload and the competition, with its most aggressive pricing and a leaner, cheaper microcode and hypervisor stack reserved for the Linux workloads the company is chasing. IBM very much wants to sell its Power-Linux combo against Intel’s Xeon-Linux and also keep AMD’s Epyc-Linux at bay. The Power8 chip had the advantage over Intel’s Haswell and Broadwell Xeon E5 processors in memory capacity and memory bandwidth per socket, and could meet or beat the Xeons on some workloads; whether POWER9 holds the same edge is not yet apparent.

With the POWER9, however, IBM will likely charge a little less for its Linux-only variants, observes Morgan, effectively enabling IBM to win Linux deals, particularly where data analytics and open source databases drive the customer’s use case. Similarly, some traditional simulation and modeling workloads in the HPC and machine learning areas are ripe for POWER9.

POWER9 is more than one chip. Packed into it are next-generation NVIDIA NVLink and OpenCAPI to provide significantly faster performance for attached GPUs. The PCI-Express 4.0 interconnect will be twice the speed of PCI-Express 3.0. The open POWER9 architecture also allows companies to mix a wide range of accelerators to meet various needs. Meanwhile, OpenCAPI can unlock coherent FPGAs to support varied accelerated storage, compute, and networking workloads. IBM also is counting on the 300+ members of the OpenPOWER Foundation and OpenCAPI Consortium to launch innovations for POWER9. Much is happening: stay tuned to DancingDinosaur.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.

IBM Boosts DevOps with ADDI on Z

February 9, 2018

IBM’s Application Discovery and Delivery Intelligence (ADDI) is an analytical platform for application modernization. It uses cognitive technologies to analyze mainframe applications so you can quickly discover and understand interdependencies and impacts of change. You can use this intelligence to transform and renew these applications faster than ever. Capitalize on time-tested mainframe code to engage the API economy. Accelerate application transformation of your IBM Z hybrid cloud environment and more.

Formerly, ADDI was known as EZSource. Back then EZSource was designed to expedite digital transformations by unlocking core business logic and apps. Specifically, it enabled the IT team to pinpoint specific mainframe code in preparation for leveraging it through a hybrid cloud strategy. In effect, it enabled the understanding of business-critical assets in preparation for deployment of a z-centered hybrid cloud. This also enabled enterprise DevOps, which was necessary to keep up with the pace of change overtaking existing business processes.

This wasn’t easy when EZSource initially arrived, and it still isn’t, although the intelligence built into ADDI makes it easier now. Originally the tool was intended to help the mainframe data center team to:

  • Identify API candidates to play in the API economy
  • Embrace micro services to deliver versatile apps fast
  • Identify code quality concerns, including dead code, to improve reliability and maintainability
  • Mitigate risk of change through understanding code, data, and schedule interdependencies
  • Aid in sizing the change effort
  • Automate documentation to improve understanding
  • Reduce the learning curve as new people are onboarded
  • Add application understanding to DevOps lifecycle information to identify opportunities for work optimization

Today, IBM describes ADDI, the follow-up to EZSource, as an analytical platform for application modernization. It uses cognitive technologies to analyze mainframe applications so your team can quickly discover and understand interdependencies and the impacts of any change. In theory you should be able to use this intelligence to transform and renew these applications more efficiently and productively. In short, it should allow you to leverage time-tested mainframe code to engage with the API economy and accelerate application transformation across your IBM Z and hybrid cloud environment.

More specifically, it promises to enable your team to analyze a broad range of IBM and non-IBM programming languages, databases, workload schedulers, and environments. Enterprise application portfolios were built over decades using an ever-evolving set of technologies, so you need a tool with broad support, such as ADDI, to truly understand the relationships between application components and accurately determine the impacts of potential changes.

In practice, it integrates with mainframe environments and tools via a z/OS agent to automatically synchronize application changes. Without keeping your application analysis synchronized with the latest changes that your developers made, according to IBM, your analysis can get out of date and you risk missing critical changes.

In addition, it provides visual analysis integrated with leading IDEs. Data center managers are petrified of changing applications that still work, fearing they will inadvertently break them or slow performance. When modifying complex applications, you need to be able to quickly navigate the dependencies between application components and drill down to see relevant details. After you understand the code, you can then modify it at much lower risk. The integration between ADDI and IBM Developer for z (IDz) combines the leading mainframe IDE with the application understanding and analytics capabilities you need to safely and efficiently modify the code.

It also, IBM continues, cognitively optimizes your test suites. When you have a large code base to maintain and many tests to run, you want to run the tests that matter most. ADDI correlates code coverage data and code changes with test execution records to enable you to identify which regression tests are the most critical, allowing you to optimize time and resources while reducing risk. It exposes poorly tested or complex code and empowers test teams with cognitive insights that turn awareness of trends into mitigation of future risks.
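ADDI’s actual analytics are proprietary, but the core idea of coverage-based test selection is easy to illustrate. Here is a minimal sketch in Python; the data structures, names, and scoring are hypothetical, not IBM’s implementation.

```python
# Hypothetical sketch: rank regression tests by how much of a change set
# each one actually exercises, using per-test coverage data.

def select_regression_tests(coverage_map, changed_modules):
    """coverage_map: {test_name: set of modules the test executes};
    changed_modules: set of modules touched by the latest change."""
    scored = []
    for test, covered in coverage_map.items():
        overlap = covered & changed_modules
        if overlap:
            # Score each test by the fraction of the change it covers.
            scored.append((len(overlap) / len(changed_modules), test))
    return [test for _, test in sorted(scored, reverse=True)]

coverage_map = {
    "TEST_CLAIMS01": {"CLAIMPAY", "CUSTADDR"},
    "TEST_BILLING7": {"BILLCALC"},
    "TEST_CUST03":   {"CUSTADDR"},
}
print(select_regression_tests(coverage_map, {"CUSTADDR"}))
# ['TEST_CUST03', 'TEST_CLAIMS01'] -- TEST_BILLING7 can safely wait
```

A production tool would weight this with execution records and code complexity, but even this toy version shows why correlating coverage with changes shrinks a regression run.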

Finally, ADDI intelligently identifies performance degradations before they hit production. It correlates runtime performance data with application discovery data and test data to quickly pinpoint performance degradation and narrow down the code artifacts to those that are relevant to the cause of bad performance. This enables early detection of performance issues and speeds resolution.

What’s the biggest benefit of ADDI on the Z? It enables your data center to play a central role in digital transformation, a phrase spoken by every c-level executive today as a holy mantra. But more importantly, it will keep your mainframe relevant.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

Value and Power of LinuxONE Emperor II

February 4, 2018

There is much value in the mainframe, but it doesn’t become clear until you do a full TCO analysis. When you talk to an IBMer about the cost of a mainframe, the conversation immediately shifts to TCO, usually in the form of how many x86 systems you would have to deploy to handle a comparable workload with similar quality of service. The LinuxONE Emperor II, introduced in September, can come out ahead in those comparisons.

LinuxONE Emperor II

Proponents of x86 boast about the low acquisition cost of x86 systems. They are right if you are only thinking about a low initial acquisition cost. But you also have to think about the cost of software for each low-cost core you purchase, and for many enterprise workloads you will need to acquire a lot of cores. This is where costs can mount quickly.

As a result, software will likely become the highest TCO item because many software products are priced per core.  Often the amount charged for cores is determined by the server’s maximum number of physical cores, regardless of whether they actually are activated. In addition, some architectures require more cores per workload. Ouch! An inexpensive device suddenly becomes a pricy machine when all those cores are tallied and priced.

Finally, x86 to IBM Z core ratios differ per workload, but x86 almost invariably requires more cores than a z-based workload; remember, any LinuxONE is a Z System. For example, the same WebSphere workload that requires 10-12 cores on x86 may require only one IFL on the Z. The lesson here: whether you’re talking about system software or middleware, you have to consider the impact of software on TCO.
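The per-core math is worth doing explicitly. A back-of-the-envelope sketch follows; the $2,000-per-core annual license fee is invented purely for illustration, and real pricing varies widely by product and contract.

```python
# Illustrative only: per-core software cost using the 10-12 x86 cores
# vs. 1 IFL ratio cited above. The $2,000/core/year figure is invented.
price_per_core_year = 2_000   # hypothetical annual license fee per core
x86_cores, z_ifls = 12, 1     # cores needed for the same WebSphere workload
years = 5

x86_sw = x86_cores * price_per_core_year * years
z_sw = z_ifls * price_per_core_year * years
print(f"x86 software over {years} yrs: ${x86_sw:,}")  # $120,000
print(f"Z software over {years} yrs:  ${z_sw:,}")     # $10,000
```

Multiply that gap across dozens of workloads, remember that some products price on the server’s maximum physical cores whether activated or not, and the low x86 acquisition price starts to look less decisive.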

The Emperor II delivers stunning specs. The machine can be packed with up to 170 cores, as much as 32 TB of memory, and 160 PCIe slots. And it is flexible; use this capacity, for instance, to add more system resources—cores or memory—to service an existing Linux instance or clone more Linux instances. Think of it as scale-out capabilities on steroids, taking you far beyond what you can achieve in the x86 world and do it with just a few keystrokes. As IBM puts it, you might:

  • Dynamically add cores, memory, I/O adapters, devices, and network cards without disruption.
  • Grow horizontally by adding Linux instances or grow vertically by adding resources (memory, cores, slots) to existing Linux guests.
  • Provision for peak utilization.
  • After the peak subsides, automatically return unused resources to the resource pool for reallocation to another workload.

So, what does this mean for the typical enterprise Linux data center? IBM often cites a large insurance firm as an example. The insurer needed fast and flexible provisioning for its database workloads, and its approach had been to deploy more x86 servers to address growth. Unfortunately, managing software for all those cores had become time consuming and costly. The company was running 32 x86 servers with 768 cores and 384 competitor database licenses.

By leveraging elastic pricing on the Emperor II, it needed only one machine running 63 IFLs serving 64 competitor database licenses. It estimated savings of $15.6 million over 5 years just by eliminating charges for unused cores. (Full disclosure: these figures are provided by IBM; DancingDinosaur did not interview the insurer to verify this data.) Also, note there are many variables at play here around workloads and architecture, usage patterns, labor costs, and more. As IBM warns: your results may vary.

And then there is security. Since the Emperor II is a Z it delivers all the security of the newest z14, although in a slightly different form. Specifically, it provides:

  • Ultimate workload isolation and pervasive encryption through Secure Service Containers
  • Encryption of data at rest without application change and with better performance than x86
  • Protection of data in flight over the network with full end-to-end network security
  • Use of Protected Keys to secure data without giving up performance
  • Industry-leading secure Java performance via TLS (2-3x faster than Intel)

BTW the Emperor II also anchors IBM’s Blockchain cloud service, which calls for security to the max. In the end, the Emperor II is unlike any x86 Linux system, delivering:

  • EAL 5+ isolation, best in class crypto key protection, and Secure Service Containers
  • 640 Power cores in its I/O channels (not included in the core count)
  • Leading I/O capacity and performance in the industry
  • IBM’s shared memory vertical scale architecture with a better architecture for stateful workloads like databases and systems of record
  • Hardware designed to give good response time even with 100% utilization, which simplifies the solution and reduces the extra costs x86 users assume are necessary because they’re used to keeping a utilization safety margin.

This goes far beyond TCO.  Just remember all the things the Emperor II brings: scalability, reliability, container-based security and flexibility, and more.

…and Go Pats!

DancingDinosaur is Alan Radding, a Boston-based veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

IBM Halts Losing Quarterly Slide

January 25, 2018

With all due respect to Casey at the Bat author Ernest Thayer, joy may have returned to Mudville. IBM finally broke its streak of 22 consecutive quarters of declining revenue and posted positive results in 4Q17: fourth-quarter revenue of $22.5 billion, up 4 percent, and that was just the start.

Watson and Weather Co. track flu

IBM is counting on its strategic imperatives to come through big, and they did in 2017. Full-year strategic imperatives revenue reached $36.5 billion, up 11 percent, and now represents 46 percent of IBM revenue. Similarly, IBM is making gains in the highly competitive cloud business, where it is fighting to position itself among the top ranks of formidable cloud players: Google, Amazon, and Microsoft. IBM did quite respectably, posting $17 billion in cloud revenue, up 24 percent year to year.

DancingDinosaur readers will be interested to know that some of IBM’s business segments, which had been a steady drain on IBM revenue, turned things around in the 4th quarter. For example, Systems (systems hardware and operating systems software) saw revenues of $3.3 billion, up 32 percent, driven by growth in IBM Z, Power Systems, and storage. That’s important to readers charged with planning their organization’s future with the Z or Power machines. They now can be more confident that IBM won’t sell off the business tomorrow as it did with its x86 systems.

So where might IBM go in the future? “Our strategic imperatives revenue again grew at a double-digit rate and now represents 46 percent of our total revenue, and we are pleased with our overall revenue growth in the quarter,” said Ginni Rometty, IBM chairman, president, and CEO. She continued: “During 2017, we established IBM as the blockchain leader for business. Looking ahead, we are uniquely positioned to help clients use data and AI to build smarter businesses.”

Added James Kavanaugh, IBM CFO: “Over the past several years we have invested aggressively in technology and our people to reposition IBM.  2018 will be all about reinforcing IBM’s leadership position,” he continued, “in key high-value segments of the IT industry, including cloud, AI, security and blockchain.”

IBM has done well in some business and technology segments. Specifically, the company reported gains in revenues from analytics, up 9 percent, mobile, up 23 percent, and security, up a whopping 132 percent.

Other segments have not done as well. Technology Services & Cloud Platforms (includes infrastructure services, technical support services, and integration software) continue to lose money. A number of investment analysts are happy with IBM’s financials but are not optimistic about what they portend for IBM’s future.

For instance, Bert Hochfeld, long/short equity, growth, event-driven, research analyst, writes in Seeking Alpha: “the real reason why strategic imperatives and cloud showed relatively robust growth last quarter has nothing to do with IBM’s pivots and everything to do with the success of IBM’s mainframe cycle. IBM’s Z system achieved 71% growth last quarter compared to 62% in the prior quarter. New Z Systems are being delivered with pervasive encryption, they are being used to support hybrid cloud architectures, and they are being used to support Blockchain solutions… Right now, the mainframe performance is above the prior cycle (z13) and consistent with the z12 cycle a few years ago. And IBM has enjoyed some reasonable success with its all-flash arrays in the storage business. Further, the company’s superscalar offering, Power9, is having success and, as many of its workloads are used for AI, its revenues get counted as part of strategic initiatives. But should investors count on a mainframe cycle and a high-performance computer cycle in making a long-term investment decision regarding IBM shares?”

He continued: “IBM management has suggested that some of the innovations in the current product range, including blockchain, cryptography, security, and reliability, will make this cycle different, and perhaps longer, than other cycles. The length of the mainframe cycle is a crucial component in management’s earnings estimate. It needs to continue at elevated levels at least for another couple of quarters. While that is probably more likely, is it really prudent to base an investment judgement on the length of a mainframe cycle?”

Of course, many DancingDinosaur readers are basing their career and employment decisions on the mainframe or Power Systems. Let’s hope this quarter’s success encourages them; it sure beats 22 consecutive quarters of revenue declines.

Do you remember how Thayer’s poem ends? With the hopes and dreams of Mudville riding on him, it is the bottom of the 9th; Casey takes a mighty swing and… strikes out! Let’s hope this isn’t IBM.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

Compuware Brings Multi-Platform DevOps to the Z

January 19, 2018

The rush to DevOps for Z has started. IBM jumped on the bandwagon with an updated release of IBM Developer for z Systems (IDz) V14.1.1, which delivers new capabilities and product maintenance to Z organizations sooner than IBM’s traditional release models did.

Even more recently, Compuware, which described DevOps and the mainframe as the ultimate win-win, announced a program to advance DevOps on the mainframe with integrated COBOL code coverage metrics for multi-platform DevOps. This will make it possible for all developers in the organization to fluidly handle multi-platform code, including mainframe code, in a fast-delivery DevOps approach.

SonarSource-Compuware DevOps Dashboard

The new Compuware-SonarSource integrations are expected to ease the burden on enterprise DevOps teams trying to track and validate code coverage of COBOL application testing, letting them do it with the same ease, and with the same processes, they use for Java and other more mainstream code. This ability to automate code coverage tracking across platforms is yet another example of empowering enterprise IT to apply the same proven and essential Agile, DevOps, and Continuous Integration/Continuous Delivery (CI/CD) disciplines to core systems-of-record (mainframe) as well as to systems-of-engagement (mostly distributed systems).

Code coverage metrics provide insight into the degree to which source code is executed during a test, identifying which lines of code have been executed and what percentage of an application has been tested. These measurements allow IT teams to understand the scope and effectiveness of their testing as code moves toward production.
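The underlying arithmetic is simple; here is a minimal sketch, with line counts invented for illustration:

```python
# Line coverage as described above: executed lines / executable lines.
def line_coverage(executed, executable):
    return 100.0 * len(executed & executable) / len(executable)

executable = set(range(1, 501))   # a program with 500 executable lines
executed = set(range(1, 388))     # lines the test run actually hit
print(f"{line_coverage(executed, executable):.1f}% covered")  # 77.4%
```

The hard part on the mainframe is not the percentage but collecting the executed-line data accurately from running COBOL applications, which is what the Topaz for Total Test integration is meant to automate.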

DevOps has become increasingly critical to mainframe shops that risk becoming irrelevant and even replaceable if they cannot turn around code improvements fast enough. The mainframe continues to be valued as the secure repository of the organization’s critical data but that won’t hold off those who feel the mainframe is a costly extravagance, especially when mainframe shops can’t turn out code updates and enhancements as fast as systems regarded as more inherently agile.

As Compuware puts it, the latest integrations automatically feed code coverage results captured by its Topaz for Total Test into SonarSource’s SonarQube. This gives DevOps teams an accurate, unified view of quality metrics and milestones across platforms enterprise-wide.

For z shops specifically, such continuous code quality management across platforms promises high value to large enterprises, enabling them to bring new digital deliverables to market, which increasingly is contingent on simultaneously updating code across both back-end mainframe systems-of-record and front-end mobile/web and distributed systems-of-engagement.

Specifically, notes Compuware, integration between Topaz for Total Test and SonarQube enables DevOps teams to:

  • Gain insight into the coverage of code being promoted for all application components across all platforms
  • Improve the rigor of digital governance with strong enforcement of mainframe QA policies for coding errors, data leakage, credential vulnerabilities, and more
  • Shorten feedback loops to speed time-to-benefit and more promptly address shortfalls in COBOL skills and bottlenecks in mainframe DevOps processes

Topaz for Total Test captures code coverage metrics directly from the source code itself, rather than from a source listing, as outdated mainframe tools do. Direct capture is more accurate and eliminates extra work for developers, Compuware reported.

The new integration actually encompasses a range of tools and capabilities. For instance:

From within a Compuware Xpediter debug session, a developer can kick off a Compuware Topaz for Total Test automated unit test and set it up to collect code coverage info as it runs. Code coverage metrics then can be automatically fed into SonarSource’s SonarQube where they can be displayed in a dashboard along with other quality metrics, such as lines going to subprograms.

It also integrates with Jenkins as a Continuous Integration (CI) platform; Jenkins acts as a process orchestrator and interacts with an SCM tool, such as Compuware ISPW, which automates software quality checks and pushes metrics to SonarQube, among other things. ISPW also is where code gets promoted through the various stages of the lifecycle and ultimately deployed. Finally, Topaz is Compuware’s Eclipse-based IDE, from which developers drive all these activities.
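Once metrics land in SonarQube, other tools in the pipeline can read them back through SonarQube’s web API. A hedged sketch of such a query follows; the server URL, token, and project key are placeholders, and this is generic SonarQube usage, not a Compuware-specific interface.

```python
# Sketch: pull the coverage measure for a project from SonarQube's
# web API (api/measures/component). Host, token, and key are placeholders.
import requests

SONAR = "https://sonarqube.example.com"
resp = requests.get(
    f"{SONAR}/api/measures/component",
    params={"component": "my-cobol-app", "metricKeys": "coverage"},
    auth=("my_api_token", ""),  # SonarQube tokens go in the username field
)
resp.raise_for_status()
for measure in resp.json()["component"]["measures"]:
    print(measure["metric"], measure["value"])  # e.g. coverage 77.4
```

A Jenkins job could run a check like this after each promotion and fail the build when mainframe coverage slips below a threshold.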

The Compuware announcement further delivers on its promise to mainstream the mainframe; that is, provide a familiar, modern, and intuitive multi-platform mainframe development environment—integrated with state-of-the-art DevOps tools for veteran mainframe developers and, more importantly, those newly engaged as IT newbies from the distributed world. In short, this is how you keep your Z relevant and invaluable going forward.

** Special note regarding last week’s DancingDinosaur reporting on chip problems here: don’t count on an immediate solution coming from the vendors anytime soon; not Google, IBM, Intel, AMD, ARM, or others. The word among chip geeks is that the dependencies are too complex to be fully fixed with a patch; a complete fix probably requires new chip designs and fabrication. DancingDinosaur will keep you posted.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

Meltdown and Spectre Attacks Require IBM Mitigation

January 12, 2018

The chip security threats dubbed Meltdown and Spectre revealed last month apparently will require IBM threat mitigation in the form of code and patching. IBM has been reticent to make a major public announcement, but word finally is starting to percolate publicly.

Courtesy: Preparis Inc.

Since January 4, one day after researchers disclosed the Meltdown and Spectre attack methods against Intel, AMD, and ARM processors, the Internet has been buzzing. As Eduard Kovacs wrote on Wednesday, Jan. 10, IBM informed customers that it had started analyzing the impact on its own products; the day before, IBM revealed its POWER processors are affected.

A published report from Virendra Soni on January 11, covering the Consumer Electronics Show (CES) 2018 in Las Vegas, described how Nvidia CEO Jensen Huang and other technology leaders are scrambling to find patches for the Spectre and Meltdown attacks. These attacks enable hackers to steal private information from users’ CPUs running processors from Intel, AMD, and ARM.

For DancingDinosaur readers, that puts the latest POWER chips and systems at risk. At this point, it is not clear how far beyond POWER systems the problem reaches. “We believe our GPU hardware is immune. As for our driver software, we are providing updates to help mitigate the CPU security issue,” Nvidia wrote in its security bulletin.

Nvidia also reports releasing updates for its software drivers that interact with vulnerable CPUs and operating systems. The vulnerabilities come in three variants: Variant 1, Variant 2, and Variant 3. Nvidia has released driver updates for Variants 1 and 2; the company notes none of its software is vulnerable to Variant 3. Nvidia reported providing security updates for these products: GeForce, Quadro, NVS Driver Software, Tesla Driver Software, and GRID Driver Software.

IBM has made no public comment on which of its systems are affected, but Red Hat is saying plenty. According to Soni: “Red Hat last week reported that IBM’s System Z and POWER platforms are exploited by Spectre and Meltdown.”

So what is a data center manager with a major investment in these systems to do? Meltdown and Spectre “obviously are a very big problem,” reports Timothy Prickett Morgan, a leading analyst at The Next Platform, an authoritative website following the server industry. “Chip suppliers and operating systems and hypervisor makers have known about these exploits since last June, and have been working behind the scenes to provide corrective countermeasures to block them… but rumors about the speculative execution threats forced the hands of the industry, and last week Google put out a notice about the bugs and then followed up with details about how it has fixed them in its own code.” Read it here.

Chipmakers AMD and ARM put out statements saying they are affected only by Variant 1 of the speculative execution exploits (a Spectre variant known as bounds check bypass) and by Variant 2 (also a Spectre exploit, known as branch target injection). AMD, reports Morgan, also emphasized that it has absolutely no vulnerability to Variant 3, a speculative execution exploit called rogue data cache load and known colloquially as Meltdown. This is due, he noted, to architectural differences between Intel’s x86 processors and AMD’s clones.

As for IBM, Morgan noted: its Power chips are affected, at least back to the Power7 from 2010 and continuing forward to the brand new Power9. In its statement, IBM said that it would have patches out for firmware on Power machines using Power7+, Power8, Power8+, and Power9 chips on January 9, which passed, along with Linux patches for those machines; patches for the company’s own AIX Unix and proprietary IBM i operating systems will not be available until February 12. The System z mainframe processors also have speculative execution, so they should, in theory, be susceptible to Spectre but maybe not Meltdown.

That still leaves a question about the vulnerability of the IBM LinuxONE and the processors spread throughout the z systems. Ask your IBM rep when you can expect mitigation for those too.
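While you wait for an answer, you can check mitigation status yourself on Linux guests that have received the January kernel updates. The sketch below reads the sysfs files the patched kernels expose; on an unpatched kernel the directory simply will not exist.

```python
# Check Meltdown/Spectre mitigation status on a patched Linux kernel.
# The sysfs files below shipped with the January 2018 kernel updates.
from pathlib import Path

vuln_dir = Path("/sys/devices/system/cpu/vulnerabilities")
if not vuln_dir.is_dir():
    print("No vulnerabilities directory: kernel predates the fixes")
else:
    for f in sorted(vuln_dir.iterdir()):
        print(f"{f.name}: {f.read_text().strip()}")
# Typical output on a fixed system:
#   meltdown: Mitigation: PTI
#   spectre_v1: ...
#   spectre_v2: ...
```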

Just patching these costly systems will not be the end of it; there is a performance price that data centers will pay. Google noted a negligible impact on performance after it deployed one fix on its millions of Linux systems, said Morgan. There has been speculation, Google continued, that the deployment of KPTI (a mitigation fix) causes significant performance slowdowns. As far as is known, there is no blanket fix for Spectre Variant 1 attacks, which have to be fixed on a binary-by-binary basis, according to Google.

Red Hat went further and actually ran benchmarks. The company tested its Enterprise Linux 7 release on servers using Intel’s “Haswell” Xeon E5 v3, “Broadwell” Xeon E5 v4, and “Skylake,” the upcoming Xeon SP processors, and showed impacts that ranged from 1-19 percent. You can demand these impacts be reflected in reduced system prices.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

Syncsort Survey Unveils 5 Ways Z Users Are Saving Money

January 9, 2018

Syncsort Inc. recently completed its year-end 2017 State-of-the-Mainframe survey of IT professionals. In the past year, the organizations surveyed increased their spending for mainframe capacity, new mainframe applications, and mainframe data analytics. The IBM z/OS mainframe remains an important focus in these organizations, with the majority of respondents reporting that the mainframe serves as the hub for business-critical applications by providing high-volume transaction and database processing.

More interestingly, Syncsort notes, a high number of respondents indicated they’ll use the mainframe to run revenue-generating services over the next 12 months, another clear indication that the mainframe remains integral to the business.

However, the survey also reflects concerns over the high cost of the mainframe. Mainframe optimization, cost reduction, and spending remain at the forefront, with many organizations looking to leverage zIIP engines to offload general processor cycles, which maximizes resources, delays or avoids hardware upgrades, and lowers monthly software charges.
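The zIIP arithmetic behind those lower software charges works roughly as sketched below. Every figure is invented for illustration; real MLC pricing depends on the product mix and the contract.

```python
# Illustrative only: zIIP-eligible work does not count toward the general
# processor MSU consumption that drives monthly license charges (MLC).
msu_baseline = 1_000       # hypothetical peak rolling 4-hour average MSUs
ziip_eligible = 0.25       # fraction of GP cycles offloadable to zIIPs
cost_per_msu_month = 100   # hypothetical blended MLC $/MSU/month

msu_after = msu_baseline * (1 - ziip_eligible)
annual_savings = (msu_baseline - msu_after) * cost_per_msu_month * 12
print(f"Peak MSUs drop from {msu_baseline} to {msu_after:.0f}")
print(f"Hypothetical annual MLC savings: ${annual_savings:,.0f}")  # $300,000
```

The same logic explains why respondents pair zIIP offload with MSU optimization tools: every MSU shaved off the peak compounds month after month.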

At the same time some organizations are looking at mainframe optimization to fund strategic projects, such as enhanced mainframe data analytics to support better business decisions for meeting SLAs as well as security and compliance initiatives. All of this may relieve pressure to jump to a lower cost platform (x86) in the hope of reducing spending.

But apparently optimization is not enough in a number of cases. Despite the focus on optimization, the survey notes, nearly 20% of respondents plan to move off the mainframe completely in 2018. DancingDinosaur, however, has spent decades writing about mainframe-is-dead efforts, and such a move invariably takes longer and costs more, often much more, than expected, and sometimes is never fully achieved. Rebuilding a no-fail, scalable, and secure business platform elsewhere has proven to be extremely difficult.

However costly the mainframe is, you can get it up and running dependably for less than you will end up paying to cobble together bare-metal x86 boxes. But if you try, please let me know, and I will check back with you next year to publicize your success. One exception might be if you opt for a 100% cloud solution; again, let me know if it works and how much you save, and I’ll make you a hero.

In the meantime, here are five ways respondents expect to save money by streamlining operations through mainframe-based optimization:

  1. This year organizations aim to redirect budget dollars to strategic projects such as mainframe data analytics. Optimization will primarily focus on general processor usage by leveraging zIIP engines and using MSU optimization tools. Some organizations will take it a step further and target candidate workloads to move off the mainframe (possibly to a hybrid cloud) to ensure sufficient capacity remains for business-critical applications.
  2. Big data analytics for operational intelligence, security, and compliance will continue to grow and emerge as a critical effort, as will ensuring that IT services are delivered effectively to meet SLAs. Mainframe data sources will be critical in helping to address these challenges.
  3. Integration of mainframe data with modern analytics tools will become pervasive and critically important as organizations look to exploit this abundance of information for enhanced visibility. Integrating mainframe machine data will not only provide enhanced visualization but will enable correlation with data sources from other platforms. Additionally, new analytics technologies, like Splunk, will make mainframe application data more readily available to business analysts who typically aren’t mainframe experts, while addressing the diminishing pool of mainframe talent by putting rich, easy tools into the hands of newer staff.
  4. SMF and z/OS log data will play an increased role in addressing security exposures, fulfilling audit requirements, and meeting compliance mandates, a key initiative for IT executives and IT organizations. Here think pervasive encryption on Z. Overall, organizations are looking at leveraging analytics platforms for security and compliance. Along with SMF and other z/OS log data they will look to Splunk, Elastic, and Hadoop (see the sketch after this list).
  5. Data movement across the variety of platforms in distributed enterprises presents important challenges; it must be secured, monitored, and performed efficiently. With over half of mainframe organizations still lacking full visibility, this must become a priority.
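As a taste of what item 4 involves, here is a minimal sketch of forwarding one mainframe log record to Splunk via its HTTP Event Collector. The host, token, and record layout are placeholders, not a Syncsort or IBM interface; in practice the vendors provide purpose-built forwarders for SMF data.

```python
# Sketch: push one z/OS log record into Splunk's HTTP Event Collector.
# Host, port, token, and the record itself are placeholders.
import requests

HEC_URL = "https://splunk.example.com:8088/services/collector/event"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"  # hypothetical token

event = {
    "sourcetype": "zos:syslog",
    "event": {"system": "SYSA", "msg": "IEF403I MYJOB - STARTED"},
}
resp = requests.post(
    HEC_URL,
    headers={"Authorization": f"Splunk {HEC_TOKEN}"},
    json=event,
)
resp.raise_for_status()
print(resp.json())  # {"text": "Success", "code": 0} on acceptance
```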

Over the years, DancingDinosaur has written up every opportunity to lower mainframe costs or optimize operations. Find some of those pieces here, here, and here.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

IBM Q Network Promises to Commercialize Quantum

December 14, 2017

The dash to quantum computing is well underway and IBM is preparing to be one of the leaders. When IBM gets there it will find plenty of company. HPE, Dell/EMC, Microsoft and more are staking out quantum claims. In response IBM is speeding the build-out of its quantum ecosystem, the IBM Q Network, which it announced today.

IBM’s 50 qubit system prototype

IBM already introduced its third generation of quantum computers in November: a prototype 50-qubit system. IBM promises online access to the IBM Q systems by the end of 2017, with a series of planned upgrades during 2018. IBM is focused on making advanced, scalable universal quantum computing systems available to clients to explore practical applications.

Further speeding the process, IBM is building a quantum computing ecosystem of big companies and research institutions. The result, dubbed IBM Q Network, will consist of a worldwide network of individuals and organizations, including scientists, engineers, business leaders, and forward thinking companies, academic institutions, and national research labs enabled by IBM Q. Its mission: advancing quantum computing and launching the first commercial applications.

Two particular goals stand out: Engage industry leaders to combine quantum computing expertise with industry-oriented, problem-specific expertise to accelerate development of early commercial uses. The second: expand and train the ecosystem of users, developers, and application specialists that will be essential to the adoption and scaling of quantum computing.

The key to getting this rolling is the groundwork IBM laid with the IBM Q Experience, initially introduced in May 2016 as a 5-qubit system. A free upgrade to 16 qubits followed in May 2017. The IBM effort to make a commercial universal quantum computer available for business and science applications has ramped up with each successive rev, reaching today’s prototype 50-qubit system delivered via the IBM Cloud platform.

IBM opened public access to its quantum processors over a year ago to serve as an enablement tool for scientific research, a resource for university classrooms, and a catalyst for enthusiasm. Since then, participants have run more than 1.7M quantum experiments on the IBM Cloud.

Until now IBM was pretty easygoing about access to its quantum computers, but with a 20-qubit system in hand and a 50-qubit system coming, the company has become a little more restrictive about who can use them. Participation in the IBM Q Network is the only way to access these advanced systems. That involves a commitment of money, intellectual property, and an agreement to share and cooperate, although IBM implied at an early briefing that it could be flexible about what is shared and what remains an organization’s proprietary IP.

Another reason to participate in the Quantum Experience is QISKit, an open source quantum computing SDK anyone can access. Most DancingDinosaur readers who want to participate in IBM’s Q Network will do so as either partners or members. Another option, a Hub, is targeted at bigger, more ambitious early adopters. Hubs, as IBM puts it, provide access to IBM Q systems, technical support, educational and training resources, community workshops and events, and opportunities for joint work.
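To give a sense of how low the barrier is, a first experiment runs only a few lines of code. The sketch below uses the current Qiskit API to build the classic two-qubit Bell state; the 2017-era QISKit interface differed in its details, so treat this as illustrative rather than a period-accurate listing.

```python
# Minimal Bell-state circuit with Qiskit (current API; the original 2017
# QISKit release used a different interface). Entangles two qubits.
from qiskit import QuantumCircuit

qc = QuantumCircuit(2, 2)
qc.h(0)        # put qubit 0 into superposition
qc.cx(0, 1)    # entangle qubit 1 with qubit 0
qc.measure([0, 1], [0, 1])
print(qc.draw())  # run on a simulator or, via the cloud, on IBM Q hardware
```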

The Q Network has already attracted some significant interest for organizations at every level and across a variety of industry segments. These include automotive, financial, electronics, chemical, and materials players from across the globe. Initial participants include JPMorgan Chase, Daimler AG, Samsung, JSR Corporation, Barclays, Hitachi Metals, Honda, Nagase, Keio University, Oak Ridge National Lab, Oxford University, and University of Melbourne.

As noted at the top, other major players are staking out their quantum claims, but none seem as far along or as comprehensive as IBM:

  • Dell/EMC is aiming to solve complex, life-impacting analytic problems like autonomous vehicles, smart cities, and precision medicine.
  • HPE appears to be focusing its initial quantum efforts on encryption.
  • Microsoft, not surprisingly, expects to release a new programming language and computing simulator designed for quantum computing.

As you would expect, IBM also is rolling out IBM Q Consulting to help organizations envision new business value through the application of quantum computing technology and provide customized roadmaps to help enterprises become quantum-ready.

Will quantum computing actually happen? Your guess is as good as anyone’s. I first heard about quantum physics in high school 40-odd years ago. It was baffling but intriguing then. Today it appears more real but still nothing is assured. If you’re willing to burn some time and resources to try it, go right ahead. Please tell DancingDinosaur what you find.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

IBM’s POWER9 Races to AI

December 7, 2017

IBM is betting the future of its Power Systems on artificial intelligence (AI). The company introduced its newly designed POWER9 processor publicly this past Tuesday. The new machine, according to IBM, is capable of shortening the training of deep learning frameworks by nearly 4x, allowing enterprises to build more accurate AI applications, faster.

IBM engineer tests the POWER9

Designed for the post-CPU era, the core POWER9 building block is the IBM Power Systems AC922. The AC922, notes IBM, is the first to embed PCI-Express 4.0, next-generation NVIDIA NVLink, and OpenCAPI—3 interface accelerators—which together can accelerate data movement 9.5x faster than PCIe 3.0 based x86 systems. The AC922 is designed to drive demonstrable performance improvements across popular AI frameworks such as Chainer, TensorFlow and Caffe, as well as accelerated databases such as Kinetica.
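IBM has not published the derivation behind that 9.5x figure, but one plausible reading is aggregate NVLink 2.0 bandwidth measured against a single PCIe 3.0 x16 slot. The reconstruction below rests on our assumptions about link counts and rates, not on IBM’s stated math.

```python
# Rough reconstruction of the 9.5x data-movement claim. Assumes six
# NVLink 2.0 bricks at 25 GB/s each (one direction) vs. one PCIe 3.0
# x16 slot; these assumptions are ours, not IBM's published derivation.
nvlink_bricks = 6
nvlink_gbs_per_brick = 25     # GB/s per brick, one direction
pcie3_x16_gbs = 15.75         # GB/s, one direction (~1 GB/s per lane)

nvlink_total = nvlink_bricks * nvlink_gbs_per_brick   # 150 GB/s
print(f"NVLink/PCIe3 ratio: {nvlink_total / pcie3_x16_gbs:.1f}x")  # ~9.5x
```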

More than a CPU under the AC922 cover

Depending on your sense of market timing, POWER9 may be coming at the best or worst time for IBM.  Notes industry observer Timothy Prickett Morgan, The Next Platform: “The server market is booming as 2017 comes to a close, and IBM is looking to try to catch the tailwind and lift its Power Systems business.”

As Morgan puts it, citing IDC 3Q17 server revenue figures, HPE and Dell are jockeying for the lead in the server space, and for the moment, HPE (including its H3C partnership in China) has the lead with $3.32 billion in revenues, compared to Dell’s $3.07 billion, while Dell was the shipment leader, with 503,000 machines sold in Q3 2017 versus HPE’s 501,400 machines shipped. IBM does not rank in the top five shippers, but thanks in part to the Z and big Power8 boxes, IBM still holds the number three server revenue generator spot, with $1.09 billion in sales for the third quarter, according to IDC. The z system accounted for $673 million of that, up 63.8 percent year-on-year due mainly to the new Z. If you do the math, Morgan continued, the Power Systems line accounted for $420.7 million in the period, down 7.2 percent from Q3 2016. This is not surprising given that customers held back knowing Power9 systems were coming.

To get Power Systems back to where it used to be, Morgan continued, IBM must increase revenues by a factor of three or so. The good news is that, thanks to the popularity of hybrid CPU-GPU systems, which cost around $65,000 per node from IBM, this isn’t impossible. Therefore, it should take fewer machines to rack up the revenue, even if it comes from a relatively modest number of footprints and not a huge number of Power9 processors. More than 90 percent of the compute in these systems is comprised of GPU accelerators, but due to bookkeeping magic, it all accrues to Power Systems when these machines are sold. Plus IBM reportedly will be installing over 10,000 such nodes for the US Department of Energy’s Summit and Sierra supercomputers in the coming two quarters, which should provide a nice bump. And once IBM gets the commercial Power9 systems into the field, sales should pick up again, Morgan expects.

IBM clearly is hoping POWER9 will cut into Intel x86 sales. But that may not happen as anticipated. Intel is bringing out its own advanced x86 Xeon machine, Skylake, rumored to be quite expensive. Don’t expect POWER9 systems to be cheap either. And the field is getting more crowded. Morgan noted various ARM chips, especially ThunderX2 from Cavium and Centriq 2400 from Qualcomm, can boost non-x86 numbers and divert sales from IBM’s Power9 systems. Also, AMD’s Epyc x86 processors have a good chance of stealing some market share from Intel’s Skylake. So the Power9 will have to fight for every sale IBM wants and take nothing for granted.

No doubt POWER9 presents a good case and has a strong backer in Google, but even that might not be enough. Still, POWER9 sits at the heart of the Summit and Sierra supercomputers, expected to be the most powerful data-intensive supercomputers in the world and to knock off the current fastest supercomputers, from China.

Said Bart Sano, VP of Google Platforms: “Google is excited about IBM’s progress in the development of the latest POWER technology,” adding, “the POWER9 OpenCAPI bus and large memory capabilities allow further opportunities for innovation in Google data centers.”

This really is about deep learning, one of the hottest buzzwords today. Deep learning has emerged as a fast-growing machine learning method that extracts information by crunching through millions of processes and data points to detect and rank the most important aspects of the data. IBM designed the POWER9 chip to manage free-flowing data, streaming sensors, and algorithms for data-intensive AI and deep learning workloads on Linux. Are your people ready to take advantage of POWER9?
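Readiness can start small: before touching models, confirm the framework on your AC922 (or any GPU node) actually sees its accelerators. The sketch below uses the current TensorFlow 2.x API; the 2017-era calls differed, so treat it as illustrative.

```python
# Quick readiness check: confirm TensorFlow sees the GPUs on the node.
# Uses the current TensorFlow 2.x API; 2017-era TensorFlow differed.
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
print(f"TensorFlow sees {len(gpus)} GPU(s)")
for gpu in gpus:
    print(" ", gpu.name)  # e.g. /physical_device:GPU:0
```

If the count is zero on a machine full of Teslas, the training-speed claims above will never materialize; the plumbing (drivers, CUDA, framework builds for ppc64le) comes first.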

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

