Posts Tagged ‘System z’

High Cost of Ignoring Z’s Pervasive Encryption

May 17, 2018

That cost was spelled out at IBM's Think conference this past spring. As David Bruce, who leads IBM's strategies for security on IBM Z and LinuxONE, writes, data breaches are expensive, costing $3.6 million on average, and hoping to avoid one by doing business as usual is a bad bet. Bruce reports breaches are increasingly likely: an organization has a 28 percent chance of being breached in the next 24 months. You can find Bruce's comments on security and pervasive encryption here.

9 million data records were compromised in 2015

Were any of those 9 million records from your organization? Did you end up on the front page of the newspaper? To stay out of the data breach headlines, organizations require security solutions that protect enterprise and customer data at minimal cost and effort, Bruce observes.

Encryption is the preferred solution, but it is costly, cumbersome, labor-intensive, and hit-or-miss. It is hit-or-miss because the overhead involved forces organizations to choose what to encrypt and what to skip. You have to painstakingly classify the data in terms of risk, which takes time and only adds to the cost. Outside of critical revenue transactions or key intellectual property, the no-brainers, you will invariably choose wrong and miss something you will regret when it shows up on the front page of the New York Times.

Adding to the cost is the compliance runaround. Auditors are scheduled to visit, or maybe they aren't even scheduled and just drop in; either way, you have to drop whatever your staff was planning to do and gather the documentation needed to prove your data is safe and secure. Do you really need this? Life is too short as it is.

You really want to put an end to the entire security compliance runaround and all the headaches it entails. But more than that, you want protected, secure data; all data, all the time.  When someone from a ransomware operation calls asking for hundreds or thousands of dollars to get your data back you can laugh and hang up the phone. That’s what Bruce means when he talks about pervasive encryption. All your data is safely encrypted with its keys protected from the moment it is created until the moment it is destroyed by you. And you don’t have to lift a finger; the Z does it all.

That embarrassing news item about a data breach won't happen to you either. Most important of all, customers will never see it and get upset.

In fact, at Think, Forrester discussed the customer-obsessed approach that leading organizations are adopting to spur growth. To obsess over customers, explained Bruce, means to take great care in protecting the customer's sensitive data, which provides the cornerstone of Forrester's customer-obsessed zero trust security framework. The framework includes, among other security elements, encryption of all data across the enterprise. By enabling the Z's built-in pervasive encryption and automatic key protection, you can ignore the rest of Forrester's framework.

Pervasive encryption, unique to Z, addresses the security challenges while helping you thrive in this age of the customer. At Think, Michael Jordan, IBM Distinguished Engineer for IBM Z Security, detailed how pervasive encryption represents a paradigm shift in security, reported Bruce. Previously, selective field-level encryption was the only feasible way to secure data, but it was time-, cost-, and resource-intensive – and it left large portions of data unsecured.

Pervasive encryption, however, offers a solution capable of encrypting data in bulk, making it possible and practical to encrypt all data associated with an application, database, and cloud service – whether on premises or in the cloud, at-rest or in-flight. This approach also simplifies compliance by eliminating the need to demonstrate compliance at the field level. Multiple layers of encryption – from disk and tape up through applications – provide the strongest possible defense against security breaches. The high levels of security enabled by pervasive encryption help you promote customer confidence by protecting their data and privacy.
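For readers who want to see the underlying idea rather than take it on faith, the principle behind protecting data with keys that are themselves protected is classic envelope encryption. Below is a minimal Python sketch of that idea using the open source cryptography package; it is only a conceptual illustration, not how z/OS data set encryption or the Z's CPACF and Crypto Express hardware actually implement it.

```python
# Conceptual sketch of envelope encryption: every record is encrypted with a
# data key, and the data key itself is stored only in wrapped (encrypted) form.
# Illustration only; IBM Z pervasive encryption does this in hardware/firmware,
# not with this library.
from cryptography.fernet import Fernet

master_key = Fernet.generate_key()   # stands in for the hardware-protected master key
master = Fernet(master_key)

def encrypt_record(plaintext):
    """Return (wrapped_data_key, ciphertext); the clear data key is discarded."""
    data_key = Fernet.generate_key()
    ciphertext = Fernet(data_key).encrypt(plaintext)
    wrapped_key = master.encrypt(data_key)   # the key never sits in storage in the clear
    return wrapped_key, ciphertext

def decrypt_record(wrapped_key, ciphertext):
    data_key = master.decrypt(wrapped_key)
    return Fernet(data_key).decrypt(ciphertext)

wrapped, blob = encrypt_record(b"customer account 4242, balance 1200.00")
print(decrypt_record(wrapped, blob))
```

The point of the sketch: the data key is never stored in the clear, which is what lets the Z encrypt everything without the key-management headaches that made selective encryption so painful.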

If you have a Z and have not enabled pervasive encryption, you are putting your customers and your organization at risk. I am curious; if that describes your shop, please drop me a note explaining why.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

Compuware Tackles Mainframe Workforce Attrition and Batch Processing

May 4, 2018

While IBM works furiously to deliver quantum computing and expand AI and blockchain into just about everything, many DancingDinosaur readers are still wrestling with traditional headaches: boosting the quality and efficiency of mainframe operations and optimizing the most traditional mainframe activity there is, batch processing. It would be nice if quantum computing could handle multiple batch operations simultaneously, but that's not high on IBM's list of quantum priorities.

So Compuware is stepping up, as it has been doing quarterly, by delivering new tools to expedite and facilitate conventional mainframe processes. Its zAdviser promises actionable analytic insight to continuously improve quality, velocity, and efficiency on the mainframe, while its ThruPut Manager enables next-gen IT staff to optimize mainframe batch execution through new, visually intuitive workload scheduling.

zAdviser captures data about developers’ behaviors

zAdviser uses machine learning to continuously measure and improve an organization’s mainframe DevOps processes and development outcomes. Based on key performance indicators (KPIs), zAdviser measures application quality, as well as development speed and the efficiency of a development team. The result: managers can now make evidence-based decisions in support of their continuous improvement efforts.

The new tool leverages a set of analytic models that uncover correlations between mainframe developer behaviors and mainframe DevOps KPIs. These correlations represent the best available empirical evidence regarding the impact of process, training and tooling decisions on digital business outcomes. Compuware is offering zAdviser free to customers on current maintenance.
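What does "uncovering correlations between developer behaviors and DevOps KPIs" look like in practice? A rough sketch follows; the metric names and numbers are invented purely for illustration, and Compuware's actual zAdviser models are its own.

```python
# Hypothetical sketch: correlate developer-behavior metrics with DevOps KPIs.
# Column names and values are invented; zAdviser's real models and KPIs differ.
import pandas as pd

df = pd.DataFrame({
    "debug_sessions_per_dev":   [12, 30, 8, 22, 15, 27],
    "unit_tests_run_per_build": [40, 5, 55, 12, 35, 9],
    "abend_count":              [1, 6, 0, 4, 2, 5],   # quality KPI
    "lead_time_days":           [3, 9, 2, 7, 4, 8],   # velocity KPI
})

# Pearson correlation between each behavior metric and each KPI
print(df.corr().loc[["debug_sessions_per_dev", "unit_tests_run_per_build"],
                    ["abend_count", "lead_time_days"]])
```

Strong correlations like these, gathered across many teams and builds, are the "empirical evidence" managers can use to decide where training or tooling money goes.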


Long mainframe software backlogs are no longer acceptable. Improvement in mainframe DevOps has become an urgent imperative for large enterprises that find themselves even more dependent on mainframe applications, not less. According to a recent Forrester Consulting study commissioned by Compuware, 57 percent of enterprises with a mainframe run more than half of their business-critical workloads on the mainframe. That percentage is expected to increase to 64 percent by 2019, while at the same time enterprises are failing to replace the expert mainframe workforce they have lost to attrition. Hence the need for modern, automated, intelligent tools to speed the learning curve for workers groomed on Python or Node.js.

Meanwhile, IBM hasn’t exactly been twiddling its thumbs in regard to DevOps analytics for the Z. Its zAware delivers a self-contained firmware IT analytics offering that helps systems and operations professionals rapidly identify problematic messages and unusual system behavior in near real time, which systems administrators can use to take corrective actions.

ThruPut Manager brings a new web interface that offers visually intuitive insight for the mainframe staff, especially new staff, into how batch jobs are being initiated and executed, as well as the impact of those jobs on mainframe software licensing costs.

By implementing ThruPut Manager, Compuware explains, enterprises can better safeguard the performance of both batch and non-batch applications while avoiding the significant adverse economic impact of preventable spikes in utilization as measured by Rolling 4-Hour Averages (R4HA). Reducing the R4HA is a key way data centers can contain mainframe costs.
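For readers new to mainframe software pricing, the R4HA is simply the average MSU consumption over each rolling four-hour window; monthly MLC charges are driven by the peak of that average. A minimal sketch, using invented hourly MSU samples:

```python
# Minimal sketch of the Rolling 4-Hour Average (R4HA). MLC software charges are
# driven by the monthly peak of this average, so smoothing utilization spikes
# (for example by rescheduling batch) lowers the bill. Sample data is invented.
msu_samples = [310, 295, 480, 520, 610, 590, 400, 350, 330, 700, 680, 360]  # hourly MSUs

def rolling_4hr_average(samples, window=4):
    return [sum(samples[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(samples))]

r4ha = rolling_4hr_average(msu_samples)
print("R4HA per hour:", [round(x) for x in r4ha])
print("Billable peak R4HA:", round(max(r4ha)))
```

Shave that peak, for instance by shifting discretionary batch out of the busiest window, and the bill goes down; that is the lever ThruPut Manager is pulling.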

More importantly, with the new ThruPut Manager, enterprises can successfully transfer batch management responsibilities to the next generation of IT staff with far less hands-on platform experience, without exposing themselves to related risks such as missed batch execution deadlines, missed SLAs, and excess costs.

With these new releases, Compuware is providing a way to reduce the mainframe software backlog, the long-standing complaint that mainframe shops cannot deliver newly requested functionality fast enough, while offering a way to offset attrition among aging mainframe staff with young staff who don't have years of mainframe experience to fall back on. And if the new tools lower some mainframe costs, however modestly, in the process, no one but IBM will complain.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.

IBM Introduces Skinny Z Systems

April 13, 2018

Early this week IBM unveiled two miniaturized mainframe models, dubbed skinny mainframes, it said are easier to deploy in a public or private cloud facility than their more traditional, much bulkier predecessors. Relying on all their design tricks, IBM engineers managed to pack each machine into a standard 19-inch rack with space to spare, which can be used for additional components.

Z14 LinuxONE Rockhopper II, 19-inch rack

The first new mainframe introduced this week is the z14 model ZR1, housed in a 19-inch rack. You can expect subsequent models to increment the model numbering. The second new machine is the LinuxONE Rockhopper II, also in a 19-inch rack.

In the past, about a year after IBM introduced a new mainframe, say the z10, it introduced what it called a Business Class (BC) version. The BC machines were less richly configured and less expandable, but delivered comparable performance with lower capacity and a distinctly lower price.

In a Q&A analyst session, IBM insisted the new machines would be priced noticeably lower, as the BC-class machines of the past were. Still, these are not comparable to the old BC machines. Instead, they are intended to attract a new group of users who face new challenges. As such, they come cloud-ready. The 19-inch industry-standard, single-frame design is intended for easy placement into existing cloud data centers, alongside other components, and into private cloud environments.

The company, said Ross Mauri, General Manager IBM Z, is targeting the new machines toward clients seeking robust security with pervasive encryption, cloud capabilities and powerful analytics through machine learning. Not only, he continued, does this increase security and capability in on-premises and hybrid cloud environments for clients, IBM will also deploy the new systems in IBM public cloud data centers as the company focuses on enhancing security and performance for increasingly intensive data loads.

In terms of security, the new machines will be hard to beat. IBM reports the new machines capable of processing over 850 million fully encrypted transactions a day on a single system. Along the same lines, the new mainframes do not require special space, cooling or energy. They do, however, still provide IBM’s pervasive encryption and Secure Service Container technology, which secures data serving at a massive scale.

Mauri continued: The new IBM Z and IBM LinuxONE offerings also bring significant increases in capacity, performance, memory, and cache across nearly all aspects of the system. A complete system redesign delivers this capacity growth in 40 percent less space and is standardized to be deployed in any data center. The z14 ZR1 can be the foundation for an IBM Cloud Private solution, creating a data-center-in-a-box by co-locating storage, networking, and other elements in the same physical frame as the mainframe server. This is where the extra space in the 19-inch rack comes in.

The LinuxONE Rockhopper II can also accommodate a Docker-certified infrastructure for Docker EE with integrated management and scale tested up to 330,000 Docker containers, allowing developers to build high-performance applications and embrace a microservices architecture.

The 19-inch rack, however, comes with tradeoffs, notes Timothy Green, writing in The Motley Fool. Yes, it takes up 40 percent less floor space than the full-size z14, but it accommodates only 30 processor cores, far below the 170 cores supported by a full-size z14, which fills a 24-inch rack. Both new systems can handle around 850 million fully encrypted transactions per day, a fraction of the z14's full capacity. But not every company needs the full performance and capacity of the traditional mainframe. For companies that don't need the full power of a z14 mainframe, notes Green, or that have previously balked at the high price or massive footprint of full mainframe systems, these smaller mainframes may be just what it takes to bring them to the Z. Now IBM needs to come through with the advantageous pricing it insisted it would offer.

The new skinny mainframes are just the latest in IBM's continuing efforts to keep the mainframe relevant. It began over a decade ago with porting Linux to the mainframe. It continued with Hadoop, blockchain, and containers. Machine learning and deep learning are coming right along. The only question for DancingDinosaur is when IBM engineers will figure out how to put quantum computing on the Z and squeeze it into customers' public or private cloud environments.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.

Mainframe ISVs Advance the Mainframe While IBM Focuses on Think

March 30, 2018

Last week IBM reveled in the attention of upwards of 30,000 visitors to its Think conference, reportedly a record for an IBM conference. Meanwhile, Syncsort and Compuware stayed home pushing new mainframe initiatives. Specifically, Syncsort introduced innovations to deliver mainframe log and application data in real time directly to Elastic for deeper next-generation analytics through platforms like Splunk, Hadoop, and the Elastic Stack.

Syncsort Ironstream for next-gen analytics

Compuware reported that the percentage of organizations running at least half their business-critical applications on the mainframe is expected to increase next year, although the loss of skilled mainframe staff, and the failure to subsequently fill those positions, pose significant threats to application quality, velocity, and efficiency. Compuware has been taking the lead in modernizing the mainframe developer experience to make it compatible with the familiar x86 experience.

According to David Hodgson, Syncsort’s chief product officer, many organizations are using Elastic’s Kibana to visualize Elasticsearch data and navigate the Elastic Stack. These organizations, like others, are turning to tools like Hadoop and Splunk to get a 360-degree view of their mainframe data enterprise-wide. “In keeping with our proven track record of enabling our customers to quickly extract value from their critical data anytime, anywhere, we are empowering enterprises to make better decisions by making mission-critical mainframe data available in another popular analytics platform,” he adds.

For cost management, Syncsort now offers Ironstream with the flexibility of MSU-based (capacity) or Ingestion-based pricing.

Compuware took a more global view of the mainframe. The mainframe, the company notes, is becoming more important to large enterprises, as the percentage of organizations running at least half their business-critical applications on the platform is expected to increase next year. However, the loss of skilled mainframe staff, and the failure to subsequently fill those positions, pose significant threats to application quality, velocity, and efficiency.

These are among the findings of research and analysis conducted by Forrester Consulting on behalf of Compuware.  According to the study, “As mainframe workload increases—driven by modern analytics, blockchain and more mobile activity hitting the platform—customer-obsessed companies should seek to modernize application delivery and remove roadblocks to innovation.”

The survey of mainframe decision-makers and developers in the US and Europe also revealed the mainframe's growing importance: 64 percent of enterprises will run more than half of their critical applications on the platform within the next year, up from 57 percent this year. And just to ratchet up the pressure a few notches, 72 percent of customer-facing applications at these enterprises are completely or very reliant on mainframe processing.

That means the loss of essential mainframe staff hurts, putting critical business processes at risk. Overall, enterprises reported losing an average of 23 percent of specialized mainframe staff in the last five years while 63 percent of those positions have not been filled.

There is more to the study, but these findings alone suggest that mainframe investments, culture, and management practices need to evolve fast in light of the changing market realities. As Forrester puts it: “IT decision makers cannot afford to treat their mainframe applications as static environments bound by long release cycles, nor can they fail to respond to their critical dependence with a retiring workforce. Instead, firms must implement the modern tools necessary to accelerate not only the quality, but the speed and efficiency of their mainframe, as well as draw [new] people to work on the platform.”

Nobody has 10 years, or even three years, to cultivate a new mainframer. You need to attract and cultivate talented x86 or ARM people now, equip them with the sexiest, most efficient tools, and get them working on the most urgent items at the top of your backlog.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.

 

Compuware Brings Multi-Platform DevOps to the Z

January 19, 2018

The rush to DevOps for Z has started. IBM jumped on the bandwagon with an updated release of IBM Developer for z Systems (IDz) V14.1.1, which delivers new capabilities and product maintenance to Z organizations sooner than IBM's traditional release models did.

Even more recently, Compuware, which described DevOps and the mainframe as the ultimate win-win, announced a program to advance DevOps on the mainframe with integrated COBOL code coverage metrics for multi-platform DevOps. This will make it possible for all developers in the organization to fluidly handle multi-platform code, including mainframe code, in a fast-delivery DevOps approach.

SonarSource-Compuware DevOps Dashboard

The new Compuware-SonarSource integrations are expected to ease the task for enterprise DevOps teams trying to track and validate code coverage of COBOL application testing, letting them do it with the same ease, and employing the same processes, as they do with Java and other more mainstream code. This ability to automate code coverage tracking across platforms is yet another example of empowering enterprise IT to apply the same proven and essential Agile, DevOps, and Continuous Integration/Continuous Delivery (CI/CD) disciplines to both core systems-of-record (mainframe) and systems-of-engagement (mostly distributed systems).

Code coverage metrics promise insight into the degree to which source code is executed during a test. They identify which lines of code have been executed and what percentage of an application has been tested. These measurements allow IT teams to understand the scope and effectiveness of their testing as code moves toward production.
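The arithmetic behind a line-coverage metric is simple enough to show in a few lines; this toy sketch illustrates only the idea, not Topaz for Total Test or SonarQube themselves:

```python
# Toy illustration of statement (line) coverage: the percentage of executable
# lines that a test run actually touched. Real tools such as Topaz for Total
# Test and SonarQube capture and report this automatically.
executable_lines = set(range(1, 201))                    # 200 executable lines in a program
executed_lines = set(range(1, 151)) | {180, 181, 190}    # lines hit during the test run

covered = executable_lines & executed_lines
coverage_pct = 100 * len(covered) / len(executable_lines)

print(f"Covered {len(covered)} of {len(executable_lines)} lines "
      f"({coverage_pct:.1f}% statement coverage)")
print("Never executed:", sorted(executable_lines - executed_lines)[:5], "...")
```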

DevOps has become increasingly critical to mainframe shops that risk becoming irrelevant and even replaceable if they cannot turn around code improvements fast enough. The mainframe continues to be valued as the secure repository of the organization’s critical data but that won’t hold off those who feel the mainframe is a costly extravagance, especially when mainframe shops can’t turn out code updates and enhancements as fast as systems regarded as more inherently agile.

As Compuware puts it, the latest integrations automatically feed code coverage results captured by its Topaz for Total Test into SonarSource’s SonarQube. This gives DevOps teams an accurate, unified view of quality metrics and milestones across platforms enterprise-wide.

For z shops specifically, such continuous code quality management across platforms promises high value to large enterprises, enabling them to bring new digital deliverables to market, which increasingly is contingent on simultaneously updating code across both back-end mainframe systems-of-record and front-end mobile/web and distributed systems-of-engagement.

Specifically, notes Compuware, integration between Topaz for Total Test and SonarQube enables DevOps teams to:

  • Gain insight into the coverage of code being promoted for all application components across all platforms
  • Improve the rigor of digital governance with strong enforcement of mainframe QA policies for coding errors, data leakage, credential vulnerabilities, and more
  • Shorten feedback loops to speed time-to-benefit and more promptly address shortfalls in COBOL skills and bottlenecks in mainframe DevOps processes

Topaz for Total Test captures code coverage metrics directly from the source code itself, rather than from a source listing, as is the case with outdated mainframe tools. This direct capture is more accurate and eliminates extra work for development, Compuware reported.

The new integration actually encompasses a range of tools and capabilities. For instance:

From within a Compuware Xpediter debug session, a developer can kick off a Compuware Topaz for Total Test automated unit test and set it up to collect code coverage info as it runs. Code coverage metrics then can be automatically fed into SonarSource’s SonarQube where they can be displayed in a dashboard along with other quality metrics, such as lines going to subprograms.

It also integrates with Jenkins as a Continuous Integration (CI) platform, which acts as a process orchestrator and interacts with an SCM tool, such as Compuware ISPW, which automates software quality checks and pushes metrics to SonarQube, among other things. ISPW also is where code gets promoted to the various stages within the lifecycle and ultimately deployed. Finally, Topaz is Compuware's Eclipse-based IDE from which developers drive all these activities.

The Compuware announcement further delivers on its promise to mainstream the mainframe; that is, provide a familiar, modern, and intuitive multi-platform mainframe development environment, integrated with state-of-the-art DevOps tools, for veteran mainframe developers and, more importantly, for newcomers arriving from the distributed world. In short, this is how you keep your Z relevant and invaluable going forward.

** Special note regarding last week's DancingDinosaur reporting on chip problems here: don't count on an immediate solution coming from the vendors anytime soon, not from Google, IBM, Intel, AMD, ARM, or others. The word among chip geeks is that the dependencies are too complex to be fully fixed with a patch. A full fix probably requires new chip designs and fabrication. DancingDinosaur will keep you posted.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

Meltdown and Spectre Attacks Require IBM Mitigation

January 12, 2018

The chip security threats dubbed Meltdown and Spectre, revealed earlier this month, apparently will require IBM threat mitigation in the form of code and patching. IBM has been reticent to make a major public announcement, but word finally is starting to percolate publicly.

Courtesy: Preparis Inc.

Since January 4, one day after researchers disclosed the Meltdown and Spectre attack methods against Intel, AMD, and ARM processors, the Internet has been buzzing. As Eduard Kovacs wrote on Wednesday, Jan. 10, IBM informed customers that it had started analyzing the impact on its own products. The day before, IBM revealed that its POWER processors are affected.

A published report from Virendra Soni on January 11, covering the Consumer Electronics Show (CES) 2018 in Las Vegas, described how Nvidia CEO Jensen Huang explained the scramble among technology leaders to find patches for the Spectre and Meltdown attacks. These attacks enable hackers to steal private information off users' CPUs running processors from Intel, AMD, and ARM.

For DancingDinosaur readers, that puts the latest POWER chips and systems at risk. At this point, it is not clear how far beyond POWER systems the problem reaches. "We believe our GPU hardware is immune. As for our driver software, we are providing updates to help mitigate the CPU security issue," Nvidia wrote in its security bulletin.

Nvidia also reports releasing updates for its software drivers that interact with vulnerable CPUs and operating systems. The vulnerabilities come in three variants: Variant 1, Variant 2, and Variant 3. Nvidia has released driver updates for Variants 1 and 2. The company notes none of its software is vulnerable to Variant 3. Nvidia reported providing security updates for these products: GeForce, Quadro, NVS Driver Software, Tesla Driver Software, and GRID Driver Software.

IBM has made no public comment on which of its systems are affected, but Red Hat has. According to Soni: "Red Hat last week reported that IBM's System Z and POWER platforms are exploited by Spectre and Meltdown."

So what is a data center manager with a major investment in these systems to do? Meltdown and Spectre "obviously are a very big problem," reports Timothy Prickett Morgan, a leading analyst at The Next Platform, an authoritative website following the server industry. "Chip suppliers and operating systems and hypervisor makers have known about these exploits since last June, and have been working behind the scenes to provide corrective countermeasures to block them… but rumors about the speculative execution threats forced the hands of the industry, and last week Google put out a notice about the bugs and then followed up with details about how it has fixed them in its own code." Read it here.

Chipmakers AMD and ARM put out statements saying they are affected only by Variant 1 of the speculative execution exploits (a Spectre variety known as bounds check bypass) and by Variant 2 (also a Spectre exploit, known as branch target injection). AMD, reports Morgan, also emphasized that it has absolutely no vulnerability to Variant 3, a speculative execution exploit called rogue data cache load and known colloquially as Meltdown. This is due, he noted, to architectural differences between Intel's X86 processors and AMD's clones.

As for IBM, Morgan noted: its Power chips are affected, at least back to the Power7 from 2010 and continuing forward to the brand new Power9. In its statement, IBM said that it would have patches out for firmware on Power machines using Power7+, Power8, Power8+, and Power9 chips on January 9, which passed, along with Linux patches for those machines; patches for the company’s own AIX Unix and proprietary IBM i operating systems will not be available until February 12. The System z mainframe processors also have speculative execution, so they should, in theory, be susceptible to Spectre but maybe not Meltdown.

That still leaves a question about the vulnerability of the IBM LinuxONE and the processors spread throughout the z systems. Ask your IBM rep when you can expect mitigation for those too.

Just patching these costly systems should not be satisfying enough. There is a performance price that data centers will pay. Google noted a negligible impact on performance after it deployed one fix across its millions of Linux systems, said Morgan. There has been speculation, Google continued, that the deployment of KPTI (a mitigation fix) causes significant performance slowdowns. As far as is known, there is no fix for Spectre Variant 1 attacks, which have to be fixed on a binary-by-binary basis, according to Google.

Red Hat went further and actually ran benchmarks. The company tested its Enterprise Linux 7 release on servers using Intel’s “Haswell” Xeon E5 v3, “Broadwell” Xeon E5 v4, and “Skylake,” the upcoming Xeon SP processors, and showed impacts that ranged from 1-19 percent. You can demand these impacts be reflected in reduced system prices.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM 3Q17 Results Break Consecutive Quarters Losing Streak

November 2, 2017

DancingDinosaur generally does not follow the daily gyrations of IBM's stock, assuming that readers like you are not really active investors in the company's stock. That is not to say, however, that you don't have an important, even critical, interest in the company's fortunes. As users of Z or Power systems, you want to know that IBM has the means to continue to invest in and advance your preferred platform. And a losing streak of 20+ consecutive quarters doesn't exactly inspire confidence.

What is interesting about IBM’s latest 3Q17 financials, which ends the string of consecutive revenue losses, is the performance of the Z and storage, two things most of us are concerned with.

Blockchain simplifies near real-time clearing and settlement

Here is what Martin Schroeter, IBM Senior Vice President and Chief Financial Officer said to the investment analysts he briefs: In Systems, we had strong growth driven by the third consecutive quarter of growth in storage, and a solid launch of our new z14 mainframe, now just called Z, which was available for the last two weeks of the quarter.

DancingDinosaur has followed the mainframe for several decades at least, and the introduction of a new mainframe always boosts revenue for the next quarter or two. The advantages were apparent on Day 1 when the machine was introduced. As DancingDinosaur wrote: You get this encryption automatically, virtually for free. IBM insists it will deliver the z14 at the same price/performance of the z13 or less. The encryption is built into the cost of silicon out of the box.

A few months later IBM introduced a new LinuxOne mainframe, the Emperor II. The new LinuxOne doesn’t yet offer pervasive encryption but provides Secure Service Containers. As it was described here at that time: Through the Secure Service Container data can be protected against internal threats at the system level even from users with elevated credentials or hackers who obtain a user’s credentials, as well as external threats.

Software developers will benefit by not having to create proprietary dependencies in their code to take advantage of these security capabilities. An application only needs to be put into a Docker container for Secure Service Container deployment. The application can be managed using the Docker and Kubernetes tools that are included to make Secure Service Container environments easy to deploy and use. Again, it will likely take a few quarters for LinuxONE shops and other Linux shops to seek out the Emperor II and Secure Service Containers.

Similarly, in recent weeks, IBM has been bolstering its storage offerings. As Schroeter noted, storage, including Spectrum storage and flash, has been experiencing a few positive quarters, and new products should help continue that momentum. For example, products like IBM Spectrum Protect Plus promise to make data protection available in as little as one hour.

Or the IBM FlashSystem 900, introduced at the end of October, promises to deliver efficient, ultra-dense flash with CAPEX and OPEX savings due to 3x more capacity in a 2U enclosure. It also offers to maximize efficiency using inline data compression with no application performance impact as it achieves consistent 95-microsecond response times.

But probably the best 3Q news came from the continuing traction IBM’s strategic imperatives are gaining. Here these imperatives—cloud, security, cognitive computing—continue to make a serious contribution to IBM revenue. Third-quarter cloud revenues increased 20 percent to $4.1 billion.  Cloud revenue over the last 12 months was $15.8 billion, including $8.8 billion delivered as-a-service and $7.0 billion for hardware, software and services to enable IBM clients to implement comprehensive cloud solutions.  The annual exit run rate for as-a-service revenue increased to $9.4 billion from $7.5 billion in the third quarter of 2016.  In the quarter, revenues from analytics increased 5 percent.  Revenues from mobile increased 7 percent and revenues from security increased 51 percent. Added Schroeter: Revenue from our strategic imperatives over the last 12 months was also up 10% to $34.9 billion, and now represents 45% of IBM.

OK, so IBM is no longer a $100+ billion company and hasn't been for some time. Maybe in a few years, if blockchain and the strategic imperatives continue to grow and quantum catches fire, it may be back over the $100 billion mark, but it's not clear how much that matters.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

IBM Moves Quantum Computing Toward Commercial Systems

September 20, 2017

IBM seems determined to advance quantum computing. Just this week IBM announced that its researchers developed a new approach to simulating molecules on a quantum computer that may one day help revolutionize chemistry and materials science. In this case, the researchers implemented a novel algorithm that is efficient with respect to the number of quantum operations required for the simulation. This involved a 7-qubit processor.

7-qubit processor

In the diagram above IBM scientists successfully used six qubits on a purpose-built seven-qubit quantum device to address the molecular structure problem for beryllium hydride (BeH2) – the largest molecule simulated on a quantum computer to date.

Back in May, IBM announced an even bigger quantum device. It prototyped its first commercial processor with 17 qubits, leveraging significant materials, device, and architecture improvements to make it the most powerful quantum processor IBM had created to date. This week's announcement certainly didn't surpass it in size. IBM engineered the 17-qubit system to be at least twice as powerful as what is available today to the public on the IBM Cloud, and it will be the basis for the first IBM Q early-access commercial systems.

It has become apparent to the scientists and researchers who try to work with complex mathematical problems and simulations that the most powerful conventional commercial computers are not up to the task. Even the z14 with its 10-core CPU and hundreds of additional processors dedicated to I/O cannot do the job.

As IBM puts it: Even today’s most powerful supercomputers cannot exactly simulate the interacting behavior of all the electrons contained in a simple chemical compound such as caffeine. The ability of quantum computers to analyze molecules and chemical reactions could help accelerate research and lead to the creation of novel materials, development of more personalized drugs, or discovery of more efficient and sustainable energy sources.

The interplay of atoms and molecules is responsible for all matter that surrounds us in the world. Now “we have the potential to use quantum computers to boost our knowledge of natural phenomena in the world,” said Dario Gil, vice president of AI research and IBM Q, IBM Research. “Over the next few years, we anticipate IBM Q systems’ capabilities to surpass what today’s conventional computers can do, and start becoming a tool for experts in areas such as chemistry, biology, healthcare and materials science.”

So commercial quantum systems are coming. Are you ready to bring a quantum system into your data center? Actually, you can try one today for free here or through GitHub, which offers a Python software development kit for writing quantum computing experiments, programs, and applications. Although DancingDinosaur will gladly stumble through conventional coding, quantum computing probably exceeds his frustration level even with a Python development kit.
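For the curious, that Python SDK (Qiskit) makes a first experiment surprisingly short. The sketch below assumes a pre-1.0 Qiskit release with the bundled Aer simulator; the API has shifted across versions, so treat it as illustrative rather than definitive:

```python
# A first quantum experiment with the open-source Qiskit SDK: put two qubits
# into an entangled (Bell) state and measure them. Assumes a pre-1.0 Qiskit
# release with the Aer simulator installed; the API has changed across versions.
from qiskit import QuantumCircuit, Aer, execute

qc = QuantumCircuit(2, 2)
qc.h(0)            # put qubit 0 into superposition
qc.cx(0, 1)        # entangle qubit 1 with qubit 0
qc.measure([0, 1], [0, 1])

backend = Aer.get_backend("qasm_simulator")
counts = execute(qc, backend, shots=1024).result().get_counts()
print(counts)      # expect roughly half '00' and half '11'
```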

However, if your organization is involved in these industries—materials science, chemistry, and the like or is wrestling with a problem you cannot do on a conventional computer—it probably is worth a try, especially for free. You can try an easy demo card game that compares quantum computing with conventional computing.

But as reassuring as IBM makes quantum computing sound, don't kid yourself; it is very complicated. Deploying even a small qubit machine is not going to be like buying your first PC. Quantum bits, reportedly, are very fragile and transitory. Labs keep them very cold just to stabilize the system and keep them from switching their states before they should. Just think how you'd feel about your PC if the bit states of 0 and 1 suddenly and inexplicably changed.

That's not the only possible headache. You only have limited time to work with qubits, given their current volatility when not supercooled. Also, work still is progressing on advancing the quantum frameworks and mapping out ecosystem enablement.

Even IBM researchers admit that some problems may not run better on quantum computers. Until you pass a certain threshold, such as qubit volume, your workload might not perform better on a quantum computer at all. The IBM quantum team suggests it will take until 2021 to consistently solve a problem with commercial relevance using quantum computing.

Until then, and even after, IBM is talking about a hybrid approach in which parts of a problem are solved with a quantum computer and the rest with a conventional system. So don’t plan on replacing your Z with a few dozen or even hundreds of qubits anytime soon.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM Blockchain Platform Aims for Immutable Accuracy

August 25, 2017

Earlier this week IBM announced a major blockchain collaboration among a group of leading companies across the global food supply chain. The goal is to reduce the number of people falling ill or even dying from eating contaminated food. IBM's solution is its blockchain platform, which it believes is ideally suited to help address these challenges because it establishes a trusted environment that tracks all transactions in an accurate, consistent, immutable version.
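The immutability claim rests on a simple mechanism: each block records a hash of the previous block, so rewriting any past transaction breaks every hash that follows. The bare-bones Python sketch below shows only that chaining idea; Hyperledger Fabric layers consensus, permissioning, and smart contracts on top of it.

```python
# Bare-bones illustration of why a blockchain ledger is tamper-evident: each
# block includes the hash of the previous block, so altering any earlier
# transaction invalidates every later hash. Real platforms such as Hyperledger
# Fabric add consensus, permissioning, and smart contracts on top of this.
import hashlib, json

def make_block(transactions, prev_hash):
    body = json.dumps({"tx": transactions, "prev": prev_hash}, sort_keys=True)
    return {"tx": transactions, "prev": prev_hash,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

def verify(chain):
    for i, block in enumerate(chain):
        body = json.dumps({"tx": block["tx"], "prev": block["prev"]}, sort_keys=True)
        if block["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False                      # block contents were altered
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False                      # chain linkage broken
    return True

chain = [make_block(["farm A ships lot 17"], prev_hash="0" * 64)]
chain.append(make_block(["distributor B receives lot 17"], chain[-1]["hash"]))
print(verify(chain))                          # True
chain[0]["tx"] = ["farm A ships lot 99"]      # attempt to rewrite history
print(verify(chain))                          # False: tampering is evident
```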

Blockchain can improve food traceability

The food segment is just one of many industries IBM will target for its blockchain platform. It describes the platform as ideally suited to help address varied industry challenges because it establishes a trusted environment for all transactions. IBM claims it as the only fully integrated, enterprise-ready blockchain platform designed to accelerate the development, governance, and operation of a multi-institution business network. Rival vendors, like Accenture, may disagree. In the case of the global food supply chain, all participants (growers, suppliers, processors, distributors, retailers, regulators, and consumers) can gain permissioned access to known and trusted information regarding the origin and state of food. In December 2016 DancingDinosaur reported on IBM and Walmart using blockchain for food safety.

IBM's blockchain platform is built around Hyperledger Composer, which integrates with popular development environments using open developer tools and uses accepted business terms to generate blockchain code and smart contracts. It also includes sample industry use cases. Using IBM's platform, developers can create standard business language in JavaScript, and the APIs help keep development work at the business level rather than being highly technical. This makes it possible for most any programmer to be a blockchain developer. Additionally, a variety of IBM Developer Journeys for blockchain are available, featuring free open source code, documentation, APIs, architecture diagrams, and one-click-deployment Git repositories to fast-track building, according to IBM.

For governance and operation, the platform provides activation tools for new networks, members, smart contracts, and transaction channels. It also includes a multi-party workflow tool with a member activities panel, integrated notifications, and secure signature collection for policy voting. In addition, a new class of democratic governance tools, designed to help improve productivity across the organizations, uses a voting process that collects signatures from members to govern member invitation, distribution of smart contracts, and the creation of transaction channels. By enabling the quick onboarding of participants, assigning roles, and managing access, organizations can begin transacting via the blockchain fast.

In operating the network, the IBM blockchain platform provides always-on high availability with seamless software and blockchain network updates, a hardened security stack with no privileged access, which blocks malware, and built-in blockchain monitoring for full network visibility. Woven throughout the platform is Hyperledger Fabric. It also provides the highest-level commercially available tamper-resistant protection for encryption keys, FIPS 140-2 Level 4.

Along with its blockchain platform, IBM is advancing other blockchain supply chain initiatives by using the platform for an automated billing and invoicing system. Initial work to use blockchain for invoicing also is underway starting with Lenovo. This will provide an audit-ready solution with full traceability of billing and operational data, and help speed on-boarding time for new vendors and new contract requirements, according to IBM.

The platform leverages IBM’s work for more than 400 organizations. It includes insights gained as IBM has built blockchain networks across industries ranging from financial services, supply chain and logistics, retail, government, and healthcare.

Extensively tested and piloted, IBM's new blockchain platform addresses a wide range of enterprise pain points, including both business and technical requirements around security, performance, collaboration, and privacy. It includes innovation developed through open source collaboration in the Hyperledger community, including the newest Hyperledger Fabric v1.0 framework and Hyperledger Composer blockchain tool, both hosted by the Linux Foundation.

DancingDinosaur has previously noted that the z appears ideal for blockchain. DancingDinosaur based this on the z13’s scalability, security, and performance. The new z14, with its automated, pervasive encryption may be even better.  The Hyperledger Composer capabilities along with the sample use cases promise an easy, simple way to try blockchain among some suppliers and partners.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

New Software Pricing for IBM Z

July 27, 2017

One of the often overlooked benefits of the introduction of a new mainframe like the Z is cost savings. Even though the machine may cost more, the performance and capabilities it delivers typically cost less on a per-unit basis. In the case of the new Z, it's not just a modest improvement in price/performance. With the new Z, IBM announced three new Container Pricing models for IBM Z, providing greatly simplified software pricing that promises flexible deployment with competitive economics vs. public clouds and on-premises x86 environments.

Working on the new IBM Z

Here are the three biggest software pricing changes:

  • Predictable and Transparent Container Pricing—providing organizations greatly simplified software pricing that combines flexible deployment with competitive economics vs. public clouds and on-premises x86 environments. To IBM, a container can be any address space, however large or small. You can have any number of containers. “Container Pricing provides collocated workloads with line-of-sight pricing to a solution,” explained Ray Jones, VP, IBM Z Software and Hybrid Cloud. With container pricing, Jones continued, “the client determines where to deploy using WLM, z/OS and SCRT do the rest.”
  • Application dev and test—highly competitive stand-alone pricing for z/OS based development and test workloads. Organizations can increase their DevTest capacity up to 3 times at no additional MLC cost. This will be based on the organization’s existing DevTest workload size. Or a company can choose the multiplier it wants and set the reference point for both MLC and OTC software.
  • Payment systems pricing is based on the business metric of payments volume a bank processes, not the available capacity. This gives organizations much greater flexibility to innovate affordably in a competitive environment, particularly in the fast-growing Instant Payment segment (a back-of-the-envelope comparison follows this list). To use the new per-payment pricing, Jones added, up-front licensing of IBM Financial Transaction Manager (FTM) software is required.
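To make the payments metric concrete, here is the back-of-the-envelope comparison promised above. Every rate and volume in it is invented purely for illustration; IBM did not publish actual prices with this announcement.

```python
# Back-of-the-envelope comparison of capacity-based (MSU) pricing vs. the new
# per-payment metric. Every number here is invented for illustration; actual
# IBM Z software prices are negotiated and were not published in this post.
msu_capacity = 1200                 # installed capacity the bank must license
price_per_msu_month = 100.0         # hypothetical monthly rate per MSU

payments_per_month = 45_000_000     # instant payments actually processed
price_per_payment = 0.002           # hypothetical rate per payment

capacity_based = msu_capacity * price_per_msu_month
payment_based = payments_per_month * price_per_payment

print(f"Capacity-based charge: ${capacity_based:,.0f}/month")
print(f"Payments-based charge: ${payment_based:,.0f}/month")
# The payments metric scales with business volume rather than installed
# capacity, which is the flexibility the announcement emphasizes.
```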

The Container Pricing options are designed to give clients the predictability and transparency they require for their business. The pricing models are scalable both within and across logical partitions (LPARs) and deliver greatly enhanced metering, capping, and billing capabilities. Container Pricing for IBM Z is planned to be available by year-end 2017 and enabled in z/OS V2.2 and z/OS V2.3.

Jones introduced the software discounts by reiterating that this effort is focused on software container pricing for IBM Z and promised that there will be a technology software benefit with the z14, as there was with the z13. IBM, he added, will offer a way to migrate to the new pricing: “This is a beginning of a new beginning. Clearly as we go forward we want to expand what’s applicable to container pricing.” His clear implication: IBM is intent on expanding the discounting it started when, several years ago, it introduced discounts for mobile transactions running on the z, since skyrocketing mobile transaction volume was driving up monthly software cost averages.

To understand the latest changes you need to appreciate what IBM means by container. This is not just about Docker containers. A container to IBM simply is an address space. An organization can have multiple containers in a logical partition, have as many containers as it wants, and change the size of containers as needed.

The fundamental advantage of IBM's container pricing is that it enables co-location of workloads to improve performance and remove latency, hence IBM's repeated references to line-of-sight pricing. In short, this is about MLC (four-hour) pricing. The new pricing eliminates what goes on inside the container from consideration. The price of the container is just that: the price of the container. It won't impact the four-hour rolling average, resulting in very predictable pricing.
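Put another way, a qualified solution's MSUs simply drop out of the rolling average. The sketch below, again with invented numbers, shows the effect; it assumes the container is billed at its own flat price while the rest of the LPAR continues to be measured on the R4HA.

```python
# Sketch of the container-pricing effect: the qualified solution pays its own
# flat, negotiated price and its MSUs are excluded from the rolling 4-hour
# average that drives MLC charges. All numbers are invented for illustration.
hourly_msus_total = [500, 520, 900, 880, 910, 560]   # whole LPAR, container included
hourly_msus_container = [0, 0, 350, 340, 360, 0]     # the collocated new workload

def peak_r4ha(samples, window=4):
    return max(sum(samples[i - window + 1:i + 1]) / window
               for i in range(window - 1, len(samples)))

print("Peak R4HA, container counted:", round(peak_r4ha(hourly_msus_total)))
without = [t - c for t, c in zip(hourly_msus_total, hourly_msus_container)]
print("Peak R4HA, container carved out:", round(peak_r4ha(without)))
# The new workload no longer inflates the MLC peak; it is billed only at the
# container's own predictable price.
```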

The benefits are straightforward: simplified pricing for qualified solutions and the freedom to deploy in the best way. And IBM can price competitively to the customer's solution; in effect, solution-specific pricing. Combined with the new payments-based price metric, IBM is trying to put together a competitive cost/price story. Of course, it is all predicated on the actual prices IBM finally publishes. Let's hope they are as competitive as IBM implies.

DancingDinosaur never passes up an opportunity to flog IBM for overpricing its systems and services. From discussions with Jones and other IBM managers during the pre-launch briefings, the company may finally understand the need to make the mainframe, or z, or Z, or whatever IBM calls it, price-competitive on an operational level today. Low TCO or low cost of IOPS or low cost of QoS is not the same.

This is especially important now. Managers everywhere appear to be waking up to the need to transform their mainframe-based businesses, at least in part, by becoming competitive digital businesses. DancingDinosaur never imagined that he would post something referencing the mainframe as a cost-competitive system able to rival x86 systems not just on quality of service but on cost. With the IBM Z, the company is talking about competing with an aggressive cost strategy. It's up to you, paying customers, to force them to deliver.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

