IBM 2Q 2016 Report—Where’s z System and POWER?

July 22, 2016

“IBM continues to establish itself as the leading cognitive solutions and cloud platform company,” said Ginni Rometty, IBM chairman, president and chief executive officer, in a statement accompanying the latest IBM 2Q financial report. The strategic imperatives grew: second-quarter revenues from its cloud, analytics, and engagement units increased 12 percent year to year.

IBM Quantum Computing Lab - Friday, April 29, 2016, Yorktown Heights, NY (Jon Simon/Feature Photo Service for IBM)

IBM Quantum Experience delivered via Cloud (Jon Simon/Feature Photo Service for IBM)

Where’s z and POWER? The z and POWER platforms continued to flounder: revenues of $2.0 billion, down 23.2 percent. Revenue reflects z Systems product cycle dynamics; gross profit margin improved in both z Systems and Power. “Product cycle dynamics” refers to the lack of a new z. In the past year IBM introduced the new LinuxONE and, more recently, a z13s, essentially what used to be known as a Business Class mainframe.

There is no hint, however, of a new z, a z14 that will drive product dynamics upward. IBM showed a POWER roadmap going all the way out to the POWER10 in 2020 but nothing comparable for the z.

DancingDinosaur, a longtime big iron bigot, remains encouraged by IBM’s focus on its strategic initiatives and statements like this: “And we continue to invest for growth with recent breakthroughs in quantum computing, Internet of Things and blockchain solutions for the IBM Cloud.” IBM strategic initiatives in cloud, mobile, IoT, and blockchain will drive new use of the mainframe, especially as the projected volumes of things, transactions, users, and devices skyrocket.

Second-quarter revenues from the company’s strategic imperatives — cloud, analytics and engagement — increased 12 percent year to year.  Cloud revenues (public, private and hybrid) for the quarter increased 30 percent.  Cloud revenue over the trailing 12 months was $11.6 billion.  The annual run rate for cloud as-a-service revenue — a subset of total cloud revenue — increased to $6.7 billion from $4.5 billion in the second quarter of 2015.  Revenues from analytics increased 5 percent.  Revenues from mobile increased 43 percent and from security increased 18 percent.

IBM indirectly is trying to boost the z and the cloud. CSC announced an alliance with IBM in which IBM will provide CSC Cloud Managed Services for z Systems. CSC already includes IBM SoftLayer as part of its “Service-enabled Enterprise” strategy. “Cloud for z” extends that offering and will be of interest to current and potential mainframe customers in healthcare, insurance, and finance. CSC still sees life in the managed mainframe market, and IBM Global Technology Services, a competitor to CSC, apparently is happy to let CSC sell managed cloud services for mainframes. All this is taking place as IBM scrambles to secure a leadership share of cloud revenue, and any cloud billing CSC brings will help.

Microsoft, like IBM, claimed big cloud momentum on its fourth-quarter conference call, according to a report in Fortune Magazine. It was enough to send Microsoft’s share price up 4% at one point in after-hours trading.

As Fortune notes, for Microsoft as for IBM and other legacy IT providers like Oracle, putting up big cloud numbers is mandatory as more companies change the way they buy IT products. Instead of purchasing hundreds or thousands of new servers or storage boxes every few years, more companies are running their software and storing their data on shared public cloud infrastructure, like Microsoft Azure, Amazon Web Services, Google Cloud Platform, or the IBM Cloud.

For reporting purposes, Microsoft combines Azure with other products in its intelligent cloud product segment. Overall, that segment’s revenue grew about 7% year over year to $6.7 billion from about $6.3 billion.

Oracle, too, is facing the same scramble to establish an enterprise cloud presence. Cloud software as a service (SaaS) and platform as a service (PaaS) revenues were $690 million, up 66% in U.S. dollars. Total Cloud revenues, including infrastructure as a service (IaaS), were $859 million, up 49% in U.S. dollars. At the same time, Oracle’s hardware revenue fell by 7% to $1.3 billion, and its software license revenue fell by 2% to $7.6 billion.

“We added more than 1,600 new SaaS customers and more than 2,000 new PaaS customers in Q4” (which ended in June), said Oracle CEO, Mark Hurd. “In Fusion ERP alone, we added more than 800 new cloud customers. Today, Oracle has nearly 2,600 Fusion ERP customers in the Oracle Public Cloud — that’s ten-times more cloud ERP customers than Workday.”

Hewlett Packard Enterprise (HPE), along with IBM and Oracle, rounds out the big public enterprise platform vendors. (Dell is private and has acquired EMC.) HPE recently reported its best quarter in years: second-quarter net revenue of $12.7 billion, up 1% from the prior-year period. “Today’s results represent our best performance since I joined in 2011,” said Meg Whitman, president and chief executive officer, Hewlett Packard Enterprise. The businesses comprising HPE grew revenue over the prior-year period on an as-reported basis for the first time in five years.

IBM needs to put up some positive numbers. Seventeen consecutive quarters of declining revenue is getting boring. Wouldn’t it be exciting if a turnaround started with a new enterprise z14?

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM Racks Up Blockchain Success

July 15, 2016

It hasn’t even been a year since IBM first publicly announced (Dec. 17, 2015) its participation in the Linux Foundation’s newest collaborative effort, the Open Ledger Project, a broad-based blockchain initiative. And only this past April did IBM make serious noise publicly about blockchain on the z, here. But since then IBM has been ramping up blockchain initiatives fast.


Courtesy of IBM: LinuxONE Rockhopper

Just this week IBM released the beta of its Blockchain Enterprise Test Network, making public the blockchain security framework it first announced in April. The network enables organizations to easily access a secure, partitioned blockchain network in the cloud to deploy, test, and run blockchain projects.

The IBM Blockchain Enterprise Test Network is a cloud platform built on a LinuxONE system.  Developers can now test four-node networks for transactions and validations with up to four parties.  The Network provides the next level of service for developers ready to go beyond the two-node blockchain service currently available in Bluemix for testing and simulating transactions between two parties. The Enterprise Test Network runs on LinuxONE, which IBM touts as the industry’s most secure Linux server due to the z mainframe’s Evaluation Assurance Level 5+ (EAL5+) security rating.

Also this week, Everledger, a fraud detection system built around big data, announced it is building a business network using IBM Blockchain for its global certification system, which is designed to track valuable items such as diamonds, fine art, and luxury goods through the supply chain.

Things continued to crank up around blockchain with IBM announcing a collaboration with the Singapore Economic Development Board (EDB) and the Monetary Authority of Singapore (MAS). With this arrangement IBM researchers will work with government, industries, and academia to develop applications and solutions based on enterprise blockchain, cyber-security, and cognitive computing technologies. The effort will draw on the expertise in the Singapore talent pool as well as that of the IBM Research network. The resulting center also is expected to engage with small- and medium-sized enterprises to create new applications and grow new markets in finance and trade.

Facilitating this is the cloud. IBM expects new cloud services around blockchain will make these technologies more accessible and enable leaders from all industries to address what are already being recognized as profound and disruptive implications for finance, banking, IoT, healthcare, supply chains, manufacturing, technology, government, the legal system, and more. The hope, according to IBM, is that collaboration with the private sector and multiple government agencies within the same country will advance the use of blockchain and cognitive technologies to improve business transactions across several different industries.

That is exactly the goal of blockchain. In a white paper from the IBM Institute for Business Value on blockchain, here, the role of blockchain is that of a distributed, shared, secure ledger. These shared ledgers write business transactions as an unbreakable chain that forms a permanent record viewable by the parties to a transaction. In effect, blockchain shifts the focus from information held by an individual party to the transaction as a whole, a cross-entity history of an asset or transaction. This alone promises to reduce or even eliminate friction in the transaction while removing the need for most middlemen.

In that way, the researchers report, an enterprise, once constrained by complexity, can scale without unnecessary friction. It can integrate vertically or laterally across a network or ecosystem, or both. It can be small and transact with super efficiency. Or, it can be a coalition of individuals that come together briefly. Moreover, it can operate autonomously; as part of a self-governing, cognitive network. In effect, distributed ledgers can become the foundation of a secure distributed system of trust, a decentralized platform for massive collaboration. And through the Linux Foundation’s Open Ledger Project, blockchain remains open.
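To make the chaining idea concrete, here is a minimal sketch in Python of an append-only ledger in which each block carries the hash of its predecessor, so tampering with any earlier transaction breaks every later link. It is a toy illustration of the concept only, not the Hyperledger code IBM contributed, which adds consensus, permissioning, and smart contracts on top.

```python
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    """Deterministic SHA-256 hash of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

class MiniLedger:
    """Toy append-only ledger: each block links to the previous block's hash."""
    def __init__(self):
        genesis = {"index": 0, "timestamp": time.time(), "tx": None, "prev": "0" * 64}
        self.chain = [genesis]

    def append(self, transaction: dict) -> dict:
        prev = self.chain[-1]
        block = {
            "index": prev["index"] + 1,
            "timestamp": time.time(),
            "tx": transaction,
            "prev": block_hash(prev),   # link to predecessor
        }
        self.chain.append(block)
        return block

    def verify(self) -> bool:
        """Recompute every link; tampering with an earlier block breaks it."""
        return all(
            self.chain[i]["prev"] == block_hash(self.chain[i - 1])
            for i in range(1, len(self.chain))
        )

ledger = MiniLedger()
ledger.append({"asset": "diamond-123", "from": "mine", "to": "wholesaler"})
ledger.append({"asset": "diamond-123", "from": "wholesaler", "to": "retailer"})
print(ledger.verify())                          # True
ledger.chain[1]["tx"]["to"] = "someone else"    # tamper with history
print(ledger.verify())                          # False
```

The example also shows why the parties to a transaction can all view the same record: validity is a property of the chain itself, not of any one participant's copy.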

Even at this very early stage there is no shortage of takers ready to push the boundaries of this technology. For example, Crédit Mutuel Arkéa recently announced the completion of its first blockchain project to improve the bank’s ability to verify customer identity. The result is an operational permissioned blockchain network that provides a view of customer identity to enable compliance with Know Your Customer (KYC) requirements. The bank’s success demonstrated the disruptive capabilities of blockchain technology beyond common transaction-oriented use cases.

Similarly, Mizuho Financial Group and IBM announced in June a test of the potential of blockchain for use in settlements with virtual currency. Blockchain, by the way, first gained global attention with Bitcoin, an early virtual currency. By incorporating blockchain technology into settlements with virtual currency, Mizuho plans to explore how payments can be instantaneously swapped, potentially leading to new financial services based on this rapidly evolving technology. The pilot project uses the open source code IBM contributed to the Linux Foundation’s Hyperledger Project.

Cloud-based blockchain running on large LinuxONE clusters may turn out to play a big role in ensuring the success of IoT by monitoring and tracking the activity between millions of things participating in a wide range of activities. Don’t let your z data center get left out; at least make sure it can handle Linux at scale.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

Oracle Aims at Intel and IBM POWER

July 8, 2016

In late June Oracle announced the SPARC S7 processor, a new 20nm, 4.27 GHz, 8-core/64-thread SPARC processor targeted for scale-out Cloud workloads that usually go to Intel x86 servers. These are among the same workloads IBM is aiming for with POWER8, POWER9, and eventually POWER10, as reported by DancingDinosaur just a couple of weeks ago.


Oracle 5-year SPARC trajectory (does not include newly announced S series).

According to Oracle, the latest additions to the SPARC platform are built on the new 4.27 GHz, 8-core/64-thread SPARC S7 microprocessor with what Oracle calls Software-in-Silicon features such as Silicon Secured Memory and Data Analytics Accelerators, which enable organizations to run applications of all sizes on the SPARC platform at commodity price points. All existing commercial and custom applications will also run on the new SPARC enterprise cloud services and solutions unchanged while experiencing improvements in security, efficiency, and simplicity.

By comparison, the IBM POWER platform currently centers on POWER8, delivered as a 12-core, 22nm processor. The POWER9, expected in 2017, will be delivered as a 14nm processor with 24 cores and CAPI and NVlink accelerators, which ensure delivery of more performance with greater energy efficiency. By 2018, the IBM roadmap shows POWER8/9 as a 10nm, maybe even a 7nm, processor based on the existing micro-architecture. And an even beefier POWER10 is expected to arrive around 2020.

At the heart of Oracle’s new scale-out, commodity-priced server sits the S7. According to Oracle, the SPARC S7 delivers balanced compute performance with 8 cores per processor, integrated on-chip DDR4 memory interfaces, a PCIe controller, and coherency links. The cores in the SPARC S7 are optimized for running key enterprise software, including Java applications and database. The SPARC S7–based servers use very high levels of integration that increase bandwidth, reduce latencies, simplify board design, reduce the number of components, and increase reliability, according to Oracle. All this promises an increase in system efficiency with a corresponding improvement in the economics of deploying a scale-out infrastructure when compared to other vendor solutions.

Oracle’s SPARC S7 processor, based on Oracle’s enterprise-class M7 servers, is optimized for horizontally scalable systems with all the key functionality included in the microprocessor chip. Its Software-in-Silicon capabilities, introduced with the SPARC M7 processor, are also available in the SPARC S7 processor to enable improved data protection, cryptographic acceleration, and analytics performance. These features include Security-in-Silicon, which provides Silicon Secured Memory and cryptographic acceleration, and Data Analytics Accelerator (DAX) units, which provide in-memory query acceleration and in-line decompression.

SPARC S7 processor–based servers include single- and dual-processor systems that are complementary to the existing mid-range and high-end systems based on Oracle’s SPARC M7 processor. SPARC S7 processor–based servers include two rack-mountable models. The SPARC S7-2 server uses a compact 1U chassis, and the SPARC S7-2L server is implemented in a larger, more expandable 2U chassis. Uniformity of management interfaces and the adoption of standards also should help reduce administrative costs, while the chassis design provides density, efficiency, and economy as increasingly demanded by modern data centers. Published reports put the cost of the new Oracle systems at just above $11,000 with a single processor, 64GB of memory and two 600GB disk drives, and up to about $50,000 with two processors and a terabyte of memory.

DancingDinosaur doesn’t really have enough data to compare the new Oracle system with the new POWER8 and upcoming POWER9 systems. Neither Oracle nor IBM has provided sufficient details. Oracle doesn’t even offer a roadmap at this point, which might tell you something.

What we do know about the POWER machines is this: POWER9 promises a wealth of improvements in speeds and feeds. Although intended to serve the traditional Power Server market, it also is expanding its analytics capabilities and is being optimized for new deployment models like hyperscale, cloud, and technical computing through scale-out deployment. Available for either clustered or multiple formats, it will feature a shorter pipeline, improved branch execution, and low latency on the die cache as well as PCI gen 4.

According to IBM, you can expect a 3x bandwidth improvement with POWER9 over POWER8 and a 33% speed increase. POWER9 also will continue to speed hardware acceleration, support next-gen NVlink and improved coherency, enhance CAPI, and introduce a 25 Gbps high-speed link. Although the 2-socket chip will remain, IBM suggests larger socket counts are coming. It will need that to compete with Intel.

At least IBM showed its POWER roadmap. There is no comparable information from Oracle. At best, DancingDinosaur was able to dig up the following sketchy details: for 2017, a next-gen core with Software-in-Silicon V1 and scale-out systems fully integrated with Software-in-Silicon V1 or V2; for 2018-2019, core enhancements, increased cache, increased bandwidth, and Software-in-Silicon V3.

Both Oracle and IBM have made it clear that neither really wants to compete in the low-cost, scale-out server market. However, as both companies’ large clients turn to scale-out, hyperscale Intel-based systems, they have no choice but to follow the money. With the OpenPOWER Foundation growing and driving innovation, mainly in the form of accelerators, IBM POWER may have an advantage driving a very competitive price/performance story against Intel. With the exception of Fujitsu as an ally of sorts, Oracle has no comparable ecosystem as far as DancingDinosaur can tell.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

Compuware Continues Mainframe GUI Tool Enhancements

July 1, 2016

Early in 2015 Compuware announced the first in what it promised would be a continuing stream of new mainframe tools and tool enhancements. Did anyone really believe them? Mainframe ISVs are not widely regarded for their fast release cycles. DancingDinosaur reported on it then here and has continued to follow up and report its progress through a handful of new releases. This past week, DancingDinosaur received new Compuware mainframe tool announcements. For a mainframe ISV this is almost unheard of. IBM sometimes releases new mainframe products in intense spurts but then quickly resumes its typical languid release pace.


Screen from Compuware’s ISPW for Continuous Delivery to the Mainframe

Let’s take a look at each of these new releases. First is ISPW Deploy, an advanced mainframe release automation solution that enables large enterprises to bring continuous delivery best practices to their IBM z/OS environments. ISPW Deploy, built on the ISPW technology Compuware acquired in January 2016, facilitates faster and more reliable mainframe software deployment. Specifically, according to Compuware, it helps in three ways, through:

  1. Automation that rapidly moves code through the deployment process, including test staging and approvals, while also providing greatly simplified full or partial rollbacks.
  2. Visualization that enables DevOps managers to quickly pinpoint deployment issues in order to both solve immediate rollout problems and address persistent bottlenecks in code promotion.
  3. Integrations with both third-party solutions and Compuware’s own industry-leading mainframe toolkit that allow IT to build complete SCM-to-production DevOps pipelines and to quickly launch associated remediation support tools if and when deployment issues occur.

Compuware is further empowering enterprises to achieve mainframe agility through integrations. For instance, integrating ISPW with XebiaLabs’ cross-platform continuous delivery solutions enables IT organizations to orchestrate and visualize their mainframe DevOps processes in a common manner with their broader cross-platform DevOps automation.

The second announcement focused on XebiaLabs, as noted above. The idea here is to deliver cross-platform continuous releases for the mainframe. As Compuware explained, enterprises using XebiaLabs’ solution suite and Compuware ISPW can now automate and monitor all phases of mainframe DevOps within the same continuous delivery management environment they use for their distributed, web, and cloud platforms. This automation and monitoring includes test/QA, pre-copy staging, and code promotion. The goal, as with all DevOps, is to speed digital agility for mainframe or distributed systems or both.

The third announcement concerned a partnership between Compuware and ConicIT that aims to help a new generation of IT ops staff proactively resolve emerging mainframe issues before they impact application service levels. It does so by integrating ConicIT’s predictive mainframe analytics with Compuware’s Strobe, which provides visually intuitive troubleshooting intelligence. Together, the two companies promise to enable even IT staff with relatively little hands-on mainframe experience to quickly identify and resolve a wide range of application performance problems.

The key to doing this is a reliance on the adoption of intuitive GUI interfaces. Compuware started this with its Topaz tools and has been continuing along this path for two years. Compuware’s CEO, Chris O’Malley, has been harping on these themes almost since he first arrived there.

Compuware customers apparently have gotten the message. As reported: “Market pressures are making it essential for us to deliver quality products and services to our clients more frequently, and the mainframe plays a critical role in that delivery,” according to Craig Danielson, Assistant Vice President for Commerce Bank. “We leverage ISPW to help in this capacity and its new capabilities will provide us the automation and visibility of our software deployment process to help us continuously improve our internal operations and services.” (note: DancingDinosaur did not validate this customer statement.)

Companies will need all the help modern mainframe tools can deliver. Mainframe data centers are facing unprecedented challenges that require unusual speed and agility. In short, they need DevOps fast. And they will have to respond with an increasingly aging core of experienced mainframe staff supplemented by millennials who have to be coaxed and cajoled onto the mainframe with easy graphical tools. If mainframe data centers can’t respond to these challenges—not just cloud, mobile, Linux, and analytics, but also IoT, blockchain, cognitive computing, and whatever else comes along next—how are they going to cope? Already their users, the line of business managers, are turning to shadow IT out of frustration with the slow response from mainframe data centers. And you know what comes next.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM z13 Helps Avoid Costly Data Breaches

June 24, 2016

A global study sponsored by IBM and conducted by the Ponemon Institute found that the average cost of a data breach for companies surveyed has grown to $4 million, representing a 29 percent increase since 2013. With cybersecurity incidents continuing to increase (64 percent more security incidents in 2015 than in 2014), the costs are poised to grow.


z13–world’s most secure system

The z13, at least, is one way to keep security costs down. It comes with a cryptographic processor unit available on every core, enabled as a no-charge feature. It also provides EAL5+ support, a regulatory certification for LPARs, which verifies the separation of partitions to further improve security, along with a dozen or so other built-in security features for the z13. For a full list of z13 security features click here. There also is a Redbook, Ultimate Security with the IBM z13, here. A midsize z, the z13s, brings the benefits of mainframe security and mainframe computing to smaller organizations. You read about the z13s here on DancingDinosaur this past February.

As security threats become more complex, the researchers noted, the cost to companies continues to rise. For example, the study found that companies lose $158 per compromised record. Breaches in highly regulated industries were even more costly, with healthcare reaching $355 per record – a full $100 more than in 2013. And the number of records involved can run from the thousands to the millions.

Wow, why so costly? The researchers try to answer that too: leveraging an incident response team was the single biggest factor associated with reducing the cost of a data breach – saving companies nearly $400,000 on average (or $16 per record). In fact, response activities like incident forensics, communications, legal expenditures and regulatory mandates account for 59 percent of the cost of a data breach. Part of these high costs may be linked to the fact that 70 percent of U.S. security executives report they don’t even have incident response plans in place.

The process of responding to a breach is extremely complex and time-consuming if not properly planned for. As described by the researchers, responding to a breach consists of a minimum of four steps. Among those steps, a company must:

  • Work with IT or outside security experts to quickly identify the source of the breach and stop any more data leakage
  • Disclose the breach to the appropriate government/regulatory officials, meeting specific deadlines to avoid potential fines
  • Communicate the breach with customers, partners, and stakeholders
  • Set up any necessary hotline support and credit monitoring services for affected customers

And not even included in the researchers’ list are tasks like inventorying and identifying the data records that have been corrupted or destroyed, remediating the damaged data, and validating it against the last known clean backup copy. Am surprised the costs aren’t even higher. Let’s not even talk about the PR damage or loss of customer goodwill. Now, aren’t you glad you have a z13?

That’s not even the worst of it. The study also found the longer it takes to detect and contain a data breach, the more costly it becomes to resolve. While breaches that were identified in less than 100 days cost companies an average of $3.23 million, breaches that were found after the 100-day mark cost over $1 million more on average ($4.38 million). The average time to identify a breach in the study was estimated at 201 days and the average time to contain a breach was estimated at 70 days. The cost of a z13 or even the lower cost z13s could justify itself by averting just one data breach.
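A quick back-of-the-envelope calculation using the study’s own per-record figures shows how the numbers add up; the breach sizes below are hypothetical examples, not from the report.

```python
# Rough breach-cost estimator based on the Ponemon figures cited above.
# Per-record costs and the incident-response-team savings come from the study;
# the record counts passed in are hypothetical.

COST_PER_RECORD = {"average": 158, "healthcare": 355}   # USD per compromised record
RESPONSE_TEAM_SAVINGS_PER_RECORD = 16                    # roughly $400K on an average breach

def breach_cost(records: int, industry: str = "average",
                has_response_team: bool = False) -> int:
    """Estimate total breach cost in USD for a given number of records."""
    per_record = COST_PER_RECORD[industry]
    if has_response_team:
        per_record -= RESPONSE_TEAM_SAVINGS_PER_RECORD
    return records * per_record

# A hypothetical healthcare breach of 50,000 records:
print(breach_cost(50_000, "healthcare"))         # 17,750,000
print(breach_cost(50_000, "healthcare", True))   # 16,950,000 with a response team
```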

The researchers also found that companies with predefined Business Continuity Management (BCM) processes in place found and contained breaches more quickly, discovering breaches 52 days earlier and containing them 36 days faster than companies without BCM. Still, the cheapest solution is to avert breaches in the first place.

Not surprisingly, IBM is targeting the incident response business as an up-and-coming profit center. The company increased its investment in the incident response market with the recent acquisition of Resilient Systems, which just came out with an updated version of its platform that graphically displays the relationships between Indicators of Compromise (IOCs) and incidents in an organization’s environment. But the z13 is probably a better investment if you want to avoid data breaches in the first place.

Surprisingly, sometimes your blogger is presented as a mainframe guru. Find the latest here.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM Fires a Shot at Intel with its Latest POWER Roadmap

June 17, 2016

In case you worry that IBM will abandon hardware in the pursuit of its strategic initiatives focusing on cloud, mobile, analytics and more; well, stop worrying. With the announcement of its POWER Roadmap at the OpenPOWER Summit earlier this spring, it appears POWER will be around for years to come. But IBM is not abandoning the strategic initiatives either; the new Roadmap promises to support new types of workloads, such as real time analytics, Linux, hyperscale data centers, and more along with support for the current POWER workloads.


Pictured above: POWER9 Architecture, courtesy of IBM

Specifically, IBM is offering a denser roadmap, not tied to technology and not even tied solely to IBM. It draws on innovations from a handful of members of the OpenPOWER Foundation as well as support from Google. The new roadmap also signals IBM’s intention to make a serious run at Intel’s near monopoly on enterprise server processors by offering comparable or better price, performance, and features.

Google, for example, reports porting many of its popular web services to run on Power systems; its toolchain has been updated to output code for x86, ARM, or Power architectures with the flip of a configuration flag. Google, which strives to be everything to everybody, now has a highly viable alternative to Intel in terms of performance and price with POWER. At the OpenPOWER Summit early in the spring, Google made it clear it plans to build scale-out server solutions based on OpenPower.

Don’t even think, however, that Google is abandoning Intel. The majority of its systems are Intel-oriented. Still, POWER and the OpenPOWER community will provide a directly competitive processing alternative.  To underscore the situation Google and Rackspace announced they were working together on Power9 server blueprints for the Open Compute Project, designs that reportedly are compatible with the 48V Open Compute racks Google and Facebook, another hyperscale data center, already are working on.

Google represents another proof point that OpenPOWER is ready for hyperscale data centers. DancingDinosaur, however, is most interested in what is coming from OpenPOWER that is new and sexy for enterprise data centers, since that is where most of its readers are focused. Of course, they still need ever better performance and scalability too. In that regard OpenPOWER has much in the works for them.

For starters, POWER8 is currently delivered as a 12-core, 22nm processor. POWER9, expected in 2017, will be delivered as a 14nm processor with 24 cores and CAPI and NVlink accelerators. That is sure to deliver more performance with greater energy efficiency. By 2018, the IBM roadmap shows POWER8/9 as a 10nm, maybe even 7nm, processor, based on the existing micro-architecture.

The real POWER future, arriving around 2020, will feature a new micro-architecture, sport new features and functions, and bring new technology. Expect much, if not almost all, of the new functions to come from various OpenPOWER Foundation partners.

POWER9, only a year or so out, promises a wealth of improvements in speeds and feeds. Although intended to serve the traditional Power Server market, it also is expanding its analytics capabilities and bringing new deployment models for hyperscale, cloud, and technical computing through scale-out deployment. This will include deployment in either clustered or multiple formats. It will feature a shorter pipeline, improved branch execution, and low latency on the die cache as well as PCI gen 4.

Expect a 3x bandwidth improvement with POWER9 over POWER8 and a 33% speed increase. POWER9 also will continue to speed hardware acceleration, support next-gen NVlink and improved coherency, enhance CAPI, and introduce a 25 Gbps high-speed link. Although the 2-socket chip will remain, IBM suggests larger socket counts are coming. It will need that to compete with Intel.

As a data center manager, will a POWER9 machine change your data center dynamics? Maybe; you decide: a dual-socket POWER9 server with 32 DDR4 memory slots, two NVlink slots, three PCIe gen-4 x16 slots, and a total of 44 cores. That’s a lot of computing power in one rack.

Now IBM just has to crank out similar advances for the next z System (a z14 maybe?) through the Open Mainframe Project.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM to Acquire EZSource to Bolster IBM z System for Digital Transformation

June 9, 2016

Over the past two years you have been reading in DancingDinosaur about the new tricks your z System can do—real time analytics, IoT, blockchain, and more. This is part of the digital transformation that is blowing through enterprises everywhere. EZSource facilitates and simplifies how you can play in these new areas. See the IBM announcement here.


EZSource Dashboard, Credit: EZSource

EZSource expedites digital transformations by unlocking core business logic and apps. Specifically, it will pinpoint your valuable mainframe code assets in preparation for leveraging them through a hybrid cloud strategy. In the process it will enable understanding of business-critical assets in preparation for deploying a z-centered hybrid cloud. This also enables enterprise DevOps, which is necessary to keep up with the pace of change overtaking existing business processes.

Specifically, this can entail the need to:

  • Identify API candidates to play in the API economy
  • Embrace micro services to deliver versatile apps fast
  • Identify code quality concerns, including dead code, to improve reliability and maintainability
  • Mitigate risk of change through understanding code, data and schedule interdependencies
  • Aid in the sizing of a change effort
  • Automate documentation to improve understanding
  • Reduce learning curve as new people are on-boarded
  • Add application understanding to DevOps lifecycle information to identify opportunities for work optimization

Managers of z data centers often shy away from modifying aging business-critical applications for fear of breaking something—if it ain’t broke, don’t fix it is often the mantra. They also are rationing the use of their few remaining experienced z veterans, those with the domain expertise and deep knowledge of what often turns out to be quite esoteric code. This is further aggravated by poorly documented mainframe code. The way to mitigate this risk of change is through understanding code, data, and interdependencies. EZSource can handle this visually and with ease; you no longer need to be an experienced z code expert.

So what do you have to do to get on the digital transformation bandwagon? Start by identifying your mainframe assets that are most often accessed. Most of them will be what the mobile apps are calling, usually a CICS routine or two or three. Then expose these business-critical services through APIs and microservices, as sketched below. This may require rewriting parts of them as platform-agnostic components, often in Java, to work within the context of a hybrid cloud. As noted just above, EZSource can help with much of this too.
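As a purely illustrative sketch of that pattern, not an EZSource or IBM API, the wrapper below exposes a hypothetical CICS inquiry transaction as a REST endpoint; the endpoint path, transaction name, and call_cics_transaction bridge are all invented for the example.

```python
from flask import Flask, jsonify

app = Flask(__name__)

def call_cics_transaction(tran_id: str, payload: dict) -> dict:
    """Hypothetical bridge to an existing CICS routine (for example via a
    z/OS connectivity layer); stubbed out here for illustration only."""
    # In a real deployment this would invoke the mainframe transaction
    # and map its output areas to a dict.
    return {"tran": tran_id, "status": "OK", "echo": payload}

@app.route("/api/v1/accounts/<account_id>/balance", methods=["GET"])
def account_balance(account_id: str):
    # Expose a business-critical CICS inquiry as a simple REST resource.
    result = call_cics_transaction("ACCTINQ", {"account": account_id})
    return jsonify(result)

if __name__ == "__main__":
    app.run(port=8080)
```

The point of the sketch is the shape of the work: the mainframe routine stays where it is, and a thin, platform-agnostic layer makes it callable from mobile and cloud apps.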

In short, EZSource performs app discovery, which facilitates code quality improvement. It helps clean up code. It also applies analytics to DevOps, in effect enabling Cognitive DevOps, which makes sense in the dynamic hybrid cloud. The result: you focus only on the relevant code and, of that, what is particularly problematic.

The goal is to increase competitiveness and business innovation through digital forms of engagement; the engagement currently being fueled by mobile, social, analytic, and cognitive computing in a hybrid cloud environment. The expectation is that you will be able to tap the accumulated software assets for insights while modernizing business critical applications already resident on the z. IBM contends that this is the fastest and most cost effective way to drive new value and agility and DancingDinosaur agrees.

Is it worth it?  Most DancingDinosaur readers probably already believe that the mainframe and its data and software assets sit smack at the center of a digital enterprise. (Just a glimpse of the growth of monthly software peak workload charges should confirm that). It makes no sense not to leverage this complex resource to the max. EZSource, with its smart code analytics and visual capabilities, can save thousands of hours of work, avoid mistakes, and speed the delivery of the kind of slick new hybrid cloud apps that customers are demanding.  EZSource is primarily a Linux, Windows, and Java tool with only a few pieces residing on the z to handle necessary z-specific connectivity.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

Medical Mutual Gains Fast Access to z/OS Log Data via Splunk and Ironstream

June 3, 2016

Running Syncsort’s Ironstream and leveraging Splunk Enterprise, Medical Mutual of Ohio has now implemented mainframe security monitoring in real time through the Splunk® Enterprise platform. One goal is to help protect customer information stored in DB2 from unauthorized access. Syncsort’s Ironstream, a utility, collects and forwards z/OS log data, including security data, to Splunk Enterprise and Splunk Enterprise Security.


z/OS security data, courtesy of Syncsort

“We’ve always had visibility. Now we can get it faster, in real time directly from the mainframe,” said the insurer’s enterprise security supervisor. Previously, the company would do a conventional data transfer, which could take several hours. The new approach, sometimes referred to as a big iron-to-big data strategy, now delivers security log data in near real time. This enables the security team to correlate all the security data from across the enterprise to effectively and quickly gain visibility into user-authentication data and access attempts tracked on the mainframe. And they can do it without needing specialized expertise or different monitoring systems for z/OS.

Real-time analytics, including real-time predictive analytics, are increasingly attractive as solutions for the growing security challenges organizations are facing. These challenges are due, in large part, to the explosion of transaction activity driven by mobile computing and, soon, IoT and blockchain, most of which eventually finds its way to the mainframe. All of these present immediate security concerns and require fast, nearly instant security decisions. Even cloud usage, which one would expect to be mainstream in enterprises by now, often is curtailed due to security fears.

With the Ironstream and Splunk combination, Medical Mutual can see previously slow-to-access mainframe data alongside other security information it was already analyzing in Splunk Enterprise. Splunk Enterprise enables a consolidated enterprise-wide view of machine data collected across the business, which makes it possible to correlate events that might not raise suspicion alone but could be indicative of a threat when seen together.
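The sketch below illustrates the kind of cross-platform correlation described above, in Python rather than Splunk’s own search language; the event records, field names, and five-minute window are hypothetical, chosen only to show how a failed mainframe logon followed quickly by activity on another platform can be flagged.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical normalized security events, as they might arrive from
# z/OS (via a log forwarder) and from distributed or Windows servers.
events = [
    {"time": "2016-06-01T09:00:05", "platform": "z/OS",    "user": "JSMITH", "action": "LOGON_FAIL"},
    {"time": "2016-06-01T09:00:40", "platform": "z/OS",    "user": "JSMITH", "action": "LOGON_FAIL"},
    {"time": "2016-06-01T09:01:10", "platform": "windows", "user": "JSMITH", "action": "DB_ACCESS"},
    {"time": "2016-06-01T14:30:00", "platform": "z/OS",    "user": "AJONES", "action": "LOGON_FAIL"},
]

WINDOW = timedelta(minutes=5)

def parse(ts: str) -> datetime:
    return datetime.strptime(ts, "%Y-%m-%dT%H:%M:%S")

# Group events per user, then flag users whose mainframe logon failures
# are followed by activity on another platform within the window.
by_user = defaultdict(list)
for ev in events:
    by_user[ev["user"]].append(ev)

for user, evs in by_user.items():
    evs.sort(key=lambda e: parse(e["time"]))
    fails = [e for e in evs if e["platform"] == "z/OS" and e["action"] == "LOGON_FAIL"]
    for f in fails:
        related = [
            e for e in evs
            if e["platform"] != "z/OS"
            and timedelta(0) <= parse(e["time"]) - parse(f["time"]) <= WINDOW
        ]
        if related:
            print(f"ALERT: {user} failed a z/OS logon and then touched "
                  f"{related[0]['platform']} within {WINDOW}")
            break
```

Neither event on its own looks alarming; seen together in one place, they do, which is the whole argument for streaming mainframe logs into the same platform as everything else.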

The deployment proved to be straightforward. Medical Mutual’s in-house IT team set it up in a week with Syncsort answering deployment questions to assist. Although there are numerous tools to capture log data from the mainframe, the insurer chose to go with the Splunk-Ironstream combination because it already was using Splunk in house for centralized logging. Adding mainframe security logs was an easy step. “This was affordable and it saved us from having to learn another product,” the security supervisor added. Medical Mutual runs a z13, model 409 with Ironstream.

According to the announcement, having Ironstream deliver z/OS log data to Splunk Enterprise enables the organization to:

  • Track security events and data from multiple platforms, including IBM z/OS mainframes, Windows, and distributed servers, and correlate the information in Splunk Enterprise for better security.
  • Diagnose and respond to high severity security issues more quickly since data from across the entire enterprise is being monitored in real time.
  • Provide monthly and daily reporting with an up-to-the-minute account of unusual user activity.
  • Detect security anomalies and analyze their trends – the cornerstone of Security Information and Event Management (SIEM) strategies.

Real-time monitoring with analytics has proven crucial for security. You can actually detect fraud while it is taking place and before serious damage is done. It is much harder to recoup losses hours, days, or, what is often the case, months later.

The Splunk platform can handle massive amounts of data from different formats and indexes and decipher and correlate security events through analytics. Ironstream brings the ability to stream mainframe security data for even greater insights, and Ironstream’s low overhead keeps mainframe processing costs low.

To try the big iron-to-big data strategy organizations can download a free Ironstream Starter Edition and begin streaming z/OS Syslog data into Splunk solutions. Unlike typical technology trials, the Starter Edition is not time-limited and may be used in production at no charge. This includes access to the Ironstream applications available for download on Splunkbase.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

BMC Annual Mainframe Survey Starts Now

May 24, 2016

Each year, BMC conducts a survey of mainframe users to identify trends in mainframe usage. The 2016 survey cycle kicks off May 24, and BMC will be collecting data for its eleventh survey, which will be published this fall. The company is reaching out to all those involved in mainframe management or operations to participate.

The BMC mainframe survey is one of the largest in the industry, with more than 1,200 mainframe professionals and executives participating. DancingDinosaur almost always covers the new survey; you can see recent coverage here and here. The results of the survey are used by vendors, industry analysts, media, and technical and executive users to make significant decisions and draw conclusions on the current and future state of mainframe usage.

To participate in the 20-minute survey, technical IT staff involved in mainframe management or operations, mainframe executives, managers, or technical architects should visit http://bmc.co1.qualtrics.com/jfe/form/SV_81caWGB4gUqRncF?survey_source=external. The survey will be open from May 24 to June 6, 2016.

IBM Advances SSD with Phase-Change Memory Breakthrough

May 20, 2016

Facing an incessant demand to speed data through computers, the latest IBM storage memory advance, announced earlier this week, will ratchet up the speed another notch or two. Scientists at IBM Research have demonstrated storing 3 bits of data per cell using phase-change memory (PCM). Until now, PCM had been tried but never caught on for a variety of reasons. By storing 3 bits per cell, IBM can boost PCM capacity and speed and lower its cost.


IBM multi-bit PCM chip connected to a standard integrated circuit board.

Pictured above, the chip consists of a 2 × 2 Mcell array with a 4-bank interleaved architecture, IBM explained. The memory array size is 2 × 1000 μm × 800 μm. The PCM cells are based on doped-chalcogenide alloy and were integrated into the prototype chip serving as a characterization vehicle in 90 nm CMOS baseline technology.

Although PCM has been around for some years, only with this latest advance is it attracting the industry’s attention as a potential universal memory technology based on its combination of read/write speed, endurance, non-volatility, and density. Specifically, PCM doesn’t lose data when powered off, unlike DRAM, and the technology can endure at least 10 million write cycles, compared to an average flash USB stick, which tops out at 3,000 write cycles. Primary use cases will be capturing the massive volumes of data expected from mobile devices and the Internet of Things.

PCM, in effect, adds another tier to the storage/memory hierarchy, coming in between DRAM and Flash at the upper levels of the storage performance pyramid. The IBM researchers envision both standalone PCM and hybrid applications, which combine PCM and flash storage together. For example, PCM can act as an extremely fast cache by storing a mobile phone’s operating system and enabling it to launch in seconds. For enterprise data centers, IBM envisions entire databases could be stored in PCM for blazing fast query processing of time-critical online applications, such as financial transactions.

As reported by CNET, PCM fits neatly between DRAM and flash. DRAM is 5-10x faster at retrieving data than PCM, while PCM is about 70x faster than flash. IBM reportedly expects PCM to be cheaper than DRAM, eventually becoming as cheap as flash (of course flash keeps getting cheaper too). PCM’s ability to hold three bits of data rather than two, PCM’s previous best, enables packing more data into a chip, which lowers the cost of PCM storage and boosts its competitive position against technologies like flash and DRAM.

“Phase change memory is the first instantiation of a universal memory with properties of both DRAM and flash, thus answering one of the grand challenges of our industry,” wrote Haris Pozidis, key researcher and manager of non-volatile memory research at IBM Research, in the published announcement. “Reaching 3 bits per cell is a significant milestone because at this density the cost of PCM will be significantly less than DRAM and closer to flash.”

IBM explains how PCM works: PCM materials exhibit two stable states, the amorphous (without a clearly defined structure) and crystalline (with structure) phases, of low and high electrical conductivity, respectively. In digital systems, data is stored as a 0 or a 1. To store a 0 or a 1 on a PCM cell, a high or medium electrical current is applied to the material. A 0 can be programmed to be written in the amorphous phase or a 1 in the crystalline phase, or vice versa. Then to read the bit back, a low voltage is applied.

To achieve multi-bit storage IBM scientists have developed two innovative enabling technologies: 1) a set of drift-immune cell-state metrics and 2) drift-tolerant coding and detection schemes. These new cell-state metrics measure a physical property of the PCM cell that remains stable over time, and are thus insensitive to drift, which affects the stability of the cell’s electrical conductivity with time. The other measures provide additional robustness of the stored data. As a result, the cell state can be read reliably over long time periods after the memory is programmed, thus offering non-volatility.

Combined, these advancements address the key challenges of multi-bit PCM—drift, variability, temperature sensitivity, and endurance cycling, according to IBM. From there, the experimental multi-bit PCM chip used by IBM scientists is connected to a standard integrated circuit board.
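For a rough feel of what 3 bits per cell means, the toy model below maps eight nominal cell states to eight 3-bit codes and reads a cell back by picking the nearest state. The level values and the tolerance to read disturbance are invented for illustration and say nothing about IBM’s actual drift-immune metrics or coding schemes.

```python
# Toy model of multi-level PCM readout: 8 programmed levels encode 3 bits
# per cell. The level values below are invented for illustration only;
# real PCM must also compensate for resistance drift over time, which is
# what IBM's drift-immune metrics and drift-tolerant coding address.

LEVELS = [1, 2, 4, 8, 16, 32, 64, 128]      # nominal cell states (arbitrary units)

def encode(bits: str) -> float:
    """Program a cell: map a 3-bit pattern to one of 8 nominal levels."""
    return LEVELS[int(bits, 2)]

def decode(measured: float) -> str:
    """Read a cell: pick the nearest nominal level, return its 3-bit code."""
    nearest = min(range(len(LEVELS)), key=lambda i: abs(LEVELS[i] - measured))
    return format(nearest, "03b")

cell = encode("101")          # program the cell to level 5
print(decode(cell))           # '101'
print(decode(cell * 1.2))     # still '101' despite a 20% read disturbance
```

Going from 2 bits (four levels) to 3 bits (eight levels) per cell is what packs more data into the same silicon and drives the cost per bit down toward flash.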

Expect to see PCM first in Power Systems. At the 2016 OpenPOWER Summit in San Jose, CA, last month, IBM scientists demonstrated PCM attached to POWER8-based servers (made by IBM and TYAN® Computer Corp.) via the CAPI (Coherent Accelerator Processor Interface) protocol, which speeds the data to storage or memory. This technology leverages the low latency and small access granularity of PCM, the efficiency of the OpenPOWER architecture, and the efficiency of the CAPI protocol, an example of the OpenPower Foundation in action. Pozidis suggested PCM could be ready by 2017; maybe but don’t bet on it. IBM still needs to line up chip makers to produce it in commercial quantities among other things.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

 

