Posts Tagged ‘mainframe’

Join Me at Share in Providence

August 3, 2017

Share runs all week, but DancingDinosaur plans to be there on Tuesday, 8/8. Share is happening at the Rhode Island Convention Center, Providence, RI, August 6–11, 2017. Get details at Share.org. The keynote that day looks interesting. As Share describes it: security and regulatory compliance are concerns that impact every professional within your IT organization.

In the Tuesday Keynote presentation at SHARE Providence, expert panelists will offer their perspectives on how various roles are specifically impacted by security, and what areas you should be most concerned about in your own roles. Listen to your peers share their insights in a series of TED-style Talks, starting with David Hayes of the Government Accountability Office, who will focus on common compliance and risk frameworks. Stu Henderson of Henderson Consulting will discuss organizational values and how those interacting with the systems are part of the overall control environment, followed by Simon Dodge of Wells Fargo providing a look at proactive activities in the organization that are important for staying ahead of threats and reducing the need to play catch-up when the auditors arrive. In the final talk of the morning, emerging cyber security threats will be discussed by Buzz Woeckener of Nationwide Insurance, along with tips on how to be prepared. At the conclusion of their presentations, the panelists will address audience questions on the topics of security and compliance.

You’ll find me wandering around the sessions and the expo. Will be the guy wearing the Boston hat.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

New Software Pricing for IBM Z

July 27, 2017

One of the often overlooked benefits of the introduction of a new mainframe like the Z is cost savings. Even though the machine may cost more, the performance and capabilities it delivers typically cost less on a per-unit basis. In the case of the new Z, it’s not just a modest drop in price/performance. With the new Z, IBM announced three new Container Pricing models for IBM Z, providing greatly simplified software pricing that promises flexible deployment with competitive economics versus public clouds and on-premises x86 environments.

Working on the new IBM Z

Here are the three biggest software pricing changes:

  • Predictable and Transparent Container Pricing—providing organizations greatly simplified software pricing that combines flexible deployment with competitive economics vs. public clouds and on-premises x86 environments. To IBM, a container can be any address space, however large or small, and you can have any number of containers. “Container Pricing provides collocated workloads with line-of-sight pricing to a solution,” explained Ray Jones, VP, IBM Z Software and Hybrid Cloud. With container pricing, Jones continued, “the client determines where to deploy using WLM; z/OS and SCRT do the rest.”
  • Application dev and test—highly competitive stand-alone pricing for z/OS based development and test workloads. Organizations can increase their DevTest capacity up to 3 times at no additional MLC cost. This will be based on the organization’s existing DevTest workload size. Or a company can choose the multiplier it wants and set the reference point for both MLC and OTC software.
  • Payment systems pricing is based on the business metric of payments volume a bank processes, not on the available capacity. This gives organizations much greater flexibility to innovate affordably in a competitive environment, particularly in the fast-growing Instant Payment segment. To use the new per-payment pricing, Jones added, up-front licensing of IBM Financial Transaction Manager (FTM) software is required.

The Container Pricing options are designed to give clients the predictability and transparency they require for their business. The pricing models are scalable both within and across logical partitions (LPARs) and deliver greatly enhanced metering, capping, and billing capabilities. Container Pricing for IBM Z is planned to be available by year-end 2017 and enabled in z/OS V2.2 and z/OS V2.3.

Jones introduced the software discounts by reiterating that this was focused on software container pricing for IBM Z and promised that there will be a technology software benefit with the z14 as there was with the z13. IBM, he added, will offer a way to migrate to the new pricing: “This is a beginning of a new beginning. Clearly as we go forward we want to expand what’s applicable to container pricing.” His clear implication: IBM is intent on expanding the discounting it started several years ago when it introduced discounts for mobile transactions running on the z, which were driving up monthly software cost averages as mobile transaction volume began to skyrocket.

To understand the latest changes you need to appreciate what IBM means by container. This is not just about Docker containers. A container to IBM is simply an address space. An organization can have multiple containers in a logical partition, as many containers as it wants, and can change the size of containers as needed.

The fundamental advantage of IBM’s container pricing is that it enables co-location of workloads for improved performance and reduced latency, hence IBM’s repeated references to line-of-sight pricing. In short, this is about MLC (4-hour rolling average) pricing. The new pricing removes what goes on inside a container from consideration. The price of the container is just that: the price of the container. It won’t impact the 4-hour rolling average, resulting in very predictable pricing.
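To make that concrete, here is a minimal Python sketch of how a 4-hour rolling average (4HRA) might be computed and how carving a container’s MSUs out of it keeps container workloads from inflating traditional MLC charges. The interval length, MSU figures, and function names are illustrative assumptions, not IBM’s SCRT logic.

```python
from collections import deque

INTERVALS_PER_4HR = 48  # assumes 5-minute samples in a 4-hour window

def peak_4hra(total_msu, container_msu):
    """Peak 4-hour rolling average MSUs, with and without a container carve-out.

    total_msu: MSUs consumed per interval by an LPAR (hypothetical input).
    container_msu: the portion attributable to a separately priced container.
    """
    win_all = deque(maxlen=INTERVALS_PER_4HR)
    win_excl = deque(maxlen=INTERVALS_PER_4HR)
    peak_all = peak_excl = 0.0
    for total, container in zip(total_msu, container_msu):
        win_all.append(total)
        win_excl.append(total - container)   # container MSUs priced separately
        peak_all = max(peak_all, sum(win_all) / len(win_all))
        peak_excl = max(peak_excl, sum(win_excl) / len(win_excl))
    return peak_all, peak_excl

# A usage spike driven entirely by the containerized workload no longer moves
# the peak 4HRA that drives the monthly license charge.
base = [100.0] * 96                      # steady traditional workload
spike = [0.0] * 48 + [50.0] * 48         # container workload ramps up later
totals = [b + s for b, s in zip(base, spike)]
print(peak_4hra(totals, spike))          # -> (150.0, 100.0)
```

Under this simplified model the container’s price is fixed by its own terms, while the rest of the LPAR is billed as if the container were not there.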

The benefits are straightforward: simplified pricing for qualified solutions and the freedom to deploy them in the best way. And IBM can price competitively to the customer’s solution; in effect, solution-specific pricing. Combined with the new payments-based price metric, IBM is trying to put together a competitive cost/price story. Of course, it is all predicated on the actual prices IBM finally publishes. Let’s hope they are as competitive as IBM implies.

DancingDinosaur never passes up an opportunity to flog IBM for overpricing its systems and services. From discussions with Jones and other IBM managers during the pre-launch briefings, the company may finally understand the need to make the mainframe (or z, or Z, or whatever IBM calls it) price-competitive on an operational level today. Low TCO, low cost of IOPS, or low cost of QoS is not the same thing.

This is especially important now. Managers everywhere appear to be waking up to the need to transform their mainframe-based businesses, at least in part, by becoming competitive digital businesses. DancingDinosaur never imagined he would post something referencing the mainframe as a cost-competitive system able to rival x86 systems not just on quality of service but on cost. With the IBM Z the company is talking about competing with an aggressive cost strategy. It’s up to you, paying customers, to force them to deliver.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

New IBM Z Redefines Mainframe and Security and Cloud

July 19, 2017

By now you have certainly heard of IBM’s latest mainframe, the long-awaited z14, which the company refers to as Z. An announcement of a new mainframe usually doesn’t attract much notice, but maybe this announcement should. Even if you are not a mainframe fan this machine offers a solution that helps everybody—pervasive encryption of all data with no impact on operations or performance and with no need to take much action on your part, except to plug the machine in.

10-core z14 chip

At a time when organizations of all types and in every market segment are under attack from hackers, ransomware, data breaches, and more, all data center managers should welcome automatic pervasive encryption. Yet 96% don’t. Of the 9 billion records breached since 2013, only 4% were encrypted! You already know why: encryption is a chore, impacts staff, slows system performance, costs money, and more. You know all the complaints better than DancingDinosaur.

The z14 changes everything from this point forward. IBM has committed to a 4x increase in silicon dedicated to cryptographic algorithms for pervasive encryption. In effect the Z encrypts all data associated with an entire application, cloud service, or database, in flight and at rest, automatically. This amounts to bulk encryption at cloud scale made possible by a massive 7x increase in cryptographic performance over the z13. That is 18x faster than comparable x86 systems and at just five percent of the cost of x86-based solutions.

In truth, it’s better than this. You get this encryption automatically virtually for free. IBM insists it will deliver the z14 at the same price/performance of the z13 or less. The encryption is built into the cost of silicon out of the box. DancingDinosaur has not seen any specific prices yet but you are welcome to scream if IBM doesn’t come through.

You immediately get rid of all the encryption headaches; you don’t have to classify data, manage encryption, or do any of the other chores typically associated with encryption. You just get it, automatically. The z14 also relieves you from managing encryption keys; only IBM Z can protect millions of keys (as well as the process of accessing, generating and recycling them) in tamper-responsive hardware that causes keys to be invalidated at any sign of intrusion and then be restored in safety.
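To appreciate what is being automated away, here is a minimal Python sketch of the encrypt/decrypt cycle an application team would otherwise have to own, using the cryptography package’s AES-GCM primitive. It is purely conceptual: on the z14 this work happens in on-chip cryptographic hardware with keys held in tamper-responsive modules, nothing below is IBM code, and the record and dataset names are made up.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Illustration only: in software, someone must generate, store, and rotate this
# key; the z14's pervasive encryption takes that burden off the application.
key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)

record = b"ACCT=4242 AMT=100.00 CCY=USD"   # stand-in for a data set record
nonce = os.urandom(12)                      # must be unique per encryption
ciphertext = aesgcm.encrypt(nonce, record, b"dataset:PAYROLL")  # bind context

# Decryption fails loudly if the data or its bound context was tampered with.
plaintext = aesgcm.decrypt(nonce, ciphertext, b"dataset:PAYROLL")
assert plaintext == record
```

Multiply that little ceremony by every application, data set, and key-rotation cycle, and the appeal of doing it once, in hardware, below the application becomes obvious.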

When it comes to security, the z14 truly is a game changer. And it finally will get compliance auditors off your back once they realize how extensive z14 protection is.

IBM downplayed speeds and feeds with the z13, but they’re back with the z14. Specifically, the z14 runs at 5.2 GHz (versus 5.0 GHz for the z13), still a bit short of the zEC12, which ran at 5.5 GHz. But as with the z13, IBM makes up for it with more memory. The z14 can handle 32 TB of memory. It also includes up to 170 configurable cores (10 per chip), with a single core rated at roughly 1,832 MIPS. The L1 and L2 caches sit on the core. The L3 cache also sits on the chip, shared by the on-chip cores, and communicates with cores, memory, I/O, and the system controller as a single-chip module.

Maybe not the richest specs, but impressive nonetheless. IBM has been tweaking the box from top to bottom to boost performance. And all the while it will take over end-to-end encryption automatically, including encrypted APIs. Surprisingly, IBM has said nothing about the Z’s power consumption, but constantly-on encryption/decryption has to draw more power than, say, the z13. Am waiting to hear what IBM has to say.

This is not just for mainframe jocks. Optimized IBM z/OS Connect technologies make it straightforward for cloud developers to discover and call any IBM Z application or data from a cloud service, or for Z developers to call any cloud service. IBM Z now allows organizations to encrypt these APIs and still run nearly 3x faster than alternatives based on comparable x86 systems.  These speeds and feeds have all been thoroughly documented and detailed at the bottom of the IBM Z press release here.

Will the z14 return the mainframe to positive revenue?  Probably for a few quarters, maybe more if non-mainframe shops want the clear payback of pervasive encryption, although it won’t be an easy transition for them without IBM assistance and incentives.

Next week DancingDinosaur will take up the Z’s three new container pricing models intended to make the Z competitive with public clouds and on-premises x86 environments.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

Latest Compuware Tools Bring Mainframe and DevOps Together

July 7, 2017

At the end of June Compuware announced the integration of Topaz for Total Test, an automated unit testing tool for COBOL, with Jenkins, SonarQube, and Compuware ISPW. Together, the technologies enable enterprises to nimbly, easily, and efficiently update their core mainframe applications in response to ever-changing business requirements. This continues the company’s ongoing quarterly releases of updates and modernization of mainframe tools.

The latest releases enable legacy mainframe technologies to participate in integrated, modern DevOps. They allow enterprise IT to better orchestrate changes to mainframe systems of record with changes to systems of engagement, a significant benefit given that customer-facing digital services often rely on code running across multiple platforms, legacy and distributed.

Compuware Topaz for Total Test

The days when a mainframe shop could get by with leisurely updates of its systems, especially its business-critical applications, are long gone. Organizations need to modernize and integrate their tools to deliver the kind of fast response attributed to DevOps.

Of course, successful DevOps, whether mainframe or distributed, is less a matter of tools than of culture, communication, and process.  Still, there’s no doubt that modern, integrated, and context-aware tools along with automation help by speeding the process and reducing mistakes.

Topaz for Total Test appears to cover all the tool bases. It brings together automated unit testing for COBOL with Jenkins, SonarQube, and Compuware ISPW. Jenkins is an open-source continuous integration tool written in Java for testing and reporting on isolated changes in a larger code base in real time. The real-time aspect is critical for DevOps, where speed counts. The software enables developers to find and fix defects in a code base rapidly and to automate testing of their builds. SonarQube (formerly Sonar) is an open source platform for continuous inspection of code quality. Again, error elimination counts.
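As a rough illustration of how a mainframe change event can kick off that kind of Jenkins-driven pipeline, the Python sketch below calls Jenkins’ standard remote build API. The server, job name, parameters, and credentials are placeholders; Compuware’s actual ISPW-to-Jenkins integration has its own plumbing, so treat this as a generic pattern rather than their implementation.

```python
import requests

JENKINS_URL = "https://jenkins.example.com"   # hypothetical server
JOB = "mainframe-unit-tests"                  # hypothetical Jenkins job

def trigger_build(assignment_id: str, level: str) -> int:
    """Queue the COBOL unit-test job for a promoted change set."""
    resp = requests.post(
        f"{JENKINS_URL}/job/{JOB}/buildWithParameters",
        params={"ASSIGNMENT": assignment_id, "LEVEL": level},
        auth=("ci-bot", "jenkins-api-token"),  # placeholder user + API token
        timeout=30,
    )
    resp.raise_for_status()
    return resp.status_code                    # 201 means the build was queued

if __name__ == "__main__":
    print(trigger_build("PLAY000123", "QA"))
```

The point is less the dozen lines of Python than the shape of the loop: promote code, trigger tests automatically, and feed the results to SonarQube and the team in real time.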

The problem, as Compuware sees it, comes from mainframe shops’ historical inability to update their business-critical COBOL applications fast enough due to antiquated tools, excessive dependence on specialized expertise, and risk concerns. All these combine to produce long delays in updating code.

The addition of Jenkins and SonarQube along with Compuware’s ISPW source code management and deployment produces a pretty complete DevOps package for mainframes. In addition, Compuware strengthened support for DB2. That support entails new stubbing for DB2 databases, which allows developers to run unit tests without requiring an active connection to a live DB2 database. While Topaz for Total Test can be used to test code that processes all types of mainframe data, its stubbing capability covers not only DB2 but also VSAM and QSAM data types. This makes it easier to create repeatable tests fast. Data stubs are created automatically and do not require re-compiling.
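The value of stubbing is easier to see with a small example. The Python sketch below does conceptually what Topaz’s DB2 stubs do for COBOL: the unit test replaces the live database call with a canned response so the business logic can be verified with no DB2 connection at all. The function, table, and column names are invented for illustration.

```python
import unittest
from unittest import mock

def customer_balance(db, customer_id):
    """Business logic under test: fetch a balance and apply a simple rule."""
    row = db.fetch_one("SELECT BALANCE FROM ACCOUNTS WHERE ID = ?", customer_id)
    return max(row["BALANCE"], 0)       # never report a negative balance

class CustomerBalanceTest(unittest.TestCase):
    def test_negative_balance_is_floored_at_zero(self):
        stub_db = mock.Mock()                            # stands in for live DB2
        stub_db.fetch_one.return_value = {"BALANCE": -125.50}
        self.assertEqual(customer_balance(stub_db, 42), 0)
        stub_db.fetch_one.assert_called_once()           # the query was attempted

if __name__ == "__main__":
    unittest.main()
```

The same test runs anywhere, repeats identically every time, and never waits on, or risks corrupting, a shared test database.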

Although much of the world’s business activity still revolves in one way or another around the mainframe, many mainframe shops struggle when it comes to updating those applications to reflect rapidly changing business demands. Typically, they are hampered by manual development and testing processes; ongoing loss of specialized COBOL programming knowledge; and the fear of introducing even the slightest defect into core mainframe systems of record, notes Compuware.

And it gets worse. “Given the abject failure of re-platforming initiatives, large enterprises hoping to avoid digital irrelevance must aggressively modernize their mainframe DevOps practices,” said Rich Ptak of IT analyst firm Ptak Associates in Compuware’s Topaz for Total Test announcement. “Key to the modernization and ‘de-legacing’ of mainframe applications is the adoption of unit testing for COBOL code that is equivalent to and well-integrated with unit testing as practiced across the rest of the enterprise codebase.”

Compuware Topaz for Total Test transforms mainframe application development by automatically breaking COBOL code down into units and creating tests for those logical units. Developers at all skill levels—not just mainframe cowboys but preferably those with distributed and open system skills or even systems novices—can quickly and easily perform unit testing on COBOL code just as they do in Java, PHP and other popular programming languages. In fact, Topaz is actually more advanced than typical Java tools, because it requires no coding and automatically generates default unit test result assertions for developers.  So yes, novices are welcome.

With the recently released integrations and enhancements, Compuware has now delivered mainframe innovations for eleven consecutive quarters. Few mainframe software vendors even try to do this, not even IBM. This reflects Compuware’s commitment to improving innovation throughput and quality using the latest Agile and DevOps methods.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM Power and z Platforms Show Renewed Excitement

June 30, 2017

Granted, 20 consecutive quarters of posting negative revenue numbers is enough to get even the most diehard mainframe bigot down. If you ran your life like that your house and your car would have been seized by the bank months ago.

Toward the end of June, however, both z and Power had some good news. First, a week ago IBM announced that corporate enterprise users ranked the IBM z enterprise servers as the most reliable hardware platform available on the market today. In its enterprise server category, the survey also found that IBM Power Systems achieved the highest levels of reliability and uptime when compared with 14 server hardware options and 11 server hardware virtualization platforms.

IBM links two POWER8 processors via NVIDIA NVLink to four NVIDIA Tesla P100 accelerators

The results were compiled and reported by the ITIC 2017 Global Server Hardware and Server OS Reliability survey, which polled 750 organizations worldwide during April/May 2017. Also among the survey findings:

  • IBM z Systems Enterprise mainframe-class systems had zero percent incidence of more than four hours of per server/per annum downtime, the lowest of any hardware platform. Specifically, IBM z Systems mainframe-class servers exhibit true mainframe fault tolerance, experiencing just 0.96 minutes of unplanned per server/per annum downtime. That equates to 8 seconds per month of “blink and you miss it,” or 2 seconds of unplanned weekly downtime. This is an improvement over the 1.12 minutes of per server/per annum downtime the z Systems servers recorded in ITIC’s 2016–2017 Reliability poll nine months ago.
  • IBM Power Systems had the least unplanned downtime, 2.5 minutes per server/per year, of any mainstream Linux server platform.
  • IBM and the Linux operating system distributions were either first or second in every reliability category, including virtualization and security.

The survey also highlighted market reliability trends. For nearly all companies surveyed, having four nines (99.99%) of availability, equating to less than one hour of system downtime per year, was a key factor in their decisions.

Then consider the increasing costs of downtime. Nearly all survey respondents claimed that one hour of downtime costs them more than $150k, with one-third estimating that the same hour could cost their business up to $400k.

With so much activity going on 24×7, four nines of availability is no longer sufficient for an increasing number of businesses. These businesses are adopting carrier levels of availability: five nines or six nines (99.999 to 99.9999 percent), which translates to roughly five minutes (five nines) or 30 seconds (six nines) of downtime per year.
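The arithmetic behind those nines, and the downtime costs respondents cited, is easy to check. A quick Python sketch (the cost-per-hour figure comes from the survey; everything else is simple math):

```python
MINUTES_PER_YEAR = 365.25 * 24 * 60

def downtime_minutes_per_year(availability: float) -> float:
    """Minutes of downtime per year implied by a given availability level."""
    return (1.0 - availability) * MINUTES_PER_YEAR

for label, a in [("four nines", 0.9999), ("five nines", 0.99999), ("six nines", 0.999999)]:
    print(f"{label}: about {downtime_minutes_per_year(a):.1f} minutes/year")
# four nines: about 52.6, five nines: about 5.3, six nines: about 0.5

# At the survey's low-end estimate of $150,000 per hour of downtime,
# even four-nines availability implies a six-figure annual exposure.
exposure = downtime_minutes_per_year(0.9999) / 60 * 150_000
print(f"${exposure:,.0f} per year at four nines")   # roughly $131,000
```

Which is why carrier-class availability keeps moving from nice-to-have to requirement.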

According to ITIC’s 2016 report: IBM’s z Enterprise mainframe customers reported the least amount of unplanned downtime and the highest percentage of five nines (99.999%) uptime of any server hardware platform.

Just this week, IBM announced that, according to results from the International Data Corporation (IDC) Worldwide Quarterly Server Tracker® (June 2017), IBM exceeded market growth by 3x compared with the total Linux server market, which grew at 6 percent. The improved performance is the result of success across IBM Power Systems, including IBM’s OpenPOWER LC servers and IBM Power Systems running SAP HANA, as well as the OpenPOWER-Ready servers developed through the OpenPOWER Foundation.

As IBM explains it: Power Systems market share growth is underpinned by solutions that handle fast-growing applications, like the deep learning capabilities within the POWER8 architecture. In addition, these are systems that expand IBM’s Linux server portfolio, which has been co-developed with fellow members of the OpenPOWER Foundation.

Now all that’s needed is for IBM’s sales and marketing teams to translate this into revenue. Between that and the new systems IBM has been hinting at for the past year, maybe the consecutive quarterly losses will come to an end this year.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM Resurrects Moore’s Law

June 23, 2017

Guess Moore’s Law ain’t as dead as we were led to believe. On June 5 IBM and Research Alliance partners GLOBALFOUNDRIES and Samsung, along with equipment suppliers, announced the development of an industry-first process to build silicon nanosheet transistors that will enable 5nm chips. Previously, IBM announced a 7nm process using a silicon germanium (SiGe) alloy.

As DancingDinosaur wrote in early Oct. 2015, the last z System that conformed to the expectations of Moore’s Law was the zEC12, introduced Aug 2012. IBM could boast then it had the fastest commercial processor available.  The subsequent z13 didn’t match it in processor speed.  The z13 chip runs a 22 nm core at 5 GHz, one-half a GHz slower than the zEC12, which ran its 32nm core at 5.5 GHz. IBM compensated for the slower chip speed by adding more processors throughout the system to boost I/O and other functions and optimizing the box every way possible.

5nm silicon nanosheet transistors deliver a 40% performance gain

By 2015, the z13 delivered about a 10 percent performance bump per core thanks to the latest tweaks in the core design, such as better branch prediction and better pipelining. But even at one-half GHz slower, the z13 was the first system to process 2.5 billion transactions a day. Even more importantly for enterprise data centers, z13 transactions are persistent, protected, and auditable from end to end, adding assurance as mobile transactions grow to an estimated 40 trillion mobile transactions per day by 2025. The z13 also received and continues to receive praise for its industry-leading security ratings as well as its scalability and flexibility.

Just recently Hitachi announced a partnership with IBM to develop a version of the z13 to run its own operating system, VOS3. The resulting z13 will run the next generation of Hitachi’s AP series.

But IBM isn’t back in pursuit of Moore’s Law just to deliver faster traditional mainframe workloads. Rather, the company is being driven by its strategic initiatives, mainly cognitive computing. As IBM explained in the announcement: The resulting increase in performance will help accelerate cognitive computing, the Internet of Things (IoT), and other data-intensive applications delivered in the cloud. The power savings could also mean that the batteries in smartphones and other mobile products could last two to three times longer than today’s devices, before needing to be charged.

Scientists working as part of the IBM-led Research Alliance at the SUNY Polytechnic Institute Colleges of Nanoscale Science and Engineering’s NanoTech Complex in Albany, NY achieved the breakthrough by using stacks of silicon nanosheets as the device structure of the transistor instead of the standard FinFET architecture, which is the blueprint for the semiconductor industry up through 7nm node technology. “For business and society to meet the demands of cognitive and cloud computing in the coming years, advancement in semiconductor technology is essential,” said Arvind Krishna, senior vice president, Hybrid Cloud, and director, IBM Research in the announcement. “That’s why IBM aggressively pursues new and different architectures and materials that push the limits of this industry, and brings them to market in technologies like mainframes and our cognitive systems.”

Compared to the leading edge 10nm technology available in the market, according to IBM, a nanosheet-based 5nm technology can deliver 40 percent performance enhancement at fixed power, or 75 percent power savings at matched performance. This improvement enables a significant boost to meeting the future demands of artificial intelligence (AI) systems, virtual reality, and mobile devices.

These may not sound like the workloads you are running on your mainframe now, but systems with these chips are not going to be shipped in the next mainframe either. So, you have a couple of years. The IBM team expects to make progress toward commercializing 7nm in 2018. By the time they start shipping 5nm systems you might be desperate for a machine to run such workloads and others like them.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

Syncsort Drives zSystem and Distributed Data Integration

June 8, 2017

IBM appears to be so busy pursuing its strategic imperatives—security, blockchain, quantum computing, and cognitive computing—that it seems to have forgotten the daily activities that make up the bread-and-butter of mainframe data centers. Stepping up to fill the gap have been mainframe ISVs like Compuware, Syncsort, Data Kinetics, and a few others.

IBM’s Project DataWorks taps into unstructured data often missed

IBM hasn’t completely ignored this need. For instance, Project DataWorks uses Watson Analytics and natural language processing to analyze and create complex visualizations. Syncsort, on the other hand, latched onto open Apache technologies, starting in the fall of 2015. Back then it introduced a set of tools to facilitate data integration through Apache Kafka and Apache Spark, two of the most active Big Data open source projects for handling real-time, large-scale data processing, feeds, and analytics.

Syncsort’s primary integration vehicle then revolved around the Intelligent Execution capabilities of its DMX data integration product suite with Apache Spark. Intelligent Execution allows users to visually design data transformations once and then run them anywhere: across Hadoop, MapReduce, Spark, Linux, Windows, or Unix, on premises or in the cloud.
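DMX’s designer is a visual tool, but the underlying design-once, run-anywhere idea is familiar from Spark itself: the same transformation logic runs on a laptop or on a cluster depending only on where the session points. A minimal PySpark sketch of that idea follows; the file paths and master URL are placeholders, and this is generic Spark code, not DMX internals.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Point .master() at "local[*]" for a desktop test or at the cluster manager
# (e.g. "yarn") in production; the transformation below is unchanged either way.
spark = (SparkSession.builder
         .appName("design-once-run-anywhere")
         .master("local[*]")
         .getOrCreate())

txns = spark.read.csv("transactions.csv", header=True, inferSchema=True)

daily_totals = (txns
                .withColumn("day", F.to_date("timestamp"))
                .groupBy("day", "account_id")
                .agg(F.sum("amount").alias("total_amount")))

daily_totals.write.mode("overwrite").parquet("daily_totals.parquet")
spark.stop()
```

That portability is essentially what Intelligent Execution packages up for users who would rather not write Spark code at all.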

Since then, in March, Syncsort announced another big data integration solution: its DMX-h product is now integrated with Cloudera Director, enabling organizations to easily deploy DMX-h along with Cloudera Enterprise on Amazon Web Services, Microsoft Azure, or Google Cloud. By deploying DMX-h with CDH, Syncsort explained, organizations can quickly pull data into new, ready-to-work clusters in the cloud. This accelerates how quickly they can take advantage of big data cloud benefits, including cost savings and Data-as-a-Service (DaaS) delivery.

A month before that, this past February, Syncsort introduced new enhancements in its Big Data integration solution by again deploying DMX-h to deliver integrated workflow capabilities and Spark 2.0 integration, which simplifies Hadoop and Spark application development, effectively enabling mainframe data centers to extract maximum value from their data assets.

In addition, Syncsort brought new integrated workflow capabilities and Spark 2.0 integration to simplify Hadoop and Spark application development. It lets data centers tap value from their enterprise data assets regardless of where they reside, whether on the mainframe, in distributed systems, or in the cloud.

Syncsort’s new integrated workflow capability also gives organizations a simpler, more flexible way to create and manage their data pipelines. This is done through the company’s design-once, deploy-anywhere architecture with support for Apache Spark 2.0, which makes it easy for organizations to take advantage of the benefits of Spark 2.0 and integrated workflow without spending time and resources redeveloping their jobs.

Assembling such an end-to-end data pipeline can be time-consuming and complicated, with various workloads executed on multiple platforms, all of which need to be orchestrated and kept up to date. Delays in such complicated development, however, can prevent organizations from getting the timely insights they need for effective decision-making.

Enter Syncsort’s Integrated Workflow, which helps organizations manage various workloads, such as batch ETL on large repositories of historical data. This can be done by referencing business rules during data ingest in a single workflow, in effect simplifying and speeding development of the entire data pipeline, from accessing critical enterprise data, to transforming that data, and ultimately analyzing it for business insights.

Finally, in October 2016 Syncsort announced new capabilities in its Ironstream software that allow organizations to access and integrate mainframe log data in real time with Splunk IT Service Intelligence (ITSI). Further, the integration of Ironstream and Compuware’s Application Audit software delivers the audit data to Splunk Enterprise Security (ES) for Security Information and Event Management (SIEM). This integration improves an organization’s ability to detect threats against critical mainframe data, correlate them with related information and events, and satisfy compliance requirements.
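Ironstream’s collection side is proprietary, but the Splunk side is the standard HTTP Event Collector (HEC), so a few lines of Python convey what landing a mainframe log record in Splunk looks like. The host, token, sourcetype, and record layout below are placeholders, not Ironstream’s actual wire format.

```python
import json
import requests

SPLUNK_HEC = "https://splunk.example.com:8088/services/collector/event"  # placeholder
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"                       # placeholder

def send_mainframe_event(record: dict) -> None:
    """Post one log record to Splunk's HTTP Event Collector."""
    payload = {
        "sourcetype": "mainframe:syslog",   # illustrative sourcetype
        "event": record,
    }
    resp = requests.post(
        SPLUNK_HEC,
        headers={"Authorization": f"Splunk {HEC_TOKEN}"},
        data=json.dumps(payload),
        timeout=10,
    )
    resp.raise_for_status()

send_mainframe_event({"system": "SYSA", "jobname": "PAYROLL1", "msg": "job started"})
```

Once the events are in Splunk, the correlation, alerting, and compliance reporting described above are standard ITSI and ES territory.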

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM Introduces Hitachi-Specific z13

May 30, 2017

Remember when rumors were flying that Hitachi planned to buy the mainframe z Systems business from IBM?  DancingDinosaur didn’t believe it at that time, and now we have an official announcement that IBM is working with Hitachi to deliver mainframe z System hardware for use with Hitachi customers.

Inside the IBM z13

DancingDinosaur couldn’t see Hitachi buying the z. The overhead would be too great. IBM has been sinking hundreds of millions of dollars into the z, adding new capabilities ranging from Hadoop and Spark natively on z to whatever comes out of the Open Mainframe Project.

The new Hitachi deal takes the z in a completely different direction. The plan calls for running Hitachi’s operating system, VOS3, on the latest IBM z13 hardware to give Hitachi users better performance while sustaining their previous investments in business-critical Hitachi data and software, as IBM noted. VOS3 started as a fork of MVS and has been repeatedly modified since.

According to IBM, Hitachi will exclusively adopt the IBM z Systems high-performance mainframe hardware technology as the only hardware for the next generation of Hitachi’s AP series. These systems primarily serve major organizations in Japan. This work expands Hitachi’s cooperation with IBM to make mainframe development more efficient through IBM’s global capabilities in developing and manufacturing mainframe systems. The Open Mainframe Project, BTW, is a Linux initiative.

The collaboration, noted IBM, reinforces its commitment to delivering new innovations in mainframe technology and fostering an open ecosystem for the mainframe to support a broad range of software and applications. IBM recently launched offerings for IBM z Systems that use the platform’s capabilities for speed, scale and security to deliver cloud-based blockchain services for building new transaction systems and machine learning for analyzing large amounts of data.

If you count VOS3, the mainframe now runs a variety of operating systems, including z/OS, z/TPF, and z/VM, as well as Linux. Reportedly, Hitachi plans to integrate its new mainframe with its Lumada Internet of Things (IoT) offerings. With its scalability, security, massive I/O, and performance, the z makes an ideal IoT platform, and IoT is a market IBM targets today. Now IBM is seeding a competitor with the z running whatever appealing capabilities Hitachi’s Lumada offers. Hope whatever revenue or royalties IBM gets are worth it.

IBM and Hitachi, as explained in the announcement, have a long history of cooperation and collaboration in enterprise computing technologies. Hitachi decided to expand this cooperation at this time to utilize IBM’s most advanced mainframe technologies. Hitachi will continue to provide its customers with a highly reliable, high-performance mainframe environment built around the Hitachi VOS3 operating system. Hitachi also continues to strengthen mainframe functionality and services which contributes to lower TCO, improved ease of system introduction and operation, and better serviceability.

Of course, the mainframe story is far from over. IBM has been hinting for months at a new mainframe coming later this year. Since IBM stopped automatically cranking up core processor speed to boost price/performance, it will employ an array of assist processors and software optimizations to boost performance wherever it can, but particularly in the areas of its current critical imperatives: security, cognitive computing, blockchain, and cloud. One thing DancingDinosaur doesn’t expect to see in the new z, however, is embedded qubits, but who knows?

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM Insists Storage is Generating Positive Revenue

May 19, 2017

At a recent quarterly briefing on the company’s storage business, IBM managers crowed over its success: 2,000 new Spectrum Storage customers, 1,300 new DS8880 systems shipped, 1,500 PB of capacity shipped, and a 7% revenue gain in Q1’17. This appeared to contradict yet another consecutive losing quarter, in which only IBM’s Cognitive Solutions segment (which includes Solutions Software and Transaction Processing Software) posted positive revenue.

However, Martin Schroeter, Senior Vice President and Chief Financial Officer (1Q’17 financials here), sounded upbeat about IBM storage in the quarterly statement: Storage hardware was up seven percent this quarter, led by double-digit growth in our all-flash array offerings. Flash contributed to our Storage revenue growth in both midrange and high-end. In storage, we continue to see the shift in value towards software-defined environments, where we continue to lead the market. We again had double-digit revenue growth in Software-Defined Storage, which is not reported in our Systems segment. Storage software now represents more than 40 percent of our total storage revenue.

IBM Flash System A9000

Highly parallel all-flash storage for hyperscale and cloud data centers

Schroeter continued: Storage gross margins are down, as hardware continues to be impacted by price pressure. To summarize Systems, our revenue and gross profit performance were driven by expected cycle declines in z Systems and Power, mitigated by Storage revenue growth. We continue to expand our footprint and add new capabilities, which address changing workloads. While we are facing some shifting market dynamics and ongoing product transitions, our portfolio remains uniquely optimized for cognitive and cloud computing.

DancingDinosaur hopes he is right.  IBM has been signaling a new z System coming for months, along with enhancements to Power storage. Just two weeks ago IBM reported achievements with Power and Nvidia, as DancingDinosaur covered at that time.

If there was any doubt, all-flash storage is the direction IBM and most other storage providers are heading, for both performance and competitive economics. In January IBM announced three all-flash DS888* products, which DancingDinosaur covered at the time here. Specifically:

  • DS8884 F (the F designates all flash)—described by IBM as performance delivered within a flexible and space-saving package
  • DS8886 F—combines performance, capacity, and cost to support a variety of workloads and applications
  • DS8888 F—promises performance and capacity designed to address the most demanding business workload requirements

The three products are intended to provide the speed and reliability needed for workloads ranging from enterprise resource planning (ERP) and financial transactions to cognitive applications like machine learning and natural language processing. Doubt that a lot of mainframe data centers are doing much with cognitive systems yet, but that will be coming.

Spectrum Storage also appears to be looming large in IBM’s storage plans. Spectrum Storage is IBM’s software defined storage (SDS) family of products. DancingDinosaur covered the latest refresh of the suite of products this past February.

The highlights of the recent announcement included the addition of Cloud Object Storage and a version of Spectrum Virtualize as software only. Spectrum Control got a slew of enhancements, including new cloud-based storage analytics for Dell EMC VNX, VNXe, and VMAX; extended capacity planning views for external storage; and transparent cloud tiering for IBM Spectrum Scale. The on-premises editions added consolidated chargeback/showback and support for Dell EMC VNXe file storage. This should make it clear that Spectrum Storage is not only for underlying IBM storage products.

Along the same lines, Spectrum Storage added VMware 6 support and the certified vSphere Web Client. In the area of cloud object storage, IBM added native NFS access, enhanced STaaS multi-tenancy, IPv6 support, and preconfigured bundles.

IBM also previewed enhancements coming in 2Q’17. Of specific interest to DancingDinosaur readers will likely be the updates to the FlashSystem and VersaStack portfolios.

The company is counting on these enhancements and more to help pull IBM out of its tailspin. As Schroeter wrote in the 1Q’17 report: New systems product introductions later in the year will drive improved second half performance as compared to the first. Hope so; already big investors are cashing out. Clients, however, appear to be staying for now.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM Demonstrates Quantum Computing Advantage

May 12, 2017

In an announcement last week, IBM reported that scientists from IBM Research and Raytheon BBN have demonstrated one of the first proven examples of a quantum computer’s advantage over a conventional computer. By probing a black box containing an unknown string of bits, they showed that just a few superconducting qubits can discover the hidden string faster and more efficiently than today’s computers. Their research was published in a paper titled “Demonstration of quantum advantage in machine learning” on nature.com.

With IBM’s current 5 qubit processor, the quantum algorithm consistently identified the sequence in up to 100x fewer computational steps and was more tolerant of noise than the conventional (non-quantum) algorithm. This is much larger than any previous head-to-head comparison between quantum and conventional processors.

Courtesy: IBM Research

The graphic above defines three types of quantum computers. At the top is the quantum annealer, described as the least powerful and most restrictive. In the middle sits the analog quantum computer, at 50-100 qubits, a device able to simulate complex quantum interactions. This will probably be IBM’s next quantum machine; currently IBM offers a 5-qubit device. At the bottom sits the universal quantum computer. IBM suggests this will scale to over 100,000 qubits and be capable of handling machine learning, quantum chemistry, optimization problems, secure computing, and more. It will be exponentially faster than traditional computers and able to handle just about all the things even the most powerful conventional supercomputers cannot do now.

The most powerful z System, regardless of how many cores or accelerators or memory or bandwidth, remains a traditional, conventional computer. It deals with problems as a series of basic bits, sequences of 0 or 1. That it runs through these sequences astoundingly fast fools us into thinking that there is something beyond the same old digital computing we have known for the last 50 years or more.

Digital computers see the world and the problems you’re trying to solve as sequences of 0 and 1. That’s it; there is nothing in between. They store numbers as sequences of 0 and 1 in memory, and they process those stored numbers using only the simplest mathematical operations, add and subtract. As a college student DancingDinosaur was given the most powerful TI programmable calculator then available and, with a few buddies, tried to come up with things it couldn’t do. No matter how many beer-inspired tries, we never found something it couldn’t handle. The TI was just another digital device.

Quantum computers can digest 0 and 1 but have a broader array of tricks. For example, contradictory things can exist concurrently. Quantum geeks often cite a riddle dubbed Schrödinger’s cat. In this riddle the cat can be alive and dead at the same time because a quantum system can handle multiple, contradictory states. If we had known of Schrödinger’s cat, my buddies and I might have stumped that TI calculator.

In an article on supercomputing at Explain That Stuff, Chris Woodford shows the thinking behind Schrödinger’s cat, called superposition. This is where two waves, representing a live cat and a dead one, combine to make a third that contains both cats, or maybe hundreds of cats. The combined wave contains all these waves simultaneously: they’re added to make one wave that includes them all. Qubits use superposition to represent multiple states (multiple numeric values) simultaneously.

To recap IBM’s latest achievement: with only 5 qubits, the quantum algorithm consistently identified the hidden sequence in up to 100x fewer computational steps and with greater noise tolerance than the conventional (non-quantum) algorithm.

In effect, the IBM-Raytheon team programmed a black box such that, with the push of a button, it produces a string of bits with a hidden pattern (such as 0010) for both a conventional computation and a quantum computation. The conventional computer examines the bits one by one. Each result gives a little information about the hidden string. Only by forcing the conventional computer to query the black box many times can it determine the full answer.

The quantum computer employs a quantum algorithm that extracts the information hidden in the quantum phase — information to which a conventional algorithm is completely blind. The bits are then measured as usual and, in about half the time, the hidden string can be fully revealed.
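The hidden-string game described here is essentially the textbook Bernstein-Vazirani problem, and for small sizes its quantum solution can be simulated on an ordinary laptop. The numpy sketch below recovers a 4-bit hidden pattern (0010, as in the example above) after a single phase-oracle query. It is a conceptual stand-in only; the actual IBM-Raytheon experiment adds noise and a machine-learning framing on real superconducting qubits.

```python
import numpy as np

def hadamard_all(state, n):
    # Apply a Hadamard gate to each of the n qubits (full Walsh-Hadamard transform).
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    Hn = np.array([[1.0]])
    for _ in range(n):
        Hn = np.kron(Hn, H)
    return Hn @ state

def phase_oracle(state, s, n):
    # One "query": flip the sign of |x> whenever the parity of s AND x is odd,
    # i.e. multiply the amplitude by (-1)**(s.x). This hides s in the phases.
    out = state.copy()
    for x in range(2 ** n):
        if bin(x & s).count("1") % 2:
            out[x] = -out[x]
    return out

def bernstein_vazirani(s, n):
    state = np.zeros(2 ** n)
    state[0] = 1.0                        # start in |00...0>
    state = hadamard_all(state, n)        # uniform superposition over all strings
    state = phase_oracle(state, s, n)     # a single oracle call
    state = hadamard_all(state, n)        # interference concentrates amplitude on |s>
    return format(int(np.argmax(np.abs(state))), f"0{n}b")

print(bernstein_vazirani(0b0010, 4))      # -> '0010' after one query, not four
```

A conventional algorithm needs one query per bit to pin down the string; the quantum version reads it all out at once, which is the gap the IBM-Raytheon experiment measured and showed survives realistic noise.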

Most z data centers can’t use quantum capabilities for their daily work, at least not yet. As Woodford noted: It’s very early for the whole field—and most researchers agree that we’re unlikely to see practical quantum computers appearing for many years—perhaps even decades. Don’t bet on it; at the rate IBM is driving this, you’ll probably see useful things much sooner. Maybe tomorrow.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

