Posts Tagged ‘System z’

New Software Pricing for IBM Z

July 27, 2017

One of the often overlooked benefits of a new mainframe like the Z is cost savings. Even though the machine may cost more, the performance and capabilities it delivers typically cost less on a per-unit basis. In the case of the new Z, it’s not just a modest drop in price/performance. With the new Z, IBM announced three new Container Pricing models for IBM Z, providing greatly simplified software pricing that promises flexible deployment with competitive economics vs. public clouds and on-premises x86 environments.

Working on the new IBM Z

Here are the three biggest software pricing changes:

  • Predictable and Transparent Container Pricing—providing organizations greatly simplified software pricing that combines flexible deployment with competitive economics vs. public clouds and on-premises x86 environments. To IBM, a container can be any address space, however large or small, and you can have any number of containers. “Container Pricing provides collocated workloads with line-of-sight pricing to a solution,” explained Ray Jones, VP, IBM Z Software and Hybrid Cloud. With container pricing, Jones continued, “the client determines where to deploy using WLM; z/OS and SCRT do the rest.”
  • Application dev and test—highly competitive stand-alone pricing for z/OS-based development and test workloads. Organizations can increase their DevTest capacity up to 3 times at no additional MLC cost, based on the organization’s existing DevTest workload size. Alternatively, a company can choose the multiplier it wants and set the reference point for both MLC and OTC software.
  • Payments pricing is based on the business metric of payments volume a bank processes, not the available capacity. This gives organizations much greater flexibility to innovate affordably in a competitive environment, particularly in the fast-growing Instant Payment segment. To use the new per-payment pricing, Jones added, up-front licensing of IBM Financial Transaction Manager (FTM) software is required.

The Container Pricing options are designed to give clients the predictability and transparency they require for their business. The pricing models are scalable both within and across logical partitions (LPARs) and deliver greatly enhanced metering, capping, and billing capabilities. Container Pricing for IBM Z is planned to be available by year-end 2017 and enabled in z/OS V2.2 and z/OS V2.3.

Jones introduced the software discounts by reiterating that this was focused on software container pricing for IBM Z and promised that there will be a technology software benefit with the z14 as there was with the z13. IBM, he added, will offer a way to migrate to the new pricing: “This is a beginning of a new beginning. Clearly as we go forward we want to expand what’s applicable to container pricing.” His clear implication: IBM is intent on expanding the discounting it started several years ago when it introduced discounts for mobile transactions running on the z, which were driving up monthly software cost averages as mobile transaction volume began to skyrocket.

To understand the latest changes you need to appreciate what IBM means by container. This is not just about Docker containers. A container to IBM simply is an address space. An organization can have multiple containers in a logical partition, create as many containers as it wants, and change the size of containers as needed.

The fundamental advantage of IBM’s container pricing is that it enables co-location of workloads for improved performance and reduced latency, hence IBM’s repeated references to line-of-sight pricing. In short, this is about MLC (4-hour rolling average) pricing. The new pricing removes what goes on inside the container from consideration. The price of a container is just that: the price of the container. It won’t affect the 4-hour rolling average, resulting in very predictable pricing.
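For readers who want to see the mechanics, here is a rough sketch (this blogger’s illustration, not IBM’s SCRT implementation) of how a 4-hour rolling average of MSU consumption works and why carving a workload out of that average makes the remaining bill predictable. The interval data and the container workload are invented for the example.

```python
# Illustrative sketch of the 4-hour rolling average (4HRA) behind MLC pricing.
# Not SCRT; just the arithmetic: average the last 48 five-minute MSU samples
# and bill on the peak of that rolling average. Work metered separately in a
# "container" is excluded from the average, so it cannot push the peak up.

from collections import deque

SAMPLES_PER_4H = 48  # 48 five-minute intervals = 4 hours

def peak_4hra(msu_samples, container_msu=None):
    """Return the peak 4-hour rolling average, optionally excluding container MSUs."""
    window = deque(maxlen=SAMPLES_PER_4H)
    peak = 0.0
    for i, msu in enumerate(msu_samples):
        if container_msu is not None:
            msu -= container_msu[i]          # container workload is priced on its own
        window.append(msu)
        peak = max(peak, sum(window) / len(window))
    return peak

# Hypothetical LPAR: a steady 400 MSU base plus a 200 MSU containerized spike
base = [400] * 96                             # 8 hours of five-minute samples
spike = [0] * 48 + [200] * 48                 # containerized work arrives in hour 5
total = [b + s for b, s in zip(base, spike)]

print(peak_4hra(total))                       # spike drags the 4HRA (and the bill) up
print(peak_4hra(total, container_msu=spike))  # container carved out: 4HRA stays at 400
```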

The benefits are straightforward: simplified pricing for qualified solutions and the freedom to deploy workloads in the best way. And IBM can price competitively to the customer’s solution; in effect, solution-specific pricing. Combined with the new payments-based price metric, IBM is trying to put together a competitive cost/price story. Of course, it is all predicated on the actual prices IBM finally publishes. Let’s hope they are as competitive as IBM implies.

DancingDinosaur never passes up an opportunity to flog IBM for overpricing its systems and services. From discussions with Jones and other IBM managers during the pre-launch briefings, the company may finally understand the need to make the mainframe (or z, or Z, or whatever IBM calls it) price-competitive on an operational level today. Low TCO, low cost of IOPS, or low cost of QoS is not the same thing.

This is especially important now. Managers everywhere appear to be waking up to the need to transform their mainframe-based businesses, at least in part, by becoming competitive digital businesses. DancingDinosaur never imagined that he would post something referencing the mainframe as a cost-competitive system able to rival x86 systems not just on quality of service but on cost. With the IBM Z the company is talking about competing with an aggressive cost strategy. It’s up to you, paying customers, to force them to deliver.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

New IBM Z Redefines Mainframe and Security and Cloud

July 19, 2017

By now you have certainly heard of IBM’s latest mainframe, the long-awaited z14, which the company refers to as Z. An announcement of a new mainframe usually doesn’t attract much notice, but maybe this announcement should. Even if you are not a mainframe fan this machine offers a solution that helps everybody—pervasive encryption of all data with no impact on operations or performance and with no need to take much action on your part, except to plug the machine in.

10-core z14 chip

At a time when organizations of all types and in every market segment are under attack from hackers, ransomware, data breaches, and more, all data center managers should welcome automatic pervasive encryption. Yet most don’t encrypt: of the 9 billion records breached since 2013, only 4% were encrypted! You already know why: encryption is a chore, impacts staff, slows system performance, costs money, and more. You know all the complaints better than DancingDinosaur.

The z14 changes everything from this point forward. IBM has committed to a 4x increase in silicon dedicated to cryptographic algorithms for pervasive encryption. In effect the Z encrypts all data associated with an entire application, cloud service, or database, in flight and at rest, automatically. This amounts to bulk encryption at cloud scale, made possible by a massive 7x increase in cryptographic performance over the z13. That is 18x faster than comparable x86 systems, at just five percent of the cost of x86-based solutions.

In truth, it’s better than that. You get this encryption automatically, virtually for free. IBM insists it will deliver the z14 at the same or better price/performance than the z13. The encryption is built into the silicon out of the box. DancingDinosaur has not seen any specific prices yet, but you are welcome to scream if IBM doesn’t come through.

You immediately get rid of all the encryption headaches; you don’t have to classify data, manage encryption, or do any of the other chores typically associated with encryption. You just get it, automatically. The z14 also relieves you from managing encryption keys; only IBM Z can protect millions of keys (as well as the process of accessing, generating and recycling them) in tamper-responsive hardware that causes keys to be invalidated at any sign of intrusion and then be restored in safety.

When it comes to security, the z14 truly is a game changer. And it finally will get compliance auditors off your back once they realize how extensive z14 protection is.

IBM downplayed speeds and feeds with the z13, but they’re back with the z14. Specifically, the 5.2 GHz clock (versus 5.0 GHz for the z13) is still a bit short of the zEC12, which ran at 5.5 GHz. But as with the z13, IBM makes up for it with more memory: the z14 can handle 32 TB. It also includes up to 170 configurable cores (up to 10 per chip), with a single engine rated at roughly 1,832 MIPS. The L1 and L2 caches sit on the core. The L3 cache also sits on the chip, shared by the on-chip cores, and communicates with cores, memory, I/O, and the system controller as a single-chip module.

Maybe not the richest specs, but impressive nonetheless. IBM has been tweaking the box from top to bottom to boost performance. And all the while it will take over end-to-end encryption automatically, including encrypted APIs. Surprisingly, IBM has said nothing about the Z’s power consumption, but constantly-on encryption/decryption has to draw more power than, say, the z13. Am waiting to hear what IBM has to say.

This is not just for mainframe jocks. Optimized IBM z/OS Connect technologies make it straightforward for cloud developers to discover and call any IBM Z application or data from a cloud service, or for Z developers to call any cloud service. IBM Z now allows organizations to encrypt these APIs and still run nearly 3x faster than alternatives based on comparable x86 systems. These speeds and feeds are documented in detail at the bottom of the IBM Z press release here.
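To make that concrete, here is a hedged sketch of what calling a mainframe asset through a z/OS Connect-style REST API looks like from the cloud side. The host name, path, and JSON fields are invented for illustration; a real deployment would use whatever endpoints the site’s own z/OS Connect server actually exposes.

```python
# Minimal sketch: a cloud application calling a CICS/IMS-backed service that
# z/OS Connect exposes as a plain HTTPS/JSON API. The endpoint and payload are
# hypothetical; TLS (and the Z-side encryption of the API traffic) is assumed.

import requests

ZOSCONNECT_BASE = "https://zosconnect.example.com:9443"   # hypothetical server

def get_account_balance(account_id: str) -> dict:
    resp = requests.get(
        f"{ZOSCONNECT_BASE}/accounts/{account_id}/balance",  # invented API path
        headers={"Accept": "application/json"},
        timeout=10,
        verify=True,                                          # validate the TLS certificate
    )
    resp.raise_for_status()
    return resp.json()    # e.g. {"accountId": "12345", "balance": 1043.22}

if __name__ == "__main__":
    print(get_account_balance("12345"))
```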

Will the z14 return the mainframe to positive revenue?  Probably for a few quarters, maybe more if non-mainframe shops want the clear payback of pervasive encryption, although it won’t be an easy transition for them without IBM assistance and incentives.

Next week DancingDinosaur will take up the Z’s three new container pricing models intended to make the Z competitive with public clouds and on-premises x86 environments.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM Power and z Platforms Show Renewed Excitement

June 30, 2017

Granted, 20 consecutive quarters of declining revenue is enough to get even the most diehard mainframe bigot down. If you ran your life like that, your house and your car would have been seized by the bank months ago.

Toward the end of June, however, both z and Power had some good news. First,  a week ago IBM announced that corporate enterprise users ranked the IBM z  enterprise servers as the most reliable hardware platform available on the market today. In its enterprise server category the survey also found that IBM Power Systems achieved the highest levels of reliability and uptime when compared with 14 server hardware options and 11 server hardware virtualization platforms.

IBM links two POWER8 processors via NVIDIA NVLink to four NVIDIA Tesla P100 accelerators

The results were compiled and reported in the ITIC 2017 Global Server Hardware and Server OS Reliability survey, which polled 750 organizations worldwide during April/May 2017. Also among the survey findings:

  • IBM z Systems Enterprise mainframe-class systems were the only hardware platform with zero incidents of more than four hours of per server/per annum downtime. Specifically, IBM z Systems mainframe-class servers exhibit true mainframe fault tolerance, experiencing just 0.96 minutes of unplanned downtime per server, per annum. That equates to a few seconds per month of “blink and you miss it” downtime. This is an improvement over the 1.12 minutes of per server/per annum downtime the z Systems servers recorded in ITIC’s 2016–2017 reliability poll nine months ago.
  • IBM Power Systems had the least unplanned downtime of any mainstream Linux server platform, at 2.5 minutes per server, per year.
  • IBM and the Linux operating system distributions were either first or second in every reliability category, including virtualization and security.

The survey also highlighted market reliability trends. For nearly all companies surveyed, having four nines (99.99%) of availability, equating to less than one hour of system downtime per year, was a key factor in their decisions.

Then consider the increasing costs of downtime. Nearly all survey respondents claimed that one hour of downtime costs them more than $150k, with one-third estimating that it could cost their business up to $400k.

With so much activity going on 24×7, four nines of availability is no longer sufficient for an increasing number of businesses. These businesses are adopting carrier levels of availability: five nines or six nines (99.999 to 99.9999 percent), which translates to roughly five minutes (five nines) or 30 seconds (six nines) of downtime per year.
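The arithmetic behind those “nines” is simple enough to check yourself; here is a small sketch of the conversion for readers who want to verify the figures above:

```python
# Convert an availability percentage into allowable downtime per year.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

for label, availability in [("three nines", 99.9), ("four nines", 99.99),
                            ("five nines", 99.999), ("six nines", 99.9999)]:
    downtime_min = MINUTES_PER_YEAR * (1 - availability / 100)
    print(f"{label:>12} ({availability}%): "
          f"{downtime_min:8.2f} min/year ({downtime_min * 60:8.1f} seconds)")

# four nines  -> ~52.6 minutes/year (just under an hour)
# five nines  -> ~5.3 minutes/year
# six nines   -> ~31.5 seconds/year
```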

According to ITIC’s 2016 report: IBM’s z Enterprise mainframe customers reported the least amount of unplanned downtime and the highest percentage of five nines (99.999%) uptime of any server hardware platform.

Just this week, IBM announced that, according to results from the International Data Corporation (IDC) Worldwide Quarterly Server Tracker® (June 2017), IBM exceeded market growth by 3x compared with the total Linux server market, which grew at 6 percent. The improved performance is the result of success across IBM Power Systems, including IBM’s OpenPOWER LC servers and IBM Power Systems running SAP HANA, as well as the OpenPOWER-Ready servers developed through the OpenPOWER Foundation.

As IBM explains it: Power Systems market share growth is underpinned by solutions that handle fast-growing applications, like the deep learning capabilities within the POWER8 architecture. In addition, these systems expand IBM’s Linux server portfolio, which has been co-developed with fellow members of the OpenPOWER Foundation.

Now all that’s needed is for IBM’s sales and marketing teams to translate this into revenue. Between that and the new systems IBM has been hinting at for the past year, maybe the consecutive quarterly revenue declines will come to an end this year.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM Introduces Hitachi-Specific z13

May 30, 2017

Remember when rumors were flying that Hitachi planned to buy the mainframe z Systems business from IBM?  DancingDinosaur didn’t believe it at the time, and now we have an official announcement that IBM is working with Hitachi to deliver mainframe z System hardware for use by Hitachi’s customers.

Inside the IBM z13

DancingDinosaur couldn’t see Hitachi buying the z. The overhead would be too great. IBM has been sinking hundreds of millions of dollars into the z, adding new capabilities ranging from Hadoop and Spark natively on z to whatever comes out of the Open Mainframe Project.

The new Hitachi deal takes the z in a completely different direction. The plan calls for using Hitachi’s operating system, VOS3, running on the latest IBM z13 hardware to provide Hitachi users with better performance while sustaining their previous investments in business-critical Hitachi data and software, as IBM noted. VOS3 started as a fork of MVS and has been repeatedly modified since.

According to IBM, Hitachi will exclusively adopt the IBM z Systems high-performance mainframe hardware technology as the only hardware for the next generation of Hitachi’s AP series. These systems primarily serve major organizations in Japan. This work expands Hitachi’s cooperation with IBM to make mainframe development more efficient through IBM’s global capabilities in developing and manufacturing mainframe systems. The Open Mainframe Project, BTW, is a Linux initiative.

The collaboration, noted IBM, reinforces its commitment to delivering new innovations in mainframe technology and fostering an open ecosystem for the mainframe to support a broad range of software and applications. IBM recently launched offerings for IBM z Systems that use the platform’s capabilities for speed, scale and security to deliver cloud-based blockchain services for building new transaction systems and machine learning for analyzing large amounts of data.

If you count VOS3, the mainframe now runs a variety of operating systems, including z/OS, z/TPF, and z/VM, as well as Linux. Reportedly, Hitachi plans to integrate its new mainframe with its Lumada Internet of Things (IoT) offerings. With its scalability, security, massive I/O, and performance, the z makes an ideal IoT platform, and IoT is a market IBM targets today. Now IBM is seeding a competitor with the z running whatever appealing capabilities Hitachi’s Lumada offers. Hope whatever revenue or royalties IBM gets is worth it.

IBM and Hitachi, as explained in the announcement, have a long history of cooperation and collaboration in enterprise computing technologies. Hitachi decided to expand this cooperation at this time to utilize IBM’s most advanced mainframe technologies. Hitachi will continue to provide its customers with a highly reliable, high-performance mainframe environment built around the Hitachi VOS3 operating system. Hitachi also continues to strengthen mainframe functionality and services which contributes to lower TCO, improved ease of system introduction and operation, and better serviceability.

Of course, the mainframe story is far from over. IBM has been hinting at a new mainframe coming later this year for months. Since IBM stopped automatically cranking up core processor speed to boost price/performance, it will employ an array of assist processors and software optimizations to boost performance wherever it can, particularly in the areas of its current critical imperatives: security, cognitive computing, blockchain, and cloud. One thing DancingDinosaur doesn’t expect to see in the new z, however, is embedded qubits, but who knows?

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM Insists Storage is Generating Positive Revenue

May 19, 2017

At a recent quarterly briefing on the company’s storage business, IBM managers crowed over its success: 2,000 new Spectrum Storage customers, 1,300 new DS8880 systems shipped, 1,500 PB of capacity shipped, and a 7% revenue gain in Q1’17. This appeared to contradict yet another losing quarter in which only IBM’s Cognitive Solutions segment (which includes Solutions Software and Transaction Processing Software) posted revenue growth.

However, Martin Schroeter, Senior Vice President and Chief Financial Officer (1Q’17 financials here), sounded upbeat about IBM storage in the quarterly statement: Storage hardware was up seven percent this quarter, led by double-digit growth in our all-flash array offerings. Flash contributed to our Storage revenue growth in both midrange and high-end. In storage, we continue to see the shift in value towards software-defined environments, where we continue to lead the market. We again had double-digit revenue growth in Software-Defined Storage, which is not reported in our Systems segment. Storage software now represents more than 40 percent of our total storage revenue.

IBM Flash System A9000

Highly parallel all-flash storage for hyperscale and cloud data centers

Schroeter continued: Storage gross margins are down, as hardware continues to be impacted by price pressure. To summarize Systems, our revenue and gross profit performance were driven by expected cycle declines in z Systems and Power, mitigated by Storage revenue growth. We continue to expand our footprint and add new capabilities, which address changing workloads. While we are facing some shifting market dynamics and ongoing product transitions, our portfolio remains uniquely optimized for cognitive and cloud computing.

DancingDinosaur hopes he is right.  IBM has been signaling a new z System coming for months, along with enhancements to Power storage. Just two weeks ago IBM reported achievements with Power and Nvidia, as DancingDinosaur covered at that time.

If there was any doubt, all-flash storage is the direction IBM and most other storage providers are heading for both performance and competitive economics. In January IBM announced three all-flash DS888* products, which DancingDinosaur covered at the time here. Specifically:

  • DS8884 F (the F designates all flash)—described by IBM as performance delivered within a flexible and space-saving package
  • DS8886 F—combines performance, capacity, and cost to support a variety of workloads and applications
  • DS8888 F—promises performance and capacity designed to address the most demanding business workload requirements

The three products are intended to provide the speed and reliability needed for workloads ranging from enterprise resource planning (ERP) and financial transactions to cognitive applications like machine learning and natural language processing. Doubt that a lot of mainframe data centers are doing much with cognitive systems yet, but that will be coming.

Spectrum Storage also appears to be looming large in IBM’s storage plans. Spectrum Storage is IBM’s software defined storage (SDS) family of products. DancingDinosaur covered the latest refresh of the suite of products this past February.

The highlights of the recent announcement included the addition of Cloud Object Storage and a version of Spectrum Virtualize as software only. Spectrum Control got a slew of enhancements, including new cloud-based storage analytics for Dell EMC VNX, VNXe, and VMAX; extended capacity planning views for external storage; and transparent cloud tiering for IBM Spectrum Scale. The on-premises editions added consolidated chargeback/showback and support for Dell EMC VNXe file storage. This should make it clear that Spectrum Storage is not only for underlying IBM storage products.

Along the same lines, Spectrum Storage added VMware 6 support and the certified vSphere Web Client. In the area of cloud object storage, IBM added native NFS access, enhanced STaaS multi-tenancy, IPv6 support, and preconfigured bundles.

IBM also previewed enhancements coming in 2Q’17. Of specific interest to DancingDinosaur readers will likely be the updates to the FlashSystem and VersaStack portfolios.

The company is counting on these enhancements and more to help pull IBM out of its tailspin. As Schroeter wrote in the 1Q’17 report: New systems product introductions later in the year will drive improved second half performance as compared to the first. Hope so; already big investors are cashing out. Clients, however, appear to be staying for now.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

Compuware-Syncsort-Splunk to Boost Mainframe Security

April 6, 2017

The mainframe has proven to be remarkably secure over the years, racking up the highest security certifications available. But there is still room for improvement. Earlier this week Compuware announced Application Audit, a software tool that aims to transform mainframe cybersecurity and compliance through real-time capture of user behavior.

Capturing user behavior, especially in real time, is seemingly impossible if you have to rely on the data you collect from various logs and SMF records. Compuware’s solution, Application Audit, in conjunction with Syncsort and Splunk, fully captures and analyzes start-to-finish mainframe application user behavior.

As Compuware explains: Most enterprises still rely on disparate logs and SMF data from security products such as RACF, CA-ACF2, and CA-Top Secret to piece together user behavior. This is too slow if you want to catch bad behavior while it’s going on. Some organizations try to apply analytics to these logs, but that also is too slow; by the time you have collected enough logs to deduce who did what and when, the damage may have been done. Throw in the escalating demands of cross-platform enterprise cybersecurity and increasingly burdensome global compliance mandates, and you haven’t a chance without an automated tool optimized for the task.

Fortunately, the mainframe provides rich and comprehensive session data, which Application Audit can capture and analyze in conjunction with the organization’s security information and event management (SIEM) systems to show more quickly and effectively what really is happening. Specifically, it can:

  • Detect, investigate, and respond to inappropriate behavior by internal users with access
  • Detect, investigate, and respond to hacked or illegally accessed user accounts
  • Support criminal/legal investigations with complete and credible forensics
  • Fulfill compliance mandates regarding protection of sensitive data

IBM, by the way, is not ignoring the advantages of analytics for z security. Back in February you read on DancingDinosaur about IBM bringing its cognitive system to the z. IBM continues to flog cognitive on z for real-time analytics and security, promising faster customer insights, business insights, and systems insights, with decisions based on real-time analysis of both current and historical data delivered on an analytics platform designed for availability, optimized for flexibility, and engineered with the highest levels of security. Check out IBM’s full cognitive-for-z pitch.

The data Compuware and Syncsort collect with Application Audit is particularly valuable for maintaining control of privileged mainframe user accounts. Both private- and public-sector organizations are increasingly concerned about insider threats to both mainframe and non-mainframe systems. Privileged user accounts can be misused by their rightful owners, motivated by everything from financial gain to personal grievances, as well as by malicious outsiders who have illegally acquired the credentials for those accounts. You can imagine what havoc they could wreak.

In addition, with Application Audit Compuware is orchestrating a number of players to deliver the full security picture. Specifically, through collaboration with CorreLog, Syncsort and Splunk, Compuware is enabling enterprise customers to integrate Application Audit’s mainframe intelligence with popular SIEM solutions such as Splunk, IBM QRadar, and HPE Security ArcSight ESM. Additionally, Application Audit provides an out-of-the-box Splunk-based dashboard that delivers value from the start. As Compuware explains, these integrations are particularly useful for discovering and addressing security issues associated with today’s increasingly common composite applications, which have components running on both mainframe and non-mainframe platforms. SIEM integration also ensures that security, compliance and other risk management staff can easily access mainframe-related data in the same manner as they access data from other platforms.
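As an illustration of the SIEM side of that integration, here is a minimal sketch of forwarding one mainframe audit event to Splunk’s HTTP Event Collector. The host, token, sourcetype, and field names are placeholders invented for the example; Compuware’s and Syncsort’s actual forwarding pipelines are their own products, so treat this only as a picture of the data flow.

```python
# Sketch: pushing a single mainframe audit event into Splunk via the HTTP
# Event Collector (HEC). Host, token, and field names are hypothetical.

import requests

SPLUNK_HEC_URL = "https://splunk.example.com:8088/services/collector/event"
SPLUNK_HEC_TOKEN = "00000000-0000-0000-0000-000000000000"   # placeholder token

def send_audit_event(user: str, action: str, resource: str) -> None:
    event = {
        "sourcetype": "mainframe:application_audit",   # invented sourcetype
        "event": {
            "user": user,           # e.g. a privileged TSO user ID
            "action": action,       # e.g. "READ", "UPDATE"
            "resource": resource,   # e.g. a dataset or transaction name
        },
    }
    resp = requests.post(
        SPLUNK_HEC_URL,
        json=event,
        headers={"Authorization": f"Splunk {SPLUNK_HEC_TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()

send_audit_event("PAYROLL1", "READ", "PROD.PAYROLL.MASTER")
```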

“Effective IT management requires effective monitoring of what is happening for security, cost reduction, capacity planning, service level agreements, compliance, and other purposes,” noted Stu Henderson, Founder and President of the Henderson Group in the Compuware announcement. “This is a major need in an environment where security, technology, budget, and regulatory pressures continue to escalate.”

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

 

IBM Introduces First Universal Commercial Quantum Computers

March 9, 2017

A few years ago DancingDinosaur first encountered the possibility of quantum computing. It was presented as a real but distant possibility. This is not something I need to consider, I thought at the time; by the time it is available commercially I will be long retired and probably six feet under. Well, I was wrong.

This week IBM unveiled its IBM Q quantum systems. IBM Q will join Watson and blockchain in delivering the most advanced set of services on the IBM Cloud platform. There are organizations using it now, and DancingDinosaur is still alive and working.

IBM Quantum Computing scientists Hanhee Paik (left) and Sarah Sheldon (right) examine the hardware inside an open dilution fridge at the IBM Q Lab

As IBM explains: While technologies that currently run on classical (or conventional) computers, such as Watson, can help find patterns and insights buried in vast amounts of existing data, quantum computers will deliver solutions to multi-faceted problems where patterns cannot be seen because the data doesn’t exist and the possibilities that you need to explore are too enormous to ever be processed by conventional computers.

Just don’t retire your z or Power system in favor of an IBM Q yet. As IBM explained at a recent briefing on quantum computing, the IBM Q universal quantum computers will be able to tackle any type of problem that conventional computers handle today. However, many of today’s workloads, like online transaction processing, data storage, and web serving, will continue to run more efficiently on conventional systems. The most powerful quantum systems of the next decade will be hybrids: quantum computers working with conventional computers to control logic and operations on large amounts of data.

The most immediate use cases will involve molecular dynamics, drug design, and materials. The new quantum machine, for example, will allow the healthcare industry to design more effective drugs faster and at less cost and the chemical industry to develop new and improved materials.

Another familiar use case revolves around optimization in finance and manufacturing. The problem here comes down to computers struggling with optimization involving an exponential number of possibilities. Quantum systems, noted IBM, hold the promise of more accurately finding the most profitable investment portfolio in the financial industry, the most efficient use of resources in manufacturing, and optimal routes for logistics in the transportation and retail industries.

To refresh the basics of quantum computing: the challenges invariably entail exponential scale. You start with two basic ideas: 1) the uncertainty principle, which states that attempting to observe a state generally disturbs it while yielding only partial information about that state; and 2) entanglement, in which two systems exist in an entangled state, causing them to behave in ways that cannot be explained by supposing that each has some state of its own. No more just zero or one.

The basic unit of quantum computing is the qubit. Today IBM is making available a 5-qubit system, which is pretty small in the overall scheme of things. It is large enough, however, to experiment with and test some hypotheses; things start getting interesting at 20 qubits. An inflection point, IBM researchers noted, occurs around 50 qubits. At 50-100 qubits people can begin to do some serious work.

This past week IBM announced three quantum computing advances. The first is the release of a new API for the IBM Quantum Experience that enables developers and programmers to begin building interfaces between IBM’s existing 5-qubit cloud-based quantum computer and conventional computers, without needing a deep background in quantum physics. You can try the 5-qubit quantum system via IBM’s Quantum Experience on Bluemix here.
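For a taste of what programming such a system looks like, here is a small sketch that entangles two qubits (a Bell state). It is written with today’s open-source Qiskit SDK and its Aer simulator rather than the exact 2017 Quantum Experience API, so treat it as an illustration; running against IBM’s real hardware would mean submitting the circuit through an IBM Quantum account instead of the local simulator.

```python
# Sketch: build and simulate a 2-qubit entangled (Bell) state.
# Requires the qiskit and qiskit-aer packages. Measuring should return only
# '00' or '11', each about half the time -- the "cannot be explained by
# separate states" idea described above.

from qiskit import QuantumCircuit
from qiskit_aer import AerSimulator

qc = QuantumCircuit(2, 2)
qc.h(0)          # put qubit 0 into superposition
qc.cx(0, 1)      # entangle qubit 1 with qubit 0
qc.measure([0, 1], [0, 1])

result = AerSimulator().run(qc, shots=1024).result()
print(result.get_counts())   # e.g. {'00': 510, '11': 514}
```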

IBM also released an upgraded simulator on the IBM Quantum Experience that can model circuits with up to 20 qubits. In the first half of 2017, IBM plans to release a full SDK on the IBM Quantum Experience for users to build simple quantum applications and software programs. For now, only the publicly available 5-qubit quantum system with a web-based graphical user interface is offered; it will soon be upgraded to more qubits.

The IBM Research Frontiers Institute allows participants to explore applications for quantum computing in a consortium dedicated to making IBM’s most ambitious research available to its members.

Finally, the IBM Q Early Access Systems program allows the purchase of access to a dedicated quantum system hosted and managed by IBM. The initial system offers 15+ qubits, with a fast roadmap promised to 50+ qubits.

“IBM has invested over decades to growing the field of quantum computing and we are committed to expanding access to quantum systems and their powerful capabilities for the science and business communities,” said Arvind Krishna, senior vice president of Hybrid Cloud and director for IBM Research. “We believe that quantum computing promises to be the next major technology that has the potential to drive a new era of innovation across industries.”

Are you ready for quantum computing? Try it today on IBM’s Quantum Experience through Bluemix. Let me know how it works for you.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

IBM and Northern Trust Collaborate on Blockchain for Private Equity Markets

March 3, 2017

At a briefing for IT analysts, IBM laid out how it sees blockchain working in practice. Surprisingly, the platform for the Hyperledger effort was not x86 but LinuxONE, due to its inherent security. As the initiative grows, the z-based LinuxONE can also deliver the performance, scalability, and reliability the effort will eventually need.

IBM describes its collaboration with Northern Trust and other key stakeholders as the first commercial deployment of blockchain technology for the private equity market. As that market stands now, the infrastructure supporting private equity has seen little innovation in recent years, even as investors seek greater transparency, security, and efficiency. Enter the open LinuxONE platform, the Hyperledger fabric, and Unigestion, a Geneva, Switzerland-based asset manager with $20 billion in assets under management.


IBM Chairman and CEO Ginni Rometty discusses  blockchain at Sibos

The new initiative, as IBM explains it, promises a new and comprehensive way to access and visualize data. Blockchain captures and stores information about every transaction and investment as metadata, along with details about relevant documents and commitments. Hyperledger itself acts as a logging tool that creates an immutable record.

The Northern Trust effort connects business logic, legacy technology, and blockchain technology using a combination of Java/JavaScript and IBM’s blockchain product. It runs on IBM Bluemix (cloud) using IBM’s Blockchain High Security Business Network. It also relies on key management to ensure record/data isolation and enforce geographic jurisdiction. In the end it facilitates managing the fund lifecycle more efficiently than the previous, primarily paper-based process.

More interesting to DancingDinosaur is the selection of the z through LinuxONE and blockchain’s use of storage. To begin with, blockchain is not really a database. It is more like a log file, but even that is not quite accurate because “it is a database you play as a team sport,” explained Arijit Das, Senior Vice President, FinTech Solutions, at the analyst briefing. That means you don’t perform any of the usual database functions; there are no deletes or updates, just appends.
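To picture why “just appends” still gives you integrity, here is a toy sketch of a hash-chained, append-only log, the basic idea underneath a blockchain ledger. It is this blogger’s illustration, not Hyperledger Fabric code, and the fund records are invented.

```python
# Toy append-only ledger: each record carries the hash of the previous one,
# so any attempt to alter or delete history breaks the chain and is detectable.
# Illustration only -- Hyperledger Fabric adds consensus, endorsement, and more.

import hashlib
import json

class Ledger:
    def __init__(self):
        self.blocks = []   # append-only: no updates, no deletes

    def append(self, record: dict) -> None:
        prev_hash = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        body = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
        self.blocks.append({"record": record, "prev": prev_hash,
                            "hash": hashlib.sha256(body.encode()).hexdigest()})

    def verify(self) -> bool:
        prev_hash = "0" * 64
        for block in self.blocks:
            body = json.dumps({"record": block["record"], "prev": prev_hash},
                              sort_keys=True)
            if (block["prev"] != prev_hash or
                    block["hash"] != hashlib.sha256(body.encode()).hexdigest()):
                return False
            prev_hash = block["hash"]
        return True

ledger = Ledger()
ledger.append({"fund": "ABC LP", "investor": "X", "commitment": 5_000_000})
ledger.append({"fund": "ABC LP", "investor": "X", "capital_call": 250_000})
print(ledger.verify())                          # True
ledger.blocks[0]["record"]["commitment"] = 1    # tampering with history...
print(ledger.verify())                          # ...is immediately detectable: False
```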

Since blockchain is an open technology, you actually could run it on any x86 Linux machine, but DancingDinosaur readers probably wouldn’t want to do that. Blockchain essentially ends up being a distributed group activity, and LinuxONE is unusually well optimized for the necessary security. It also brings scalability, reliability, and high performance, along with the rock-solid security of the latest mainframe. In general LinuxONE can handle 8,000 virtual servers in a single system and tens of thousands of containers. Try doing that with an x86 machine, or even dozens of them. You can read more about LinuxONE in what DancingDinosaur wrote when it was introduced here and here.

But you won’t need anywhere near that scalability with the private equity application, at least at first. Blockchain gets more interesting when you think about storage. Blockchain has the potential to generate massive numbers of files fast, but that will only happen when it is part of, say, a supply chain with hundreds or, more likely, thousands of participating nodes on the chain, and those nodes are very active. More likely for private equity trading, certainly at the start, blockchain will handle gigabytes of data, and maybe only megabytes at first. This is not going to generate much revenue for IBM storage; a little bit of flash could probably do the trick.

Today, the legal and administrative processes that support private equity are time consuming and expensive, according to Peter Cherecwich, president of Corporate & Institutional Services at Northern Trust. They lack transparency, while inefficient market practices lead to lengthy, duplicative, and fragmented investment and administration processes. Northern Trust’s solution based on blockchain and Hyperledger, however, promises to deliver a significantly enhanced and efficient approach to private equity administration.

Just don’t expect to see overnight results. In fact, you can expect more inefficiency since the new blockchain/Hyperledger-based system is running in parallel with the disjointed manual processes. Previous legacy systems remain; they are not yet being replaced. Still, IBM insists that blockchain is an ideal technology to bring innovation to the private equity market, allowing Northern Trust to improve traditional business processes at each stage to deliver greater transparency and efficiency. Guess we’ll just have to wait and watch.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM On-Premises Cognitive Means z Systems Only

February 16, 2017

Just in case you missed the incessant drumbeat coming out of IBM, the company is committed to cognitive computing. That works for z data centers, since IBM’s cognitive system is available on-premises only on the z. Another z first: IBM just introduced Machine Learning (key for cognitive) for the private cloud, starting with the z.


There are three ways to get IBM cognitive computing solutions: the IBM Cloud, Watson, or the z System, notes Donna Dillenberger, IBM Fellow, IBM Enterprise Solutions. The z, however, is the only platform that IBM supports for cognitive computing on premises (sorry, no Power). As such, the z represents the apex of programmatic computing, at least as IBM sees it. It also is the only IBM platform that supports cognitive natively, mainly in the form of Hadoop and Spark, both of which are programmatic tools.

What if your z told you that a given strategy had a 92% chance of success? It couldn’t do that until now, with IBM’s recently released cognitive system for z.

Your z system today represents the peak of programmatic computing. That’s what everyone working in computers grew up with, going all the way back to Assembler, COBOL, and FORTRAN. Newer languages and operating systems have arrived since; today your mainframe can respond to Java or Linux and now Python and Anaconda. Still, all are based on the programmatic computing model.

IBM believes the future lies in cognitive computing. Cognitive has become the company’s latest strategic imperative, apparently trumping its previous strategic imperatives: cloud, analytics, big data, and mobile. Maybe only security, which quietly slipped in as a strategic imperative sometime in 2016, can rival cognitive, at least for now.

Similarly, IBM describes itself as a cognitive solutions and cloud platform company. IBM’s infatuation with cognitive starts with data. Only cognitive computing, the company argues, will enable organizations to understand the flood of myriad data pouring in: not just structured, local data but the world of global unstructured data. From there organizations can move beyond decision-tree-driven, deterministic applications to, eventually, probabilistic systems that co-evolve with their users by learning along with them.

You need cognitive computing, IBM insists; it is the only way to move beyond the constraints of programmatic computing. In the process, cognitive can take you past keyword-based search, which merely provides a list of locations where an answer might be found, to an intuitive, conversational means of discovering a set of confidence-ranked possibilities.

Dillenberger suggests it won’t be difficult to get to the IBM cognitive system on z. You don’t even program a cognitive system. At most, you train it, and even then the cognitive system does the heavy lifting by finding the most appropriate training models. If you don’t have preexisting training models, “just use what the cognitive system thinks is best,” she adds. Then the cognitive system will see what happens and learn from it, tweaking the models as necessary based on the results and new data it encounters. This also is where machine learning comes in.
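To illustrate the “train it, don’t program it” point, here is a generic sketch of what “the system finds the most appropriate model” amounts to: trying several candidate models and keeping the one that scores best on held-out data. It uses scikit-learn on synthetic data purely as an analogy; it is not IBM’s Machine Learning for z/OS API.

```python
# Generic illustration of automated model selection: train several candidate
# models and keep the one that cross-validates best. A stand-in for the kind
# of work IBM says its cognitive system does for you, not its actual API.

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
}

scores = {name: cross_val_score(model, X, y, cv=5).mean()
          for name, model in candidates.items()}
best = max(scores, key=scores.get)
print(scores)                       # e.g. a strategy scored at 92%, as above
print(f"selected model: {best}")
```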

IBM has yet to document payback and ROI data, but Dillenberger has spoken with early adopters. The big promised payback, of course, will come from the new insights uncovered, and that payback will be as astronomical or as meager as your execution on those insights.

But there also is the promise of a quick technical payback for z data center managers. When the data resides on the z (a huge advantage for the z) you just run analytics where the data is. In such cases you can realize up to 3x the performance, Dillenberger noted. Even if you have to pull data from some other location too, you still run faster, maybe 2x faster. Other z advantages include large amounts of memory, multiple levels of cache, and multiple I/O processors that get at data without impacting CPU performance.

When the data and IBM’s cognitive system reside on the z you can save significant money. “ETL consumed huge amounts of MIPS. But when the client did it all on the z, it completely avoided the costly ETL process,” Dillenberger noted. As a result, that client reported savings of $7-8 million a year by completely bypassing the x86 layer and ETL and running Spark natively on the z.

As Dillenberger describes it, cognitive computing on the z is here now, able to deliver a payback fast, and an even bigger payback going forward as you execute on the insights it reveals. And you already have a z, the only on-premises way to IBM’s Cognitive System.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

Arcati 2017 Mainframe Survey—Cognitive a No-Show

February 2, 2017

DancingDinosaur checks into Arcati’s annual mainframe survey every few years. You can access a copy of the 2017 report here.  Some of the data doesn’t change much, a few percentage points here or there. For example, 75% of the respondents consider the mainframe too expensive. OK, people have been saying that for years.

On the other hand, 65% of the respondents’ mainframes are involved with web services. Half also run Java-based mainframe apps, up from 30% last year, while another 17% are planning to run Java on their mainframes this year. Similarly, 35% of respondents report running Linux on the mainframe, up from 22% last year, and 13% of respondents expect to add Linux this year. Driving this are the cost and management benefits that result from consolidating distributed Linux workloads on the z. Yes, things are changing.


The biggest surprise for DancingDinosaur, however, revolved around IBM’s latest strategic initiatives, especially cognitive computing and blockchain. Other strategic initiatives may include, depending on who is briefing you at the moment: security, data analytics, cloud, hybrid cloud, and mobile. These strategic imperatives, especially cognitive computing, are expected to drive IBM’s revenue. In the latest earnings statement, reported last week on DancingDinosaur, strategic imperatives amounted to 41% of revenue. Cloud revenue and cloud-as-a-service also rose considerably, 35% and 61% respectively.

When DancingDinosaur searched the accompanying Arcati vendor report (over 120 vendors with brief descriptions) for cognitive, only GT Software came up. IBM didn’t even mention cognitive in its vendor listing, which admittedly was skimpy. The case was the same with blockchain: only one vendor, Atos, mentioned it, and there was nothing about blockchain in the IBM listing. More vendors, however, noted supporting one or more of the other supposed strategic initiatives.

Overall, the Arcati survey is quite positive about the mainframe. The survey found that 50 percent of sites viewed their mainframe as a legacy system (down from last year’s 62 percent). However, 22 percent (up from 16 percent last year) viewed mainframe as strategic, with 28 percent (up from 22 percent) viewing mainframes as both strategic and legacy.

Reinforcing the value of the mainframe, the survey found 78 percent of sites experienced some kind of increase in capacity. With increased demand for mainframe resources (data and processing), it should not be surprising that 81 percent of respondents report an increase in technology costs. Yet 38 percent of sites report their people costs have decreased or stayed the same.

Unfortunately, the survey also found that 70 percent of respondents thought there was a cultural barrier between mainframe and other IT professionals. That did not discourage respondents from pointing out the mainframe’s advantages: 100 percent highlighted the benefit of the mainframe’s availability, 83 percent highlighted security, 75 percent identified scalability, and 71 percent picked manageability as a mainframe benefit.

Social media also has a place in mainframe work. Respondents found social media (Facebook, Twitter, YouTube) useful for their work on the mainframe. Twenty-seven percent report using social media (up slightly from 25 percent last year), with the rest not using it at all, despite IBM offering Facebook pages dedicated to IMS, CICS, and DB2. DancingDinosaur, only an occasional FB visitor, will check it out and report.

In terms of how mainframes are being used, the Arcati survey found that 25 percent of sites are planning to use Big Data; five percent of sites have adopted it for DevOps while 48 percent are planning to use mainframe DevOps going forward. Similarly, 14 percent of respondents already are reusing APIs while another 41 percent are planning to.

Arcati points out another interesting thought: the survey showed a 55:45 percent split in favor of distributed systems. So you might expect the spend on the two types of platform to be similar. Yet the survey found that 87 percent of an organization’s IT spend was going to distributed systems! Apparently mainframes aren’t as expensive as people think. Or to put it another way, the cost of owning and operating distributed systems with mainframe-caliber QoS amounts to a lot more than people are admitting.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

