Posts Tagged ‘System z’

IBM Moves Quantum Computing Toward Commercial Systems

September 20, 2017

IBM seems determined to advance quantum computing. Just this week IBM announced that its researchers developed a new approach to simulating molecules on a quantum computer, one that may one day help revolutionize chemistry and materials science. In this case, the researchers implemented a novel algorithm that is efficient with respect to the number of quantum operations required for the simulation. This involved a 7-qubit processor.

7-qubit processor

In the diagram above IBM scientists successfully used six qubits on a purpose-built seven-qubit quantum device to address the molecular structure problem for beryllium hydride (BeH2) – the largest molecule simulated on a quantum computer to date.

Back in May IBM announced an even bigger quantum device: a prototype of its first commercial processor, with 17 qubits, which leverages significant materials, device, and architecture improvements to make it the most powerful quantum processor IBM has created to date. This week’s announcement certainly didn’t surpass it in size. IBM engineered the 17-qubit system to be at least twice as powerful as what is available to the public today on the IBM Cloud, and it will be the basis for the first IBM Q early-access commercial systems.

It has become apparent to the scientists and researchers who try to work with complex mathematical problems and simulations that the most powerful conventional commercial computers are not up to the task. Even the z14 with its 10-core CPU and hundreds of additional processors dedicated to I/O cannot do the job.

As IBM puts it: Even today’s most powerful supercomputers cannot exactly simulate the interacting behavior of all the electrons contained in a simple chemical compound such as caffeine. The ability of quantum computers to analyze molecules and chemical reactions could help accelerate research and lead to the creation of novel materials, development of more personalized drugs, or discovery of more efficient and sustainable energy sources.

The interplay of atoms and molecules is responsible for all matter that surrounds us in the world. Now “we have the potential to use quantum computers to boost our knowledge of natural phenomena in the world,” said Dario Gil, vice president of AI research and IBM Q, IBM Research. “Over the next few years, we anticipate IBM Q systems’ capabilities to surpass what today’s conventional computers can do, and start becoming a tool for experts in areas such as chemistry, biology, healthcare and materials science.”

So commercial quantum systems are coming. Are you ready to bring a quantum system into your data center? Actually you can try one today for free here or through GitHub, which offers a Python software development kit for writing quantum computing experiments, programs, and applications. Although DancingDinosaur will gladly stumble through conventional coding, quantum computing probably exceeds his frustration level even with a Python development kit.
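For readers curious what a quantum program even looks like, below is a minimal sketch using Qiskit, the open-source Python SDK that grew out of the development kit mentioned above. It builds the two-qubit Bell-state circuit that serves as the hello world of quantum computing; actually running it on IBM’s cloud hardware requires an IBM Quantum account and the provider setup described in the Qiskit documentation, so treat this as an illustration rather than a recipe.

```python
# Minimal sketch using the Qiskit SDK (assumes: pip install qiskit).
# Builds a two-qubit Bell-state circuit, the "hello world" of quantum computing.
from qiskit import QuantumCircuit

qc = QuantumCircuit(2, 2)   # 2 qubits, 2 classical bits for the results
qc.h(0)                     # put qubit 0 into superposition
qc.cx(0, 1)                 # entangle qubit 0 with qubit 1
qc.measure([0, 1], [0, 1])  # read both qubits into the classical bits

print(qc.draw())            # ASCII diagram of the circuit
# Submitting the circuit to IBM's cloud-hosted hardware requires an IBM
# Quantum account plus the provider/runtime steps in the Qiskit docs.
```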

However, if your organization is involved in these industries (materials science, chemistry, and the like), or is wrestling with a problem you cannot solve on a conventional computer, it probably is worth a try, especially for free. You can try an easy demo card game that compares quantum computing with conventional computing.

But as reassuringly as IBM makes quantum computing sound, don’t kid yourself; it is very complicated. Deploying even a small qubit machine is not going to be like buying your first PC. Quantum bits, reportedly, are very fragile or transitory. Labs will keep them very cold just to better stabilize the system and keep them from switching their states before they should. Just think how you’d feel about your PC if the bit states of 0 and 1 suddenly and inexplicably changed.

That’s not the only possible headache. You only have limited time to work on qubits given their current volatility when not super-cooled. Also, work still is progressing on advancing the quantum frameworks and mapping out ecosystem enablement.

Even IBM researchers admit that some problems may not run better on quantum computers. And until you pass a certain threshold, such as qubit volume, your workload might not perform better on a quantum computer either. The IBM quantum team suggests it will take until 2021 to consistently solve a problem that has commercial relevance using quantum computing.

Until then, and even after, IBM is talking about a hybrid approach in which parts of a problem are solved with a quantum computer and the rest with a conventional system. So don’t plan on replacing your Z with a few dozen or even hundreds of qubits anytime soon.
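To make the hybrid idea concrete, here is a toy Python sketch of that division of labor: a conventional optimizer runs the outer loop while a quantum processor, here just a stand-in function, evaluates each candidate. The quantum_energy function is purely illustrative; on real hardware it would be a parameterized circuit executed on qubits.

```python
# Toy sketch of a hybrid quantum/classical loop (variational style).
# The "quantum" step is a stand-in; on real hardware it would be a
# parameterized circuit run on a quantum processor, not a cosine.
import math
from scipy.optimize import minimize

def quantum_energy(params):
    # Stand-in for an expectation value measured on a quantum device.
    theta = params[0]
    return math.cos(theta) + 0.5 * math.cos(2 * theta)

result = minimize(quantum_energy, x0=[0.1], method="COBYLA")
print("optimal parameter:", result.x, "estimated minimum:", result.fun)
```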

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM Blockchain Platform Aims for Immutable Accuracy

August 25, 2017

Earlier this week IBM announced a major blockchain collaboration among a group of leading companies across the global food supply chain. The goal is to reduce the number of people falling ill or even dying from eating contaminated food. IBM’s solution is its blockchain platform, which it believes is ideally suited to help address these challenges because it establishes a trusted environment that tracks all transactions in an accurate, consistent, immutable record.

Blockchain can improve food traceability

The food segment is just one of many industries IBM will target for its blockchain platform. It describes the platform as ideally suited to help address varied industry challenges because it establishes a trusted environment for all transactions. IBM claims it as the only fully integrated enterprise-ready blockchain platform designed to accelerate the development, governance, and operation of a multi-institution business network. Rival vendors, like Accenture, may disagree. In the case of the global food supply chain, all participants (growers, suppliers, processors, distributors, retailers, regulators, and consumers) can gain permissioned access to known and trusted information regarding the origin and state of food. In December 2016 DancingDinosaur reported on IBM and Walmart using blockchain for food safety.

IBM’s blockchain platform is built around Hyperledger Composer, integrated with popular development environments using open developer tools, and uses accepted business terms to generate blockchain code and smart contracts. It also includes sample industry use cases. Using IBM’s platform, developers can write standard business logic in JavaScript, and the APIs help keep development work at the business level rather than at a deeply technical level. This makes it possible for most any programmer to be a blockchain developer. Additionally, a variety of IBM Developer Journeys for blockchain are available featuring free open source code, documentation, APIs, architecture diagrams, and one-click deployment Git repositories to fast-track building, according to IBM.
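To give a feel for the business-level modeling IBM describes, here is a conceptual sketch in Python. Note the hedge: Hyperledger Composer actually expresses models in its own CTO modeling language with JavaScript transaction logic, so the names and structure below are illustrative only, not Composer syntax.

```python
# Conceptual illustration only: Hyperledger Composer models are written in its
# own CTO modeling language with JavaScript transaction logic, not Python.
# This sketch just shows the business-level shapes a developer works with.
from dataclasses import dataclass

@dataclass
class Shipment:            # an asset: a lot of produce moving through the chain
    lot_id: str
    product: str
    owner: str             # current custodian (grower, processor, retailer...)

@dataclass
class TransferCustody:     # a transaction: a change of custody recorded on-chain
    lot_id: str
    new_owner: str

def apply_transfer(shipment: Shipment, tx: TransferCustody) -> Shipment:
    # Business rule: the transaction must reference the asset it moves.
    assert shipment.lot_id == tx.lot_id
    return Shipment(shipment.lot_id, shipment.product, tx.new_owner)
```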

For governance and operation, the platform provides activation tools for new networks, members, smart contracts, and transaction channels. It also includes a multi-party workflow tool with a member activities panel, integrated notifications, and secure signature collection for policy voting. In addition, a new class of democratic governance tools, designed to help improve productivity across the organizations, uses a voting process that collects signatures from members to govern member invitations, the distribution of smart contracts, and the creation of transaction channels. By enabling the quick onboarding of participants, assigning roles, and managing access, organizations can begin transacting via the blockchain quickly.

In operating the network, the IBM blockchain platform provides always-on high availability with seamless software and blockchain network updates, a hardened security stack with no privileged access, which blocks malware, and built-in blockchain monitoring for full network visibility. Woven throughout the platform is Hyperledger Fabric. The platform also provides the highest level of commercially available tamper-resistant protection for encryption keys (FIPS 140-2 Level 4).

Along with its blockchain platform, IBM is advancing other blockchain supply chain initiatives by using the platform for an automated billing and invoicing system. Initial work to use blockchain for invoicing also is underway starting with Lenovo. This will provide an audit-ready solution with full traceability of billing and operational data, and help speed on-boarding time for new vendors and new contract requirements, according to IBM.

The platform leverages IBM’s work with more than 400 organizations. It includes insights gained as IBM has built blockchain networks across industries including financial services, supply chain and logistics, retail, government, and healthcare.

Extensively tested and piloted, IBM’s new blockchain platform addresses a wide range of enterprise pain points, including both business and technical requirements around security, performance, collaboration, and privacy. It includes innovation developed through open source collaboration in the Hyperledger community, including the newest Hyperledger Fabric v1.0 framework and the Hyperledger Composer blockchain tool, both hosted by the Linux Foundation.

DancingDinosaur has previously noted that the z appears ideal for blockchain. DancingDinosaur based this on the z13’s scalability, security, and performance. The new z14, with its automated, pervasive encryption, may be even better. The Hyperledger Composer capabilities, along with the sample use cases, promise an easy, simple way to try blockchain among some suppliers and partners.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

New Software Pricing for IBM Z

July 27, 2017

One of the often overlooked benefits of the introduction of a new mainframe like the Z is cost savings. Even though the machine may cost more, the performance and capabilities it delivers typically cost less on a per-unit basis. In the case of the new Z, it’s not just a modest drop in price/performance. With the new Z, IBM announced three new Container Pricing models for IBM Z, providing greatly simplified software pricing that promises flexible deployment with competitive economics versus public clouds and on-premises x86 environments.

Working on the new IBM Z

Here are the three biggest software pricing changes:

  • Predictable and Transparent Container Pricing—providing organizations greatly simplified software pricing that combines flexible deployment with competitive economics vs. public clouds and on-premises x86 environments. To IBM, a container can be any address space, however large or small. You can have any number of containers. “Container Pricing provides collocated workloads with line-of-sight pricing to a solution,” explained Ray Jones, VP, IBM Z Software and Hybrid Cloud. With container pricing, Jones continued, “the client determines where to deploy using WLM, z/OS and SCRT do the rest.”
  • Application dev and test—highly competitive stand-alone pricing for z/OS-based development and test workloads. Organizations can increase their DevTest capacity up to three times at no additional MLC cost. This will be based on the organization’s existing DevTest workload size. Or a company can choose the multiplier it wants and set the reference point for both MLC and OTC software.
  • Payment systems pricing is based on the business metric of payments volume a bank processes, not the available capacity. This gives organizations much greater flexibility to innovate affordably in a competitive environment, particularly in the fast-growing Instant Payment segment. To use the new per-payment pricing, Jones added, up-front licensing of IBM Financial Transaction Manager (FTM) software is required.

The Container Pricing options are designed to give clients the predictability and transparency they require for their business. The pricing models are scalable both within and across logical partitions (LPARs) and deliver greatly enhanced metering, capping, and billing capabilities. Container Pricing for IBM Z is planned to be available by year-end 2017 and enabled in z/OS V2.2 and z/OS V2.3.

Jones introduced the software discounts by reiterating that this effort focused on software container pricing for IBM Z and promising that there will be a technology software benefit with the z14 as there was with the z13. IBM, he added, will offer a way to migrate to the new pricing: “This is a beginning of a new beginning. Clearly as we go forward we want to expand what’s applicable to container pricing.” His clear implication: IBM is intent on expanding the discounting it started when, several years ago, it introduced discounts for mobile transactions running on the z, which were driving up monthly software cost averages as mobile transaction volume began to skyrocket.

To understand the latest changes you need to appreciate what IBM means by container. This is not just about Docker containers. A container to IBM simply is an address space. An organization can have multiple containers in a logical partition, have as many containers as it wants, and change the size of containers as needed.

The fundamental advantage of IBM’s container pricing is that it enables co-location of workloads for improved performance and reduced latency, hence IBM’s repeated references to line-of-sight pricing. In short, this is about MLC (four-hour rolling average) pricing. The new pricing removes what goes on inside the container from consideration. The price of the container is just that: the price of the container. It won’t affect the four-hour rolling average, resulting in very predictable pricing.
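A rough illustration of why that matters: MLC charges key off the monthly peak of the four-hour rolling average (R4HA) of MSU consumption, and under the new model the container-priced work stays out of that calculation. The sketch below computes a peak R4HA from made-up interval samples; the numbers and interval length are assumptions for illustration only.

```python
# Rough sketch: peak four-hour rolling average (R4HA) of MSU consumption.
# MLC pricing keys off the monthly peak of this average; container-priced
# work is excluded from it under the new model. Sample data is made up.
def peak_r4ha(msu_samples, samples_per_hour=12):  # e.g., 5-minute intervals
    window = 4 * samples_per_hour
    averages = [
        sum(msu_samples[i:i + window]) / window
        for i in range(len(msu_samples) - window + 1)
    ]
    return max(averages)

# Hypothetical day of general-purpose work: a midday spike raises the peak.
general_workload = [300] * 96 + [450] * 48 + [320] * 96   # MSUs per interval
print("billable peak R4HA:", round(peak_r4ha(general_workload), 1))
```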

The benefits are straightforward: simplified pricing for qualified solutions and the freedom to deploy in the best way. And IBM can price competitively to the customer’s solution; in effect, solution-specific pricing. Combined with the new payments-based price metric, IBM is trying to put together a competitive cost/price story. Of course, it is all predicated on the actual prices IBM finally publishes. Let’s hope they are as competitive as IBM implies.

DancingDinosaur never passes up an opportunity to flog IBM for overpricing its systems and services. From discussions with Jones and other IBM managers during the pre-launch briefings, the company may finally understand the need to make the mainframe, or z, or Z, or whatever IBM calls it, price-competitive on an operational level today. Low TCO, low cost of IOPS, or low cost of QoS is not the same thing.

This is especially important now. Managers everywhere appear to be waking up to the need to transform their mainframe-based businesses, at least in part, by becoming competitive digital businesses. DancingDinosaur never imagined that he would post something referencing the mainframe as a cost-competitive system able to rival x86 systems not just on quality of service but on cost. With the IBM Z the company is talking about competing with an aggressive cost strategy. It’s up to you, paying customers, to force them to deliver.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

New IBM Z Redefines Mainframe and Security and Cloud

July 19, 2017

By now you have certainly heard of IBM’s latest mainframe, the long-awaited z14, which the company refers to as Z. An announcement of a new mainframe usually doesn’t attract much notice, but maybe this announcement should. Even if you are not a mainframe fan this machine offers a solution that helps everybody—pervasive encryption of all data with no impact on operations or performance and with no need to take much action on your part, except to plug the machine in.

10-core z14 chip

At a time when organizations of all types and in every market segment are under attack from hackers, ransomware, data breaches, and more, all data center managers should welcome automatic pervasive encryption. Yet most don’t encrypt: of the 9 billion records breached since 2013, only 4% were encrypted! You already know why: encryption is a chore, impacts staff, slows system performance, costs money, and more. You know all the complaints better than DancingDinosaur.

The z14 changes everything from this point going forward. IBM has committed to a 4x increase in silicon dedicated to cryptographic algorithms for pervasive encryption. In effect the Z encrypts all data associated with an entire application, cloud service, or database, in flight and at rest, automatically. This amounts to bulk encryption at cloud scale made possible by a massive 7x increase in cryptographic performance over the z13. This is 18x faster than comparable x86 systems and at just five percent of the cost of x86-based solutions.
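For a sense of what the z14 automates at scale, here is a minimal sketch of authenticated encryption in Python using the widely used cryptography package. The z14 does the equivalent in dedicated silicon, for every dataset and API payload, without the per-application code changes a snippet like this normally implies.

```python
# Minimal sketch of authenticated encryption (AES-256-GCM) using the
# "cryptography" package (pip install cryptography). The z14's pervasive
# encryption does the equivalent in hardware, transparently to applications.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)
nonce = os.urandom(12)                       # 96-bit nonce; never reuse per key

ciphertext = aesgcm.encrypt(nonce, b"customer record 42", b"table=accounts")
plaintext = aesgcm.decrypt(nonce, ciphertext, b"table=accounts")
assert plaintext == b"customer record 42"
```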

In truth, it’s better than this. You get this encryption automatically virtually for free. IBM insists it will deliver the z14 at the same price/performance of the z13 or less. The encryption is built into the cost of silicon out of the box. DancingDinosaur has not seen any specific prices yet but you are welcome to scream if IBM doesn’t come through.

You immediately get rid of all the encryption headaches; you don’t have to classify data, manage encryption, or do any of the other chores typically associated with encryption. You just get it, automatically. The z14 also relieves you from managing encryption keys; only IBM Z can protect millions of keys (as well as the process of accessing, generating and recycling them) in tamper-responsive hardware that causes keys to be invalidated at any sign of intrusion and then be restored in safety.

When it comes to security, the z14 truly is a game changer. And it finally will get compliance auditors off your back once they realize how extensive z14 protection is.

IBM downplayed speeds and feeds with the z13 but they’re back with the z14. Specifically, a 5.2 GHz clock speed (versus 5.0 GHz for the z13) is still a bit short of the zEC12, which ran at 5.5 GHz. But as with the z13, IBM makes up for it with more memory. The z14 can handle 32 TB of memory. It also includes up to 170 configurable cores (up to 10 per chip) for a total of 1832 MIPS. The L1 and L2 caches are on the core. The L3 cache also sits on the chip, is shared by the on-chip cores, and communicates with cores, memory, I/O, and the system controller as a single-chip module.

Maybe not the richest specs but impressive nonetheless. IBM has been tweaking the box from top to bottom to boost performance. And all the while it will take over end-to-end encryption automatically, including encrypted APIs. Surprisingly, IBM has said nothing about the Z’s power consumption, but constantly-on encryption/decryption has to draw more power than, say, the z13. Am waiting to hear what IBM has to say.

This is not just for mainframe jocks. Optimized IBM z/OS Connect technologies make it straightforward for cloud developers to discover and call any IBM Z application or data from a cloud service, or for Z developers to call any cloud service. IBM Z now allows organizations to encrypt these APIs and still run nearly 3x faster than alternatives based on comparable x86 systems.  These speeds and feeds have all been thoroughly documented and detailed at the bottom of the IBM Z press release here.
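From the cloud side, calling a mainframe asset exposed through z/OS Connect looks like any other REST call. The sketch below uses Python’s requests library against a hypothetical endpoint; the host, path, and token are placeholders, not a real service.

```python
# Hypothetical example: calling a CICS/IMS asset exposed as a REST API via
# z/OS Connect. Host, path, and token are placeholders, not a real service.
import requests

resp = requests.get(
    "https://zosconnect.example.com:9443/accounts/v1/balance",
    params={"accountId": "12345"},
    headers={"Authorization": "Bearer <token>"},
    timeout=10,
    verify=True,   # TLS end to end; the z14 can also encrypt the API payloads
)
resp.raise_for_status()
print(resp.json())
```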

Will the z14 return the mainframe to positive revenue?  Probably for a few quarters, maybe more if non-mainframe shops want the clear payback of pervasive encryption, although it won’t be an easy transition for them without IBM assistance and incentives.

Next week DancingDinosaur will take up the Z’s three new container pricing models intended to make the Z competitive with public clouds and on-premises x86 environments.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM Power and z Platforms Show Renewed Excitement

June 30, 2017

Granted, 20 consecutive quarters of posting negative revenue numbers is enough to get even the most diehard mainframe bigot down. If you ran your life like that your house and your car would have been seized by the bank months ago.

Toward the end of June, however, both z and Power had some good news. First,  a week ago IBM announced that corporate enterprise users ranked the IBM z  enterprise servers as the most reliable hardware platform available on the market today. In its enterprise server category the survey also found that IBM Power Systems achieved the highest levels of reliability and uptime when compared with 14 server hardware options and 11 server hardware virtualization platforms.

IBM links 2 IBM POWER8 with NVIDIA NVLink with 4 NVIDIA Tesla P100 accelerators

The results were compiled and reported by the ITIC 2017 Global Server Hardware and Server OS Reliability survey, which polled 750 organizations worldwide during April/May 2017. Also among the survey findings:

  • IBM z Systems Enterprise mainframe-class systems were the only hardware platform with zero incidents of more than four hours of per server/per annum downtime. Specifically, IBM z Systems mainframe-class servers exhibit true mainframe fault tolerance, experiencing just 0.96 minutes of unplanned per server/per annum downtime. That equates to 8 seconds per month of “blink and you miss it,” or 2 seconds of unplanned weekly downtime. This is an improvement over the 1.12 minutes of per server/per annum downtime the z Systems servers recorded in ITIC’s 2016 – 2017 Reliability poll nine months ago.
  • IBM Power Systems had the least amount of unplanned downtime of any mainstream Linux server platform, with 2.5 minutes per server/per year.
  • IBM and the Linux operating system distributions were either first or second in every reliability category, including virtualization and security.

The survey also highlighted market reliability trends. For nearly all companies surveyed, having four nines (99.99%) of availability, equating to less than one hour of system downtime per year, was a key factor in their decisions.

Then consider the increasing costs of downtime. Nearly all survey respondents claimed that one hour of downtime costs them more than $150k, with one-third estimating that the same will cost their business up to $400k.

With so much activity going on 24×7, for an increasing number of businesses 4 nines of availability is no longer sufficient. These businesses are adopting carrier levels of availability: 5 nines or 6 nines (99.999 to 99.9999 percent), which translates to roughly 5 minutes (5 nines) or 30 seconds (6 nines) of downtime per year.
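The arithmetic behind the nines is simple enough to sanity-check yourself; here is a minimal sketch, using the roughly $150k-per-hour downtime cost cited above as an assumption.

```python
# Downtime per year implied by each availability level, plus a rough cost
# estimate using the ~$150k-per-hour figure cited in the survey.
MINUTES_PER_YEAR = 365 * 24 * 60

for label, availability in [("four nines", 0.9999),
                            ("five nines", 0.99999),
                            ("six nines", 0.999999)]:
    downtime_min = MINUTES_PER_YEAR * (1 - availability)
    cost = downtime_min / 60 * 150_000           # $150k per hour of downtime
    print(f"{label}: {downtime_min:.1f} min/year, ~${cost:,.0f}/year at risk")
```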

According to ITIC’s 2016 report: IBM’s z Enterprise mainframe customers reported the least amount of unplanned downtime and the highest percentage of five nines (99.999%) uptime of any server hardware platform.

Just this week, IBM announced that, according to results from the International Data Corporation (IDC) Worldwide Quarterly Server Tracker® (June 2017), IBM exceeded market growth by 3x compared with the total Linux server market, which grew at 6 percent. The improved performance is the result of success across IBM Power Systems, including IBM’s OpenPOWER LC servers, IBM Power Systems running SAP HANA, and the OpenPOWER-Ready servers developed through the OpenPOWER Foundation.

As IBM explains it: Power Systems market share growth is underpinned by solutions that handle fast-growing applications, like the deep learning capabilities within the POWER8 architecture. In addition, these are systems that expand IBM’s Linux server portfolio and have been co-developed with fellow members of the OpenPOWER Foundation.

Now all that’s needed is for IBM’s sales and marketing teams to translate this into revenue. Between that and the new systems IBM has been hinting at for the past year, maybe the consecutive quarterly losses might come to an end this year.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM Introduces Hitachi-Specific z13

May 30, 2017

Remember when rumors were flying that Hitachi planned to buy the mainframe z Systems business from IBM? DancingDinosaur didn’t believe it at the time, and now we have an official announcement that IBM is working with Hitachi to deliver mainframe z System hardware for use by Hitachi customers.

Inside the IBM z13

DancingDinosaur couldn’t see Hitachi buying the z. The overhead would be too great. IBM has been sinking hundreds of millions of dollars into the z, adding new capabilities ranging from Hadoop and Spark natively on z to whatever comes out of the Open Mainframe Project.

The new Hitachi deal takes the z in a completely different direction. The plan calls for using Hitachi’s operating system, VOS3, running on the latest IBM z13 hardware to provide Hitachi users with better performance while sustaining their previous investments in business-critical Hitachi data and software, as IBM noted. VOS3 started as a fork of MVS and has been repeatedly modified since.

According to IBM, Hitachi will exclusively adopt the IBM z Systems high-performance mainframe hardware technology as the only hardware for the next generation of Hitachi’s AP series. These systems primarily serve major organizations in Japan. This work expands Hitachi’s cooperation with IBM to make mainframe development more efficient through IBM’s global capabilities in developing and manufacturing mainframe systems. The Open Mainframe Project, BTW, is a Linux initiative.

The collaboration, noted IBM, reinforces its commitment to delivering new innovations in mainframe technology and fostering an open ecosystem for the mainframe to support a broad range of software and applications. IBM recently launched offerings for IBM z Systems that use the platform’s capabilities for speed, scale and security to deliver cloud-based blockchain services for building new transaction systems and machine learning for analyzing large amounts of data.

If you count VOS3, the mainframe now runs a variety of operating systems, including z/OS, z/TPF, and z/VM, as well as Linux. Reportedly, Hitachi plans to integrate its new mainframe with its Lumada Internet of Things (IoT) offerings. With its scalability, security, massive I/O, and performance, the z makes an ideal IoT platform, and IoT is a market IBM targets today. Now IBM is seeding a competitor with the z running whatever appealing capabilities Hitachi’s Lumada offers. Hope whatever revenue or royalties IBM gets is worth it.

IBM and Hitachi, as explained in the announcement, have a long history of cooperation and collaboration in enterprise computing technologies. Hitachi decided to expand this cooperation at this time to utilize IBM’s most advanced mainframe technologies. Hitachi will continue to provide its customers with a highly reliable, high-performance mainframe environment built around the Hitachi VOS3 operating system. Hitachi also continues to strengthen mainframe functionality and services, which contribute to lower TCO, improved ease of system introduction and operation, and better serviceability.

Of course, the mainframe story is far from over. IBM has been hinting at a new mainframe coming later this year for months. Since IBM stopped just automatically cranking up core processor speed to boost price/performance, it will employ an array of assist processors and software optimizations to boost performance wherever it can, but particularly in the area of its current critical imperatives—security, cognitive computing, blockchain, and cloud. One thing DancingDinosaur doesn’t expect to see in the new z, however, is embedded qubits, but who knows?

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM Insists Storage is Generating Positive Revenue

May 19, 2017

At a recent quarterly briefing on the company’s storage business, IBM managers crowed over its success: 2,000 new Spectrum Storage customers, 1,300 new DS8880 systems shipped, 1,500 PB of capacity shipped, and a 7% revenue gain in Q1’17. This appeared to contradict yet another consecutive losing quarter in which only IBM’s Cognitive Solutions (which includes Solutions Software and Transaction Processing Software) posted positive revenue.

However, Martin Schroeter, Senior Vice President and Chief Financial Officer (1Q’17 financials here), sounded upbeat about IBM storage in the quarterly statement: Storage hardware was up seven percent this quarter, led by double-digit growth in our all-flash array offerings. Flash contributed to our Storage revenue growth in both midrange and high-end. In storage, we continue to see the shift in value towards software-defined environments, where we continue to lead the market. We again had double-digit revenue growth in Software-Defined Storage, which is not reported in our Systems segment. Storage software now represents more than 40 percent of our total storage revenue.

IBM Flash System A9000

Highly parallel all-flash storage for hyperscale and cloud data centers

Schroeter continued: Storage gross margins are down, as hardware continues to be impacted by price pressure. To summarize Systems, our revenue and gross profit performance were driven by expected cycle declines in z Systems and Power, mitigated by Storage revenue growth. We continue to expand our footprint and add new capabilities, which address changing workloads. While we are facing some shifting market dynamics and ongoing product transitions, our portfolio remains uniquely optimized for cognitive and cloud computing.

DancingDinosaur hopes he is right.  IBM has been signaling a new z System coming for months, along with enhancements to Power storage. Just two weeks ago IBM reported achievements with Power and Nvidia, as DancingDinosaur covered at that time.

If there was any doubt, all-flash storage is the way IBM and most other storage providers are heading for the performance and competitive economics. In January IBM announced three all-flash DS888* products, which DancingDinosaur covered at the time here. Specifically:

  • DS8884 F (the F designates all flash)—described by IBM as performance delivered within a flexible and space-saving package
  • DS8886 F—combines performance, capacity, and cost to support a variety of workloads and applications
  • DS8888 F—promises performance and capacity designed to address the most demanding business workload requirements

The three products are intended to provide the speed and reliability needed for workloads ranging from enterprise resource planning (ERP) and financial transactions to cognitive applications like machine learning and natural language processing. Doubt that a lot of mainframe data centers are doing much with cognitive systems yet, but that will be coming.

Spectrum Storage also appears to be looming large in IBM’s storage plans. Spectrum Storage is IBM’s software defined storage (SDS) family of products. DancingDinosaur covered the latest refresh of the suite of products this past February.

The highlights of the recent announcement included the addition of Cloud Object Storage and a version of Spectrum Virtualize as software only.  Spectrum Control got a slew of enhancements, including new cloud-based storage analytics for Dell EMC VNX, VNXe, and VMAX; extended capacity planning views for external storage, and transparent cloud tiering for IBM Spectrum Scale.  The on-premises editions added consolidated chargeback/showback and support for Dell EMC VNXe file storage. This should make it clear that Spectrum Storage is not only for underlying IBM storage products.

Along the same lines, Spectrum Storage added VMware 6 support and the certified vSphere Web client. In the area of cloud object storage, IBM added native NFS access, enhanced STaaS multi-tenancy, IPv6 support, and preconfigured bundles.

IBM also previewed enhancements coming in 2Q’17. Of specific interest to DancingDinosaur readers will likely be the updates to the FlashSystem and VersaStack portfolios.

The company is counting on these enhancements and more to help pull IBM out of its tailspin. As Schroeter wrote in the 1Q’17 report: New systems product introductions later in the year will drive improved second half performance as compared to the first. Hope so; already big investors are cashing out. Clients, however, appear to be staying for now.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

Compuware-Syncsort-Splunk to Boost Mainframe Security

April 6, 2017

The mainframe has proven to be remarkably secure over the years, racking up the highest security certifications available. But there is still room for improvement. Earlier this week Compuware announced Application Audit, a software tool that aims to transform mainframe cybersecurity and compliance through real-time capture of user behavior.

Capturing user behavior, especially in real time, is seemingly impossible if you have to rely on the data you collect from the various logs and SMF records. Compuware’s solution, Application Audit, in conjunction with Syncsort and Splunk, fully captures and analyzes start-to-finish mainframe application user behavior.

As Compuware explains: Most enterprises still rely on disparate logs and SMF data from security products such as RACF, CA-ACF2, and CA-Top Secret to piece together user behavior. This is too slow if you want to capture bad behavior while it’s going on. Some organizations try to apply analytics to these logs, but that also is too slow. By the time you have collected enough logs to deduce who did what and when, the damage may have been done. Throw in the escalating demands of cross-platform enterprise cybersecurity and increasingly burdensome global compliance mandates, and you haven’t a chance without an automated tool optimized for this.

Fortunately, the mainframe provides rich and comprehensive session data you can analyze with Application Audit, in conjunction with the organization’s security information and event management (SIEM) systems, to more quickly and effectively see what really is happening. Specifically, it can:

  • Detect, investigate, and respond to inappropriate behavior by internal users with access
  • Detect, investigate, and respond to hacked or illegally accessed user accounts
  • Support criminal/legal investigations with complete and credible forensics
  • Fulfill compliance mandates regarding protection of sensitive data

IBM, by the way, is not ignoring the advantages of analytics for z security.  Back in February you read about IBM bringing its cognitive system to the z on DancingDinosaur.  IBM continues to flog cognitive on z for real-time analytics and security; promising to enable faster customer insights, business insights, and systems insights with decisions based on real-time analysis of both current and historical data delivered on an analytics platform designed for availability, optimized for flexibility, and engineered with the highest levels of security. Check out IBM’s full cognitive for z pitch.

The data Compuware and Syncsort collect with Application Audit is particularly valuable for maintaining control of privileged mainframe user accounts. Both private- and public-sector organizations are increasingly concerned about insider threats to both mainframe and non-mainframe systems. Privileged user accounts can be misused by their rightful owners, motivated by everything from financial gain to personal grievances, as well as by malicious outsiders who have illegally acquired the credentials for those accounts. You can imagine what havoc they could wreak.

In addition, with Application Audit Compuware is orchestrating a number of players to deliver the full security picture. Specifically, through collaboration with CorreLog, Syncsort and Splunk, Compuware is enabling enterprise customers to integrate Application Audit’s mainframe intelligence with popular SIEM solutions such as Splunk, IBM QRadar, and HPE Security ArcSight ESM. Additionally, Application Audit provides an out-of-the-box Splunk-based dashboard that delivers value from the start. As Compuware explains, these integrations are particularly useful for discovering and addressing security issues associated with today’s increasingly common composite applications, which have components running on both mainframe and non-mainframe platforms. SIEM integration also ensures that security, compliance and other risk management staff can easily access mainframe-related data in the same manner as they access data from other platforms.
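The plumbing behind that kind of SIEM integration is typically an HTTP event feed. As a hedged illustration (not how Application Audit itself is wired, and with the endpoint and token as placeholders), here is what forwarding a single mainframe audit event to Splunk’s HTTP Event Collector might look like in Python.

```python
# Hedged sketch: forwarding a mainframe audit event to Splunk's HTTP Event
# Collector (HEC). Endpoint and token are placeholders; Application Audit's
# own SIEM integrations handle this out of the box.
import json
import time
import requests

event = {
    "time": time.time(),
    "sourcetype": "mainframe:app_audit",
    "event": {
        "user": "OPER01",
        "action": "READ",
        "dataset": "PROD.PAYROLL.MASTER",
        "result": "SUCCESS",
    },
}

resp = requests.post(
    "https://splunk.example.com:8088/services/collector/event",
    headers={"Authorization": "Splunk <hec-token>"},
    data=json.dumps(event),
    timeout=10,
)
resp.raise_for_status()
```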

“Effective IT management requires effective monitoring of what is happening for security, cost reduction, capacity planning, service level agreements, compliance, and other purposes,” noted Stu Henderson, Founder and President of the Henderson Group in the Compuware announcement. “This is a major need in an environment where security, technology, budget, and regulatory pressures continue to escalate.”

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

 

IBM Introduces First Universal Commercial Quantum Computers

March 9, 2017

A few years ago DancingDinosaur first encountered the possibility of quantum computing. It was presented as a real but distant possibility. This is not something I need to consider, I thought at the time. By the time it is available commercially I will be long retired and probably six feet under. Well, I was wrong.

This week IBM unveiled its IBM Q quantum systems. IBM Q joins Watson and blockchain in delivering the most advanced set of services on the IBM Cloud platform. There are organizations using it now, and DancingDinosaur is still living and working.

IBM Quantum Computing scientists Hanhee Paik (left) and Sarah Sheldon (right) examine the hardware inside an open dilution fridge at the IBM Q Lab

As IBM explains: While technologies that currently run on classical (or conventional) computers, such as Watson, can help find patterns and insights buried in vast amounts of existing data, quantum computers will deliver solutions to multi-faceted problems where patterns cannot be seen because the data doesn’t exist and the possibilities that you need to explore are too enormous to ever be processed by conventional computers.

Just don’t retire your z or Power system in favor of an IBM Q yet. As IBM explained at a recent briefing on quantum computing, the IBM Q universal quantum computers will be able to handle any type of problem that conventional computers do today. However, many of today’s workloads, like on-line transaction processing, data storage, and web serving, will continue to run more efficiently on conventional systems. The most powerful quantum systems of the next decade will be a hybrid of quantum computers with conventional computers to control logic and operations on large amounts of data.

The most immediate use cases will involve molecular dynamics, drug design, and materials. The new quantum machine, for example, will allow the healthcare industry to design more effective drugs faster and at less cost and the chemical industry to develop new and improved materials.

Another familiar use case revolves around optimization in finance and manufacturing. The problem here comes down to computers struggling with optimization involving an exponential number of possibilities. Quantum systems, noted IBM, hold the promise of more accurately finding the most profitable investment portfolio in the financial industry, the most efficient use of resources in manufacturing, and optimal routes for logistics in the transportation and retail industries.

To refresh the basics of quantum computing: the challenges invariably entail exponential scale. You start with two basic ideas: 1) the uncertainty principle, which states that attempting to observe a state generally disturbs it while obtaining only partial information about it; and 2) entanglement, in which two systems can exist in an entangled state, causing them to behave in ways that cannot be explained by supposing that each has some state of its own. No more just zero or 1.
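For the mathematically inclined, the "no more just zero or 1" point fits in a few lines of numpy: a qubit is a two-component state vector, a superposition splits the measurement odds, and an entangled pair produces correlations no pair of classical bits can reproduce. This is only a pocket illustration of the math, not a simulation of IBM’s hardware.

```python
# Tiny numpy sketch of superposition and entanglement.
import numpy as np

ket0 = np.array([1, 0])                    # |0>
ket1 = np.array([0, 1])                    # |1>
plus = (ket0 + ket1) / np.sqrt(2)          # superposition: 50/50 on measurement
print("P(0), P(1) for |+>:", np.abs(plus) ** 2)

# Bell state (|00> + |11>)/sqrt(2): outcomes 00 and 11 each occur with
# probability 1/2, while 01 and 10 never occur. That correlation is the
# behavior classical bits cannot reproduce.
bell = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)
print("P(00), P(01), P(10), P(11):", np.abs(bell) ** 2)
```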

The basic unit of quantum computing is the qubit. Today IBM is making available a 5-qubit system, which is pretty small in the overall scheme of things. Large enough, however, to experiment and test some hypotheses; things start getting interesting at 20 qubits. An inflection point, IBM researchers noted, occurs around 50 qubits. At 50-100 qubits people can begin to do some serious work.

This past week IBM announced three quantum computing advances. The first is the release of a new API for the IBM Quantum Experience that enables developers and programmers to begin building interfaces between IBM’s existing 5-qubit cloud-based quantum computer and conventional computers, without needing a deep background in quantum physics. You can try the 5-qubit quantum system via IBM’s Quantum Experience on Bluemix here.

IBM also released an upgraded simulator on the IBM Quantum Experience that can model circuits with up to 20 qubits. In the first half of 2017, IBM plans to release a full SDK on the IBM Quantum Experience for users to build simple quantum applications and software programs. For now, only the publicly available 5-qubit quantum system with a web-based graphical user interface is offered; it will soon be upgraded to more qubits.

The IBM Research Frontiers Institute allows participants to explore applications for quantum computing in a consortium dedicated to making IBM’s most ambitious research available to its members.

Finally, the IBM Q Early Access Systems allow the purchase of access to a dedicated quantum system hosted and managed by IBM. The initial system has 15+ qubits, with a fast roadmap promised to 50+ qubits.

“IBM has invested over decades to growing the field of quantum computing and we are committed to expanding access to quantum systems and their powerful capabilities for the science and business communities,” said Arvind Krishna, senior vice president of Hybrid Cloud and director for IBM Research. “We believe that quantum computing promises to be the next major technology that has the potential to drive a new era of innovation across industries.”

Are you ready for quantum computing? Try it today on IBM’s Quantum Experience through Bluemix. Let me know how it works for you.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

IBM and Northern Trust Collaborate on Blockchain for Private Equity Markets

March 3, 2017

At a briefing for IT analysts, IBM laid out how it sees blockchain working in practice. Surprisingly, the platform for the Hyperledger effort was not x86 but LinuxONE, due to its inherent security. As the initiative grows, the z-based LinuxONE can also deliver the performance, scalability, and reliability the effort eventually will need.

IBM describes its collaboration with Northern Trust and other key stakeholders as the first commercial deployment of blockchain technology for the private equity market. As the private equity market stands now, the infrastructure supporting it has seen little innovation in recent years, even as investors seek greater transparency, security, and efficiency. Enter the open LinuxONE platform, the Hyperledger fabric, and Unigestion, a Geneva, Switzerland-based asset manager with $20 billion in assets under management.


IBM Chairman and CEO Ginni Rometty discusses blockchain at Sibos 2016 in Geneva, Switzerland

The new initiative, as IBM explains it, promises a new and comprehensive way to access and visualize data. Blockchain captures and stores information about every transaction and investment as metadata. It also captures details about relevant documents and commitments. Hyperledger itself is a logging tool that creates an immutable record.

The Northern Trust effort connects business logic, legacy technology, and blockchain technology using a combination of Java/JavaScript and IBM’s blockchain product. It runs on IBM Bluemix (cloud) using IBM’s Blockchain High Security Business Network. It also relies on key management to ensure record/data isolation and enforce geographic jurisdiction. In the end it facilitates managing the fund lifecycle more efficiently than the previous, primarily paper-based process.

More interesting to DancingDinosaur is the selection of the z through LinuxONE and blockchain’s use of storage.  To begin with blockchain is not really a database. It is more like a log file, but even that is not quite accurate because “it is a database you play as a team sport,” explained Arijit Das, Senior Vice President, FinTech Solutions, at the analyst briefing. That means you don’t perform any of the usual database functions; no deletes or updates, just appends.
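A few lines of Python make the append-only point concrete: each entry carries the hash of the entry before it, so altering any historical record breaks every hash that follows. This is a conceptual sketch of the tamper-evidence property, not Hyperledger Fabric’s actual data structures.

```python
# Conceptual sketch of an append-only, hash-chained log, the property that
# makes blockchain records tamper-evident. Not Hyperledger Fabric's format.
import hashlib
import json

ledger = []

def append_entry(payload: dict) -> None:
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    body = json.dumps({"prev": prev_hash, "payload": payload}, sort_keys=True)
    ledger.append({"prev": prev_hash, "payload": payload,
                   "hash": hashlib.sha256(body.encode()).hexdigest()})

# Hypothetical private equity commitments, appended but never updated or deleted.
append_entry({"fund": "PE-1", "investor": "A", "commitment": 5_000_000})
append_entry({"fund": "PE-1", "investor": "B", "commitment": 2_500_000})

# Any edit to an earlier entry changes its hash and breaks the chain from that
# point on, which is what makes the record effectively immutable.
```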

Since blockchain is an open technology, you actually could do it on any x86 Linux machine, but DancingDinosaur readers probably wouldn’t want to do that. Blockchain essentially ends up being a distributed group activity, and LinuxONE is unusually well optimized for the necessary security. It also brings scalability, reliability, and high performance along with the rock-solid security of the latest mainframe. In general LinuxONE can handle 8,000 virtual servers in a single system and tens of thousands of containers. Try doing that with an x86 machine, or even dozens. You can read more of what DancingDinosaur wrote about LinuxONE when it was introduced here and here.

But you won’t need near that scalability with the private equity application, at least at first. Blockchain gets more interesting when you think about storage. Blockchain has the potential to generate massive numbers of files fast, but that will only happen when it is part of, say, a supply chain with hundreds, or more likely, thousands of participating nodes on the chain and those nodes are very active. More likely for private equity trading, certainly at the start, blockchain will handle gigabytes of data and maybe only megabytes at first. This is not going to generate much revenue for IBM storage. A little bit of flash could probably do the trick.

Today, the legal and administrative processes that support private equity are time-consuming and expensive, according to Peter Cherecwich, president of Corporate & Institutional Services at Northern Trust. They lack transparency, while inefficient market practices lead to lengthy, duplicative, and fragmented investment and administration processes. Northern Trust’s solution based on blockchain and Hyperledger, however, promises to deliver a significantly enhanced and efficient approach to private equity administration.

Just don’t expect to see overnight results. In fact, you can expect more inefficiency since the new blockchain/Hyperledger-based system is running in parallel with the disjointed manual processes. Previous legacy systems remain; they are not yet being replaced. Still, IBM insists that blockchain is an ideal technology to bring innovation to the private equity market, allowing Northern Trust to improve traditional business processes at each stage to deliver greater transparency and efficiency. Guess we’ll just have to wait and watch.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

