IBM Moves Quantum Computing Toward Commercial Systems

September 20, 2017

IBM seems determined to advance quantum computing. Just this week IBM announced that its researchers have developed a new approach to simulating molecules on a quantum computer that may one day help revolutionize chemistry and materials science. In this case, the researchers implemented a novel algorithm that is efficient with respect to the number of quantum operations required for the simulation. The work ran on a 7-qubit processor.

7-qubit processor

In the diagram above IBM scientists successfully used six qubits on a purpose-built seven-qubit quantum device to address the molecular structure problem for beryllium hydride (BeH2) – the largest molecule simulated on a quantum computer to date.

Back in May, IBM announced an even bigger quantum device: a prototype of the first commercial processor with 17 qubits, which leverages significant materials, device, and architecture improvements to make it the most powerful quantum processor IBM has created to date. This week's announcement certainly didn't surpass it in size. IBM engineered the 17-qubit system to be at least twice as powerful as what is available to the public today on the IBM Cloud, and it will be the basis for the first IBM Q early-access commercial systems.

It has become apparent to the scientists and researchers working on complex mathematical problems and simulations that even the most powerful conventional commercial computers are not up to the task. Even the z14, with its 10-core CPU and hundreds of additional processors dedicated to I/O, cannot do the job.

As IBM puts it: Even today’s most powerful supercomputers cannot exactly simulate the interacting behavior of all the electrons contained in a simple chemical compound such as caffeine. The ability of quantum computers to analyze molecules and chemical reactions could help accelerate research and lead to the creation of novel materials, development of more personalized drugs, or discovery of more efficient and sustainable energy sources.

The interplay of atoms and molecules is responsible for all matter that surrounds us in the world. Now “we have the potential to use quantum computers to boost our knowledge of natural phenomena in the world,” said Dario Gil, vice president of AI research and IBM Q, IBM Research. “Over the next few years, we anticipate IBM Q systems’ capabilities to surpass what today’s conventional computers can do, and start becoming a tool for experts in areas such as chemistry, biology, healthcare and materials science.”

So commercial quantum systems are coming. Are you ready to bring a quantum system into your data center? Actually, you can try one today for free here or through GitHub, which offers a Python software development kit for writing quantum computing experiments, programs, and applications. Although DancingDinosaur will gladly stumble through conventional coding, quantum computing probably exceeds his frustration level even with a Python development kit.
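For the curious, the SDK is surprisingly approachable. Below is a minimal sketch, assuming the QISKit-style Python API published on GitHub, of a tiny two-qubit experiment run on the built-in simulator; the exact imports and backend name vary by SDK release, so treat them as assumptions rather than gospel.

    # Minimal two-qubit "Bell state" experiment using a QISKit-style Python API.
    # The imports and backend name are assumptions that may differ by version.
    from qiskit import QuantumCircuit, Aer, execute

    qc = QuantumCircuit(2, 2)      # two qubits, two classical bits
    qc.h(0)                        # put qubit 0 into superposition
    qc.cx(0, 1)                    # entangle qubit 0 with qubit 1
    qc.measure([0, 1], [0, 1])     # read both qubits out

    backend = Aer.get_backend("qasm_simulator")                     # local simulator
    counts = execute(qc, backend, shots=1024).result().get_counts()
    print(counts)                  # expect roughly half '00' and half '11'

If nothing else, it shows that writing a quantum experiment looks a lot more like ordinary Python scripting than like physics homework.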

However, if your organization is involved in these industries (materials science, chemistry, and the like), or is wrestling with a problem you cannot solve on a conventional computer, it probably is worth a try, especially for free. You can try an easy demo card game that compares quantum computing with conventional computing.

But as reassuring as IBM makes quantum computing sound, don't kid yourself; it is very complicated. Deploying even a small qubit machine is not going to be like buying your first PC. Quantum bits, reportedly, are very fragile and transitory. Labs keep them very cold just to stabilize the system and keep them from switching their states before they should. Just think how you'd feel about your PC if the bit states of 0 and 1 suddenly and inexplicably changed.

That's not the only possible headache. You have only limited time to work with qubits, given their current volatility when not supercooled. Also, work still is progressing on advancing the quantum frameworks and mapping out ecosystem enablement.

Even IBM researchers admit that some problems may not run better on quantum computers. Still, until you pass a certain threshold, such as qubit volume, your workload might not perform better on a quantum computer. The IBM quantum team suggests it will take until 2021 to consistently solve a problem of commercial relevance using quantum computing.

Until then, and even after, IBM is talking about a hybrid approach in which parts of a problem are solved with a quantum computer and the rest with a conventional system. So don’t plan on replacing your Z with a few dozen or even hundreds of qubits anytime soon.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

Meet the new IBM LinuxONE Emperor II

September 15, 2017

Early this week IBM introduced the newest generation of the LinuxONE, the IBM LinuxONE Emperor II, built on the same technology as the IBM z14, which DancingDinosaur covered on July 19. The key feature of the new LinuxONE Emperor II is the IBM Secure Service Container, presented as an exclusive LinuxONE technology representing a significant leap forward in data privacy and security capabilities. With the z14 the key capability was pervasive encryption. This time the Emperor II promises very high levels of security and data privacy assurance while rapidly addressing unpredictable data and transaction growth. Didn't we just hear a story like this a few weeks ago?

IBM LinuxONE Emperor (not II)

Through the IBM Secure Service Container, for the first time data can be protected against internal threats at the system level from users with elevated credentials or hackers who obtain a user’s credentials, as well as external threats. Software developers will benefit by not having to create proprietary dependencies in their code to take advantage of these security capabilities. An application only needs to be put into a Docker container to be ready for Secure Service Container deployment. The application can be managed using the Docker and Kubernetes tools that are included to make Secure Service Container environments easy to deploy and use.
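The packaging step itself is plain Docker, nothing exotic. Here is a rough sketch using the Docker SDK for Python with a made-up image name; it illustrates generic Docker usage, not a Secure Service Container API.

    # Generic Docker packaging sketch (image and container names are hypothetical).
    # The resulting image is what a Secure Service Container environment would run.
    import docker

    client = docker.from_env()

    # Build an image from a Dockerfile in the current directory.
    image, build_logs = client.images.build(path=".", tag="myapp:1.0")

    # Run it locally to verify before handing it to the hosting environment.
    container = client.containers.run("myapp:1.0", detach=True, name="myapp-test")
    print(container.name, container.status)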

The Emperor II and the LinuxONE line are being positioned as the premier Linux systems for highly secured data serving. To that end, the Emperor II promises:

  • Ultimate workload isolation and pervasive encryption through Secure Service Containers (SoD)
  • Encryption of data at rest without application change and with better performance than x86
  • Protection of data in flight over the network with full end-to-end network security
  • Use of Protected Keys to secure data without giving up performance
  • Industry-leading secure Java performance via TLS (2-3x faster than Intel)

With the z14 you got this too, maybe worded slightly differently.

In terms of performance and scalability, IBM promises:

  • Industry-leading performance of Java workloads, up to 50% faster than Intel
  • Vertical scale to 170 cores, equivalent to hundreds of x86 cores
  • Simplification to make the most of your Linux skill base and speed time to value
  • SIMD to accelerate analytics workloads & decimal compute (critical to financial applications)
  • Pause-less garbage collection to enable vertical scaling while maintaining predictable performance

Like the z14, the Emperor II also lays a foundation for data serving and next gen apps, specifically:

  • Added performance and security for new open source DBaaS deployments
  • Development of new blockchain applications based on the proven IBM Blockchain Platform—in terms of security, blockchain may prove more valuable than even secure containers or pervasive encryption
  • Support for data-in-memory applications and new workloads using 32 TB of memory—that's enough to run production databases entirely in memory (of course, you'll have to figure out whether the increased performance, which should be significant, is worth the extra memory cost)
  • A build-your-cloud approach for providers wanting a secure, scalable, open source platform

If you haven’t figured it out yet, IBM sees itself in a titanic struggle with Intel’s x86 platform.  With the LinuxONE Emperor II IBM senses it can gain the upper hand with certain workloads. Specifically:

  • EAL 5+ isolation, best in class crypto key protection, and Secure Service Containers
  • 640 Power cores in its I/O channels (that aren’t included in the core count) giving the platform the best I/O capacity and performance in the industry
  • Its shared-memory, vertical-scale architecture is measurably better for stateful workloads like databases and systems of record
  • The LinuxONE/z14 hardware is designed to still give good response time at up to 100% utilization, which simplifies the solution and reduces the extra costs many data centers assume are necessary because they're used to 50% utilization
  • The Emperor II can be ordered with a design tested for earthquake resistance
  • The z-based LinuxONE infrastructure has survived fire and flood scenarios where all other server infrastructures have failed

That doesn't mean, however, that the Emperor II is a Linux no-brainer, even for shops facing pressure around security compliance, never-fail mission-critical performance, high capacity, and high performance. Change is hard, and there remains a cultural mindset based on the lingering myth of the cheap PC of decades ago.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

Syncsort Finds New Corporate Home and Friend

September 8, 2017

Centerbridge Partners, L.P., a private investment firm, has completed the $1.26 billion acquisition of enterprise software providers Syncsort Incorporated and Vision Solutions, Inc. from affiliates of Clearlake Capital Group, L.P. Clearlake, which acquired Syncsort in 2015 and Vision in 2016, will retain a minority ownership stake in the combined company.

Syncsort is a provider of enterprise software and a player in Big Iron to Big Data solutions. DancingDinosaur has covered it here and here. According to the company, customers in more than 85 countries rely on Syncsort to move and transform mission-critical data and workloads. Vision Solutions provides business resilience tools addressing high availability, disaster recovery, migration, and data sharing for IBM Power Systems.

The company apparently hasn't suffered from being passed between owners. Syncsort has been active in tech acquisitions for the past two years as it builds out its data transformation footprint. Just a couple of weeks ago, it acquired Metron, a provider of cross-platform capacity management software and services. Metron's signature athene solution delivers trend-based forecasting, capacity modeling, and planning capabilities that enable enterprises to optimize their data infrastructure to improve performance and control costs on premises or in the cloud.

This acquisition is the first since the announcement that Syncsort and Vision Solutions are combining, adding expertise and proven leadership in the IBM i and AIX Power Systems platforms and reinforcing the company's 'Big Iron to Big Data' focus. Syncsort is also a long-established player in the mainframe business. Big Iron to Big Data promises to be a fast-growing market segment composed of solutions that optimize traditional data systems and deliver mission-critical data from those systems to next-generation analytic environments using innovative Big Data technologies. Metron's solutions and expertise are expected to contribute to the company's data infrastructure optimization portfolio.

Syncsort has been on a roll since late 2016 when, backed by Clearlake, it acquired Trillium Software, a global provider of data quality solutions. The acquisition of Trillium was the largest in Syncsort's history at the time and brought together data quality and data integration technology for enterprise environments. The combination of Syncsort and Trillium, according to the company, enables enterprises to harness all their valuable data assets for greater business insights, applying high-performance and scalable data movement, transformation, profiling, and quality across traditional data management technology stacks as well as Hadoop and cloud environments.

Specifically, Syncsort and Trillium both have a substantial number of large enterprise customers seeking to generate new insights by combining traditional corporate data with diverse information sources from mobile, online, social, and the Internet of Things. Syncsort expects these organizations to continue to rely heavily on next-generation analytic capabilities, creating a growing need for its best-in-class data integration and quality solutions to make their Big Data initiatives successful. Together, Syncsort and Trillium will continue to focus on providing customers with these capabilities for traditional environments, while leading the industry in delivering them for Hadoop and Spark too.

Earlier this year Syncsort integrated its own Big Data integration solution, DMX-h, with Cloudera Director, enabling organizations to easily deploy DMX-h along with Cloudera Enterprise on Amazon Web Services, Microsoft Azure, or Google Cloud. By deploying DMX-h with CDH, organizations can quickly pull data into new, ready-to-work clusters in the cloud—accelerating the time to capture cloud benefits, including cost savings and Data-as-a-Service (DaaS) delivery.

“As organizations liberate data from across the enterprise and deliver it into the cloud, they are looking for a self-service, elastic experience that’s easy to deploy and manage. This is a requirement for a variety of use cases – from data archiving to analytics that combine data originating in the cloud with on premise reference data,” said Tendü Yoğurtçu, Chief Technology Officer.

“By integrating DMX-h with Cloudera Director,” Yoğurtçu continued, “DMX-h is instantly available and ready to put enterprise data to work in newly activated cloud clusters.”

Syncsort DMX-h pulls enterprise data into Hadoop in the cloud and prepares that data for business workloads using native Hadoop frameworks, Apache Spark, or MapReduce, effectively enabling IT to achieve time-to-value goals and quickly deliver business insights.

It is always encouraging to see the mainframe ecosystem continue to thrive. IBM's own performance over the past few years has been anything but encouraging.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM Promises Easy Fast Data Protection

September 1, 2017

Data protection used to be simple. You made a couple of copies of your data and stored them someplace safe. That hasn't worked for years at most enterprises and certainly won't work going forward. There are too many systems and too much data. Now you have to contend with virtual machines, NoSQL databases, cloud storage, and more. In the face of growing compliance mandates and a bevy of threats like ransomware, data protection has gotten much more complicated.

Last week IBM simplified it again by announcing IBM Spectrum Protect Plus. It promises to make data protection available in as little as one hour.

IBM achieves tape breakthrough

It turned out that August was a good month for IBM storage. In addition to the introduction of Spectrum Protect Plus, IBM and Sony researchers achieved a record 201 Gb/in2 (gigabits per square inch) in areal density. That translates into the potential to record up to about 330 terabytes (TB) of uncompressed data on a single tape cartridge. Don't expect commercially available products with this density soon. But you will want it sooner than you may think as organizations face the need to collect, store, and protect massive amounts of data for a wide range of use cases, from surveillance images to analytics to cognitive to, eventually, quantum computing.
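A quick back-of-envelope check, assuming standard half-inch tape and decimal units, shows the two numbers hang together: 330 TB at 201 Gb/in2 implies roughly 13,000 square inches of recorded surface, or several hundred meters of tape, comfortably within a single cartridge.

    # Back-of-envelope check of the 330 TB claim (decimal units, half-inch tape assumed).
    areal_density_gb_per_in2 = 201           # gigabits per square inch (demo result)
    capacity_tb = 330                        # claimed uncompressed cartridge capacity

    capacity_gbits = capacity_tb * 1e12 * 8 / 1e9          # TB -> bits -> gigabits
    recorded_area_in2 = capacity_gbits / areal_density_gb_per_in2

    tape_width_in = 0.5                      # standard half-inch tape (assumption)
    recorded_length_m = (recorded_area_in2 / tape_width_in) * 0.0254

    print(f"recorded area: {recorded_area_in2:,.0f} sq in")       # ~13,000 sq in
    print(f"implied length: {recorded_length_m:,.0f} m of tape")  # ~670 m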

IBM Spectrum Protect Plus delivers data availability using snapshot technology for rapid backup, recovery, and data management. Designed to be used by virtual machine (VM) and application administrators, it also provides data clone functionality to support and automate DevOps workflows. Unlike other data availability solutions, IBM Spectrum Protect Plus performs data protection and monitoring based on automated service level agreements to ensure proper backup status and retention compliance, noted IBM.

The company has taken to referring to Spectrum Protect Plus as the future of data protection, recovery, and data reuse. IBM designed it to be fast, modern, lightweight, low cost, easy to use, and simple to deploy while delivering rapid time to value. As noted at the top, the company claims it can make effective data protection available in an hour without relying on highly trained storage experts. Spectrum Protect Plus delivers data protection, according to IBM, “anyone can manage,” adding that it installs in less than 15 minutes.

You get instant data and virtual machine recovery, which you grab from a snapshot. It is so slick, IBM managers say, that “when someone sends you a ransomware letter you can just laugh at them.” That's only true, of course, if you have been diligent in making backups. Don't blame the Protect Plus tool, which is thoroughly automated behind the scenes. It was announced last week but won't be available until the fourth quarter of this year.

Protect Plus also brings a handful of new goodies for different stakeholders, as IBM describes it:

  • CIOs get a single view of the backup and recovery status across the data portfolio and the elimination of silos of data backup and recovery.
  • Senior IT managers (VM and application admins) can rapidly self-serve their data availability without complexity. IBM Spectrum Protect Plus also provides an ability to integrate the VM and application backups into the business rules of the enterprise.
  • Senior application LOB owners get data lifecycle management with near-instantaneous recovery, copy management, and global search for fast data access and recovery

Specifically designed for virtual machine (VM) environments to support daily administration, the product deploys rapidly without agents. It also features a simple, role-based user interface (UI) with intuitive global search for fast recovery.

Data backup and recovery, always a pain in the neck, has gotten far more complex. For an enterprise data center facing stringent data protection and compliance obligations and juggling the backup of virtual and physical systems, probably across multiple clouds and multiple data centers, the challenges and risks have grown by orders of magnitude. You will need tools like Spectrum Protect Plus, especially the Plus part, which IBM insists is a completely new offering.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

 

IBM Blockchain Platform Aims for Immutable Accuracy

August 25, 2017

Earlier this week IBM announced a major blockchain collaboration among a group of leading companies across the global food supply chain. The goal is to reduce the number of people falling ill or even dying from eating contaminated food. IBM's solution is its blockchain platform, which it believes is ideally suited to help address these challenges because it establishes a trusted environment that tracks all transactions in an accurate, consistent, immutable record.

Blockchain can improve food traceability

The food segment is just one of many industries IBM will target for its blockchain platform. It describes the platform as ideally suited to help address varied industry challenges because it establishes a trusted environment for all transactions. IBM claims it is the only fully integrated, enterprise-ready blockchain platform designed to accelerate the development, governance, and operation of a multi-institution business network. Rival vendors, like Accenture, may disagree. In the case of the global food supply chain, all participants (growers, suppliers, processors, distributors, retailers, regulators, and consumers) can gain permissioned access to known and trusted information regarding the origin and state of food. In December 2016 DancingDinosaur reported on IBM and Walmart using blockchain for food safety.

IBM's blockchain platform is built around Hyperledger Composer, which integrates with popular development environments, uses open developer tools, and accepts business terms to generate blockchain code and smart contracts. It also includes sample industry use cases. Using IBM's platform, developers can express business logic in standard JavaScript, and the APIs help keep development work at the business level rather than at a deeply technical one. This makes it possible for almost any programmer to be a blockchain developer. Additionally, a variety of IBM Developer Journeys for blockchain are available, featuring free open source code, documentation, APIs, architecture diagrams, and one-click deployment Git repositories to fast-track building, according to IBM.
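Composer-built networks are typically exposed through a generated REST interface as well, which is how most programmers would actually touch the blockchain. The sketch below submits a transaction to such an endpoint from Python; the host, port, transaction type, and fields are hypothetical placeholders, not anything from IBM's announcement.

    # Hypothetical Composer-style REST endpoint for a food-supply business network.
    # Every name below is a placeholder for illustration only.
    import requests

    BASE_URL = "http://localhost:3000/api"

    shipment_received = {
        "$class": "org.example.food.ShipmentReceived",             # hypothetical type
        "shipment": "resource:org.example.food.Shipment#SHIP_001",
        "receivedAt": "2017-08-25T12:00:00Z",
    }

    resp = requests.post(f"{BASE_URL}/ShipmentReceived", json=shipment_received)
    resp.raise_for_status()
    print("transaction recorded:", resp.json())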

For governance and operation, the platform provides activation tools for new networks, members, smart contracts, and transaction channels. It also includes a multi-party workflow tool with a member activities panel, integrated notifications, and secure signature collection for policy voting. In addition, a new class of democratic governance tools, designed to help improve productivity across organizations, uses a voting process that collects signatures from members to govern member invitations, the distribution of smart contracts, and the creation of transaction channels. By enabling the quick onboarding of participants, assigning roles, and managing access, organizations can begin transacting via the blockchain quickly.

In operating the network, the IBM blockchain platform provides always-on high availability with seamless software and blockchain network updates, a hardened security stack with no privileged access that blocks malware, and built-in blockchain monitoring for full network visibility. Woven throughout the platform is Hyperledger Fabric. It also provides the highest level of commercially available tamper-resistant protection for encryption keys, FIPS 140-2 Level 4.

Along with its blockchain platform, IBM is advancing other blockchain supply chain initiatives by using the platform for an automated billing and invoicing system. Initial work to use blockchain for invoicing also is underway starting with Lenovo. This will provide an audit-ready solution with full traceability of billing and operational data, and help speed on-boarding time for new vendors and new contract requirements, according to IBM.

The platform leverages IBM's work with more than 400 organizations. It includes insights gained as IBM has built blockchain networks across industries including financial services, supply chain and logistics, retail, government, and healthcare.

Extensively tested and piloted, IBM's new blockchain platform addresses a wide range of enterprise pain points, including both business and technical requirements around security, performance, collaboration, and privacy. It includes innovation developed through open source collaboration in the Hyperledger community, including the newest Hyperledger Fabric v1.0 framework and the Hyperledger Composer blockchain tool, both hosted by the Linux Foundation.

DancingDinosaur has previously noted that the z appears ideal for blockchain. DancingDinosaur based this on the z13’s scalability, security, and performance. The new z14, with its automated, pervasive encryption may be even better.  The Hyperledger Composer capabilities along with the sample use cases promise an easy, simple way to try blockchain among some suppliers and partners.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM Cognitive Computing Results Here

August 18, 2017

IBM released the latest study on cognitive computing from its Institute for Business Value (IBV), and it brings a classic good news/bad news story line. The good news: professionals using cognitive computing are able to create and deliver the personalized, intuitive experiences customers demand. The bad news: cognitive computing could also be one of the most disruptive forces their organizations face. However you see the cognitive glass, half full or half empty, IBV calls cognitive computing “game-changing technology.”

IBM IBV infographic

Cognitive could be the answer to marketers' and sellers' prayers, or their nightmares. Still, it appears to the IBV researchers that Chief Marketing Officers (CMOs) and heads of sales are ready to take the gamble and make the cognitive leap. Nearly two-thirds of those surveyed believe their industries will be ready to adopt cognitive solutions by 2020.

The technology can quickly make sense of vast amounts of structured and unstructured data, including sounds and images, in ways similar to humans: by reasoning, learning, and interacting to improve accuracy over time. Companies identified as outperformers report cognitive is already operational at their organizations, with 73 percent already collecting and analyzing external market data.

Many marketing and sales executives, IBV researchers report, expect their organizations’ cognitive spend to increase within the next three years. Today, 63 percent estimate that cognitive accounts for 5 percent or less of their organizations’ IT budgets, including 18 percent who say it constitutes zero. Of those, 5 percent say their cognitive budget will still be zero in three years. By then, 21 percent expect it to grow to 5 – 10 percent. Almost a quarter of outperformers say it could account for more than 20 percent of their IT spend.

Respondents have high expectations that this technology will pay off. Nearly a third say their organizations would need a 10 – 15 percent return to justify their investment. More than half expect their organizations to recover their cognitive investment within 2-4 years. Clearly they expect the payoff from cognitive insights to come fast.

Today’s big driver, IBV researchers found, is customer satisfaction. Meeting or exceeding customer expectations is a common buzzword of many managers. But practically speaking, the researchers continued, many of those surveyed say they aren’t sure their organizations are currently set up to make a successful transition. The study, conducted in cooperation with Oxford Economics, is based on a global survey with 525 CMOs.

Traditional analytics long provided data for businesses to draw insights. Cognitive analytics goes further, providing predictive outcomes that turn insights into forward-directed recommendations, which, hopefully, impact real business decisions.

But the results are not guaranteed. The IBV researchers found that it’s important that organizations not simply focus on their expectations of better marketing and sales results. Rather, they also need to take into account the efficiencies and cost savings they could potentially gain with cognitive solutions, particularly for sales prospecting and management, as well as the media and marketing spend. Additionally, cognitive computing’s ability to help companies improve their customer experience should be included in any company’s ROI calculations.

Many eager adopters, however, report being hampered by a number of challenges, according to the IBV report. CMOs, for instance, complain they lack the technology needed to implement cognitive solutions. Additionally, they don’t believe they have enough of the required skills and expertise. Others suggest data governance and data sharing policies present a barrier while CMOs worry about security and privacy implications.

Still others cite a more elemental concern:  a lack of executive support for cognitive computing and worry their organizational culture may not be a good fit for a cognitive solution. At first glance, this seems surprising, given their apparent faith in cognitive benefits and the anticipation their industries will be adopting cognitive as the new normal.

The IBV findings, however, reveal executives’ mixed emotions about what the change cognitive represents for their companies. Many say they are feeling overwhelmed by the challenge of adopting yet another new technology and processes. Some marketing teams are in the midst of their own digital transformations. The adoption of cognitive computing, however, can be integrated into their current digital strategy and the tools they are using today, if they have some key capabilities in place. To some, that might be a big if.

To ensure the best results, IBV suggests:

  • Make room for cognitive solutions in your businesses’ digital reinvention strategy from the start
  • Enhance employees’ business skills, not just their data analytics skills
  • Make cognitive a golden opportunity for collaboration, innovation, and closer alignment within the C-suite
  • Start small if necessary but do start, preferably now

Cognitive can be applied in many ways in different areas of the business, potentially benefiting any of them. But until you try, you can't tell. The real risk, notes IBV, is waiting too long on the sidelines while the competition forges ahead. That's a story you've heard before.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

SHARE Attracts Mainframers to Providence

August 10, 2017

IBM CEO Ginni Rometty keynoted the 4-day SHARE conference, which opened in Providence, RI, on Monday. DancingDinosaur didn’t arrive until Tuesday—sorry Ginni, missed your speech–but found the event packed with sessions and a lively vendor expo.

z14 10-core chip design

The session on making the business case for Linux on z and IBM LinuxONE proved interesting. DancingDinosaur has covered this topic multiple times. The main point: Linux on the z drives greater cost efficiency when compared with Linux on distributed servers. Just add up the number of servers and admins you would need to host, say, a few thousand Linux instances. Other points made by the speaker, an IBMer named Eduardo Olivera, were security (even without the z14's pervasive encryption, which is coming to Linux on z) and high availability. Linux on the z also brings non-disruptive scalability and delivers excellent Linux performance. Linux on LinuxONE, he added, costs less since the cores that run Linux cost less than cores running z/OS.

Multiple instances of Linux benefit from a hypervisor as an added layer. On LinuxONE the primary hypervisor is KVM.

Another session, titled IBM MQ and the IBM Integration Bus for Docker Container and Cloud Environments, was led by Mark Taylor and David Coles. With the advent of the z14 and its new container pricing, containers will become a bigger topic going forward. DancingDinosaur covered container pricing a couple of weeks ago here, with more coming in upcoming posts. Suffice it to say for now that there is more in the z world than Docker containers.

Most of the session focused on MQ, IBM's message hub. MQ essentially decouples applications and systems, which opens a range of possibilities. For starters, MQ v9 works with IBM's Bluemix and has been simplified through the availability of pre-configured defaults and templates. According to Taylor, there is almost no situation in which, with a little creativity, you cannot deploy MQ to advantage. IBM, he suggested, is preparing to offer MQ as a hosted service (MQaaS) through the IBM cloud. In that case, MQ will become considerably easier to adopt, especially for mainframe novices.
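To make the decoupling concrete, here is a minimal producer sketch using the pymqi client for Python. The queue manager, channel, and queue names are the sort of defaults you would see in a developer-edition setup; they are assumptions, not values from the session.

    # Minimal MQ producer sketch using pymqi (connection details are assumed defaults).
    import pymqi

    queue_manager = "QM1"
    channel = "DEV.APP.SVRCONN"
    conn_info = "localhost(1414)"

    qmgr = pymqi.connect(queue_manager, channel, conn_info)
    queue = pymqi.Queue(qmgr, "DEV.QUEUE.1")
    queue.put(b"order-12345 accepted")   # the producer neither knows nor cares who consumes this
    queue.close()
    qmgr.disconnect()

The consumer, whenever and wherever it runs, simply gets messages off the same queue; neither side needs to know the other exists, which is the whole point of the decoupling.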

Rocket hosted an interesting session titled the API Revolution on Z—unlock your mainframe for accelerated digital transformation. In effect, Rocket laid out the business case for IBM z/OS Connect and went on to describe how it augments the IBM product with connections to more obscure products, like Adabas, and different ways data sets are organized, such as physically sequential, indexed sequential, and others.

The need for APIs, RESTful and otherwise, coincides with the fading away of data warehouses. Until recently, organizations created large data warehouses in which they parked data for analytics. Today, if an organization wants to be competitive, it needs to access the data where it resides, in at least near real time and in its native form. Organizations don't have the time to transform or normalize the data before analyzing it.

Instead, applications need to grab the latest data through an API and analyze it in whatever format it takes (SQL, JSON, NoSQL, SMF, basically anything) on the spot, through Spark or something even faster. And Rocket wasn't talking about read-only data; the data has to be fully transaction compliant.
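In practice that pattern boils down to calling a REST endpoint and working directly with the JSON it returns. A sketch, with a hypothetical z/OS Connect-style URL and payload (not an actual Rocket or IBM API):

    # Hypothetical REST call against a z/OS Connect-style service; the URL and
    # JSON fields are placeholders for illustration only.
    import requests

    resp = requests.get(
        "https://zosconnect.example.com/accounts/12345/transactions",
        params={"since": "2017-08-01"},
        timeout=10,
    )
    resp.raise_for_status()

    transactions = resp.json()   # data arrives in its native JSON form, no ETL step
    total = sum(t["amount"] for t in transactions)
    print(f"{len(transactions)} transactions totaling {total:.2f}")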

Brocade hosted an interesting session titled Data Center Modernization for Mainframe Environments. DancingDinosaur has sat through many mainframe modernization presentations, and they always focus on something different. Sometimes they focus on managing software deployment to minimize peak hourly charges. Other times the modernization strategy focuses on storage. Since this was Brocade's session, modernization focused on the network and the switches that optimize packet speed.

This is becoming increasingly important as flash storage gets coupled with really fast, highly optimized servers that easily saturate an older network design. Brocade’s latest Gen6 Directors promise to solve the problem, at least until the next generation of even faster flash and server processors arrive. Remember, the new z14 has a 10-core, 5.2 GHz processor that has been optimized at more levels than we can track.

For Brocade, modernization comes down to chasing anything that impacts latency. To modernize your mainframe environment, just identify all the little things that add latency. It's not just about buying a bigger pipe or a faster switch; it might require cleaning dirt from fiber optic cables that adds to latency in barely discernible increments.

But maybe the biggest delight for many at SHARE on Tuesday was the lobster and shellfish served during the vendor expo. For over an hour everyone feasted on lobster rolls and other shellfish. Unfortunately, DancingDinosaur is allergic to lobster.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

Join Me at Share in Providence

August 3, 2017

SHARE runs all week, but DancingDinosaur plans to be there on Tues., 8/8. SHARE is happening at the Rhode Island Convention Center, Providence, RI, August 6–11, 2017. Get details at Share.org. The keynote that day looks interesting. As SHARE describes it: Security and regulatory compliance are concerns that impact every professional within your IT organization.

In the Tuesday Keynote presentation at SHARE Providence, expert panelists will offer their perspectives on how various roles are specifically impacted by security, and what areas you should be most concerned about in your own roles. Listen to your peers share their insights in a series of TED-style Talks, starting with David Hayes of the Government Accountability Office, who will focus on common compliance and risk frameworks. Stu Henderson of Henderson Consulting will discuss organizational values and how those interacting with the systems are part of the overall control environment, followed by Simon Dodge of Wells Fargo providing a look at proactive activities in the organization that are important for staying ahead of threats and reducing the need to play catch-up when the auditors arrive. In the final talk of the morning, emerging cyber security threats will be discussed by Buzz Woeckener of Nationwide Insurance, along with tips on how to be prepared. At the conclusion of their presentations, the panelists will address audience questions on the topics of security and compliance.

You’ll find me wandering around the sessions and the expo. Will be the guy wearing the Boston hat.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

New Software Pricing for IBM Z

July 27, 2017

One of the often overlooked benefits of the introduction of a new mainframe like the Z is cost savings. Even though the machine may cost more, the performance and capabilities it delivers typically cost less on a per-unit basis. In the case of the new Z, it's not just a modest drop in price/performance. With the new Z, IBM announced three new Container Pricing models for IBM Z, providing greatly simplified software pricing that promises flexible deployment with competitive economics vs. public clouds and on-premises x86 environments.

Working on the new IBM Z

Here are the three biggest software pricing changes:

  • Predictable and Transparent Container Pricing—providing organizations greatly simplified software pricing that combines flexible deployment with competitive economics vs. public clouds and on-premises x86 environments. To IBM, a container can be any address space, however large or small. You can have any number of containers. “Container Pricing provides collocated workloads with line-of-sight pricing to a solution,” explained Ray Jones, VP, IBM Z Software and Hybrid Cloud. With container pricing, Jones continued, “the client determines where to deploy using WLM, z/OS and SCRT do the rest.”
  • Application dev and test—highly competitive stand-alone pricing for z/OS based development and test workloads. Organizations can increase their DevTest capacity up to 3 times at no additional MLC cost. This will be based on the organization’s existing DevTest workload size. Or a company can choose the multiplier it wants and set the reference point for both MLC and OTC software.
  • Payment systems pricing is based on the business metric of payments volume a bank processes, not the available capacity. This gives organizations much greater flexibility to innovate affordably in a competitive environment, particularly in the fast-growing Instant Payment segment. To use the new per-payment pricing, Jones added, up-front licensing of IBM Financial Transaction Manager (FTM) software is required.

The Container Pricing options are designed to give clients the predictability and transparency they require for their business. The pricing models are scalable both within and across logical partitions (LPARs) and deliver greatly enhanced metering, capping, and billing capabilities. Container Pricing for IBM Z is planned to be available by year-end 2017 and enabled in z/OS V2.2 and z/OS V2.3.

Jones introduced the software discounts by reiterating that this announcement focused on software container pricing for IBM Z and promised that there will be a technology software benefit with the z14 as there was with the z13. IBM, he added, will offer a way to migrate to the new pricing: “This is a beginning of a new beginning. Clearly as we go forward we want to expand what’s applicable to container pricing.” His clear implication: IBM is intent on expanding the discounting it started when, several years ago, it introduced discounts for mobile transactions running on the z, which were driving up monthly software cost averages as mobile transaction volume began to skyrocket.

To understand the latest changes you need to appreciate what IBM means by a container. This is not just about Docker containers. A container to IBM simply is an address space. An organization can have multiple containers in a logical partition, have as many containers as it wants, and change the size of containers as needed.

The fundamental advantage of IBM's container pricing is that it enables co-location of workloads to improve performance and remove latency, hence IBM's repeated references to line-of-sight pricing. In short, this is about MLC (4-hour) pricing. The new pricing removes what goes on inside a container from consideration. The price of the container is just that: the price of the container. It won't impact the 4-hour rolling average, resulting in very predictable pricing.
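To see why carving the container out matters, consider how the rolling 4-hour average (R4HA) that drives MLC charges works. The toy illustration below uses made-up hourly MSU numbers; it is a simplified sketch of the mechanism, not IBM's billing math.

    # Toy R4HA illustration with made-up hourly MSU samples for one LPAR.
    # "general" is traditional z/OS work; "container" is a collocated workload
    # priced separately under Container Pricing.
    general   = [300, 320, 410, 500, 480, 350, 300, 280]
    container = [ 50,  60, 150, 200, 180,  90,  40,  30]

    def peak_r4ha(samples, window=4):
        """Peak rolling average over a 4-hour window."""
        averages = [sum(samples[i:i + window]) / window
                    for i in range(len(samples) - window + 1)]
        return max(averages)

    combined = [g + c for g, c in zip(general, container)]

    print("peak R4HA with container counted:", peak_r4ha(combined))    # 590.0
    print("peak R4HA with container carved out:", peak_r4ha(general))  # 435.0

The container's own charge is whatever its contract says; the key is that its peaks no longer inflate the rolling average that prices everything else in the LPAR.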

The benefits are straightforward: simplified pricing for qualified solutions and the freedom to deploy in the best way. And IBM can price competitively to the customer's solution; in effect, solution-specific pricing. Combined with the new payments-based price metric, IBM is trying to put together a competitive cost/price story. Of course, it is all predicated on the actual prices IBM finally publishes. Let's hope they are as competitive as IBM implies.

DancingDinosaur never passes up an opportunity to flog IBM for overpricing its systems and services. From discussions with Jones and other IBM managers during the pre-launch briefings, the company may finally understand the need to make the mainframe, or z, or Z, or whatever IBM calls it, price-competitive on an operational level today. Low TCO or a low cost of IOPS or QoS is not the same thing.

This is especially important now. Managers everywhere appear to be waking up to the need to transform their mainframe-based businesses, at least in part, by becoming competitive digital businesses. DancingDinosaur never imagined that he would post something referencing the mainframe as a cost-competitive system able to rival x86 systems not just on quality of service but on cost. With the IBM Z the company is talking about competing with an aggressive cost strategy. It's up to you, paying customers, to force them to deliver.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

New IBM Z Redefines Mainframe and Security and Cloud

July 19, 2017

By now you have certainly heard of IBM’s latest mainframe, the long-awaited z14, which the company refers to as Z. An announcement of a new mainframe usually doesn’t attract much notice, but maybe this announcement should. Even if you are not a mainframe fan this machine offers a solution that helps everybody—pervasive encryption of all data with no impact on operations or performance and with no need to take much action on your part, except to plug the machine in.

10-core z14 chip

At a time when organizations of all types and in every market segment are under attack from hackers, ransomware, data breaches, and more, all data center managers should welcome automatic pervasive encryption. Yet 96 percent don't encrypt: of the 9 billion records breached since 2013, only 4 percent were encrypted! You already know why: encryption is a chore, impacts staff, slows system performance, costs money, and more. You know all the complaints better than DancingDinosaur.
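For anyone who has never had to do it by hand, here is a tiny sketch of the application-level chore, using the widely used Python cryptography package. It illustrates the kind of work (key generation, storage, explicit encrypt/decrypt calls, rotation) that pervasive encryption pushes below the application; it is not IBM's implementation.

    # What per-application encryption typically demands: generate a key, protect it,
    # wrap every read/write path in explicit calls, and plan for key rotation.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()      # this key must now be stored, guarded, and rotated
    cipher = Fernet(key)

    record = b"account=12345;balance=9876.54"
    token = cipher.encrypt(record)             # needed on every write path
    assert cipher.decrypt(token) == record     # and the reverse on every read path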

The z14 changes everything from this point forward. IBM has committed to a 4x increase in silicon dedicated to cryptographic algorithms for pervasive encryption. In effect, the Z encrypts all data associated with an entire application, cloud service, or database, in flight and at rest, automatically. This amounts to bulk encryption at cloud scale, made possible by a massive 7x increase in cryptographic performance over the z13. That is 18x faster than comparable x86 systems and at just five percent of the cost of x86-based solutions.

In truth, it's better than this. You get this encryption automatically, virtually for free. IBM insists it will deliver the z14 at the same price/performance as the z13 or better. The encryption is built into the cost of the silicon out of the box. DancingDinosaur has not seen any specific prices yet, but you are welcome to scream if IBM doesn't come through.

You immediately get rid of all the encryption headaches; you don’t have to classify data, manage encryption, or do any of the other chores typically associated with encryption. You just get it, automatically. The z14 also relieves you from managing encryption keys; only IBM Z can protect millions of keys (as well as the process of accessing, generating and recycling them) in tamper-responsive hardware that causes keys to be invalidated at any sign of intrusion and then be restored in safety.

When it comes to security, the z14 truly is a game changer. And it finally will get compliance auditors off your back once they realize how extensive z14 protection is.

IBM downplayed speeds and feeds with the z13, but they're back with the z14. Specifically, the 5.2 GHz clock (versus 5.0 GHz for the z13) is still a bit short of the zEC12, which ran at 5.5 GHz. But as with the z13, IBM makes up for it with more memory. The z14 can handle 32 TB of memory. It also includes up to 170 configurable cores (up to 10 per chip), with a single engine rated at roughly 1,832 MIPS. The L1 and L2 caches sit on the core. The L3 cache also sits on the chip, is shared by the on-chip cores, and communicates with cores, memory, I/O, and the system controller as a single-chip module.

Maybe not the richest specs, but impressive nonetheless. IBM has been tweaking the box from top to bottom to boost performance. And all the while it will take over end-to-end encryption automatically, including encrypted APIs. Surprisingly, IBM has said nothing about the Z's power consumption, but constantly-on encryption/decryption has to draw more power than, say, the z13. DancingDinosaur is waiting to hear what IBM has to say.

This is not just for mainframe jocks. Optimized IBM z/OS Connect technologies make it straightforward for cloud developers to discover and call any IBM Z application or data from a cloud service, or for Z developers to call any cloud service. IBM Z now allows organizations to encrypt these APIs and still run nearly 3x faster than alternatives based on comparable x86 systems.  These speeds and feeds have all been thoroughly documented and detailed at the bottom of the IBM Z press release here.

Will the z14 return the mainframe to positive revenue?  Probably for a few quarters, maybe more if non-mainframe shops want the clear payback of pervasive encryption, although it won’t be an easy transition for them without IBM assistance and incentives.

Next week DancingDinosaur will take up the Z’s three new container pricing models intended to make the Z competitive with public clouds and on-premises x86 environments.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

