Posts Tagged ‘Linux’

IBM Q Network Promises to Commercialize Quantum

December 14, 2017

The dash to quantum computing is well underway, and IBM is preparing to be one of the leaders. When IBM gets there it will find plenty of company: HPE, Dell/EMC, Microsoft, and more are staking out quantum claims. In response, IBM is speeding the build-out of its quantum ecosystem, the IBM Q Network, which it announced today.

IBM’s 50 qubit system prototype

IBM already introduced its third generation of quantum computer in November, a prototype 50-qubit system. IBM promises online access to the IBM Q systems by the end of 2017, with a series of planned upgrades during 2018. IBM is focused on making advanced, scalable, universal quantum computing systems available to clients to explore practical applications.

Further speeding the process, IBM is building a quantum computing ecosystem of big companies and research institutions. The result, dubbed the IBM Q Network, will consist of a worldwide network of individuals and organizations, including scientists, engineers, business leaders, forward-thinking companies, academic institutions, and national research labs enabled by IBM Q. Its mission: advancing quantum computing and launching the first commercial applications.

Two goals in particular stand out. The first: engage industry leaders to combine quantum computing expertise with industry-oriented, problem-specific expertise to accelerate development of early commercial uses. The second: expand and train the ecosystem of users, developers, and application specialists that will be essential to the adoption and scaling of quantum computing.

The key to getting this rolling is the groundwork IBM laid with the IBM Q experience, which the company unveiled last March. It represents the company's effort to make a commercial universal quantum computer available for business and science applications. The Q systems started at 5 qubits and increased with each successive rev, arriving today at a prototype 50-qubit system, with services delivered via the IBM Cloud platform.

IBM opened public access to its quantum processors over a year ago to serve as an enablement tool for scientific research, a resource for university classrooms, and a catalyst for enthusiasm. Since then, participants have run more than 1.7M quantum experiments on the IBM Cloud.

To date IBM has been pretty easygoing about access to the quantum computers, but now that it has a 20-qubit system and a 50-qubit system coming, the company has become a little more restrictive about who can use them. Participation in the IBM Q Network is the only way to access these advanced systems, and it involves a commitment of money, intellectual property, and agreement to share and cooperate, although IBM implied at an early briefing that it could be flexible about what was shared and what could remain an organization's proprietary IP.

A key reason an organization might want to join the Q Network is QISKit, an open source quantum computing SDK. Most DancingDinosaur readers, if they want to participate at all, will do so as either partners or members. Another option, a Hub, is targeted at bigger, more ambitious early adopters. Hubs, as IBM puts it, provide access to IBM Q systems, technical support, educational and training resources, community workshops and events, and opportunities for joint work.
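
To give a sense of what working with QISKit looks like, here is a minimal sketch that builds a two-qubit entangling circuit with the QISKit Python API. The circuit-building calls shown are standard, but the API has evolved across releases, and how you submit a job to an IBM Q backend depends on the SDK version installed.

    # A minimal QISKit sketch: build a two-qubit Bell-state circuit.
    # Submitting the job to an IBM Q backend varies by SDK version.
    from qiskit import QuantumCircuit

    qc = QuantumCircuit(2, 2)   # two qubits, two classical bits
    qc.h(0)                     # put qubit 0 into superposition
    qc.cx(0, 1)                 # entangle qubit 0 with qubit 1
    qc.measure([0, 1], [0, 1])  # measure both qubits
    print(qc.draw())            # render the circuit as text

The same circuit definition runs essentially unchanged whether the backend is a local simulator or one of the larger IBM Q machines, which is part of the SDK's appeal as an on-ramp to the Q Network systems.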

The Q Network has already attracted significant interest from organizations at every level and across a variety of industry segments, including automotive, financial, electronics, chemical, and materials players from across the globe. Initial participants announced today include JPMorgan Chase, Daimler AG, Samsung, JSR Corporation, Barclays, Hitachi Metals, Honda, Nagase, Keio University, Oak Ridge National Lab, Oxford University, and University of Melbourne.

As noted at the top, other major players are staking out their quantum claims, but none seem as far along or as comprehensive as IBM:

  • Dell/EMC is aiming to solve complex, life-impacting analytic problems like autonomous vehicles, smart cities, and precision medicine.
  • HPE appears to be focusing its initial quantum efforts on encryption.
  • Microsoft, not surprisingly, expects to release a new programming language and computing simulator designed for quantum computing.

As you would expect, IBM also is rolling out IBM Q Consulting to help organizations envision new business value through the application of quantum computing technology and provide customized roadmaps to help enterprises become quantum-ready.

Will quantum computing actually happen? Your guess is as good as anyone’s. I first heard about quantum physics in high school 40-odd years ago. It was baffling but intriguing then. Today it appears more real but still nothing is assured. If you’re willing to burn some time and resources to try it, go right ahead. Please tell DancingDinosaur what you find.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

IBM’s POWER9 Races to AI

December 7, 2017

IBM is betting the future of its Power Systems on artificial intelligence (AI). The company publicly introduced its newly designed POWER9 processor this past Tuesday. The new machine, according to IBM, can shorten the training of deep learning frameworks by nearly 4x, allowing enterprises to build more accurate AI applications, faster.

IBM engineer tests the POWER9

Designed for the post-CPU era, the core POWER9 building block is the IBM Power Systems AC922. The AC922, notes IBM, is the first system to embed PCI-Express 4.0, next-generation NVIDIA NVLink, and OpenCAPI, three interface accelerators that together can move data 9.5x faster than PCIe 3.0-based x86 systems. The AC922 is designed to drive demonstrable performance improvements across popular AI frameworks such as Chainer, TensorFlow, and Caffe, as well as accelerated databases such as Kinetica.

More than a CPU under the AC922 cover

Depending on your sense of market timing, POWER9 may be coming at the best or worst time for IBM. Notes industry observer Timothy Prickett Morgan of The Next Platform: “The server market is booming as 2017 comes to a close, and IBM is looking to try to catch the tailwind and lift its Power Systems business.”

As Morgan puts it, citing IDC 3Q17 server revenue figures, HPE and Dell are jockeying for the lead in the server space. For the moment, HPE (including its H3C partnership in China) has the revenue lead with $3.32 billion, compared to Dell's $3.07 billion, while Dell was the shipment leader, with 503,000 machines sold in Q3 2017 versus HPE's 501,400. IBM does not rank among the top five shippers, but thanks in part to the Z and big POWER8 boxes it still holds the number three spot in server revenue, with $1.09 billion in sales for the third quarter, according to IDC. The z system accounted for $673 million of that, up 63.8 percent year-on-year due mainly to the new Z. If you do the math, Morgan continued, the Power Systems line accounted for $420.7 million in the period (roughly the $1.09 billion total less the Z's $673 million), down 7.2 percent from Q3 2016. This is not surprising given that customers held back knowing POWER9 systems were coming.

To get Power Systems back to where it used to be, Morgan continued, IBM must increase revenues by a factor of three or so. The good news is that, thanks to the popularity of hybrid CPU-GPU systems, which cost around $65,000 per node from IBM, this isn't impossible: at that price it takes fewer machines to rack up the revenue, even if it comes from a relatively modest number of footprints rather than a huge number of POWER9 processors. More than 90 percent of the compute in these systems comes from GPU accelerators, but thanks to bookkeeping magic, it all accrues to Power Systems when these machines are sold. Plus, IBM reportedly will be installing over 10,000 such nodes for the US Department of Energy's Summit and Sierra supercomputers in the coming two quarters, which should provide a nice bump. And once IBM gets the commercial POWER9 systems into the field, sales should pick up again, Morgan expects.

IBM clearly is hoping POWER9 will cut into Intel x86 sales, but that may not happen as anticipated. Intel is bringing out its own advanced x86 Xeon line, Skylake, rumored to be quite expensive, and don't expect POWER9 systems to be cheap either. The field is also getting more crowded. Morgan noted that various ARM chips, especially ThunderX2 from Cavium and Centriq 2400 from Qualcomm, can boost non-x86 numbers and divert sales from IBM's POWER9 systems. Also, AMD's Epyc x86 processors have a good chance of stealing some market share from Intel's Skylake. So POWER9 will have to fight for every sale, and IBM can take nothing for granted.

No doubt POWER9 presents a good case and has a strong backer in Google, but even that might not be enough. Still, POWER9 sits at the heart of what are expected to be the most powerful data-intensive supercomputers in the world, Summit and Sierra, which are expected to knock off the world's current fastest supercomputers, from China.

Said Bart Sano, VP of Google Platforms: “Google is excited about IBM’s progress in the development of the latest POWER technology,” adding that “the POWER9 OpenCAPI bus and large memory capabilities allow further opportunities for innovation in Google data centers.”

This really is about deep learning, one of today's hottest buzzwords. Deep learning has emerged as a fast-growing machine learning method that extracts information by crunching through millions of operations and masses of data to detect and rank the most important aspects of that data. IBM designed the POWER9 chip to manage free-flowing data, streaming sensors, and algorithms for data-intensive AI and deep learning workloads on Linux. Are your people ready to take advantage of POWER9?

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

BMC’s 12th Annual Mainframe Survey Shows Z Staying Power

November 17, 2017

ARM processors are invading the HPC and supercomputer segments. POWER9 is getting closer and closer to general commercial availability. IBM unveiled not one but two new quantum computers. Meanwhile, the Z continues to roll right along without skipping a beat, according to BMC's 12th annual mainframe survey.

There is no doubt that the computing landscape is changing dramatically and will continue to change. Yet mainframe shops appear to be taking it all in stride. As Mark Wilson reported from the recently completed SHARE Europe conference in the UK, citing the keynote delivered by Compuware CEO Chris O’Malley: “By design, the post-modern mainframe is the most future ready platform in the world: the most reliable, securable, scalable, and cost efficient. Unsurprisingly, the mainframe remains the dominant, growing, and vital backbone for the worldwide economy. However, outdated processes and tools ensnared in an apathetic culture doggedly resistant to change, prevent far too many enterprises from unleashing its unique technical virtues and business value.” If you doubt we are entering the post-modern mainframe era, just look at the LinuxONE Emperor II or the z14.

Earlier this month BMC released its 12th annual mainframe survey. The report, titled 5 Myths Busted, can be found here. The five myths:

  • Myth 1: Organizations have fully optimized mainframe availability
  • Myth 2: The mainframe is in maintenance mode; no one is modernizing
  • Myth 3: Executives are planning to replace their mainframes
  • Myth 4: Younger IT professionals are pessimistic about mainframe careers
  • Myth 5: People working on the mainframe today are all older

Everyone from prestigious executives like O'Malley to a small army of IBMers to lowly bloggers and analysts like DancingDinosaur has been pounding away at these myths for years. And this isn't the first survey to thoroughly discredit mainframe skeptics.

The mainframe is growing: 48% of respondents saw MIPS growth in the last 12 months, over 50% of respondents forecast MIPS growth in the next 12 months, and 71% of large shops (10,000 MIPS or more) experienced MIPS growth in the last year. Better yet, these same shops forecast more growth in the next 12 months.

OK, the top four priorities of respondents remained the same this year. The idea, however, that mainframe shops are fully optimized and just cruising is dead wrong. Survey respondents still have a to-do list of priorities:

  1. Cost reduction/optimization
  2. Data privacy/compliance
  3. Availability
  4. Application modernization

Maybe my favorite myth is that younger people have given up on the mainframe. BMC found that 53% of respondents are under age 50, and this group (age 30-49, with under 10 years of experience) overwhelmingly reports a very positive view of the mainframe's future. The majority went so far as to say they see the workload of their mainframe growing and view the mainframe as having a strong position of growth in the industry overall. This is reinforced by the growth of IBM's Master the Mainframe competition, which attracts young people in droves, over 85,000 to date, to work with the so-called obsolete mainframe.

And the mainframe, both the Z and the LinuxONE, is packed with technology that will continue to attract young people: Linux, Docker, Kubernetes, Java, Spark, and support for a wide range of both relational databases like DB2 and NoSQL databases like MongoDB. They use this technology to do mobile, IoT, blockchain, and more. Granted, most mainframe shops are not yet ready to run these kinds of workloads; IBM, however, even introduced new container pricing for the new Z to encourage them.

John McKenny, BMC’s VP of Strategy, has noticed growing interest in new workloads. “Yes, they continue to be mainly transactional applications but they are aimed to support new digital workloads too, such as doing business with mobile devices,” he noted.  Mobility and analytics, he added, are used increasingly to improve operations, and just about every mainframe shop has some form of cloud computing, often multiple clouds.

The adoption of Linux on the mainframe a decade ago immediately put an end to the threat posed by x86. Since then, IBM has become a poster child for open source and a slew of new technologies, from Java to Hadoop to Spark to whatever comes next. Although traditional mainframe data centers have been slow to adopt these new technologies, some are starting, and that, along with innovative machines like the z14 and LinuxONE Emperor II, is what ultimately will keep the mainframe young and competitive.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

IBM 3Q17 Results Break Consecutive Quarters Losing Streak

November 2, 2017

DancingDinosaur generally does not follow the daily gyrations of IBM's stock, assuming that readers like you are not really active investors in the company. That is not to say, however, that you don't have an important, even critical interest in the company's fortunes. As users of Z or Power systems, you want to know that IBM has the means to continue to invest in and advance your preferred platform. And a losing streak of 20+ consecutive quarters doesn't exactly inspire confidence.

What is interesting about IBM's latest 3Q17 financials, which end the string of consecutive revenue declines, is the performance of the Z and storage, the two things most of us are concerned with.

Blockchain simplifies near real-time clearing and settlement

Here is what Martin Schroeter, IBM Senior Vice President and Chief Financial Officer, said to the investment analysts he briefs: In Systems, we had strong growth driven by the third consecutive quarter of growth in storage, and a solid launch of our new z14 mainframe, now just called Z, which was available for the last two weeks of the quarter.

DancingDinosaur has followed the mainframe for several decades at least, and the introduction of a new mainframe always boosts revenue for the next quarter or two. The advantages were apparent on Day 1, when the machine was introduced. As DancingDinosaur wrote then: you get the z14's pervasive encryption automatically, virtually for free. IBM insists it will deliver the z14 at the same price/performance of the z13 or less; the encryption is built into the cost of silicon out of the box.

A few months later IBM introduced a new LinuxONE mainframe, the Emperor II. The new LinuxONE doesn't yet offer pervasive encryption but provides Secure Service Containers. As described here at that time: through the Secure Service Container, data can be protected against internal threats at the system level, even from users with elevated credentials or hackers who obtain a user's credentials, as well as against external threats.

Software developers will benefit by not having to create proprietary dependencies in their code to take advantage of these security capabilities. An application only needs to be put into a Docker container for Secure Service Container deployment. The application can be managed using the Docker and Kubernetes tools that are included to make Secure Service Container environments easy to deploy and use. Again, it will likely take a few quarters for LinuxONE shops and other Linux shops to seek out the Emperor II and Secure Service Containers.

Similarly, in recent weeks IBM has been bolstering its storage offerings. As Schroeter noted, storage, including Spectrum storage and flash, has been experiencing a few positive quarters, and new products should help continue that momentum. For example, IBM Spectrum Protect Plus promises to make data protection available in as little as one hour.

And the IBM FlashSystem 900, introduced at the end of October, promises efficient, ultra-dense flash with CAPEX and OPEX savings from 3x more capacity in a 2U enclosure. It also promises to maximize efficiency using inline data compression with no application performance impact while achieving consistent 95-microsecond response times.

But probably the best 3Q news came from the continuing traction of IBM's strategic imperatives: cloud, security, and cognitive computing continue to make a serious contribution to IBM revenue. Third-quarter cloud revenues increased 20 percent to $4.1 billion. Cloud revenue over the last 12 months was $15.8 billion, including $8.8 billion delivered as-a-service and $7.0 billion for hardware, software, and services to enable IBM clients to implement comprehensive cloud solutions. The annual exit run rate for as-a-service revenue increased to $9.4 billion from $7.5 billion in the third quarter of 2016. In the quarter, revenues from analytics increased 5 percent, revenues from mobile increased 7 percent, and revenues from security increased 51 percent. Added Schroeter: revenue from our strategic imperatives over the last 12 months was also up 10% to $34.9 billion, and now represents 45% of IBM.

OK, so IBM is no longer a $100+ billion company and hasn't been for some time. Maybe in a few years, if blockchain and the strategic imperatives continue to grow and quantum catches fire, it may be back over the $100 billion mark; but it's not clear how much that matters.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

Compuware Brings the Mainframe to AWS

October 6, 2017

IBM talks about the power of the cloud for the mainframe and has turned Bluemix into a cloud development and deployment platform for open systems. Where’s the Z?

For the past several years Compuware has been making quarterly advances in its mainframe tooling, and those advances are now available through AWS. Not only has that string of Topaz releases made mainframe management and operations more intuitive and graphical, but with AWS the tooling is now accessible from anywhere. DancingDinosaur has been reporting on Compuware's string of Topaz advances for two years, here, here, and here.

By tapping the power of both the cloud and the mainframe, enterprises can deploy Topaz to their global development workforce in minutes, accelerating the modernization of their mainframe environments. As Compuware noted, mainframe shops now have the choice of deploying Topaz on-premises or on AWS. By leveraging the cloud, they can deploy Topaz more quickly and securely, and scale without capital costs, while benefiting from new Topaz features as soon as the company delivers them.

To make Topaz work on AWS, Compuware turned to Amazon AppStream 2.0, which provides global development, test, and ops teams with immediate and secure cloud access to Compuware's entire mainframe Agile/DevOps solution stack, mainly Topaz. Amazon AppStream 2.0 is a fully managed, secure application streaming service that allows users to stream desktop applications from AWS to any device running a web browser.

Cloud-based deployment of Topaz, Compuware notes, allows for significantly faster implementation, simple administration, a virtual integrated development environment (IDE), adaptive capacity, and immediate developer access to software updates. The last of these is important, since Compuware has been maintaining a quarterly upgrade release schedule, in effect delivering new capabilities every 90 days.

Compuware is in the process of patenting technology for an intuitive, streamlined configuration menu that leverages AWS best practices to make it easy for mainframe admins to quickly configure secure connectivity between Topaz on AWS and their mainframe environment. It also enables the same connectivity to existing cross-platform enterprise DevOps toolchains running on-premises, in the cloud, or both.

Using Topaz on AWS, notes Compuware, mainframe shops can benefit in a variety of ways, specifically:

  • Modify, test and debug COBOL, PL/I, Assembler and other mainframe code via an Eclipse-based virtual IDE
  • Visualize complex and/or undocumented application logic and data relationships
  • Manage source code and promote artifacts through the DevOps lifecycle
  • Perform common tasks such as job submission, review, print and purge
  • Leverage a single data editor to discover, visualize, edit, compare, and protect mainframe files and data

The move to the Eclipse-based IDE represents a giant step for traditional mainframe shops trying to modernize. Eclipse is a leading open source IDE with IBM as a founding member. In addition to Eclipse, Compuware also integrates with other modern tools, including Jenkins, SonarSource, and Atlassian. Jenkins is an open source automation server written in Java that helps automate the non-human parts of the software development process through continuous integration while facilitating the technical aspects of continuous delivery. SonarSource enables visibility into mainframe application quality. Atlassian develops products for software developers, project managers, and content management, and is best known for Jira, its issue tracking application.

Unlike many mainframe ISVs, Compuware has been actively partnering with various innovative vendors to extend the mainframe's tool footprint and bring to the mainframe the kind of tools young developers, especially Millennials, want. Yes, it is possible to access sexy REST-based web and mobile tools through IBM's Bluemix, but for mainframe shops it feels kludgy. By giving its mainframe customers access to advanced tools through AWS, Compuware improves on this. And AWS beats Bluemix in terms of cloud penetration and low cost.

All mainframe ISVs should make their mainframe products accessible through the cloud if they want to keep their mainframe products relevant. IBM has its cloud; of course there is AWS, Microsoft has Azure, and Google rounds out the top four. These and others will keep cloud economics competitive for the foreseeable future. Hope to see you in the cloud.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM Moves Quantum Computing Toward Commercial Systems

September 20, 2017

IBM seems determined to advance quantum computing. Just this week IBM announced that its researchers had developed a new approach to simulating molecules on a quantum computer, one that may one day help revolutionize chemistry and materials science. In this case, the researchers implemented a novel algorithm that is efficient with respect to the number of quantum operations required for the simulation. This involved a 7-qubit processor.

7-qubit processor

In the diagram above, IBM scientists successfully used six qubits on a purpose-built seven-qubit quantum device to address the molecular structure problem for beryllium hydride (BeH2), the largest molecule simulated on a quantum computer to date.

Back in May IBM announced an even bigger quantum device: a prototype of the first commercial processor with 17 qubits, leveraging significant materials, device, and architecture improvements to make it the most powerful quantum processor IBM had created to that point. This week's announcement certainly didn't surpass it in size. IBM engineered the 17-qubit system to be at least twice as powerful as what is available to the public today on the IBM Cloud, and it will be the basis for the first IBM Q early-access commercial systems.

It has become apparent to the scientists and researchers who try to work with complex mathematical problems and simulations that the most powerful conventional commercial computers are not up to the task. Even the z14 with its 10-core CPU and hundreds of additional processors dedicated to I/O cannot do the job.

As IBM puts it: Even today’s most powerful supercomputers cannot exactly simulate the interacting behavior of all the electrons contained in a simple chemical compound such as caffeine. The ability of quantum computers to analyze molecules and chemical reactions could help accelerate research and lead to the creation of novel materials, development of more personalized drugs, or discovery of more efficient and sustainable energy sources.

The interplay of atoms and molecules is responsible for all matter that surrounds us in the world. Now “we have the potential to use quantum computers to boost our knowledge of natural phenomena in the world,” said Dario Gil, vice president of AI research and IBM Q, IBM Research. “Over the next few years, we anticipate IBM Q systems’ capabilities to surpass what today’s conventional computers can do, and start becoming a tool for experts in areas such as chemistry, biology, healthcare and materials science.”

So commercial quantum systems are coming. Are you ready to bring a quantum system into your data center? Actually, you can try one today for free here or through GitHub, which offers a Python software development kit for writing quantum computing experiments, programs, and applications. Although DancingDinosaur will gladly stumble through conventional coding, quantum computing probably exceeds his frustration level even with a Python development kit.
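
For the curious, such experiments can be remarkably short. Below is a sketch of a single-qubit “quantum coin flip” run on a local simulator; it assumes the qiskit and qiskit-aer Python packages are installed, and note that the simulator API has shifted across SDK versions.

    # A single-qubit "coin flip": put a qubit into superposition,
    # measure it, and tally the outcomes over 1,000 shots.
    from qiskit import QuantumCircuit
    from qiskit_aer import AerSimulator

    qc = QuantumCircuit(1, 1)
    qc.h(0)           # Hadamard gate: equal superposition of 0 and 1
    qc.measure(0, 0)  # collapse the qubit to a classical bit

    counts = AerSimulator().run(qc, shots=1000).result().get_counts()
    print(counts)     # roughly {'0': 500, '1': 500}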

However, if your organization is involved in industries like materials science and chemistry, or is wrestling with a problem you cannot solve on a conventional computer, it probably is worth a try, especially for free. You can also try an easy demo card game that compares quantum computing with conventional computing.

But as reassuring as IBM makes quantum computing sound, don't kid yourself; it is very complicated. Deploying even a small qubit machine is not going to be like buying your first PC. Quantum bits, reportedly, are very fragile and transitory. Labs keep them very cold just to stabilize the system and keep them from switching states before they should. Just think how you'd feel about your PC if the bit states of 0 and 1 suddenly and inexplicably changed.

That's not the only possible headache. You have only limited time to work on qubits, given their volatility when not supercooled. Also, work still is progressing on advancing the quantum frameworks and mapping out ecosystem enablement.

Even IBM researchers admit that some problems may not run better on quantum computers; until you pass a certain threshold, such as sufficient quantum volume, your workload might not perform better on a quantum machine. The IBM quantum team suggests it will take until 2021 to consistently solve a problem of commercial relevance using quantum computing.

Until then, and even after, IBM is talking about a hybrid approach in which parts of a problem are solved with a quantum computer and the rest with a conventional system. So don’t plan on replacing your Z with a few dozen or even hundreds of qubits anytime soon.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

Meet the new IBM LinuxONE Emperor II

September 15, 2017

Early this week IBM introduced the newest generation of the LinuxONE, the IBM LinuxONE Emperor II, built on the same technology as the IBM z14, which DancingDinosaur covered on July 19. The key feature of the new LinuxONE Emperor II is IBM Secure Service Container, presented as an exclusive LinuxONE technology representing a significant leap forward in data privacy and security capabilities. With the z14 the key capability was pervasive encryption. This time the Emperor II promises very high levels of security and data privacy assurance while rapidly addressing unpredictable data and transaction growth. Didn't we just hear a story like this a few weeks ago?

IBM LinuxONE Emperor (not II)

Through the IBM Secure Service Container, for the first time data can be protected against internal threats at the system level, from users with elevated credentials or hackers who obtain a user's credentials, as well as against external threats. Software developers will benefit by not having to create proprietary dependencies in their code to take advantage of these security capabilities. An application only needs to be put into a Docker container to be ready for Secure Service Container deployment. The application can then be managed using the Docker and Kubernetes tools that are included to make Secure Service Container environments easy to deploy and use.
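
The packaging step itself is just standard Docker workflow. Here is a sketch using the docker-py Python SDK; the image name and app directory are hypothetical, and on a LinuxONE the finished image would be deployed into a Secure Service Container partition with the bundled Docker and Kubernetes tooling rather than run on an ordinary host.

    # Build and smoke-test an ordinary Docker image with the docker-py
    # SDK (pip install docker). All names below are illustrative.
    import docker

    client = docker.from_env()

    # Build an image from a directory holding the app and its Dockerfile
    image, build_logs = client.images.build(path="./myapp", tag="myapp:latest")

    # Run it locally to verify before handing it to the SSC environment
    container = client.containers.run("myapp:latest", detach=True)
    print(container.short_id, container.status)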

The Emperor II, and the LinuxONE family generally, is being positioned as the premier Linux system for highly secure data serving. To that end, it promises:

  • Ultimate workload isolation and pervasive encryption through Secure Service Containers (SoD, a statement of direction)
  • Encryption of data at rest without application change and with better performance than x86
  • Protection of data in flight over the network with full end-to-end network security
  • Use of Protected Keys to secure data without giving up performance
  • Industry-leading secure Java performance via TLS (2-3x faster than Intel)

With the z14 you got this too, maybe worded slightly differently.

In terms of performance and scalability, IBM promises:

  • Industry-leading performance of Java workloads, up to 50% faster than Intel
  • Vertical scale to 170 cores, equivalent to hundreds of x86 cores
  • Simplification to make the most of your Linux skill base and speed time to value
  • SIMD to accelerate analytics workloads & decimal compute (critical to financial applications)
  • Pause-less garbage collection to enable vertical scaling while maintaining predictable performance

Like the z14, the Emperor II also lays a foundation for data serving and next gen apps, specifically:

  • Adds performance and security to new open source DBaaS deployments
  • Develops new blockchain applications based on the proven IBM Blockchain Platform—in terms of security, blockchain may prove more valuable than even secure containers or pervasive encryption
  • Support for data-in-memory applications and new workloads using 32 TB of memory—that’s enough to run production databases entirely in memory (of course, you’ll have to figure out if the increased performance, which should be significant, is worth the extra memory cost)
  • A build-your-cloud approach for providers wanting a secure, scalable, open source platform

If you haven't figured it out yet, IBM sees itself in a titanic struggle with Intel's x86 platform. With the LinuxONE Emperor II, IBM senses it can gain the upper hand with certain workloads. Specifically:

  • EAL 5+ isolation, best in class crypto key protection, and Secure Service Containers
  • 640 Power cores in its I/O channels (not included in the core count), giving the platform the best I/O capacity and performance in the industry
  • Its shared-memory, vertical-scale architecture is measurably better for stateful workloads like databases and systems of record
  • The LinuxONE/z14 hardware is designed to still give good response time at up to 100% utilization, which simplifies the solution and reduces the extra capacity many data centers assume is necessary because they're used to 50% utilization
  • The Emperor II can be ordered in a configuration designed and tested for earthquake resistance
  • The z-based LinuxONE infrastructure has survived fire and flood scenarios where all other server infrastructures have failed

That doesn't mean, however, that the Emperor II is a Linux no-brainer, even for shops facing pressure around security compliance, never-fail mission-critical performance, high capacity, and high performance. Change is hard, and there remains a cultural mindset based on the lingering myth of the cheap PC of decades ago.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM Promises Easy Fast Data Protection

September 1, 2017

Data protection used to be simple. You made a couple of copies of your data and stored them someplace safe. That hasn't worked for years at most enterprises and certainly won't work going forward; there are too many systems and too much data. Now you have to contend with virtual machines, NoSQL databases, cloud storage, and more. In the face of growing compliance mandates and a bevy of threats like ransomware, data protection has gotten much more complicated.

Last week IBM simplified it again by announcing IBM Spectrum Protect Plus. It promises to make data protection available in as little as one hour.

IBM achieves tape breakthrough

August turned out to be a good month for IBM storage. In addition to introducing Spectrum Protect Plus, IBM and Sony researchers achieved a record areal density of 201 Gb/in2 (gigabits per square inch). That translates into the potential to record up to about 330 terabytes (TB) of uncompressed data on a single tape cartridge. Don't expect commercially available products with this density soon, but you will want it sooner than you may think as organizations face the need to collect, store, and protect massive amounts of data for a wide range of use cases, from surveillance images to analytics to cognitive to, eventually, quantum computing.
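
A quick back-of-envelope calculation shows how areal density translates into cartridge capacity. The tape dimensions below are illustrative assumptions (roughly 1,000 m of half-inch tape), not IBM's published cartridge geometry; the raw figure comes out well above 330 TB, which is plausible once servo tracks, formatting, and error-correction overhead are deducted.

    # Rough capacity estimate: areal density x recordable tape area.
    # Tape length and width are assumed values for illustration only.
    areal_density = 201e9            # bits per square inch
    tape_length_in = 1000 * 39.37    # ~1,000 m of tape, in inches
    tape_width_in = 0.5              # half-inch tape

    raw_bits = areal_density * tape_length_in * tape_width_in
    raw_tb = raw_bits / 8 / 1e12     # bits -> bytes -> terabytes
    print(f"~{raw_tb:.0f} TB raw")   # ~495 TB before overhead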

IBM Spectrum Protect Plus delivers data availability using snapshot technology for rapid backup, recovery, and data management. Designed to be used by virtual machine (VM) and application administrators, it also provides data clone functionality to support and automate DevOps workflows. Unlike other data availability solutions, IBM Spectrum Protect Plus performs data protection and monitoring based on automated service level agreements to ensure proper backup status and retention compliance, noted IBM.

The company has taken to referring to Spectrum Protect Plus as the future of data protection, recovery, and data reuse. IBM designed it to be fast, modern, lightweight, low cost, easy to use, and simple to deploy while delivering rapid time to value. As noted at the top, the company claims it can make effective data protection available in an hour without relying on highly trained storage experts. Spectrum Protect Plus delivers data protection that, according to IBM, “anyone can manage”; the company adds that it installs in less than 15 minutes.

You get instant data and virtual machine recovery, which you grab from a snapshot. It is so slick, IBM managers say, that “when someone sends you a ransomware letter you can just laugh at them.” Only, of course, if you have been diligent in making backups; don't blame the Protect Plus tool, which is thoroughly automated behind the scenes. It was announced last week but won't be available until the fourth quarter of this year.

Protect Plus also brings a handful of new goodies for different stakeholders, as IBM describes it:

  • CIOs get a single view of the backup and recovery status across the data portfolio and the elimination of silos of data backup and recovery.
  • Senior IT managers (VM and application admins) can rapidly self-serve their data availability without complexity. IBM Spectrum Protect Plus also provides the ability to integrate VM and application backups into the business rules of the enterprise.
  • Senior application LOB owners get data lifecycle management with near-instantaneous recovery, copy management, and global search for fast data access and recovery.

Specifically designed for virtual machine (VM) environments and daily administration, the product deploys rapidly without agents. It also features a simple, role-based user interface (UI) with intuitive global search for fast recovery.

Data backup and recovery, always a pain in the neck, has gotten far more complex. For an enterprise data center facing stringent data protection and compliance obligations while juggling the backup of virtual and physical systems, probably across multiple clouds and multiple data centers, the challenges and risks have grown by orders of magnitude. You will need tools like Spectrum Protect Plus, especially the Plus part, which IBM insists is a completely new offering.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

 

IBM Blockchain Platform Aims for Immutable Accuracy

August 25, 2017

Earlier this week IBM announced a major blockchain collaboration among a group of leading companies across the global food supply chain. The goal is to reduce the number of people falling ill or even dying from eating contaminated food. IBM's solution is its blockchain platform, which it believes is ideally suited to help address these challenges because it establishes a trusted environment that maintains an accurate, consistent, immutable record of all transactions.

Blockchain can improve food traceability

The food segment is just one of many industries IBM will target for its blockchain platform. It describes the platform as ideally suited to address varied industry challenges because it establishes a trusted environment for all transactions. IBM claims it is the only fully integrated enterprise-ready blockchain platform designed to accelerate the development, governance, and operation of a multi-institution business network; rival vendors, like Accenture, may disagree. In the case of the global food supply chain, all participants (growers, suppliers, processors, distributors, retailers, regulators, and consumers) can gain permissioned access to known and trusted information regarding the origin and state of food. In December 2016 DancingDinosaur reported on IBM and Walmart using blockchain for food safety.

IBM's blockchain platform is built around Hyperledger Composer, which integrates with popular development environments and uses open developer tools and accepted business terms to generate blockchain code and smart contracts. It also includes sample industry use cases. Using IBM's platform, developers can express business logic in standard JavaScript, and the APIs help keep development work at the business level rather than becoming highly technical. This makes it possible for almost any programmer to be a blockchain developer. Additionally, a variety of IBM Developer Journeys for blockchain are available, featuring free open source code, documentation, APIs, architecture diagrams, and one-click-deployment Git repositories to fast-track building, according to IBM.

For governance and operation, the platform provides activation tools for new networks, members, smart contracts, and transaction channels. It also includes a multi-party workflow tool with a member activities panel, integrated notifications, and secure signature collection for policy voting. In addition, a new class of democratic governance tools, designed to help improve productivity across the organizations, uses a voting process that collects signatures from members to govern member invitation, the distribution of smart contracts, and the creation of transaction channels. By enabling the quick onboarding of participants, assigning roles, and managing access, organizations can begin transacting via the blockchain quickly.

In operating the network, the IBM blockchain platform provides always-on high availability with seamless software and blockchain network updates, a hardened security stack with no privileged access, which blocks malware, and built-in blockchain monitoring for full network visibility. Woven throughout the platform is Hyperledger Fabric. The platform also provides the highest-level commercially available tamper-resistant protection for encryption keys, FIPS 140-2 Level 4.

Along with its blockchain platform, IBM is advancing other blockchain supply chain initiatives by using the platform for an automated billing and invoicing system. Initial work to use blockchain for invoicing also is underway, starting with Lenovo. This will provide an audit-ready solution with full traceability of billing and operational data and help speed onboarding time for new vendors and new contract requirements, according to IBM.

The platform leverages IBM's work with more than 400 organizations. It includes insights gained as IBM has built blockchain networks across industries including financial services, supply chain and logistics, retail, government, and healthcare.

Extensively tested and piloted, IBM's new blockchain platform addresses a wide range of enterprise pain points, including both business and technical requirements around security, performance, collaboration, and privacy. It includes innovation developed through open source collaboration in the Hyperledger community, including the newest Hyperledger Fabric v1.0 framework and the Hyperledger Composer blockchain tool, both hosted by the Linux Foundation.

DancingDinosaur has previously noted that the z appears ideal for blockchain, based on the z13's scalability, security, and performance. The new z14, with its automated, pervasive encryption, may be even better. The Hyperledger Composer capabilities, along with the sample use cases, promise an easy, simple way to try blockchain among some suppliers and partners.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

Get a Next-Gen Datacenter with IBM-Nutanix POWER8 System

July 14, 2017

First announced by IBM on May 16 here, this solution, driven by client demand for a simplified hyperconverged infrastructure (combined server, network, storage, hardware, and software), is designed for data-intensive enterprise workloads. Aimed at companies increasingly looking for the ease of deployment, use, and management that hyperconverged solutions promise, it is offered as an integrated hardware and software package in order to deliver on that expectation.

Music made with IBM servers, storage, and infrastructure

IBM's new POWER8 hyperconverged solutions enable a public cloud-like experience on on-premises infrastructure, with top virtualization and automation capabilities combined with Nutanix's public and on-premises cloud capabilities. They provide a combination of reliable storage, fast networks, scalability, and extremely powerful computing in modular, scalable, manageable building blocks that can be expanded simply by adding nodes when needed.

Over time, IBM suggests, a roadmap of offerings will roll out as more configurations are needed to satisfy client demand and as features and functions are brought into both the IBM Cognitive Systems portfolio and the Nutanix portfolio. Full integration is key to the value proposition of this offering, so more roadmap options will be delivered as soon as features are delivered and integration testing can be completed.

Here are three immediate things you might do with these systems:

  1. Mission-critical workloads, such as databases, large data warehouses, web infrastructure, and mainstream enterprise apps
  2. Cloud native workloads, including full stack open source middleware, enterprise databases, and containers
  3. Next generation cognitive workloads, including big data, machine learning, and AI

Note, however, the change in IBM's pricing strategy. The products will be priced with the goal of remaining neutral on total cost of acquisition (TCA) versus comparable offerings on x86; in short, IBM promises to be competitive with comparable x86 systems in terms of TCA. This is a significant deviation from IBM's traditional pricing, but as we have already started to see, and will continue to see going forward, IBM clearly is ready to use pricing flexibility to win deals on the products it wants to push.

IBM envisions the new hyperconverged systems bringing data-intensive enterprise workloads like EDB Postgres, MongoDB, and WebSphere into a simple-to-manage, on-premises cloud environment. Running these complex workloads on an IBM Hyperconverged Nutanix POWER8 system can help an enterprise quickly and easily deploy open source databases and web-serving applications in the data center without the complexity of setting up all of the underlying infrastructure plumbing and wrestling with hardware-software integration.

And, maybe more to IBM's ultimate aim, these operational data stores may become the foundational building blocks enterprises use to build a data center capable of taking on cognitive workloads. These ever-advancing workloads in advanced analytics, machine learning, and AI will require the enterprise to seamlessly tap into data already housed on premises. Expect IBM to soon bring new offerings to market through an entire family of hyperconverged systems designed to simply and easily deploy and scale a cognitive cloud infrastructure environment.

Currently, IBM offers two systems: the IBM CS821 and IBM CS822. These servers are the industry's first hyperconverged solutions that marry Nutanix's one-click software simplicity and scalability with the proven performance of the IBM POWER architecture, which is designed specifically for data-intensive workloads. The IBM CS822 (the larger of the two offerings) sports 22 POWER8 processor cores. That's 176 compute threads (eight SMT threads per core), with up to 512 GB of memory and 15.36 TB of flash storage in a compact server that meshes seamlessly with simple Nutanix Prism management.

This server runs Nutanix Acropolis with AHV and little-endian Linux. If IBM honors its stated pricing promise, the cost should be competitive on total cost of acquisition with comparable offerings on x86. DancingDinosaur is not a lawyer (to his mother's disappointment), but it looks like there is considerable wiggle room in this promise. IBM Hyperconverged Nutanix Systems will be released for general availability in Q3 2017. Specific timelines, models, and supported server configurations will be announced at the time of availability.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

