Posts Tagged ‘Google’

Pushing Quantum Onto the Cloud

September 4, 2020

Did you ever imagine the cloud would become your quantum computing platform, a place where you would run complex quantum algorithms requiring significant specialized processing across multi-qubit machines available at a click? But that is exactly what is happening.

IBM started it a few years back by making its small-qubit machines available in the cloud, and it now offers even larger ones. Today Xanadu is offering 8-qubit or 12-qubit chips, with a 24-qubit chip due in the next month or so, according to the Toronto-based company.

Xanadu quantum processor

As DancingDinosaur has previously reported, there are even more: Google reports a quantum computer lab with five machines and Honeywell has six quantum machines. D-Wave is another, along with more startups, including IonQ, Quantum Circuits, and Rigetti Computing.

In September, Xanadu introduced its quantum cloud platform. This allows developers to access its gate-based photonic quantum processors with 8-qubit or 12-qubit chips across the cloud.

Photonics-based quantum machines have certain advantages over other platforms, according to the company. Xanadu’s quantum processors operate at room temperature, not low Kelvin temperatures. They can easily integrate into an existing fiber optic-based telecommunication infrastructure, enabling quantum computers to be networked. The photonic approach also offers scalability and fault tolerance, owing to error-resistant physical qubits and flexibility in designing error correction codes. Xanadu’s type of qubit is based on squeezed states – a special type of light generated by its own chip-integrated silicon photonic devices, it claims.

DancingDinosaur recommends you check out Xanadu’s documentation and details. It does not have sufficient familiarity with photonics, especially as related to quantum computing, to judge any of the above statements. The company also notes it offers a cross-platform Python library for simulating and executing programs on quantum photonic hardware. Its open source tools are available on GitHub.
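For the curious, the Python library the company describes is presumably its open source Strawberry Fields package. Here is a minimal sketch, assuming Strawberry Fields is installed (pip install strawberryfields), that simulates a tiny squeezed-light circuit on the local simulator rather than on Xanadu hardware:

```python
# Minimal sketch: simulate a two-mode squeezed-light circuit locally with
# Xanadu's open source Strawberry Fields library. Gate parameters are
# illustrative, not tuned for anything.
import strawberryfields as sf
from strawberryfields.ops import Sgate, BSgate, MeasureFock

prog = sf.Program(2)                  # two photonic modes
with prog.context as q:
    Sgate(0.5) | q[0]                 # squeeze mode 0 -- the squeezed-state resource
    BSgate() | (q[0], q[1])           # beamsplitter entangles the two modes
    MeasureFock() | q                 # photon-number measurement on both modes

eng = sf.Engine("fock", backend_options={"cutoff_dim": 5})
result = eng.run(prog)
print(result.samples)                 # photon counts per mode
```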

Late in August IBM unveiled a new milestone on its quantum computing road map, achieving the company’s highest Quantum Volume to date. By following the link, you see that Quantum Volume is a metric conceived by IBM to measure and compare quantum computing power. DancingDinosaur is not aware of any other quantum computing vendors using it, which doesn’t mean anything, of course. Quantum computing is so new and so different, and with many players joining in with different approaches, it will be years before we see which metrics prove most useful.

To come up with its Quantum Volume rating, IBM combined a series of new software and hardware techniques to improve overall performance, upgrading one of its newest 27-qubit systems to achieve the high Quantum Volume rating. The company has made a total of 28 quantum computers available over the last four years through the IBM Quantum Experience, which companies join to gain access to its quantum machines and tools, including its software development toolset.

Do not confuse Quantum Volume with Quantum Advantage, the point where certain information processing tasks can be performed more efficiently or cost effectively on a quantum computer versus a conventional one. Quantum Advantage will require improved quantum circuits, the building blocks of quantum applications. Quantum Volume, notes IBM, measures the length and complexity of circuits – the higher the Quantum Volume, the higher the potential for exploring solutions to real world problems across industry, government, and research.
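For readers who want the arithmetic behind the metric: IBM defines Quantum Volume as a power of two whose exponent is the size of the largest square model circuit (width equal to depth) the machine can run successfully:

\[
\mathrm{QV} = 2^{\,n}, \qquad n = \max\bigl\{\, d : d\text{-qubit, depth-}d \text{ model circuits pass IBM's heavy-output test} \,\bigr\}
\]

So Quantum Volume 64 means the machine reliably ran model circuits six qubits wide and six layers deep; raw qubit count alone does not determine the score.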

To achieve its Quantum Volume milestone, the company focused on a new set of techniques and improvements that used knowledge of the hardware to optimally run the Quantum Volume circuits. These hardware-aware methods are extensible and will improve any quantum circuit run on any IBM Quantum system, resulting in improvements to the experiments and applications that users can explore. These techniques will be available in upcoming releases and improvements to the IBM Cloud software services and the cross-platform open source software development kit (SDK) Qiskit. The IBM Quantum team has shared details on the technical improvements made across the full stack to reach Quantum Volume 64 in a preprint released on arXiv today.
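Qiskit itself is an ordinary pip-installable Python package, and you don’t need hardware access to experiment with it. A minimal sketch, run here on Qiskit’s bundled local simulator rather than on a real IBM Quantum machine:

```python
# Minimal sketch: build and run a Bell-state circuit with IBM's open source
# Qiskit SDK (pip install qiskit), using the local simulator.
from qiskit import QuantumCircuit, Aer, execute

qc = QuantumCircuit(2, 2)
qc.h(0)                      # put qubit 0 into superposition
qc.cx(0, 1)                  # entangle qubit 1 with qubit 0
qc.measure([0, 1], [0, 1])   # read both qubits out

backend = Aer.get_backend("qasm_simulator")
counts = execute(qc, backend, shots=1024).result().get_counts()
print(counts)                # roughly half '00' and half '11'
```

Pointing the same circuit at one of IBM’s cloud machines is mostly a matter of loading your IBM Quantum Experience credentials and choosing a different backend.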

What is most exciting is that the latest quantum happenings are things you can access over the cloud without having to cool your data center to near-zero Kelvin temperatures. If you try any of these, DancingDinosaur would love to hear how it goes.

Alan Radding, a veteran information technology analyst, writer, and ghost-writer, is DancingDinosaur. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/.

D-Wave and NEC Advance Quantum Computing

June 22, 2020

IBM boasts of 18 quantum computer models, based on the number of qubits, but it isn’t the only player staking out the quantum market. Last week D-Wave, another early shipper of quantum systems, announced a joint quantum product development and marketing initiative with NEC, which made a $10 million investment in D-Wave.

D-Wave NEC Quantum Leap

The two companies, according to the announcement,  will work together on the development of hybrid quantum/classical technologies and services that combine the best features of classical computers and quantum computers; the development of new hybrid applications that make use of those services; and joint marketing and sales go-to-market activities to promote quantum computing. Until quantum matures, expect to see more combinations of quantum and classical computing as companies try to figure out how these seemingly incompatible technologies can work together.

For example the two companies suggest that NEC and D-Wave will create practical business and scientific quantum applications in fields ranging from transportation to materials science to machine learning, using D-Wave’s Leap with new joint hybrid services. Or, the two companies might apply D-Wave’s collection of over 200 early customer quantum applications to six markets identified by NEC, such as finance, manufacturing and distribution.
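To make the hybrid idea concrete, here is a minimal sketch of what submitting a problem to Leap’s hybrid service looks like with D-Wave’s open source Ocean SDK. It assumes you already have a Leap account and an API token configured; the toy problem values are purely illustrative:

```python
# Minimal sketch: send a tiny QUBO to a D-Wave Leap hybrid solver using the
# Ocean SDK (pip install dwave-ocean-sdk). Assumes Leap credentials are
# already configured via `dwave config create` or environment variables.
from dwave.system import LeapHybridSampler

# Toy QUBO: minimize -x0 - x1 + 2*x0*x1; the best answers set exactly one
# of the two binary variables to 1 (energy -1).
Q = {(0, 0): -1, (1, 1): -1, (0, 1): 2}

sampler = LeapHybridSampler()        # routes the problem to Leap's hybrid service
sampleset = sampler.sample_qubo(Q)
print(sampleset.first.sample, sampleset.first.energy)
```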

“We are very excited to collaborate with D-Wave. This announcement marks the latest of many examples where NEC has partnered with universities and businesses to jointly develop various applications and technologies. This collaborative agreement aims to leverage the strengths of both companies to fuel quantum application development and business value today,” said Motoo Nishihara, Executive Vice President and CTO, NEC.

The two companies will also explore the possibility of enabling the use of NEC’s supercomputers on D-Wave’s Leap quantum cloud service.

“By combining efforts with NEC, we believe we can bring even more quantum benefit to the entire Japanese market that is building business-critical hybrid quantum applications in both the public and private sectors,” said Alan Baratz, CEO of D-Wave. He added: “We’re united in the belief that hybrid software and systems are the future of commercial quantum computing. Our joint collaboration will further the adoption of quantum computing in the Japanese market and beyond.”

IBM continues to be the leader in quantum computing, boasting 18 quantum computers of various qubit counts. And they are actually available for use via the Internet, where IBM keeps them running and sufficiently cold – a few degrees above absolute zero – to ensure computational stability. Quantum computers clearly are not something you want to buy for your data center.

But other companies are rushing into the market. Google operates a quantum computer lab with five machines and Honeywell has six quantum machines, according to published reports. Others include Microsoft and Intel. Plus there are startups: IonQ, Quantum Circuits, and Rigetti Computing. All of these have been referenced previously in earlier DancingDinosaur, which just hopes to live long enough to see useful quantum computing come about.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/

IBM Pushes Hybrid Cloud

December 14, 2018

Between quantum computing, blockchain, and hybrid cloud, IBM is pursuing a pretty ambitious agenda. Of the three, hybrid cloud promises the most immediate payback. Cloud computing is poised to become a “turbocharged engine powering digital transformation around the world,” states a new Forrester report, Predictions 2019: Cloud Computing.

Of course, IBM didn’t wait until 2019. It agreed to acquire Red Hat at the end of Oct. 2018. DancingDinosaur covered it here a few days later. At that time IBM Chairman Ginni Rometty called the acquisition of Red Hat a game-changer. “It changes everything about the cloud market,” she noted. At a cost of $34 billion, 10x Red Hat’s gross revenue, it had better be a game changer.

Forrester continues, predicting that in 2019 the cloud will reach its more interesting young adult years, bringing innovative development services to enterprise apps rather than just serving up cheaper, temporary servers and storage, which is how it has primarily grown over the past decade. Who hasn’t turned to one or another cloud provider to augment its IT resources as needed, whether for backup, server capacity, or network?

As Forrester puts it: The six largest hyperscale cloud leaders — Alibaba, Amazon Web Services [AWS], Google, IBM, Microsoft Azure, and Oracle — will all grow larger in 2019, as service catalogs and global regions expand. Meanwhile, the global cloud computing market, including cloud platforms, business services, and SaaS, will exceed $200 billion in 2019, expanding at more than 20%, the research firm predicts.

Hybrid clouds, which combine two or more cloud providers or platforms, are emerging as the preferred way for enterprises to go. Notes IBM: The digital economy is forcing organizations to a multi-cloud environment. Three of every four enterprises have already implemented more than one cloud. The growth of cloud portfolios in enterprises demands an agnostic cloud management platform — one that not only provides automation, provisioning and orchestration, but that also monitors trends and usage to prevent outages.

Of course, IBM also offers a solution for this; the company’s Multicloud Manager runs on its IBM Cloud Private platform, which is based on Kubernetes container orchestration technology, described as an open-source approach for ‘wrapping’ apps in containers, and thereby making them easier and cheaper to manage across different cloud environments – from on-premises systems to the public cloud.

Along with hybrid clouds, containers are huge in Forrester’s view. Powered by cloud-native open source components and tools, companies will start rolling out their own digital application platforms that will span clouds, include serverless and event-driven services, and form the foundation for modernizing core business apps for the next decade, the researchers observed. Next year’s hottest trend, according to Forrester, will be making containers easier to deploy, secure, monitor, scale, and upgrade. “Enterprise-ready container platforms from Docker, IBM, Mesosphere, Pivotal, Rancher, Red Hat, VMware, and others are poised to grow rapidly,” the researchers noted.

This may not be as straightforward as the researchers imply. Each organization must select for itself which private cloud strategy is most appropriate, they note, and they anticipate greater private cloud structure emerging in 2019. Organizations face three basic private cloud paths: building internally, using vSphere sprinkled with developer-focused tools and software-defined infrastructure; having the cloud environment custom-built with converged or hyperconverged software stacks to minimize the tech burden; or building the cloud infrastructure internally with OpenStack, relying on the hard work of their own tech-savvy teams. Am sure there are any number of consultants, contractors, and vendors eager to step in and do this for you.

If you aren’t sure, IBM is offering a number of free trials that you can play with.

As Forrester puts it: Buckle up; for 2019 expect the cloud ride to accelerate.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at technologywriter.com.

IBM’s Multicloud Manager for 2nd Gen Hybrid Clouds

November 15, 2018

A sign that IBM is serious about hybrid cloud is its mid-October announcement of its new Multicloud Manager, which promises an operations console for companies as they increasingly incorporate public and private cloud capabilities with existing on-premises business systems. Meanwhile, research from Ovum suggests that 80 percent of mission-critical workloads and sensitive data are still running on business systems located on-premises.

$1 Trillion or more hybrid cloud market by 2020

Still, the potential of the hybrid cloud market is huge, $1 trillion or more within just a few years IBM projects. If IBM found itself crowded out by the big hyperscalers—AWS, Google, Microsoft—in the initial rush to the cloud, it is hoping to leapfrog into the top ranks with the next generation of cloud, hybrid clouds.

And this is exactly what Red Hat and IBM hope to gain together. Both believe they will be well positioned to accelerate hybrid multi-cloud adoption by tapping each company’s leadership in Linux, containers, Kubernetes, multi-cloud management, and automation as well as leveraging IBM’s core of large enterprise customers by bringing them into the hybrid cloud.

The result should be a mixture of on-premises, off-premises, and hybrid clouds. It also promises to be based on open standards, flexible modern security, and solid hybrid management across anything.

The company’s new Multicloud Manager runs on its IBM Cloud Private platform, which is based on Kubernetes container orchestration technology, described as an open-source approach for ‘wrapping’ apps in containers, and thereby making them easier and cheaper to manage across different cloud environments – from on-premises systems to the public cloud. With Multicloud Manager, IBM is extending those capabilities to interconnect various clouds, even from different providers, creating unified systems designed for increased consistency, automation, and predictability. At the heart of the new solution is a first-of-a-kind dashboard interface for effectively managing thousands of Kubernetes applications and spanning huge volumes of data regardless of where in the organization they are located.
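DancingDinosaur hasn’t seen the dashboard internals, but the flavor of Kubernetes-based multicloud management is easy to demonstrate. A minimal sketch using the official Kubernetes Python client, assuming kubeconfig contexts named onprem, ibmcloud, and aws (the context names are illustrative):

```python
# Minimal sketch: survey deployments across several Kubernetes clusters from
# one script, in the spirit of a multicloud dashboard. Requires the official
# client (pip install kubernetes) and existing kubeconfig contexts.
from kubernetes import client, config

for ctx in ["onprem", "ibmcloud", "aws"]:        # hypothetical context names
    config.load_kube_config(context=ctx)         # switch clusters via context
    apps = client.AppsV1Api()
    deployments = apps.list_deployment_for_all_namespaces()
    print(f"{ctx}: {len(deployments.items)} deployments")
```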

Adds Arvind Krishna, Senior Vice President, IBM Hybrid Cloud: with its “open source approach to managing data and apps across multiple clouds,” an enterprise can move beyond the productivity economics of renting computing power to fully leveraging the cloud to invent new business processes and enter new markets.

This new solution should become a driver for modernizing businesses. As IBM explains: if a car rental company uses one cloud for its AI services, another for its bookings system, and continues to run its financial processes using on-premises computers at offices around the world, IBM Multicloud Manager can span the company’s multiple computing infrastructures enabling customers to book a car more easily and faster by using the company’s mobile app.

Notes IDC’s Stephen Elliot, Program Vice President:  “The old idea that everything would move to the public cloud never happened.” Instead, you need multicloud capabilities that reduce the risks and deliver more automation throughout these cloud journeys.

Just last month IBM announced a number of companies are starting down the hybrid cloud path by adopting IBM Cloud Private. These include:

New Zealand Police (NZP) is exploring how IBM Cloud Private and Kubernetes containers can help to modernize its existing systems as well as quickly launch new services.

Aflac Insurance is adopting IBM Cloud Private to enhance the efficiency of its operations and speed up the development of new products and services.

Kredi Kayıt Bürosu (KKB) provides the national cloud infrastructure for Turkey’s finance industry. Using IBM Cloud Private KKB expects to drive innovation across its financial services ecosystem.

Operating in a multi-cloud environment is becoming the new reality for most organizations while vendors rush to sell multi-cloud tools: not just IBM’s Multicloud Manager but HPE OneSphere, the RightScale Multi-Cloud Platform, Datadog Cloud Monitoring, Ormuco Stack, and more.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at technologywriter.com.

IBM Shouldn’t Forget Its Server Platforms

April 5, 2018

The word coming out of IBM brings a steady patter about cognitive, Watson, and quantum computing, with IBM predicting quantum will go mainstream within five years. Most DancingDinosaur readers aren’t worrying about what’s coming in 2023, although maybe they should. They have data centers to run now and are wondering where they are going to get the system horsepower they will need to deliver IoT or blockchain or any number of business initiatives clamoring for system resources today or tomorrow, and all they’ve got are the z14 and the latest LinuxONE. As powerful as they were when first announced, do you think that will be enough tomorrow?

IBM’s latest server, the Z

Timothy Prickett Morgan, analyst at The Next Platform, apparently isn’t so sure. He writes in a recent piece how Google and the other hyperscalers need to add serious power to today’s server options. The solution involves “putting systems based on IBM’s Power9 processor into production.” This shouldn’t take anybody by surprise; almost as soon as IBM set up the OpenPOWER consortium, Rackspace, Google, and a handful of others started making noises about using OpenPOWER for a new type of data center server. The most recent announcements around Power9, covered here back in Feb., promise some new options with even more coming.

Writes Morgan: “Google now has seven applications that have more than 1 billion users – adding Android, Maps, Chrome, and Play to the mix – and as the company told us years ago, it is looking for any compute, storage, and networking edge that will allow it to beat Moore’s Law.” Notice that this isn’t about using POWER9 to drive down Intel’s server prices; Google faces a more important nemesis, the constraints of Moore’s Law.

Google has not been secretive about this, at least not recently. To its credit Google is making its frustrations known at appropriate industry events: “With a technology trend slowdown and growing demand and changing demand, we have a pretty challenging situation, what we call a supply-demand gap, which means the supply on the technology side is not keeping up with this phenomenal demand growth,” explained Maire Mahony, systems hardware engineer at Google and its key representative at the OpenPOWER Foundation that is steering the Power ecosystem. “That makes it hard for us to balance that curve we call performance per TCO dollar. This problem is not unique to Google. This is an industry-wide problem.” True, but the majority of data centers, even the biggest ones, don’t face looming multi-billion user performance and scalability demands.

Morgan continued: “Google has absolutely no choice but to look for every edge. The benefits of homogeneity, which have been paramount for the first decade of hyperscaling, no longer outweigh the need to have hardware that better supports the software companies like Google use in production.”

This isn’t Intel’s problem alone, although it introduced a new generation of systems, dubbed Skylake, to address some of these concerns. As Morgan noted recently, “various ARM chips – especially ThunderX2 from Cavium and Centriq 2400 from Qualcomm – can boost non-X86 numbers.” So can AMD’s Epyc X86 processors. Similarly, the OpenPOWER consortium offers an alternative in POWER9.

Morgan went on: IBM differentiated the hardware with its NVLink versions and, depending on the workload and the competition, with its most aggressive pricing and a leaner and cheaper microcode and hypervisor stack reserved for the Linux workloads that the company is chasing. IBM very much wants to sell its Power-Linux combo against Intel’s Xeon-Linux and also keep AMD’s Epyc-Linux at bay. Still, it is not apparent to Morgan how POWER9 will compete.

Success may come down to a battle of vendor ecosystems. As Morgan points out: aside from the POWER9 system that Google co-engineered with Rackspace Hosting, the most important contributions that Google has made to the OpenPower effort is to work with IBM to create the OPAL firmware, the OpenKVM hypervisor, and the OpenBMC baseboard management controller, which are all crafted to support little endian Linux, as is common on x86.

Guess this is the time to wade into the endian morass. Endian refers to the byte ordering that is used, and IBM chips and a few others do them in reverse of the x86 and Arm architectures. The Power8 chip and its POWER9 follow-on support either mode, big or little endian. By making all of these changes, IBM has made the Power platform more palatable to the hyperscalers, which is why Google, Tencent, Alibaba, Uber, and PayPal all talk about how they make use of Power machinery, particularly to accelerate machine learning and generic back-end workloads. But as quickly as IBM jumped on the problem recently after letting it linger for years, it remains one more complication that must be considered. Keep that in mind when a hyperscaler like Google talks about performance per TCO dollar.
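If endianness is unfamiliar, a two-line demonstration shows what is at stake: the same 32-bit integer lays its bytes out in opposite orders under the two conventions.

```python
# Minimal sketch: the same 32-bit integer packed big endian (POWER's
# traditional mode) and little endian (x86's mode, and POWER8/POWER9's
# optional mode).
import struct

value = 0x0A0B0C0D
print(struct.pack(">I", value).hex())   # big endian:    0a0b0c0d
print(struct.pack("<I", value).hex())   # little endian: 0d0c0b0a
```

Software that reads raw bytes written under the other convention sees them reversed, which is why little endian support matters so much for x86-centric hyperscaler code.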

Where is all this going? Your guess is as good as any. The hyperscalers and the consortia eventually should resolve this and DancingDinosaur will keep watching. Stay tuned.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.

Meltdown and Spectre Attacks Require IBM Mitigation

January 12, 2018

The chip security threats dubbed Meltdown and Spectre revealed last month apparently will require IBM threat mitigation in the form of code and patching. IBM has been reticent to make a major public announcement, but word finally is starting to percolate publicly.

Courtesy: Preparis Inc.

Since January 4, one day after researchers disclosed the Meltdown and Spectre attack methods against Intel, AMD, and ARM processors, the Internet has been buzzing. As Eduard Kovacs wrote on Wed., Jan. 10, IBM informed customers that it had started analyzing the impact on its own products. The day before, IBM revealed its POWER processors are affected.

A published report from Virendra Soni on January 11, from the Consumer Electronics Show (CES) 2018 in Las Vegas, described how Nvidia CEO Jensen Huang revealed the way technology leaders are scrambling to find patches for the Spectre and Meltdown attacks. These attacks enable hackers to steal private information from users’ CPUs running processors from Intel, AMD, and ARM.

For DancingDinosaur readers, that puts the latest POWER chips and systems at risk. At this point, it is not clear how far beyond POWER systems the problem reaches. “We believe our GPU hardware is immune. As for our driver software, we are providing updates to help mitigate the CPU security issue,” Nvidia wrote in their security bulletin.

Nvidia also reports releasing updates for its software drivers that interact with vulnerable CPUs and operating systems. The vulnerabilities take place in three variants: Variant 1, Variant 2, and Variant 3. Nvidia has released driver updates for Variant 1 and 2. The company notes none of its software is vulnerable to Variant 3. Nvidia reported providing security updates for these products: GeForce, Quadro, NVS Driver Software, Tesla Driver Software, and GRID Driver Software.

IBM has made no public comments on which of its systems are affected, but Red Hat is talking. According to Soni: “Red Hat last week reported that IBM’s System Z and POWER platforms are exploited by Spectre and Meltdown.”

So what is a data center manager with a major investment in these systems to do? Meltdown and Spectre “obviously are a very big problem,” reports Timothy Prickett Morgan, a leading analyst at The Next Platform, an authoritative website following the server industry. “Chip suppliers and operating systems and hypervisor makers have known about these exploits since last June, and have been working behind the scenes to provide corrective countermeasures to block them… but rumors about the speculative execution threats forced the hands of the industry, and last week Google put out a notice about the bugs and then followed up with details about how it has fixed them in its own code.” Read it here.

Chipmakers AMD and ARM put out statements saying they are affected only by Variant 1 of the speculative execution exploits (one of the Spectre variety, known as bounds check bypass) and by Variant 2 (also a Spectre exploit, known as branch target injection). AMD, reports Morgan, also emphasized that it has absolutely no vulnerability to Variant 3, a speculative execution exploit called rogue data cache load and known colloquially as Meltdown. This is due, he noted, to architectural differences between Intel’s X86 processors and AMD’s clones.

As for IBM, Morgan noted: its Power chips are affected, at least back to the Power7 from 2010 and continuing forward to the brand new Power9. In its statement, IBM said that it would have patches out for firmware on Power machines using Power7+, Power8, Power8+, and Power9 chips on January 9, which passed, along with Linux patches for those machines; patches for the company’s own AIX Unix and proprietary IBM i operating systems will not be available until February 12. The System z mainframe processors also have speculative execution, so they should, in theory, be susceptible to Spectre but maybe not Meltdown.

That still leaves a question about the vulnerability of the IBM LinuxONE and the processors spread throughout the z systems. Ask your IBM rep when you can expect mitigation for those too.

Just patching these costly systems should not be sufficiently satisfying. There is a performance price that data centers will pay. Google noted a negligible impact on performance after it deployed one fix on its millions of Linux systems, said Morgan. There has been speculation, Google continued, that the deployment of KPTI (a mitigation fix) causes significant performance slowdowns. As far as is known, there is no fix for Spectre Variant 1 attacks, which have to be fixed on a binary-by-binary basis, according to Google.
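On Linux, kernels new enough to carry the patches report mitigation status under sysfs, so you can check your own systems directly. A minimal sketch (the path is standard on patched kernels; older kernels simply won’t have the directory):

```python
# Minimal sketch: print the kernel's reported Meltdown/Spectre mitigation
# status on a patched Linux system.
from pathlib import Path

vuln_dir = Path("/sys/devices/system/cpu/vulnerabilities")
if vuln_dir.is_dir():
    for entry in sorted(vuln_dir.iterdir()):
        print(f"{entry.name}: {entry.read_text().strip()}")
else:
    print("Kernel predates the mitigation reporting interface")
```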

Red Hat went further and actually ran benchmarks. The company tested its Enterprise Linux 7 release on servers using Intel’s “Haswell” Xeon E5 v3, “Broadwell” Xeon E5 v4, and “Skylake,” the upcoming Xeon SP processors, and showed impacts that ranged from 1-19 percent. You can demand these impacts be reflected in reduced system prices.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM’s POWER9 Races to AI

December 7, 2017

IBM is betting the future of its Power Systems on artificial intelligence (AI). The company introduced its newly designed POWER9 processor publicly this past Tuesday. The new machine, according to IBM, is capable of shortening the training of deep learning frameworks by nearly 4x, allowing enterprises to build more accurate AI applications, faster.

IBM engineer tests the POWER9

Designed for the post-CPU era, the core POWER9 building block is the IBM Power Systems AC922. The AC922, notes IBM, is the first to embed PCI-Express 4.0, next-generation NVIDIA NVLink, and OpenCAPI (three interface accelerators), which together can accelerate data movement 9.5x over PCIe 3.0 based x86 systems. The AC922 is designed to drive demonstrable performance improvements across popular AI frameworks such as Chainer, TensorFlow, and Caffe, as well as accelerated databases such as Kinetica.
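If you get time on an AC922 or a similar hybrid CPU-GPU node, the first sanity check is whether your framework actually sees the GPUs. A minimal sketch with TensorFlow 1.x, the current release as this is written:

```python
# Minimal sketch: list the GPU devices TensorFlow 1.x can see on a hybrid
# CPU-GPU node such as the AC922.
from tensorflow.python.client import device_lib

devices = device_lib.list_local_devices()
gpus = [d.name for d in devices if d.device_type == "GPU"]
print(f"GPUs visible to TensorFlow: {gpus or 'none'}")
```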

More than a CPU under the AC922 cover

Depending on your sense of market timing, POWER9 may be coming at the best or worst time for IBM.  Notes industry observer Timothy Prickett Morgan, The Next Platform: “The server market is booming as 2017 comes to a close, and IBM is looking to try to catch the tailwind and lift its Power Systems business.”

As Morgan puts it, citing IDC 3Q17 server revenue figures, HPE and Dell are jockeying for the lead in the server space, and for the moment, HPE (including its H3C partnership in China) has the lead with $3.32 billion in revenues, compared to Dell’s $3.07 billion, while Dell was the shipment leader, with 503,000 machines sold in Q3 2017 versus HPE’s 501,400 machines shipped. IBM does not rank in the top five shippers, but thanks in part to the Z and big Power8 boxes, IBM still holds the number three server revenue generator spot, with $1.09 billion in sales for the third quarter, according to IDC. The z system accounted for $673 million of that, up 63.8 percent year-on-year due mainly to the new Z. If you do the math, Morgan continued, the Power Systems line accounted for $420.7 million in the period, down 7.2 percent from Q3 2016. This is not surprising given that customers held back knowing Power9 systems were coming.

To get Power Systems back to where it used to be, Morgan continued, IBM must increase revenues by a factor of three or so. The good news is that, thanks to the popularity of hybrid CPU-GPU systems, which cost around $65,000 per node from IBM, this isn’t impossible. Therefore, it should take fewer machines to rack up the revenue, even if it comes from a relatively modest number of footprints and not a huge number of Power9 processors. More than 90 percent of the compute in these systems is comprised of GPU accelerators, but due to bookkeeping magic, it all accrues to Power Systems when these machines are sold. Plus IBM reportedly will be installing over 10,000 such nodes for the US Department of Energy’s Summit and Sierra supercomputers in the coming two quarters, which should provide a nice bump. And once IBM gets the commercial Power9 systems into the field, sales should pick up again, Morgan expects.

IBM clearly is hoping POWER9 will cut into Intel x86 sales. But that may not happen as anticipated. Intel is bringing out its own advanced x86 Xeon machine, Skylake, rumored to be quite expensive. Don’t expect POWER9 systems to be cheap either. And the field is getting more crowded. Morgan noted various ARM chips – especially ThunderX2 from Cavium and Centriq 2400 from Qualcomm – can boost non-X86 numbers and divert sales from IBM’s Power9 system. Also, AMD’s Epyc X86 processors have a good chance of stealing some market share from Intel’s Skylake. So the Power9 will have to fight for every sale IBM wants and take nothing for granted.

No doubt POWER9 presents a good case and has a strong backer in Google, but even that might not be enough. Still, POWER9 sits at the heart of what is expected to be the most powerful data-intensive supercomputers in the world, the Summit and Sierra supercomputers, expected to knock off the world’s current fastest supercomputers from China.

Said Bart Sano, VP of Google Platforms: “Google is excited about IBM’s progress in the development of the latest POWER technology,” adding, “the POWER9 OpenCAPI bus and large memory capabilities allow further opportunities for innovation in Google data centers.”

This really is about deep learning, one of the latest hot buzzwords today. Deep learning emerged as a fast-growing machine learning method that extracts information by crunching through millions of processes and data to detect and rank the most important aspects of the data. IBM designed the POWER9 chip to manage free-flowing data, streaming sensors, and algorithms for data-intensive AI and deep learning workloads on Linux. Are your people ready to take advantage of POWER9?

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

IBM Introduces Cloud Private to Hybrid Clouds

November 10, 2017

When you have enough technologies lying around your basement, sometimes you can cobble a few pieces together, mix it with some sexy new stuff and, bingo, you have something that meets a serious need of a number of disparate customers. That’s essentially what IBM did with Cloud Private, which it announced Nov. 1.

IBM staff test Cloud Private automation software

IBM intended Cloud Private to enable companies to create on-premises cloud capabilities similar to public clouds to accelerate app dev. Don’t think of it as just old stuff; the new platform is built on the open source Kubernetes-based container architecture and supports both Docker containers and Cloud Foundry. This facilitates integration and portability of workloads, enabling them to evolve to almost any cloud environment, including—especially—the public IBM Cloud.

Also IBM announced container-optimized versions of core enterprise software, including IBM WebSphere Liberty, DB2 and MQ – widely used to run and manage the world’s most business-critical applications and data. This makes it easier to share data and evolve applications as needed across the IBM Cloud, private, public clouds, and other cloud environments with a consistent developer, administrator, and user experience.
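As a taste of what container-optimized middleware means in practice, here is a minimal sketch that pulls and runs the WebSphere Liberty image published on Docker Hub using the Docker SDK for Python; the image tag and port shown are Liberty’s published defaults, but check IBM’s catalog for current names:

```python
# Minimal sketch: run IBM's WebSphere Liberty container image with the
# Docker SDK for Python (pip install docker). Requires a local Docker daemon.
import docker

client = docker.from_env()
container = client.containers.run(
    "websphere-liberty:latest",   # Liberty image on Docker Hub
    detach=True,
    ports={"9080/tcp": 9080},     # Liberty's default HTTP port
)
print(container.status, container.id[:12])
```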

Cloud Private amounts to a new software platform, which relies on open source container technology to unlock billions of dollars in core data and applications incorporating legacy software like WebSphere and Db2. The purpose is to extend cloud-native tools across public and private clouds. For z data centers that have tons of valuable, reliable working systems years away from being retired, if ever, Cloud Private may be just what they need.

Almost all enterprise systems vendors are trying to do the same hybrid cloud computing enablement. HPE, Microsoft, Cisco, which is partnering with Google on this, and more. This is a clear indication that the cloud and especially the hybrid cloud is crossing the proverbial chasm. In years past IT managers and C-level executives didn’t want anything to do with the cloud; the IT folks saw it as a threat to their on premises data center and the C-suite was scared witless about security.

Those issues haven’t gone away although the advent of hybrid clouds have mitigated some of the fears among both groups. Similarly, the natural evolution of the cloud and advances in hybrid cloud computing make this more practical.

The private cloud too is growing. According to IBM, while public cloud adoption continues to grow at a rapid pace, organizations, especially in regulated industries of finance and health care, are continuing to leverage private clouds as part of their journey to public cloud environments to quickly launch and update applications. This also is what is driving hybrid clouds. IBM estimates companies will spend more than $50 billion globally starting in 2017 to create and evolve private clouds with growth rates of 15 to 20 percent a year through 2020, according to IBM market projections.

The problem facing IBM and the other enterprise systems vendors scrambling for hybrid clouds is how to transition legacy systems into cloud native systems. The hybrid cloud in effect acts as facilitating middleware. “Innovation and adoption of public cloud services has been constrained by the challenge of transitioning complex enterprise systems and applications into a true cloud-native environment,” said Arvind Krishna, Senior Vice President for IBM Hybrid Cloud and Director of IBM Research. IBM’s response is Cloud Private, which brings rapid application development and modernization to existing IT infrastructure while combining it with the service of a public cloud platform.

Hertz adopted this approach. “Private cloud is a must for many enterprises such as ours working to reduce or eliminate their dependence on internal data centers,” said Tyler Best, Hertz Chief Information Officer.  A strategy consisting of public, private and hybrid cloud is essential for large enterprises to effectively make the transition from legacy systems to cloud.

IBM is serious about cloud as a strategic initiative. Although not as large as Microsoft Azure or Amazon Web Service (AWS) in the public cloud, a recent report by Synergy Research found that IBM is a major provider of private cloud services, making the company the third-largest overall cloud provider.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

Open POWER-Open Compute-POWER9 at Open Compute Summit

March 16, 2017

Bryan Talik, President, OpenPOWER Foundation, provides a detailed rundown on the action at the Open Compute Summit held last week in Santa Clara. After weeks of writing about Cognitive, Machine Learning, Blockchain, and even quantum computing, it is a nice shift to conventional computing platforms that should still be viewed as strategic initiatives.

The OpenPOWER, Open Compute gospel was filling the air in Santa Clara.  As reported, Andy Walsh, Xilinx Director of Strategic Market Development and OpenPOWER Foundation Board member explained, “We very much support open standards and the broad innovation they foster. Open Compute and OpenPOWER are catalysts in enabling new data center capabilities in computing, storage, and networking.”

Added Adam Smith, CEO of Alpha Data:  “Open standards and communities lead to rapid innovation…We are proud to support the latest advances of OpenPOWER accelerator technology featuring Xilinx FPGAs.”

John Zannos, Canonical OpenPOWER Board Chair, chimed in: For 2017, the OpenPOWER Board approved four areas of focus that include machine learning/AI, database and analytics, cloud applications and containers. The strategy for 2017 also includes plans to extend OpenPOWER’s reach worldwide and promote technical innovations at various academic labs and in industry. Finally, the group plans to open additional application-oriented workgroups to further technical solutions that benefit specific application areas.

Not surprisingly, some members even see collaboration as the key to satisfying the performance demands that the computing market craves. “The computing industry is at an inflection point between conventional processing and specialized processing,” according to Aaron Sullivan, distinguished engineer at Rackspace.

To satisfy this shift, Rackspace and Google announced an OCP-OpenPOWER server platform last year, codenamed Zaius and Barreleye G2.  It is based on POWER9. At the OCP Summit, both companies put on a public display of the two products.

This server platform promises to improve the performance, bandwidth, and power consumption demands for emerging applications that leverage machine learning, cognitive systems, real-time analytics and big data platforms. The OCP players plan to continue their work alongside Google, OpenPOWER, OpenCAPI, and other Zaius project members.


These Zaius and Barreleye G2 server platforms promise to advance the performance, bandwidth, and power consumption demands for emerging applications that leverage the latest advanced technologies. These latest technologies are none other than the strategic imperatives – cognitive, machine learning, real-time analytics – IBM has been repeating like a mantra for months.

Open Compute projects also were displayed at the Summit. Specifically, as reported, Google and Rackspace published the Zaius specification to Open Compute in October 2016 and had engineers on hand to explain the specification process and to give attendees a starting point for their own server designs.

Other Open Compute members, reportedly, also were there. Inventec showed a POWER9 OpenPOWER server based on the Zaius server specification. Mellanox showcased ConnectX-5, its next-generation networking adaptor that features 100Gb/s InfiniBand and Ethernet. This adaptor supports PCIe Gen4 and CAPI2.0, providing higher performance and a coherent connection to the POWER9 processor vs. PCIe Gen3.

Others, reported by Talik, included Wistron and E4 Computing, which showcased their newly announced OCP-form factor POWER8 server. Featuring two POWER8 processors, four NVIDIA Tesla P100 GPUs with the NVLink interconnect, and liquid cooling, the new platform represents an ideal OCP-compliant HPC system.

Talik also reported IBM, Xilinx, and Alpha Data showed their lineups of several FPGA adaptors designed for both POWER8 and POWER9. Featuring PCIe Gen3 and CAPI1.0 for POWER8, and PCIe Gen4, CAPI2.0, and 25Gb/s CAPI3.0 for POWER9, these new FPGAs bring acceleration to a whole new level. OpenPOWER member engineers were on hand to provide information regarding the CAPI SNAP developer and programming framework as well as OpenCAPI.

Not to be left out, Talik reported that IBM showcased products it previously tested and demonstrated: POWER8-based OCP and OpenPOWER Barreleye servers running IBM’s Spectrum Scale software, a full-featured global parallel file system with roots in HPC and now widely adopted in commercial enterprises across all industries for data management at petabyte scale.  Guess compute platform isn’t quite the dirty phrase IBM has been implying for months.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

