Posts Tagged ‘Google’

IBM Pushes Hybrid Cloud

December 14, 2018

Between quantum computing, blockchain, and hybrid cloud, IBM is pursuing a pretty ambitious agenda. Of the three, hybrid cloud promises the most immediate payback. Cloud computing is poised to become a “turbocharged engine powering digital transformation around the world,” states a new Forrester report, Predictions 2019: Cloud Computing.

Of course, IBM didn’t wait until 2019. It purchased Red Hat at the end of Oct. 2018. DancingDinosaur covered it here a few days later. At that time IBM Chairman Ginni Rometty called the acquisition of Red Hat a game changer. “It changes everything about the cloud market,” she noted. At a cost of $34 billion, 10x Red Hat’s gross revenue, it had better be a game changer.

Forrester continues, predicting that in 2019 the cloud will reach its more interesting young adult years, bringing innovative development services to enterprise apps rather than just serving up cheaper, temporary servers and storage, which is how it has primarily grown over the past decade. Who hasn’t turned to one or another cloud provider to augment its IT resources as needed, whether for backup, server capacity, or network bandwidth?

As Forrester puts it: The six largest hyperscale cloud leaders — Alibaba, Amazon Web Services [AWS], Google, IBM, Microsoft Azure, and Oracle — will all grow larger in 2019, as service catalogs and global regions expand. Meanwhile, the global cloud computing market, including cloud platforms, business services, and SaaS, will exceed $200 billion in 2019, expanding at more than 20%, the research firm predicts.

Hybrid clouds, which combine two or more cloud providers or platforms, are emerging as the preferred way for enterprises to go. Notes IBM: The digital economy is forcing organizations to a multi-cloud environment. Three of every four enterprises have already implemented more than one cloud. The growth of cloud portfolios in enterprises demands an agnostic cloud management platform — one that not only provides automation, provisioning and orchestration, but that also monitors trends and usage to prevent outages.

Of course, IBM also offers a solution for this; the company’s Multicloud Manager runs on its IBM Cloud Private platform, which is based on Kubernetes container orchestration technology, described as an open-source approach for ‘wrapping’ apps in containers, and thereby making them easier and cheaper to manage across different cloud environments – from on-premises systems to the public cloud.

Along with hybrid clouds, containers are huge in Forrester’s view. Powered by cloud-native open source components and tools, companies will start rolling out their own digital application platforms that will span clouds, include serverless and event-driven services, and form the foundation for modernizing core business apps for the next decade, the researchers observed. Next year’s hottest trend, according to Forrester, will be making containers easier to deploy, secure, monitor, scale, and upgrade. “Enterprise-ready container platforms from Docker, IBM, Mesosphere, Pivotal, Rancher, Red Hat, VMware, and others are poised to grow rapidly,” the researchers noted.

This may not be as straightforward as the researchers imply. Each organization must select for itself which private cloud strategy is most appropriate, they note. They anticipate greater private cloud structure emerging in 2019, with organizations facing three basic private cloud paths: building internally, using vSphere sprinkled with developer-focused tools and software-defined infrastructure; having the cloud environment custom-built with converged or hyperconverged software stacks to minimize the tech burden; or building the cloud infrastructure internally with OpenStack, relying on the hard work of their own tech-savvy team. I’m sure there are any number of consultants, contractors, and vendors eager to step in and do this for you.

If you aren’t sure, IBM is offering a number of free trials that you can play with.

As Forrester puts it: Buckle up; for 2019 expect the cloud ride to accelerate.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at technologywriter.com.

IBM’s Multicloud Manager for 2nd Gen Hybrid Clouds

November 15, 2018

A sign that IBM is serious about hybrid cloud is its mid-October announcement of its new Multicloud Manager, which promises an operations console for companies as they increasingly incorporate public and private cloud capabilities with existing on-premises business systems. Meanwhile, research from Ovum suggests that 80 percent of mission-critical workloads and sensitive data are still running on business systems located on-premises.

$1 Trillion or more hybrid cloud market by 2020

Still, the potential of the hybrid cloud market is huge: $1 trillion or more within just a few years, IBM projects. If IBM found itself crowded out by the big hyperscalers—AWS, Google, Microsoft—in the initial rush to the cloud, it is hoping to leapfrog into the top ranks with the next generation of cloud: hybrid clouds.

And this is exactly what Red Hat and IBM hope to gain together. Both believe they will be well positioned to accelerate hybrid multi-cloud adoption by tapping each company’s leadership in Linux, containers, Kubernetes, multi-cloud management, and automation as well as leveraging IBM’s core of large enterprise customers by bringing them into the hybrid cloud.

The result should be a mixture of on-premises, off-premises, and hybrid clouds. It also promises to be based on open standards, flexible modern security, and solid hybrid management across all of it.

The company’s new Multicloud Manager runs on its IBM Cloud Private platform, which is based on Kubernetes container orchestration technology, described as an open-source approach for ‘wrapping’ apps in containers, and thereby making them easier and cheaper to manage across different cloud environments – from on-premises systems to the public cloud. With Multicloud Manager, IBM is extending those capabilities to interconnect various clouds, even from different providers, creating unified systems designed for increased consistency, automation, and predictability. At the heart of the new solution is a first-of-a-kind dashboard interface for effectively managing thousands of Kubernetes applications and spanning huge volumes of data regardless of where in the organization they are located.
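For a sense of what spanning clouds means mechanically, here is a minimal sketch (not IBM’s product code) using the open source Kubernetes Python client; the kubeconfig context names are hypothetical stand-ins for clusters running in different clouds:

```python
# Sketch: survey deployments across several Kubernetes clusters, the kind
# of inventory a multicloud dashboard aggregates. Requires `pip install kubernetes`.
from kubernetes import client, config

# Hypothetical kubeconfig contexts, one per cloud environment.
CONTEXTS = ["onprem-icp", "ibm-public-cloud", "other-provider"]

for ctx in CONTEXTS:
    config.load_kube_config(context=ctx)  # reads ~/.kube/config
    apps = client.AppsV1Api()
    deployments = apps.list_deployment_for_all_namespaces()
    print(f"{ctx}: {len(deployments.items)} deployments")
    for d in deployments.items:
        ready = d.status.ready_replicas or 0
        print(f"  {d.metadata.namespace}/{d.metadata.name}: {ready}/{d.spec.replicas} ready")
```

A production console like Multicloud Manager layers policy, automation, and monitoring on top of this kind of per-cluster inventory, but the underlying motion is the same: one control point, many clusters.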

Adds Arvind Krishna, Senior Vice President, IBM Hybrid Cloud: “With its open source approach to managing data and apps across multiple clouds,” an enterprise can move beyond the productivity economics of renting computing power to fully leveraging the cloud to invent new business processes and enter new markets.

This new solution should become a driver for modernizing businesses. As IBM explains: if a car rental company uses one cloud for its AI services, another for its bookings system, and continues to run its financial processes using on-premises computers at offices around the world, IBM Multicloud Manager can span the company’s multiple computing infrastructures, enabling customers to book a car faster and more easily using the company’s mobile app.

Notes IDC’s Stephen Elliot, Program Vice President:  “The old idea that everything would move to the public cloud never happened.” Instead, you need multicloud capabilities that reduce the risks and deliver more automation throughout these cloud journeys.

Just last month IBM announced that a number of companies are starting down the hybrid cloud path by adopting IBM Cloud Private. These include:

New Zealand Police (NZP) is exploring how IBM Cloud Private and Kubernetes containers can help modernize its existing systems as well as quickly launch new services.

Aflac Insurance is adopting IBM Cloud Private to enhance the efficiency of its operations and speed up the development of new products and services.

Kredi Kayıt Bürosu (KKB) provides the national cloud infrastructure for Turkey’s finance industry. Using IBM Cloud Private, KKB expects to drive innovation across its financial services ecosystem.

Operating in a multi-cloud environment is becoming the new reality for most organizations, while vendors rush to sell multi-cloud tools: not just IBM’s Multicloud Manager but HPE OneSphere, RightScale Multi-Cloud Platform, Datadog Cloud Monitoring, Ormuco Stack, and more.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at technologywriter.com.

IBM Shouldn’t Forget Its Server Platforms

April 5, 2018

The word coming out of IBM brings a steady patter about cognitive, Watson, and quantum computing, with IBM predicting quantum will go mainstream within five years. Most DancingDinosaur readers aren’t worrying about what’s coming in 2023, although maybe they should. They have data centers to run now and are wondering where they will get the system horsepower they need to deliver IoT or blockchain or any number of business initiatives clamoring for system resources today or tomorrow, and all they’ve got are the z14 and the latest LinuxONE. As powerful as those machines were when first announced, do you think they will be enough tomorrow?

IBM’s latest server, the Z

Timothy Prickett Morgan, analyst at The Next Platform, apparently isn’t so sure. He writes in a recent piece that Google and the other hyperscalers need to add serious power to today’s server options. The solution involves “putting systems based on IBM’s Power9 processor into production.” This shouldn’t take anybody by surprise; almost as soon as IBM set up the OpenPOWER consortium, Rackspace, Google, and a handful of others started making noises about using OpenPOWER for a new type of data center server. The most recent announcements around Power9, covered here back in February, promise some new options with even more coming.

Writes Morgan: “Google now has seven applications that have more than 1 billion users – adding Android, Maps, Chrome, and Play to the mix – and as the company told us years ago, it is looking for any compute, storage, and networking edge that will allow it to beat Moore’s Law.” Notice that this isn’t about using POWER9 to drive down Intel’s server prices; Google faces a more important nemesis, the constraints of Moore’s Law.

Google has not been secretive about this, at least not recently. To its credit, Google is making its frustrations known at appropriate industry events: “With a technology trend slowdown and growing demand and changing demand, we have a pretty challenging situation, what we call a supply-demand gap, which means the supply on the technology side is not keeping up with this phenomenal demand growth,” explained Maire Mahony, systems hardware engineer at Google and its key representative at the OpenPOWER Foundation, which is steering the Power ecosystem. “That makes it hard for us to balance that curve we call performance per TCO dollar. This problem is not unique to Google. This is an industry-wide problem.” True, but the majority of data centers, even the biggest ones, don’t face looming multi-billion-user performance and scalability demands.

Morgan continued: “Google has absolutely no choice but to look for every edge. The benefits of homogeneity, which have been paramount for the first decade of hyperscaling, no longer outweigh the need to have hardware that better supports the software companies like Google use in production.”

This isn’t Intel’s problem alone, although it introduced a new generation of systems, dubbed Skylake, to address some of these concerns. As Morgan noted recently, “various ARM chips, especially ThunderX2 from Cavium and Centriq 2400 from Qualcomm, can boost non-x86 numbers.” So can AMD’s Epyc x86 processors. Similarly, the OpenPOWER consortium offers an alternative in POWER9.

Morgan went on: IBM differentiated the hardware with its NVLink versions and, depending on the workload and the competition, with its most aggressive pricing and a leaner and cheaper microcode and hypervisor stack reserved for the Linux workloads that the company is chasing. IBM very much wants to sell its Power-Linux combo against Intel’s Xeon-Linux and also keep AMD’s Epyc-Linux at bay. Still, it is not apparent to Morgan how POWER9 will compete.

Success may come down to a battle of vendor ecosystems. As Morgan points out: aside from the POWER9 system that Google co-engineered with Rackspace Hosting, the most important contributions Google has made to the OpenPower effort are its work with IBM to create the OPAL firmware, the OpenKVM hypervisor, and the OpenBMC baseboard management controller, all crafted to support little endian Linux, as is common on x86.

Guess this is the time to wade into the endian morass. Endian refers to the byte ordering used: IBM chips and a few others order bytes in reverse of the x86 and Arm architectures. The Power8 chip and its POWER9 follow-on support either mode, big or little endian. By making all of these changes, IBM has made the Power platform more palatable to the hyperscalers, which is why Google, Tencent, Alibaba, Uber, and PayPal all talk about how they make use of Power machinery, particularly to accelerate machine learning and generic back-end workloads. But as quickly as IBM jumped on the problem recently, after letting it linger for years, it remains one more complication that must be considered. Keep that in mind when a hyperscaler like Google talks about performance per TCO dollar.
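A quick illustration of what is at stake, using only Python’s standard library (no Power hardware required): the same 32-bit integer serializes to different byte sequences depending on endianness, which is why binary data and low-level code can’t move between big and little endian systems unchanged.

```python
import struct

value = 0x0A0B0C0D  # an arbitrary 32-bit integer

# Little endian (x86, Arm default): least significant byte first.
print(struct.pack("<I", value).hex())  # -> 0d0c0b0a
# Big endian (traditional POWER, also network byte order): most significant byte first.
print(struct.pack(">I", value).hex())  # -> 0a0b0c0d

# The same point without struct:
print(value.to_bytes(4, "little").hex())  # -> 0d0c0b0a
print(value.to_bytes(4, "big").hex())     # -> 0a0b0c0d
```

Bi-endian support means a Power machine can simply run in little endian mode and sidestep the conversion problem for x86-origin software.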

Where is all this going? Your guess is as good as any. The hyperscalers and the consortia eventually should resolve this and DancingDinosaur will keep watching. Stay tuned.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.

Meltdown and Spectre Attacks Require IBM Mitigation

January 12, 2018

The chip security threats dubbed Meltdown and Spectre, revealed early this month, apparently will require IBM threat mitigation in the form of code and patching. IBM has been reticent to make a major public announcement, but word finally is starting to percolate publicly.

Courtesy: Preparis Inc.

On January 4, one day after researchers disclosed the Meltdown and Spectre attack methods against Intel, AMD, and ARM processors, the Internet was already buzzing. As Eduard Kovacs wrote on Wednesday, Jan. 10, IBM informed customers that it had started analyzing the impact on its own products. The day before, IBM revealed its POWER processors are affected.

A January 11 report from Virendra Soni covered the Consumer Electronics Show (CES) 2018 in Las Vegas, where Nvidia CEO Jensen Huang revealed how the technology leaders are scrambling to find patches for the Spectre and Meltdown attacks. These attacks enable hackers to steal private information off users’ CPUs running processors from Intel, AMD, and ARM.

For DancingDinosaur readers, that puts the latest POWER chips and systems at risk. At this point, it is not clear how far beyond POWER systems the problem reaches. “We believe our GPU hardware is immune. As for our driver software, we are providing updates to help mitigate the CPU security issue,” Nvidia wrote in its security bulletin.

Nvidia also reports releasing updates for its software drivers that interact with vulnerable CPUs and operating systems. The vulnerabilities come in three variants: Variant 1, Variant 2, and Variant 3. Nvidia has released driver updates for Variants 1 and 2; the company notes none of its software is vulnerable to Variant 3. Nvidia reported providing security updates for these products: GeForce, Quadro, NVS Driver Software, Tesla Driver Software, and GRID Driver Software.

IBM has made no public comment on which of its systems are affected, but Red Hat is saying plenty. According to Soni: “Red Hat last week reported that IBM’s System Z and POWER platforms are exploited by Spectre and Meltdown.”

So what is a data center manager with a major investment in these systems to do? Meltdown and Spectre “obviously are a very big problem,” reports Timothy Prickett Morgan, a leading analyst at The Next Platform, an authoritative website following the server industry. “Chip suppliers and operating systems and hypervisor makers have known about these exploits since last June, and have been working behind the scenes to provide corrective countermeasures to block them…” but rumors about the speculative execution threats forced the hands of the industry, and last week Google put out a notice about the bugs and then followed up with details about how it has fixed them in its own code. Read it here.

Chipmakers AMD and ARM put out statements saying they are affected only by Variant 1 of the speculative execution exploits (a Spectre variety known as bounds check bypass) and by Variant 2 (also a Spectre exploit, known as branch target injection). AMD, reports Morgan, also emphasized that it has absolutely no vulnerability to Variant 3, a speculative execution exploit called rogue data cache load and known colloquially as Meltdown. This is due, he noted, to architectural differences between Intel’s x86 processors and AMD’s clones.

As for IBM, Morgan noted its Power chips are affected, at least back to the Power7 from 2010 and continuing forward to the brand new Power9. In its statement, IBM said it would have firmware patches out for Power machines using Power7+, Power8, Power8+, and Power9 chips on January 9 (a date that has since passed), along with Linux patches for those machines; patches for the company’s own AIX Unix and proprietary IBM i operating systems will not be available until February 12. The System z mainframe processors also use speculative execution, so they should, in theory, be susceptible to Spectre but maybe not Meltdown.

That still leaves a question about the vulnerability of the IBM LinuxONE and the processors spread throughout the z systems. Ask your IBM rep when you can expect mitigation for those too.

Just patching these costly systems should not be satisfying enough; there is a performance price that data centers will pay. Google noted a negligible impact on performance after it deployed one fix on its millions of Linux systems, said Morgan. There has been speculation, Google continued, that the deployment of KPTI (a mitigation fix) causes significant performance slowdowns. As far as is known, there is no fix for Spectre Variant 1 attacks, which have to be fixed on a binary-by-binary basis, according to Google.

Red Hat went further and actually ran benchmarks. The company tested its Enterprise Linux 7 release on servers using Intel’s “Haswell” Xeon E5 v3, “Broadwell” Xeon E5 v4, and new “Skylake” Xeon SP processors, and showed impacts ranging from 1 to 19 percent. You can demand these impacts be reflected in reduced system prices.
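If you want a rough sense of your own exposure, KPTI’s overhead lands on kernel entry and exit, so syscall-heavy code shows the worst of it. Here is a crude sketch, not a rigorous benchmark; run it on the same machine before and after applying the patches and compare the per-call figure:

```python
import os
import time

# Time a tight loop of cheap syscalls; KPTI adds overhead to every
# kernel entry/exit, so the per-call delta approximates the worst case.
N = 1_000_000
start = time.perf_counter()
for _ in range(N):
    os.getpid()  # inexpensive syscall (some libc versions may cache it)
elapsed = time.perf_counter() - start
print(f"{N:,} getpid() calls in {elapsed:.3f}s "
      f"({elapsed / N * 1e9:.0f} ns per call)")
```

Real workloads mix syscalls with computation, which is why Red Hat’s measured impacts spread across such a wide range.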

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

IBM’s POWER9 Races to AI

December 7, 2017

IBM is betting the future of its Power Systems on artificial intelligence (AI). The company publicly introduced its newly designed POWER9 processor this past Tuesday. The new machine, according to IBM, is capable of shortening the training of deep learning frameworks by nearly 4x, allowing enterprises to build more accurate AI applications faster.

IBM engineer tests the POWER9

Designed for the post-CPU era, the core POWER9 building block is the IBM Power Systems AC922. The AC922, notes IBM, is the first to embed three interface accelerators: PCI-Express 4.0, next-generation NVIDIA NVLink, and OpenCAPI. Together, IBM says, they can accelerate data movement 9.5x over PCIe 3.0-based x86 systems. The AC922 is designed to drive demonstrable performance improvements across popular AI frameworks such as Chainer, TensorFlow, and Caffe, as well as accelerated databases such as Kinetica.
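IBM doesn’t show its arithmetic for the 9.5x figure, but here is one plausible reconstruction; the link counts and the bidirectional accounting are assumptions on my part, not from IBM’s announcement:

```python
# Assumed: each GPU gets 3 NVLink 2.0 bricks at 25 GB/s per direction,
# counted bidirectionally, versus one PCIe 3.0 x16 slot counted per direction.
pcie3_lane = 8e9 * (128 / 130) / 8 / 1e9   # GB/s per lane per direction (8 GT/s, 128b/130b)
pcie3_x16 = 16 * pcie3_lane                # ~15.75 GB/s
nvlink_per_gpu = 3 * 25.0 * 2              # 3 bricks, both directions = 150 GB/s

print(f"PCIe 3.0 x16:       {pcie3_x16:.2f} GB/s")
print(f"NVLink 2.0 (bidir): {nvlink_per_gpu:.0f} GB/s")
print(f"Ratio:              {nvlink_per_gpu / pcie3_x16:.1f}x")  # ~9.5x
```

As with most vendor speedup claims, the comparison mixes measurement conventions, so treat it as a ceiling rather than what your workload will see.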

More than a CPU under the AC922 cover

Depending on your sense of market timing, POWER9 may be coming at the best or worst time for IBM. Notes industry observer Timothy Prickett Morgan of The Next Platform: “The server market is booming as 2017 comes to a close, and IBM is looking to try to catch the tailwind and lift its Power Systems business.”

As Morgan puts it, citing IDC 3Q17 server revenue figures, HPE and Dell are jockeying for the lead in the server space. For the moment, HPE (including its H3C partnership in China) has the lead with $3.32 billion in revenues, compared to Dell’s $3.07 billion, while Dell was the shipment leader, with 503,000 machines sold in Q3 2017 versus HPE’s 501,400. IBM does not rank among the top five shippers, but thanks in part to the Z and big Power8 boxes, it still holds the number three spot in server revenue, with $1.09 billion in sales for the third quarter, according to IDC. The z system accounted for $673 million of that, up 63.8 percent year-on-year due mainly to the new Z. If you do the math, Morgan continued, the Power Systems line accounted for $420.7 million in the period, down 7.2 percent from Q3 2016. This is not surprising given that customers held back knowing Power9 systems were coming.

To get Power Systems back to where it used to be, Morgan continued, IBM must increase revenues by a factor of three or so. The good news is that, thanks to the popularity of hybrid CPU-GPU systems, which cost around $65,000 per node from IBM, this isn’t impossible. It should take fewer machines to rack up the revenue, even if it comes from a relatively modest number of footprints rather than a huge number of Power9 processors. More than 90 percent of the compute in these systems comes from GPU accelerators, but due to bookkeeping magic, it all accrues to Power Systems when these machines are sold. Plus, IBM reportedly will be installing over 10,000 such nodes for the US Department of Energy’s Summit and Sierra supercomputers in the coming two quarters, which should provide a nice bump. And once IBM gets the commercial Power9 systems into the field, sales should pick up again, Morgan expects.

IBM clearly is hoping POWER9 will cut into Intel x86 sales. But that may not happen as anticipated. Intel is bringing out its own advanced x86 Xeon machine, Skylake, rumored to be quite expensive. Don’t expect POWER9 systems to be cheap either. And the field is getting more crowded. Morgan noted various ARM chips, especially ThunderX2 from Cavium and Centriq 2400 from Qualcomm, can boost non-x86 numbers and divert sales from IBM’s Power9 systems. Also, AMD’s Epyc x86 processors have a good chance of stealing some market share from Intel’s Skylake. So Power9 will have to fight for every sale IBM wants and take nothing for granted.

No doubt POWER9 presents a good case and has a strong backer in Google, but even that might not be enough. Still, POWER9 sits at the heart of what are expected to be the most powerful data-intensive supercomputers in the world, the Summit and Sierra supercomputers, expected to knock off the world’s current fastest supercomputers, from China.

Said Bart Sano, VP of Google Platforms: “Google is excited about IBM’s progress in the development of the latest POWER technology,” adding, “the POWER9 OpenCAPI bus and large memory capabilities allow further opportunities for innovation in Google data centers.”

This really is about deep learning, one of the hottest buzzwords today. Deep learning has emerged as a fast-growing machine learning method that extracts information by crunching through millions of processes and data points to detect and rank the most important aspects of the data. IBM designed the POWER9 chip to manage free-flowing data, streaming sensors, and algorithms for data-intensive AI and deep learning workloads on Linux. Are your people ready to take advantage of POWER9?

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

IBM Introduces Cloud Private to Hybrid Clouds

November 10, 2017

When you have enough technologies lying around your basement, sometimes you can cobble a few pieces together, mix them with some sexy new stuff, and, bingo, you have something that meets a serious need of a number of disparate customers. That’s essentially what IBM did with Cloud Private, which it announced Nov. 1.

IBM staff test Cloud Private automation software

IBM intended Cloud Private to enable companies to create on-premises cloud capabilities similar to public clouds to accelerate app dev. Don’t think of it as just old stuff; the new platform is built on the open source Kubernetes-based container architecture and supports both Docker containers and Cloud Foundry. This facilitates integration and portability of workloads, enabling them to evolve to almost any cloud environment, including—especially—the public IBM Cloud.

IBM also announced container-optimized versions of core enterprise software, including IBM WebSphere Liberty, DB2, and MQ, which are widely used to run and manage the world’s most business-critical applications and data. This makes it easier to share data and evolve applications as needed across the IBM Cloud, private clouds, public clouds, and other cloud environments with a consistent developer, administrator, and user experience. A minimal sketch of that motion follows.
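To make container-optimized concrete, here is a minimal sketch using the Docker SDK for Python; the image name and tag are illustrative stand-ins, so substitute whatever image IBM publishes for the product you run:

```python
# Sketch: run a containerized app server locally; the same image can be
# deployed unchanged on-premises or in a public cloud. Requires `pip install docker`.
import time

import docker

client = docker.from_env()

container = client.containers.run(
    "ibmcom/websphere-liberty:latest",  # illustrative image name and tag
    detach=True,
    ports={"9080/tcp": 9080},           # Liberty's default HTTP port
    name="liberty-sketch",
)
time.sleep(5)                            # give the server a moment to start
print(container.logs(tail=10).decode())  # peek at startup output
container.stop()
container.remove()
```

The point of the container packaging is exactly this portability: the run command is the same whether the host sits in your data center or a cloud provider’s.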

Cloud Private amounts to a new software platform, which relies on open source container technology to unlock billions of dollars in core data and applications incorporating legacy software like WebSphere and Db2. The purpose is to extend cloud-native tools across public and private clouds. For z data centers that have tons of valuable, reliable working systems years away from being retired, if ever, Cloud Private may be just what they need.

Almost all enterprise systems vendors are trying to deliver the same hybrid cloud computing enablement: HPE, Microsoft, Cisco (which is partnering with Google on this), and more. This is a clear indication that the cloud, and especially the hybrid cloud, is crossing the proverbial chasm. In years past IT managers and C-level executives didn’t want anything to do with the cloud; the IT folks saw it as a threat to their on-premises data center, and the C-suite was scared witless about security.

Those issues haven’t gone away, although the advent of hybrid clouds has mitigated some of the fears among both groups. Similarly, the natural evolution of the cloud and advances in hybrid cloud computing are making this more practical.

The private cloud too is growing. According to IBM, while public cloud adoption continues to grow at a rapid pace, organizations, especially in regulated industries like finance and health care, continue to leverage private clouds as part of their journey to public cloud environments to quickly launch and update applications. This also is what is driving hybrid clouds. IBM projects companies will spend more than $50 billion globally, starting in 2017, to create and evolve private clouds, with growth rates of 15 to 20 percent a year through 2020.

The problem facing IBM and the other enterprise systems vendors scrambling for hybrid clouds is how to transition legacy systems into cloud native systems. The hybrid cloud in effect acts as facilitating middleware. “Innovation and adoption of public cloud services has been constrained by the challenge of transitioning complex enterprise systems and applications into a true cloud-native environment,” said Arvind Krishna, Senior Vice President for IBM Hybrid Cloud and Director of IBM Research. IBM’s response is Cloud Private, which brings rapid application development and modernization to existing IT infrastructure while combining it with the service of a public cloud platform.

Hertz adopted this approach. “Private cloud is a must for many enterprises such as ours working to reduce or eliminate their dependence on internal data centers,” said Tyler Best, Hertz Chief Information Officer.  A strategy consisting of public, private and hybrid cloud is essential for large enterprises to effectively make the transition from legacy systems to cloud.

IBM is serious about cloud as a strategic initiative. Although not as large as Microsoft Azure or Amazon Web Services (AWS) in the public cloud, IBM is a major provider of private cloud services, a recent report by Synergy Research found, making the company the third-largest overall cloud provider.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

Open POWER-Open Compute-POWER9 at Open Compute Summit

March 16, 2017

Bryan Talik, President, OpenPOWER Foundation, provides a detailed rundown of the action at the Open Compute Summit held last week in Santa Clara. After weeks of writing about cognitive, machine learning, blockchain, and even quantum computing, it is a nice shift to conventional computing platforms, which should still be viewed as strategic initiatives.

The OpenPOWER, Open Compute gospel was filling the air in Santa Clara.  As reported, Andy Walsh, Xilinx Director of Strategic Market Development and OpenPOWER Foundation Board member explained, “We very much support open standards and the broad innovation they foster. Open Compute and OpenPOWER are catalysts in enabling new data center capabilities in computing, storage, and networking.”

Added Adam Smith, CEO of Alpha Data:  “Open standards and communities lead to rapid innovation…We are proud to support the latest advances of OpenPOWER accelerator technology featuring Xilinx FPGAs.”

John Zannos, Canonical OpenPOWER Board Chair, chimed in: For 2017, the OpenPOWER Board approved four areas of focus that include machine learning/AI, database and analytics, cloud applications, and containers. The strategy for 2017 also includes plans to extend OpenPOWER’s reach worldwide and promote technical innovations at various academic labs and in industry. Finally, the group plans to open additional application-oriented workgroups to further technical solutions that benefit specific application areas.

Not surprisingly, some members even see collaboration as the key to satisfying the performance demands the computing market craves. “The computing industry is at an inflection point between conventional processing and specialized processing,” according to Aaron Sullivan, distinguished engineer at Rackspace.

To satisfy this shift, Rackspace and Google announced an OCP-OpenPOWER server platform last year, codenamed Zaius and Barreleye G2. It is based on POWER9. At the OCP Summit, both companies put the two products on public display.

This server platform promises to improve the performance, bandwidth, and power consumption demands for emerging applications that leverage machine learning, cognitive systems, real-time analytics and big data platforms. The OCP players plan to continue their work alongside Google, OpenPOWER, OpenCAPI, and other Zaius project members.

The Zaius and Barreleye G2 server platforms promise to advance the performance, bandwidth, and power consumption demands for emerging applications that leverage the latest advanced technologies. These latest technologies are none other than the strategic imperatives—cognitive, machine learning, real-time analytics—IBM has been repeating like a mantra for months.

Open Compute projects also were displayed at the Summit. Specifically, as reported, Google and Rackspace published the Zaius specification to Open Compute in October 2016 and had engineers on hand to explain the specification process and give attendees a starting point for their own server designs.

Other Open Compute members reportedly were there as well. Inventec showed a POWER9 OpenPOWER server based on the Zaius server specification. Mellanox showcased ConnectX-5, its next generation networking adaptor featuring 100Gb/s InfiniBand and Ethernet. This adaptor supports PCIe Gen4 and CAPI 2.0, providing higher performance and a coherent connection to the POWER9 processor vs. PCIe Gen3.

Others, reported by Talik, included Wistron and E4 Computing, which showcased their newly announced OCP-form factor POWER8 server. Featuring two POWER8 processors, four NVIDIA Tesla P100 GPUs with the NVLink interconnect, and liquid cooling, the new platform represents an ideal OCP-compliant HPC system.

Talik also reported IBM, Xilinx, and Alpha Data showed their lineups of several FPGA adaptors designed for both POWER8 and POWER9. Featuring PCIe Gen3 and CAPI 1.0 for POWER8, and PCIe Gen4, CAPI 2.0, and 25Gb/s CAPI 3.0 for POWER9, these new FPGAs bring acceleration to a whole new level. OpenPOWER member engineers were on hand to provide information regarding the CAPI SNAP developer and programming framework as well as OpenCAPI.

Not to be left out, Talik reported that IBM showcased products it previously tested and demonstrated: POWER8-based OCP and OpenPOWER Barreleye servers running IBM’s Spectrum Scale software, a full-featured global parallel file system with roots in HPC and now widely adopted in commercial enterprises across all industries for data management at petabyte scale.  Guess compute platform isn’t quite the dirty phrase IBM has been implying for months.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

IBM Fires a Shot at Intel with its Latest POWER Roadmap

June 17, 2016

In case you worry that IBM will abandon hardware in the pursuit of its strategic initiatives focused on cloud, mobile, analytics, and more: stop worrying. With the announcement of its POWER Roadmap at the OpenPOWER Summit earlier this spring, it appears POWER will be around for years to come. But IBM is not abandoning the strategic initiatives either; the new roadmap promises to support new types of workloads, such as real-time analytics, Linux, and hyperscale data centers, along with support for the current POWER workloads.

Pictured above: POWER9 Architecture, courtesy of IBM

Specifically, IBM is offering a denser roadmap, not tied to a single technology and not even tied solely to IBM. It draws on innovations from a handful of the members of the OpenPOWER Foundation as well as support from Google. The new roadmap also signals IBM’s intention to make a serious run at Intel’s near monopoly on enterprise server processors by offering comparable or better price, performance, and features.

Google, for example, reports porting many of its popular web services to run on Power systems; its toolchain has been updated to output code for x86, ARM, or Power architectures with the flip of a configuration flag. Google, which strives to be everything to everybody, now has a highly viable alternative to Intel in terms of performance and price with POWER. At the OpenPOWER Summit early in the spring, Google made it clear it plans to build scale-out server solutions based on OpenPower.

Don’t even think, however, that Google is abandoning Intel. The majority of its systems are Intel-based. Still, POWER and the OpenPOWER community will provide a directly competitive processing alternative. To underscore the situation, Google and Rackspace announced they were working together on Power9 server blueprints for the Open Compute Project, designs that reportedly are compatible with the 48V Open Compute racks that Google and Facebook, another hyperscale data center operator, already are working on.

Google represents another proof point that OpenPOWER is ready for hyperscale data centers. DancingDinosaur, however, is most interested in what is coming from OpenPOWER that is new and sexy for enterprise data centers, since most DancingDinosaur readers are focused on the enterprise data center. Of course, they still need ever better performance and scalability too. In that regard OpenPOWER has much for them in the works.

For starters, POWER8 is currently delivered as a 12-core, 22nm processor. POWER9, expected in 2017, will be delivered as a 14nm processor with 24 cores and CAPI and NVLink accelerators. That is sure to deliver more performance with greater energy efficiency. By 2018, the IBM roadmap shows POWER8/9 as a 10nm, maybe even 7nm, processor based on the existing micro-architecture.

The real POWER future, arriving around 2020, will feature a new micro-architecture, sport new features and functions, and bring new technology. Expect much, if not almost all, of the new functions to come from various OpenPOWER Foundation partners.

POWER9, only a year or so out, promises a wealth of improvements in speeds and feeds. Although intended to serve the traditional Power server market, it also expands its analytics capabilities and brings new deployment models for hyperscale, cloud, and technical computing through scale-out deployment, in both clustered and multiple formats. It will feature a shorter pipeline, improved branch execution, and low latency on the die cache, as well as PCIe Gen4.

Expect a 3x bandwidth improvement with POWER9 over POWER8 and a 33% speed increase. POWER9 also will continue to speed hardware acceleration and support next-gen NVLink, improved coherency, enhanced CAPI, and a new 25 Gbps high speed link. Although the 2-socket chip will remain, IBM suggests larger socket counts are coming. It will need that to compete with Intel.

As a data center manager, will a POWER9 machine change your data center dynamics? Maybe; you decide: a dual-socket Power9 server with 32 DDR4 memory slots, two NVLink slots, three PCIe Gen4 x16 slots, and a total count of 44 cores. That’s a lot of computing power in one rack.
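A back-of-envelope sketch of what that one box means in capacity terms; the SMT level and DIMM size are my assumptions (POWER9 scale-out parts support SMT4), not figures from the roadmap:

```python
# Rough capacity math for the dual-socket Power9 configuration above.
cores = 44        # total cores across both sockets (from the text)
smt = 4           # assumed SMT4 mode
dimm_slots = 32   # DDR4 slots (from the text)
dimm_gb = 64      # assumed DIMM capacity

print(f"{cores} cores x SMT{smt} = {cores * smt} hardware threads")
print(f"{dimm_slots} slots x {dimm_gb} GB = {dimm_slots * dimm_gb} GB max memory")
```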

Now IBM just has to crank out similar advances for the next z System (a z14 maybe?) through the Open Mainframe Project.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

Put the Mainframe at the Heart of the Internet of Things

August 4, 2014

Does the Internet of things (IoT) sound familiar? It should. Remember massive networks of ATMs connecting back to the mainframe?

The mainframe is poised to take on the IoT challenge, writes Advanced Software Products Group, Inc. (ASPG), a company specializing in mainframe software, in an interesting document called The Future of the Mainframe. Part of that future is the IoT, which IBM refers to in a Redbook Point of View as the Interconnecting of Everything.

In that Redbook the IoT is defined as a network of Internet-enabled, real-world objects—things—ranging from nanotechnology objects to consumer electronics, home appliances, sensors of all kinds, embedded systems, and personal mobile devices. The IoT also will encompass enabling network and communication technologies, such as IPv6 (for its vastly larger address capacity), web services, RFID, and 4G networks.

The IoT Redbook cites industry predictions of upwards of 50 billion connected devices by 2020, a number 10x that of all current Internet hosts, including connected mobile phones. Based on that, the Redbook authors note two primary IoT scalability issues:

  1. The sheer number of connected devices, mainly the number of concurrent connections (throughput) a system can support and the quality of service (QoS) that can be delivered. Here, the authors note, Internet scalability is a critical factor. Currently, most Internet-connected devices use IPv4, which is based on a 32-bit addressing scheme. Clearly, the industry has to speed the transition to IPv6, which implements a 128-bit addressing scheme that can support up to 2^128 addresses, or about 3.4 x 10^38 devices, although some tweaking of the IPv6 standard is being proposed for IoT. (The quick calculation after this list illustrates the scale.)
  2. The volume of generated data and the performance issues associated with data collection, processing, storage, query, and display. IoT systems need to handle both device and data scalability issues. From a data standpoint, this is big data on steroids.
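Both address-space numbers are easy to sanity-check with nothing but the Python standard library:

```python
import ipaddress

ipv4_space = 2 ** 32    # IPv4: 32-bit addresses
ipv6_space = 2 ** 128   # IPv6: 128-bit addresses
devices_2020 = 50_000_000_000  # the predicted 50 billion connected devices

print(f"IPv4 addresses:  {ipv4_space:,}")      # ~4.3 billion
print(f"IPv6 addresses:  {ipv6_space:.2e}")    # ~3.4e38
print(f"IPv4 shortfall:  {devices_2020 / ipv4_space:.1f}x over capacity")

# A single IPv6 /64 subnet alone dwarfs the entire IPv4 address space:
subnet = ipaddress.ip_network("2001:db8::/64")
print(f"Hosts in one /64: {subnet.num_addresses:.2e}")
```

In other words, 50 billion devices already exceeds IPv4’s entire address space by more than 11x, while a single IPv6 subnet leaves room to spare.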

As ASPG noted in the paper cited above, the mainframe is well suited to provide a central platform for IoT. The zEnterprise has the power to connect large dispersed networks, capture and process the mountains of data produced every minute, and provide the security and privacy companies and individuals demand. In addition, it can accept, process, and interpret all that data in a useful way. In short, it may be the only general commercial computing platform today powerful enough to crunch vast quantities of data very quickly, and it already is proven to perform millions of transactions per second, securely.

Even with a top-end zEC12 configured to the hilt and proven to handle maximum transactions per second, you are not quite ready to handle the IoT as it is currently being envisioned. This IoT vision is much more heterogeneous in all dimensions than the massive reservation, POS, or ATM networks the mainframe has proven itself with.

At least one major piece is still needed: an industry-wide standard that defines how the various devices capture myriad information for a diverse set of applications involving numerous vendors and ensures everything can communicate and exchange information in a meaningful way. Not surprisingly, the industry already is working on it.

Actually, maybe too many groups are working on it. The IEEE points to a number of standards, projects, and activities it is involved with that address the creation of what it considers a vibrant IoT. The Open Interconnect Consortium, consisting of a slew of tech-industry heavyweights like Intel, Broadcom, and Samsung, hopes to develop standards and certification for devices involved in the IoT. Another group, the AllSeen Alliance, is promoting an open standard called AllJoyn with the goal of enabling ubiquitously connected devices. Even Google is getting into the act by opening up its Nest acquisition so developers can connect their various home devices (thermostats, security alarm controllers, garage door openers, and such) via a home IoT.

This will likely shake out the way IT standards usually do with several competing groups fighting it out. Probably too early to start placing bets. But you can be sure IBM will be right there. The company already has put an IoT stake in the ground here (as if the z wasn’t enough).  Whatever eventually shakes out, System z shops should be right in the middle of the IoT action.

Expect this to be a subject of discussion at the upcoming IBM Enterprise 2014 conference, Oct. 6-10 in Las Vegas. Your blogger expects to be there. DancingDinosaur is Alan Radding. Follow him on Twitter, @mainframeblog, or at Technologywriter.com.

