Posts Tagged ‘Kinetica’

IBM’s POWER9 Races to AI

December 7, 2017

IBM is betting the future of its Power Systems on artificial intelligence (AI). The company publicly introduced its newly designed POWER9 processor this past Tuesday. The new chip, according to IBM, can shorten the training times of deep-learning frameworks by nearly 4x, allowing enterprises to build more accurate AI applications faster.

IBM engineer tests the POWER9

Designed for the post-CPU era, the core POWER9 building block is the IBM Power Systems AC922. The AC922, IBM notes, is the first system to embed PCI-Express 4.0, next-generation NVIDIA NVLink, and OpenCAPI. Together, these three interface accelerators can move data up to 9.5x faster than PCIe 3.0-based x86 systems. The AC922 is designed to drive demonstrable performance improvements across popular AI frameworks such as Chainer, TensorFlow, and Caffe, as well as accelerated databases such as Kinetica.

More than a CPU under the AC922 cover

Depending on your sense of market timing, POWER9 may be coming at the best or worst time for IBM. As industry observer Timothy Prickett Morgan notes at The Next Platform: “The server market is booming as 2017 comes to a close, and IBM is looking to try to catch the tailwind and lift its Power Systems business.”

As Morgan puts it, citing IDC 3Q17 server revenue figures, HPE and Dell are jockeying for the lead in the server space. For the moment, HPE (including its H3C partnership in China) leads in revenue with $3.32 billion, compared to Dell’s $3.07 billion, though Dell led in shipments, with 503,000 machines sold in Q3 2017 versus HPE’s 501,400. IBM does not rank among the top five in shipments, but thanks in part to the Z and big Power8 boxes it still holds the number-three spot in server revenue, with $1.09 billion in sales for the third quarter, according to IDC. The z system accounted for $673 million of that, up 63.8 percent year-on-year, due mainly to the new Z. Do the math, Morgan continued, and the Power Systems line accounted for $420.7 million in the period, down 7.2 percent from Q3 2016. That is not surprising, given that customers held back knowing POWER9 systems were coming.

To get Power Systems back to where it used to be, Morgan continued, IBM must increase revenues by a factor of three or so. The good news is that, thanks to the popularity of hybrid CPU-GPU systems, which cost around $65,000 per node from IBM, this isn’t impossible: it should take fewer machines to rack up the revenue, even if it comes from a relatively modest number of footprints rather than a huge number of POWER9 processors. More than 90 percent of the compute in these systems comes from GPU accelerators, but thanks to bookkeeping magic it all accrues to Power Systems when the machines are sold. Plus, IBM reportedly will be installing over 10,000 such nodes for the US Department of Energy’s Summit and Sierra supercomputers over the coming two quarters, which should provide a nice bump. And once IBM gets commercial POWER9 systems into the field, sales should pick up again, Morgan expects.

IBM clearly is hoping POWER9 will cut into Intel x86 sales. But that may not happen as anticipated. Intel is bringing out its own advanced x86 Xeon processor, Skylake, rumored to be quite expensive; don’t expect POWER9 systems to be cheap either. And the field is getting more crowded. Morgan noted that various ARM chips, especially ThunderX2 from Cavium and Centriq 2400 from Qualcomm, can boost non-x86 numbers and divert sales from IBM’s POWER9 systems. AMD’s Epyc x86 processors also have a good chance of stealing some market share from Intel’s Skylake. So POWER9 will have to fight for every sale and take nothing for granted.

No doubt POWER9 presents a good case and has a strong backer in Google, but even that might not be enough. Still, POWER9 sits at the heart of Summit and Sierra, expected to be the most powerful data-intensive supercomputers in the world and to knock off the current fastest supercomputers, which are in China.

Said Bart Sano, VP of Google Platforms: “Google is excited about IBM’s progress in the development of the latest POWER technology,” adding that “the POWER9 OpenCAPI bus and large memory capabilities allow further opportunities for innovation in Google data centers.”

This really is about deep learning, one of today’s hottest buzzwords. Deep learning has emerged as a fast-growing machine-learning method that trains multi-layered models on huge volumes of data, iteratively adjusting model parameters to detect and rank the most important features in the data. IBM designed the POWER9 chip to manage free-flowing data, streaming sensors, and algorithms for data-intensive AI and deep learning workloads on Linux. Are your people ready to take advantage of POWER9?
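The mechanics described above, repeatedly crunching through data to adjust a model until its predictions improve, can be sketched in a few lines of plain Python. This is a didactic toy (a one-feature logistic model fit by gradient descent), not PowerAI or any IBM framework:

```python
import math

# Toy dataset: (feature, label) pairs; label 1 for "large" inputs.
data = [(x / 10.0, 1 if x > 5 else 0) for x in range(11)]

w, b = 0.0, 0.0   # model parameters, learned from the data
lr = 1.0          # learning rate

def predict(x):
    """Sigmoid of a linear score: the model's probability that label == 1."""
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

def loss():
    """Mean cross-entropy over the dataset; lower means a better fit."""
    return -sum(y * math.log(predict(x)) + (1 - y) * math.log(1 - predict(x))
                for x, y in data) / len(data)

initial = loss()
for _ in range(300):  # the "crunching": repeated passes over the data
    gw = sum((predict(x) - y) * x for x, y in data) / len(data)
    gb = sum((predict(x) - y) for x, y in data) / len(data)
    w -= lr * gw      # nudge parameters against the gradient
    b -= lr * gb

print(f"loss before: {initial:.3f}, after: {loss():.3f}")
```

Frameworks such as TensorFlow, Chainer, and Caffe do the same thing at vastly larger scale, with many layers and GPU-accelerated linear algebra; feeding that scale is exactly what the POWER9/NVLink bandwidth is aimed at.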

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

IBM Gets Serious About Open Data Science (ODS) with Anaconda

April 21, 2017

As IBM rapidly ramps up cognitive systems in various forms, its two remaining platforms, z System and POWER, get more and more interesting. This week IBM announced it was bringing the Anaconda Open Data Science (ODS) platform to its Cognitive Systems and PowerAI.

Anaconda, Courtesy Pinterest

Specifically, Anaconda will integrate with the PowerAI software distribution for machine learning (ML) and deep learning (DL). The goal: make it simple and fast to take advantage of Power performance and GPU optimization for data-intensive cognitive workloads.

“Anaconda on IBM Cognitive Systems empowers developers and data scientists to build and deploy deep learning applications that are ready to scale,” said Bob Picciano, senior vice president of IBM Cognitive Systems. Added Travis Oliphant, co-founder and chief data scientist, Continuum Analytics, which introduced the Anaconda platform: “By optimizing Anaconda on Power, developers will also gain access to the libraries in the PowerAI Platform for exploration and deployment in Anaconda Enterprise.”

With more than 16 million downloads to date, Anaconda has emerged as the Open Data Science platform leader. It is empowering leading businesses across industries worldwide with tools to identify patterns in data, uncover key insights, and transform basic data into the intelligence required to solve the world’s most challenging problems.

As one of the fastest-growing fields of AI, DL makes it possible to process enormous datasets with millions or even billions of elements and extract useful predictive models. DL is transforming the businesses of leading consumer web and mobile application companies, and it is catching on with more traditional businesses.

IBM developed PowerAI to accelerate enterprise adoption of open-source ML and DL frameworks used to build cognitive applications. PowerAI promises to reduce the complexity and risk of deploying these open-source frameworks for enterprises on the Power architecture and is tuned for high performance, according to IBM. With PowerAI, organizations also can realize the benefit of enterprise support on IBM Cognitive Systems HPC platforms used in the most demanding commercial, academic, and hyperscale environments.

For POWER shops, getting into Anaconda, which is based on Python, is straightforward. You need a Power8 combined with NVIDIA GPU hardware, in effect a Minsky machine. It’s essentially a developer’s tool, although ODS proponents see it more broadly: bridging the gap between traditional IT and lines of business, shifting traditional roles, and creating new roles. In short, they envision scientists, mathematicians, engineers, business people, and more getting involved in ODS.
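As a rough sketch of what "getting into Anaconda" on such a box involves, the snippet below checks a few plausible prerequisites from Python. The specific checks (a ppc64le CPU, and nvidia-smi and conda on the PATH) are illustrative assumptions for this sketch, not an official IBM prerequisite list:

```python
import platform
import shutil

def powerai_ready():
    """Illustrative readiness check for an Anaconda/PowerAI-style setup.

    These three checks are assumptions for the sketch: a Power8-class
    ppc64le CPU, NVIDIA driver tooling, and an Anaconda install.
    """
    return {
        "ppc64le_cpu": platform.machine() == "ppc64le",       # Power8 architecture
        "nvidia_smi": shutil.which("nvidia-smi") is not None,  # GPU driver tooling
        "conda": shutil.which("conda") is not None,            # Anaconda on PATH
    }

if __name__ == "__main__":
    for check, ok in powerai_ready().items():
        print(f"{check}: {'yes' if ok else 'no'}")
```

On an x86 laptop the first check prints "no", which is the point: the PowerAI builds of the DL frameworks are compiled for the Power architecture, so the hardware question comes before any pip or conda command.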

The technology is designed to run on the user’s desktop but is packaged and priced as a cloud subscription with a base package of 20 users. User licenses range from $500 per year to $30,000 per year depending on which bells and whistles you include. The number of options is pretty extensive.

According to IBM, this started with PowerAI, built to accelerate enterprise adoption of the open-source ML/DL frameworks used to build cognitive applications. Overall, the open Anaconda platform brings capabilities for large-scale data processing, predictive analytics, and scientific computing while simplifying package management and deployment. Developers using open-source ML/DL components can use Power as the deployment platform and take advantage of Power optimization and NVIDIA GPU differentiation.

Not to be left out, IBM noted growing support for the OpenPOWER Foundation, which recently announced the OpenPOWER Machine Learning Work Group (OPMLWG). The new OPMLWG includes members like Google, NVIDIA, and Mellanox, and provides a forum for collaboration to help define frameworks for the productive development and deployment of ML solutions using OpenPOWER ecosystem technology. The foundation has also surpassed 300 members, with new participants such as Kinetica, Red Hat, and Toshiba. For traditional enterprise data centers, the future increasingly is pointing toward cognitive in one form or another.


 

