Posts Tagged ‘hybrid computing’

IBM Expands and Enhances its Cloud Offerings

June 15, 2018

IBM announced 18 new availability zones in North America, Europe, and Asia-Pacific to bolster its IBM Cloud business and try to keep pace with public cloud leader AWS as well as Microsoft. The new availability zones are located in Europe (Germany and the UK), Asia-Pacific (Tokyo and Sydney), and North America (Washington, DC and Dallas).

IBM cloud availability zone, Dallas

In addition, organizations will be able to deploy multi-zone Kubernetes clusters across the availability zones via the IBM Cloud Kubernetes Service. This will simplify how they deploy and manage containerized applications and add further consistency to their cloud experience. Furthermore, deploying multi-zone clusters will have minimal impact on performance, about 2 ms latency between availability zones.
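For readers who want to poke at this themselves, here is a minimal sketch, using the community Kubernetes Python client rather than any IBM-specific API, of how an admin might verify that a multi-zone cluster’s worker nodes really are spread across availability zones. The zone label shown is the standard Kubernetes one; treat it and the configured credentials as assumptions about your environment.

```python
# Sketch: count worker nodes per availability zone in a multi-zone cluster.
# Assumes kubectl credentials are already configured (e.g., via the IBM
# Cloud CLI) and that nodes carry the standard Kubernetes zone label.
from collections import defaultdict
from kubernetes import client, config

config.load_kube_config()  # reads ~/.kube/config

nodes_by_zone = defaultdict(list)
for node in client.CoreV1Api().list_node().items:
    zone = node.metadata.labels.get(
        "failure-domain.beta.kubernetes.io/zone", "unknown")
    nodes_by_zone[zone].append(node.metadata.name)

for zone, names in sorted(nodes_by_zone.items()):
    print(f"{zone}: {len(names)} node(s)")
```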

An availability zone, according to IBM, is an isolated instance of a cloud inside a data center region. Each zone brings independent power, cooling, and networking to strengthen fault tolerance. While IBM Cloud already operates in nearly 60 locations, the new zones add even more capacity and capability in these key centers. This global cloud footprint becomes especially critical as clients look to gain greater control of their data in the face of tightening data regulations, such as the European Union’s new General Data Protection Regulation (GDPR). See DancingDinosaur June 1, IBM preps z world for GDPR.

In its Q1 earnings IBM reported cloud revenue of $17.7 billion over the past year, up 22 percent over the previous year, but that includes two quarters of outstanding Z revenue that is unlikely to be sustained, at least until the next Z comes out, which is at least a few quarters away. AWS meanwhile reported quarterly revenues up 49 percent to $5.4 billion, while Microsoft recently reported 93 percent growth for Azure revenues.

That leaves IBM trying to catch up the old-fashioned way: adding new cloud capabilities, enhancing existing ones, and attracting more clients to its cloud capabilities however they may be delivered. For example, IBM announced it is the first cloud provider to let developers run managed Kubernetes containers directly on bare metal servers with direct access to GPUs to improve the performance of machine-learning applications, which is critical to any AI effort. Along the same lines, IBM will extend IBM Cloud Private and IBM Cloud Private for Data and middleware to Red Hat’s OpenShift Container Platform and Certified Containers. Red Hat already is a leading provider of enterprise Linux to Z shops.

IBM has also expanded its cloud offerings to support the widest range of platforms: not just Z, LinuxONE, and POWER9 for Watson, but also x86 and a variety of non-IBM architectures and platforms. Similarly, notes IBM, users have grown accustomed to accessing corporate databases wherever they reside, but proximity to cloud data centers remains important. Distance to data centers can affect network performance, resulting in slow uploads or downloads.

Contrary to simplifying things, the proliferation of more and different types of clouds and cloud strategies complicates an organization’s cloud approach. Companies today are already managing complex, hybrid public-private cloud environments. At the same time, eighty percent of the world’s data sits on private servers; it just is not practical, or even permissible in some cases, to move all of it to the public cloud. Other organizations run very traditional workloads that they are looking to modernize over time as they acquire new cloud-native skills. The new IBM cloud centers can host data in multiple formats and databases including DB2, SQLBase, PostgreSQL, or NoSQL, all exposed as cloud services, if desired.

The IBM cloud centers, the company continues, also promise common logging and services between the on-prem environment and IBM’s public cloud environment. In fact, IBM will make all its cloud services, including the Watson AI service, consistent across all its availability zones, and offer multi-cluster support, in effect enabling the ability to run workloads and do backups across availability zones.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.

IBM Preps Z World for GDPR

June 1, 2018

Remember Y2K? That was when calendars rolled over from 1999 to 2000. It was hyped as an event that would screw up computers worldwide. Sorry, planes did not fall out of the sky overnight (or at all), elevators didn’t plummet to the basement, and hospitals and banks did not cease functioning. DancingDinosaur did OK writing white papers on preparing for Y2K. Maybe nothing bad happened because companies read papers like those and worked on changing their date fields.

Starting May 25, 2018, GDPR became the new Y2K. GDPR, the European Union’s General Data Protection Regulation, is an overhaul of existing EU data protection rules that promises to strengthen and unify those laws for EU citizens and for organizations anywhere that collect and exchange data involving those citizens. That is probably most of the readers of DancingDinosaur. GDPR went into effect at the end of May and generated a firestorm in the trade and business press, but nothing near what Y2K did. The primary GDPR objectives are to give citizens control over their personal data and simplify the regulatory environment for international business.

According to Bob Yelland, author of How it Works: GDPR (a Little Bee Book), 50 percent of global companies say they will struggle to meet the rules set out by Europe unless they make significant changes to how they operate, and this may lead many companies to appoint a Data Protection Officer, which the rules recommend. Doesn’t it feel a little like Y2K again?

The Economist in April wrote: “After years of deliberation on how best to protect personal data, the EC is imposing a set of tough rules. These are designed to improve how data are stored and used by giving more control to individuals over their information and by obliging companies to handle what data they have more carefully.”

As you would expect, IBM created a GDPR framework with five phases to help organizations achieve readiness: Assess, Design, Transform, Operate, and Conform. The goal of the framework is to help organizations manage security and privacy effectively in order to reduce risks and therefore avoid incidents.

DancingDinosaur is not an expert on GDPR in any sense, but from reading GDPR documents, the Z with its pervasive encryption and automated secure key management should eliminate many concerns. The rest probably can be handled by following good Z data center policy and practices.

One area of GDPR, however, may be foreign to North American organizations: the parts about respecting and protecting the private data of individuals.

As The Economist wrote: GDPR obliges organizations to create an inventory of the personal data they hold. With digital storage becoming ever cheaper, companies often keep hundreds of databases, many of which are long forgotten. To comply with the new regulation, firms have to think harder about data hygiene. This is something North American companies probably have not thought enough about.

IBM recommends you start by assessing your current data privacy situation under all of the GDPR provisions, and in particular discover where protected information is located in your enterprise. Under GDPR, individuals have rights regarding consent and can access, correct, delete, and transfer their personal data. This will be new to most North American data centers, even the best managed Z data centers.
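To make that discovery step concrete, here is a purely hypothetical sketch of a crude first pass at a GDPR data inventory: flagging database columns whose names suggest personal data. The database file, table layout, and keyword list are all invented for illustration and have nothing to do with IBM’s assessment tooling.

```python
# Hypothetical first pass at a GDPR data inventory: flag database
# columns whose names suggest personal data. SQLite stands in here
# for whatever DBMS actually holds the data.
import sqlite3

PII_HINTS = ("name", "email", "phone", "address", "birth", "passport")

conn = sqlite3.connect("legacy_app.db")  # invented example database
tables = [row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")]

for table in tables:
    # PRAGMA table_info returns (cid, name, type, notnull, default, pk)
    for _, column, *_ in conn.execute(f"PRAGMA table_info({table})"):
        if any(hint in column.lower() for hint in PII_HINTS):
            print(f"possible personal data: {table}.{column}")
```

A real inventory would of course also sample the data itself, not just column names, but even a scan like this turns up long-forgotten databases of the kind The Economist describes.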

Then, IBM advises, assess the current state of your security practices, identify gaps, and design security controls to plug those gaps. In the process find and prioritize security vulnerabilities, as well as any personal data assets and affected systems. Again, you will want to design appropriate controls. If this starts sounding a little too complicated just turn it over to IBM or any of the handful of other vendors who are racing GDPR readiness services into the market. IBM offers Data Privacy Consulting Services along with a GDPR readiness assessment.

Of course, you can just outsource it to IBM or others. IBM also offers its five-phase GDPR framework, whose goal is to help organizations subject to GDPR manage security and privacy so as to reduce risks and avoid problems.

GDPR is not going to be fun, especially the obligation to comply with each individual’s rights regarding their data. DancingDinosaur suspects it could even get downright ugly.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

IBM Grows Quantum Ecosystem

April 27, 2018

It is good that you aren’t dying to deploy quantum computing soon, because IBM readily admits that it is not ready for enterprise production now, nor will it be in several weeks or even several months. IBM, however, continues to assemble the building blocks you will eventually need when you finally feel the urge to deploy a quantum application that can address a real problem you need to resolve.

cryostat with prototype of quantum processor

IBM is surprisingly frank about the state of quantum today. There is nothing you can do at this point that you can’t simulate on a conventional, or classical, computer system, and that situation is unlikely to change anytime soon. For years to come, we can expect hybrid quantum and conventional compute environments that will somehow work together to solve very demanding problems, although most aren’t sure exactly what those problems will be when the time comes. Still, at Think earlier this year, IBM predicted quantum computing will be mainstream in five years.

Of course, IBM has some ideas of where the likely problems to solve will be found:

  • Chemistry—material design, oil and gas, drug discovery
  • Artificial Intelligence—classification, machine learning, linear algebra
  • Financial Services—portfolio optimization, scenario analysis, pricing

It has been some time since the computer systems industry had to build a radically different kind of compute discipline from scratch. Following the model of the current IT discipline, IBM began by launching the IBM Q Network, a collaboration with leading Fortune 500 companies and research institutions with a shared mission. This will form the foundation of a quantum ecosystem. The Q Network comprises hubs, which are regional centers of quantum computing R&D and ecosystem building; partners, who are pioneers of quantum computing in a specific industry or academic field; and most recently, startups, which are expected to rapidly advance early applications.

The most important of these for driving the growth of quantum are the startups. To date, IBM reports eight startups and it is on the hunt for more. Early startups include QC Ware; Q-Ctrl; Cambridge Quantum Computing (UK), which is working on a compiler for quantum computing; 1Qbit, based in Canada; Zapata Computing, located at Harvard; Strangeworks, an Austin-based tool developer; QxBranch, which is trying to apply classical computing techniques to quantum; and Quantum Benchmark.

Startups get membership in the Q Network, which lets them run experiments and algorithms on IBM quantum computers via cloud-based access, gives them deeper access to APIs and advanced quantum software tools, libraries, and applications, and offers the opportunity to collaborate with IBM researchers and technical SMEs on potential applications, as well as with other IBM Q Network organizations. If it hasn’t become obvious yet, the payoff will come from developing applications that solve recognizable problems. Also check out QISKit, a software development kit for quantum applications available through GitHub.
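To give a flavor of what QISKit code looks like, here is a minimal sketch that builds the classic two-qubit Bell state and inspects the outcome probabilities locally. Qiskit’s APIs have evolved quickly, so the exact imports follow a recent release rather than anything IBM showed at the time.

```python
# Minimal QISKit sketch: prepare a Bell state and confirm the two
# qubits end up perfectly correlated (50/50 between |00> and |11>).
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

qc = QuantumCircuit(2)
qc.h(0)       # put qubit 0 into superposition
qc.cx(0, 1)   # entangle qubit 1 with qubit 0

probs = Statevector.from_instruction(qc).probabilities_dict()
print(probs)  # expect {'00': 0.5, '11': 0.5} (up to rounding)
```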

The last problem to solve is acquiring quantum talent. How many quantum scientists, engineers, or programmers do you have? Do you even know where to find them? The young people excited about computing today are primarily interested in technologies for building sexy apps using Node.js, Python, Jupyter, and the like.

To find the people you need to build quantum computing systems you will have to scour the proverbial halls of MIT, Caltech, and other top schools that produce physicists and quantum scientists. A scan of salaries for these people reveals a range of $135,000 to $160,000, if they are available at all.

The best guidance from IBM is to start small. The industry is still at the building-block stage, not ready to throw specific applications at real problems. In the meantime, sign up for IBM’s Q Network and get some of your people engaged in the opportunities to get educated in quantum.

When DancingDinosaur first heard about quantum physics he was in a high school science class decades ago. It was intriguing but he never expected to even be alive to see quantum physics becoming real, but now it is. And he’s still here. Not quite ready to sign up for QISKit and take a small qubit machine for a spin in the cloud, but who knows…

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.

IT Security Enters the Cooperative Era

April 20, 2018

Ever hear of the Cybersecurity Tech Accord? It was announced on Tuesday, with Microsoft, Facebook, and 32 other companies signing aboard. Absent from the signing were Apple, Alphabet, and Amazon. Also missing was IBM. Actually, IBM was already at the RSA Conference making its own security announcement of an effort to help cybersecurity teams collaborate, just as the attackers they are defending against already do among themselves via the dark web.

IBM security control center

Tuesday’s Cybersecurity Tech Accord amounted to a promise to work together on cybersecurity issues. Specifically, the companies promise to work against state sponsored cyberattacks. The companies also agreed to collaborate on stronger defense systems and protect against the tampering of their products, according to published reports.

Giving importance to the accord is the financial impact of cybersecurity attacks on businesses and organizations, which is projected to reach $8 trillion by 2022. Other technology leaders, including Cisco, HP, Nokia, and Oracle, also joined the accord.

A few highly visible and costly attacks were enough to galvanize the IT leaders. In May, WannaCry ransomware targeted more than 300,000 computers in 150 countries, including 48 UK medical facilities. In a bid to help, Microsoft issued patches for old Windows systems, even though it no longer supports them, because so many firms run old software that was vulnerable to the attack, according to published reports. The White House attributed the attack to North Korea.

In June, NotPetya ransomware, which initially targeted computers in Ukraine before spreading, infected computers, locked down their hard drives, and demanded a $300 ransom to be paid in bitcoin. Even victims that paid weren’t able to recover their files, according to reports. The British government said Russia was behind the global cyberattack.

The Cybersecurity Tech Accord is modeled after a digital Geneva Convention, with a long-term goal of updating international law to protect people in times of peace from malicious cyberattacks, according to Microsoft president Brad Smith.

GitHub’s chief strategy officer Julio Avalos wrote in a separate blog post that “protecting the Internet is becoming more urgent every day as more fundamental vulnerabilities in infrastructure are discovered—and in some cases used by government organizations for cyberattacks that threaten to make the Internet a theater of war.” He continued: “Reaching industry-wide agreement on security principles and collaborating with global technology companies is a crucial step toward securing our future.”

Added Sridhar Muppidi, Co-CTO of IBM Security, about the company’s efforts to help cybersecurity teams collaborate like the attackers they’re working against, in a recently published interview: “The good guys have to collaborate with each other so that we can provide better, more secure, and more robust systems. So we talk about how we share the good intelligence. We also talk about sharing good practices, so that we can then build more robust systems, which are a lot more secure.”

“It’s the same concept as the open source model, where you provide some level of intellectual capital with an opportunity to bring a bigger community together so that we can take the problem and solve it better and faster, and learn from each other’s mistakes and each other’s advancements so that it can help, individually, each of our offerings. So, at the end of the day, for a topic like AI, the algorithm is going to be an algorithm. It’s the data, it’s the models, it’s the set of things which go around it which make it very robust and reliable,” Muppidi continued.

IBM appears to be practicing what it preaches by facilitating the collaboration of people and machines in defense of cyberspace. Last year at RSA, IBM introduced Watson to the cybersecurity industry to augment the skills of analysts in their security investigations. This year’s investments in artificial intelligence (AI), according to IBM, were made with a larger vision in mind: a move toward “automation of response” in cybersecurity.

At RSA, IBM also announced the next-generation IBM Resilient Incident Response Platform (IRP) with Intelligent Orchestration. The new platform promises to accelerate and sharpen incident response by seamlessly combining incident case management, orchestration, automation, AI, and deep two-way partner integrations into a single platform.

Maybe DancingDinosaur, which has spent decades acting as an IT-organization-of-one, can finally turn over some of the security chores to an intelligent system, which hopefully will do it better and faster.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.

Dinosaurs Strike Back in IBM Business Value Survey

March 2, 2018

IBM’s Institute for Business Value (IBV) recently completed a massive study based on 12,000 interviews with c-suite executives of legacy companies. Not just the CEO and CIO but the COO, CFO, CMO, and more, including the CHO, the Chief Happiness Officer. Not sure what a CHO actually does, but if one had been around when DancingDinosaur was looking for a corporate job he might have stayed on the corporate track instead of pursuing the independent analyst/writer dream.

(unattributed IBM graphic)

IBV actually referred to the study as “Incumbents strike back.” The incumbents being the legacy businesses the c-suite members represent. In a previous c-suite IBV study two years ago, the respondents expressed concern about being overwhelmed and overrun by new upstart companies, the born-on-the-web newcomers. In many ways the execs at that time felt they were under attack.

Spurred by fear, the execs in many cases turned to a new strategy that takes advantage of what has always been their source of strength, although they often lacked the ways and means to exploit it: the huge amounts of data they have gathered and stored, for decades in some cases. With new cognitive systems now able to extract and analyze this legacy data and combine it with new data, they could actually beat some of the upstarts. Finally, they could respond like nimble, agile operations, not the lumbering dinosaurs they were so often portrayed as.

“Incumbents have become smarter about leveraging valuable data, honing their employees’ skills, and in some cases, acquired possible disruptors to compete in today’s digital age,” the study finds, according to CIO Magazine, which published excerpts from the study here. The report reveals 72 percent of surveyed CxOs claimed the next wave of disruptive innovation will be led by the incumbents who pose a significant competitive threat to new entrants and digital players. By comparison, the survey found only 22 percent of respondents believe smaller companies and start-ups are leading disruptive change. This presents a dramatic reversal from a similar but smaller IBV survey two years ago.

Making this reversal possible is not only growing awareness among c-level execs of the value of their organizations’ data and the need to use it to counter the upstarts, but also new technologies, approaches like DevOps, easier-to-use dev tools, the increasing adoption of Linux, and mainframes like the z13, z14, and LinuxONE, which have been optimized for hybrid and cloud computing. Also driving this is the emergence of platform options as a business strategy.

The platform option may be the most interesting decision right now. To paraphrase Hamlet: to be (a platform for your industry) or not to be. That indeed is a question many legacy businesses will need to confront. When you look at platform business models, what is right for your organization? Will you create a platform for your industry or piggyback on another company’s platform? To decide you need to first understand the dynamics of building and operating a platform.

The IBV survey team explored that question and found the respondents pretty evenly divided, with 54% reporting they won’t build a platform while the rest expect to build and operate one. This is not a question you can ruminate over endlessly like Hamlet. The advantage goes to those who get there first in their industry segment; as IBV noted, only a few will survive in any one segment. It may come down to how finely you can segment the market for your platform and still maintain a distinct advantage. As CIO reported, the IBV survey found 57 percent of disruptive organizations are adopting a platform business model.

Also rising in importance is the people-talent-skills issue. C-level execs have always given lip service to the importance of people, as in the cliché “people are our greatest asset.” Based on the latest survey, it turns out skills are necessary but not sufficient: they must be accompanied by the right culture. As the survey found, companies that have the right culture in place are more successful; in that case, the skills are just an added adrenalin shot. Still, the execs put people skills in the top three. The IBV analysts conclude that people and talent are coming back. Guess we’re not all going to be replaced soon with AI or cognitive computing, at least not yet.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.

z System-Power-Storage Still Live at IBM

January 5, 2017

A mid-December briefing by Tom Rosamilia, SVP, IBM Systems, reassured some that IBM wasn’t putting its systems and platforms on the back burner after racking up quarterly financial losses for years. Expect new IBM systems in 2017. A few days later IBM announced that Japan-based APLUS Co., Ltd., which operates credit card and settlement service businesses, selected IBM LinuxONE as its mission-critical system for credit card payment processing. Hooray!

IBM LinuxONE Emperor

LinuxONE’s security and industry-leading performance will ensure APLUS achieves its operational objectives as online commerce heats up and companies rely on cloud applications to draw and retain customers. Especially in Japan, where online and mobile shopping has become increasingly popular, the use of credit cards has grown, with more than 66 percent of consumers choosing that method for conducting online transactions. And with 80 percent enterprise hybrid cloud adoption predicted by 2017, APLUS is well positioned to connect cloud transactions leveraging LinuxONE. Throw in IBM’s expansion of blockchain capabilities and the APLUS move looks even smarter.

The growth of spending by international visitors, IBM notes, and the emergence of FinTech firms in Japan have led to a diversification of payment methods to which the local financial industry struggles to respond. APLUS, which issues well-known credit cards such as T Card Plus, plans to offer leading-edge financial services by merging groups to achieve lean operations and improved productivity and efficiency. By choosing to update its credit card payment system with LinuxONE infrastructure, APLUS will benefit from an advanced IT environment that supports its business growth by helping provide near-constant uptime. In addition to updating its server architecture, APLUS has deployed IBM storage to manage mission-critical data: the IBM DS8880 mainframe-attached storage, which delivers integration with IBM z Systems and LinuxONE environments.

LinuxONE, however, was only one part of the IBM Systems story Rosamilia set out to tell. There also is the z13s for encrypted hybrid clouds, the z/OS platform for Apache Spark data analytics, and even more secure cloud services via blockchain on LinuxONE, by way of Bluemix or on premises.

z/OS will get attention in 2017 too. “z/OS is the best damn OLTP system in the world,” declared Rosamilia. He went on to imply that enhancements and upgrades to key z systems were coming in 2017, especially CICS, IMS, and a new release of DB2. Watch for new announcements coming soon as IBM tries to push z platform performance and capacity for z/OS and OLTP.

Rosamilia also talked up the POWER story: Google and Rackspace have been developing OpenPOWER systems for the Open Compute Project; new POWER LC servers are running POWER8 with NVIDIA NVLink-attached accelerators; more innovations are coming through the OpenCAPI Consortium; and IBM has teamed with NVIDIA to deliver PowerAI, part of IBM’s cognitive efforts.

As much as Rosamilia may have wanted to talk about platforms and systems, IBM continues to avoid using such terms. So Rosamilia’s real intent was to discuss z and Power in conjunction with IBM’s strategic initiatives. Remember these: cloud, big data, mobile, analytics. Lately, it seems, those initiatives have been culled down to cloud, hybrid cloud, and cognitive systems.

IBM’s current message is that IT innovation no longer comes from just the processor. Instead, it comes through scaling performance by workload and sustaining leadership through ecosystem partnerships. We’ve already seen some of the fruits of that innovation through the Power community. It would be nice to see some of that coming to the z too, maybe through the Open Mainframe Project. But that isn’t about z/OS; any boost in CICS, DB2, and IMS will have to come from the core z team. The Open Mainframe Project is about Linux on z.

The first glimpse we had of this came last spring in a system dubbed Minsky, which was described back then by commentator Timothy Prickett Morgan. With the Minsky machine, IBM is using NVLink ports on the updated Power8 CPU, which was shown in April at the OpenPower Summit and is making its debut in systems actually manufactured by ODM Wistron and rebadged, sold, and supported by IBM. The NVLink ports are bundled up in a quad to deliver 80 GB/sec bandwidth between a pair of GPUs and between each GPU and the updated Power8 CPU.

The IBM version, Morgan describes, aims to create a very brawny node with very tight coupling of GPUs and CPUs so they can better share memory, have fewer overall GPUs, and more bandwidth between the compute elements. IBM is aiming Minsky at HPC workloads, according to Morgan, but there is no reason it cannot be used for deep learning or even accelerated databases.

Is this where today’s z data center managers want to go?  No one is likely to spurn more performance, especially if it is accompanied with a price/performance improvement.  Whether rank-and-file z data centers are queueing up for AI or cognitive workloads will have to be seen. The sheer volume and scale of expected activity, however, will require some form of automated intelligent assist.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

IBM z System and Power Fuel Hybrid Cloud

September 30, 2016

DancingDinosaur has been cheerleading the z as a cloud player since 2011 and even before, mainly as an ideal vehicle for private clouds. So it should be no surprise to readers that last week IBM announced z and Power cloud-ready systems, services, and solutions for the hybrid cloud environment. My only question: What took them so long?

Power and z accelerate transformation to hybrid cloud

The world, indeed, is changing fast, and IT data centers especially have to scramble to keep up as they try to blend public cloud, private cloud, and traditional IT data centers and integrate it all seamlessly. “Today’s business environment is very dynamic and filled with disruption. A hybrid cloud model enables clients to continuously adapt while also optimizing on-premises investments and delivering the flexibility clients need across IBM Systems and the cloud,” according to Tom Rosamilia, senior vice president, IBM Systems.

At the heart of IBM’s systems for what it calls the hybrid cloud era are three technologies with which we should generally already be familiar:

  • z Systems for cloud. IBM z Systems Operational Insights is a new SaaS-based offering that provides analytic insights on cloud operations for new levels of efficiency and application performance. It allows users to make better business and application decisions based on trends and embedded expertise on performance data through a GUI dashboard. This accompanies IBM OMEGAMON Application Performance Management, which provides quick identification of problem components through end-to-end visibility for clients’ hybrid workloads that include z Systems applications.  In addition, the newly available IBM Common Data Provider enables z Systems operational data to be efficiently consumed in near real time by the clients’ cloud or local enterprise operational analytics platform. An OK start, but you can quibble with OMEGAMON as IBM’s performance analysis and management choice. At its core, this is old technology. DancingDinosaur would prefer, say, Watson.
  • Power Systems for cloud. With integrated OpenStack-based cloud management and elastic consumption models, according to IBM, these new enterprise-class IBM Power Systems enable organizations to transform their IT infrastructure to a local cloud for AIX, IBM i and Linux workloads and extend them with rapid access to compute services in the IBM Cloud. DancingDinosaur covered the new LC here.
  • IBM Spectrum Copy Data Management and Protect. This brings a new solution that drives operational and development agility and efficiency across new and traditional applications that allow detailed, easy management of data copies.  Additionally, IBM Spectrum Protect has expanded its extensive hybrid cloud solution integration with cloud object storage options for use in hybrid cloud deployments.

About the only thing missing above is LinuxONE but that will come up below when IBM gets to openness, which is critical to hybrid clouds. In its announcement, IBM also promised a series of new and expanded collaborations with IBM Systems for hybrid cloud environments, including:

  • Canonical: Canonical and IBM are extending their ongoing alliance to make Ubuntu OpenStack available today on LinuxONE, z Systems, Power Systems, and OpenPOWER-based systems, including the new line of LC servers. This enables organizations to leverage Canonical’s portfolio across the three platforms with simplified and automated OpenStack management.
  • Hortonworks: IBM and Hortonworks, a Hadoop platform provider, are jointly entering the marketplace to make the Hortonworks Hadoop distribution available on POWER. Whoopee, Hadoop already runs native on z.
  • Mirantis: Mirantis and IBM are collaborating to develop reference architectures enabling Mirantis OpenStack to manage compute nodes hosted on IBM Power Systems servers, and to validate a host of core applications to run its OpenStack private cloud. With this integration, Mirantis will now bring its OpenStack based private cloud management to the POWER platform. This enables organizations to leverage the efficiency of IBM Power Systems for data-driven workloads in a seamless and compatible way for their data center through Mirantis’ OpenStack cloud management.
  • NGINX: NGINX’s application delivery platform now supports servers based on IBM’s POWER architecture with the latest release of its commercial load balancer and web accelerator software, NGINX Plus R10. The combination of NGINX Plus and POWER brings new agility to enterprises, allowing them to scale their infrastructure and application delivery solutions across any environment – public, private, and hybrid cloud; containers; and bare metal – providing a consistent user experience.
  • Red Hat: Red Hat and IBM are expanding their long-standing alliance to better help organizations embrace hybrid cloud. Through joint engineering and deeper product collaboration, the two companies plan to deliver solutions built on key components of Red Hat’s portfolio of open source products, including Red Hat Enterprise Linux (RHEL). By the way, RHEL is the #2 Linux distro on the z. RHEL also enables IBM Power Systems as a featured component of Red Hat’s hybrid cloud strategy spanning platform infrastructure located both on and off an enterprise’s premises.

There is enough openness to enable you to do what you need so DancingDinosaur has no real complaints with IBM’s choices above. Just wish it was more z- and LinuxONE-centric.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer and mainframe bigot. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

Meet the POWER9 Chip Family

September 2, 2016

When you looked at a chip in the past you were primarily concerned with two things: the speed of the chip, usually expressed in GHz, and how much power it consumed. Today the IBM engineers preparing the newest POWER chip, the 14nm POWER9, are tweaking the chip for the different workloads it might run, such as cognitive or cloud, for different deployment options, such as scale-up or scale-out, and for a host of other attributes. EE Times described it in late August from the Hot Chips conference, where it was publicly unveiled.

IBM POWER9 chip

IBM describes it as a chip family but maybe it’s best described as the product of an entire chip community, the OpenPOWER Foundation. Innovations include CAPI 2.0, New CAPI, Nvidia’s NVLink 2.0, PCIe Gen4, and more. It spans a range of acceleration options from HSDC clusters to extreme virtualization capabilities for the cloud. POWER9 is not just about high speed transaction processing; IBM wants the chip to interpret and reason, ingest and analyze.

POWER has gone far beyond the POWER chips that enabled Watson to (barely) beat the human Jeopardy champions. Going forward, IBM is counting on POWER9 and Watson to excel at cognitive computing, a combination of high speed analytics and self-learning. POWER9 systems should not only be lightning fast but get smarter with each new transaction.

For z System shops, POWER9 offers a glimpse into the design thinking IBM might follow with the next mainframe, probably the z14 that will need comparable performance and flexibility. IBM already has set up the Open Mainframe Project, which hasn’t delivered much yet but is still young. It took the Open POWER group a couple of years to deliver meaningful innovations. Stay tuned.

The POWER9 chip is incredibly dense (below). You can deploy it in either a scale-up or scale-out architecture, with a choice of two versions: one for two-socket servers with 8 DDR4 ports and another for multiple chips per server with buffered DIMMs.

IBM POWER9 silicon layout

IBM describes the POWER9 as a premier acceleration platform. That means it offers extreme processor/accelerator bandwidth and reduced latency; coherent memory and virtual addressing capability for all accelerators; and robust accelerated compute options through the OpenPOWER community.

It includes state-of-the-art I/O and acceleration attachment signaling:

  • PCIe Gen 4 x 48 lanes – 192 GB/s duplex bandwidth
  • 25G Link x 48 lanes – 300 GB/s duplex bandwidth

And robust accelerated compute options based on open standards, including:

  • On-Chip Acceleration—Gzip x1, 842 Compression x2, AES/SHA x2
  • CAPI 2.0—4x bandwidth of POWER8 using PCIe Gen 4
  • NVLink 2.0—next generation of GPU/CPU bandwidth and integration using 25G Link
  • New CAPI—high bandwidth, low latency and open interface using 25G Link
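The duplex figures in the signaling list above are easy to sanity-check. Here is a quick back-of-the-envelope calculation using raw signaling rates; ignoring encoding overhead is an assumption, and real delivered bandwidth runs slightly lower:

```python
# Raw-rate check of the signaling numbers above. PCIe Gen 4 signals at
# 16 GT/s per lane, the 25G link at 25 Gb/s per lane; "duplex" counts
# both directions.
def duplex_gb_per_s(lanes: int, gbits_per_lane: float) -> float:
    return lanes * (gbits_per_lane / 8) * 2

print(duplex_gb_per_s(48, 16))  # PCIe Gen 4 x48 -> 192.0 GB/s
print(duplex_gb_per_s(48, 25))  # 25G link x48   -> 300.0 GB/s
```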

In scale-out mode it employs direct attached memory through 8 direct DDR4 ports, which deliver:

  • Up to 120 GB/s of sustained bandwidth
  • Low latency access
  • Commodity packaging form factor
  • Adaptive 64B / 128B reads

In scale-up mode it uses buffered memory through 8 buffered channels to provide:

  • Up to 230GB/s of sustained bandwidth
  • Extreme capacity – up to 8TB / socket
  • Superior RAS with chip kill and lane sparing
  • Compatible with POWER8 system memory
  • Agnostic interface for alternate memory innovations

POWER9 was publicly introduced at the Hot Chips conference in late August. Commentators writing in EE Times noted that POWER9 could become a breakout chip, seeding new OEM and accelerator partners and rejuvenating IBM’s efforts against Intel in high-end servers. To achieve that kind of performance IBM deploys large chunks of memory—including a 120 Mbyte embedded DRAM in shared L3 cache—while riding a 7 Tbit/second on-chip fabric. POWER9 should deliver as much as 2x the performance of POWER8 or more when the new chip arrives next year, according to Brian Thompto, a lead architect for the chip, in published reports.

As noted above, IBM will release four versions of POWER9. Two will use eight threads per core and 12 cores per chip geared for IBM’s Power virtualization environment; two will use four threads per core and 24 cores/chip targeting Linux. Both will come in two versions — one for two-socket servers with 8 DDR4 ports and another for multiple chips per server with buffered DIMMs.
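Notice that the two core mixes slice the same thread budget two ways, as a quick bit of arithmetic shows (the SMT labels are standard POWER terminology, supplied here for clarity):

```python
# Both POWER9 core mixes yield the same hardware-thread count per chip.
for name, threads_per_core, cores in [("SMT8 / PowerVM", 8, 12),
                                      ("SMT4 / Linux", 4, 24)]:
    total = threads_per_core * cores
    print(f"{name}: {threads_per_core} x {cores} = {total} threads/chip")
# SMT8 / PowerVM: 8 x 12 = 96 threads/chip
# SMT4 / Linux:   4 x 24 = 96 threads/chip
```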

The diversity of choices, according to Hot Chips observers, could help attract OEMs. IBM has been trying to encourage others to build POWER systems through its OpenPOWER group that now sports more than 200 members. So far, it’s gaining most interest from China where one partner plans to make its own POWER chips. The use of standard DDR4 DIMMs on some parts will lower barriers for OEMs by enabling commodity packaging and lower costs.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

 

 

IBM’s Latest Flash Announcements Target Dell/EMC

August 26, 2016

The newest IBM storage, announced here earlier this week, aims to provide small, midsize, and global enterprises with virtualized SDS for primary storage and for cloud or cognitive applications and workloads. Central to the effort is IBM Spectrum Virtualize, which automates Storwize all-flash solutions intended to reduce the cost and complexity of data center and cloud environments. Entry pricing for the new storage starts at $19,000, which IBM describes as cost-conscious.

IBM All-Flash for the midrange

In addition, IBM announced Flash In, a no-cost storage migration program targeting Dell/EMC customers that IBM hopes will bail out of the merged operation.

SDS in the form of IBM Spectrum Virtualize is central to making IBM’s latest all-flash offerings work for the broad set of use cases IBM envisions.  As IBM puts it: organizations today are embracing all-flash storage to deliver speed and response times necessary to support growing data workloads across public, private, and hybrid cloud environments, as well as the emerging demands of cognitive applications and workloads.

IBM Spectrum Virtualize promises to improve storage efficiency through features such as real-time data compression, thin provisioning, and snapshotting across nearly 400 different storage arrays from a multitude of vendors. That means organizations can leverage, even repurpose, physical storage capacity they already have as they scramble to meet the storage needs of new workloads.

Spectrum Virtualize also optimizes data security, reliability, and operational costs. For example, the software automatically tiers and migrates data from one storage array to another, provides secure data-at-rest encryption, and remotely replicates data for disaster recovery and business continuity.

The announcement centers on two products, the enterprise-class IBM Storwize V7000F and the midsize IBM Storwize V5030F, which promise enterprise-class availability and function in mid-range and entry-level all-flash storage arrays. At the same time, both offer greater performance and require less time to provision and optimize. Coincidentally, IBM has just been recognized, for the third year in a row, as a leader for flash storage in the Gartner Magic Quadrant for Solid-State Arrays (SSA).

Specifically, the all-flash IBM Storwize V7000F improves performance by up to 45 percent and supports four times the clustering for scale-out and scale-up growth to help organizations manage rapidly growing datasets. The midrange and entry-level all-flash IBM Storwize V5030F offers high performance and availability at a discounted entry point (noted above) to help clients control costs.

The all-flash Storwize V7000F and Storwize V5030F are also built to manage a variety of primary storage workloads, from database management systems, such as SQL Server and MySQL, to digital media sources that include broadcast, real-time streaming, and video surveillance. The new technology can also handle huge data volumes, such as IoT data.

Given the product line confusion that typically characterizes big technology platform mergers, IBM is looking to entice some Dell or, more likely, EMC storage customers to the new Storwize offerings. To that end, IBM is offering what it describes as a no-cost migration initiative for organizations that are not current IBM customers and seeking a smooth transition path from their EMC or Dell storage to the IBM family of all-flash arrays. BTW: EMC is a leading provider of z System storage.

While too early to spot any Dell or EMC customer response, one long time IBM customer, Royal Caribbean Cruises Ltd, has joined the flash storage party. “With ever increasing volumes of customer and operational information, flexible and secure data storage is crucial to keeping our operation afloat (hope the pun was intended) as our company expands to hundreds of destinations worldwide,” said Leonardo Irastorza, Technology Revitalization & Global Shared Services Manager. The cruise line is counting on IBM flash storage to play a critical role, especially when it comes to ensuring exceptional guest experiences across its brands.

And more is coming: IBM released the following statement of direction: IBM intends to enhance IBM Spectrum Virtualize with additional capabilities for flash drive optimization and management. These capabilities are intended to help increase the service life and usability of flash drives, particularly read-intensive flash drives. The planned capabilities will likely include:

  • Data deduplication for workloads and use cases where it complements IBM’s existing industry leading compression technology
  • Improved flash memory management (mainly for garbage collection)
  • Additional flash drive wear management and reporting.

By implementing these capabilities in IBM Spectrum Virtualize, IBM will make them available across the IBM Storwize family, FlashSystem V9000, and SAN Volume Controller offerings, as well as VersaStack (the IBM/Cisco collaboration) and IBM PurePower systems.
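Deduplication itself is conceptually simple, whatever IBM’s actual implementation turns out to look like: store each unique block once and let repeated blocks cost only a reference. A toy sketch of the content-hashing idea, purely illustrative and unrelated to Spectrum Virtualize internals:

```python
# Toy content-addressed dedup: a logical volume is a list of block
# hashes; the store keeps one physical copy per unique block.
import hashlib

BLOCK = 4096
store = {}    # hash -> block bytes, stored once
volume = []   # logical volume as a sequence of block hashes

def write(data: bytes) -> None:
    for i in range(0, len(data), BLOCK):
        block = data[i:i + BLOCK]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)  # physical write only if new
        volume.append(digest)

write(b"A" * BLOCK * 3 + b"B" * BLOCK)
print(f"{len(volume)} logical blocks, {len(store)} physical")  # 4 vs 2
```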

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM’s DeepFlash 150 Completes Its Flash Lineup for Now

July 29, 2016

Two years ago DancingDinosaur wrote about new IBM flash storage for the mainframe. That was about the DS8870, featuring six-nines (99.9999 percent) availability and real-time compression. Then this past May DancingDinosaur reported on another new IBM all-flash initiative, including the all-flash IBM DS8888 for the z, which also boasts six-nines availability. Just this week IBM announced it is completing its flash lineup with the IBM DeepFlash 150, intended as a building block for SDS (software defined storage) infrastructures.

IBM DeepFlash 150, courtesy of IBM

As IBM reports, the DeepFlash 150 does not use conventional solid-state drives (SSD). Instead, it relies on a systems-level approach that enables organizations to manage much larger data sets without having to manage individual SSD or disk drives. DeepFlash 150 comes complete with all the hardware necessary for enterprise and hyper-scale storage, including up to 64 purpose-engineered flash cards in a 3U chassis and 12-Gbps SAS connectors for up to eight host servers. The wide range of IBM Spectrum Storage and other SDS solutions available for DeepFlash 150 provides flash-optimized scale out and management along with large capacity for block, file and object storage.

The complication for z System shops is that you access the DeepFlash 150 through IBM Spectrum Scale. Apparently you can’t just plug the DeepFlash 150 into the z the way you would plug in the all-flash DS8888. IBM Spectrum Scale works with Linux on z Systems servers or IBM LinuxONE systems running RHEL or SLES. Check out the documentation here.

As IBM explains in the Red Book titled IBM Spectrum Scale (GPFS) for Linux on z Systems: IBM Spectrum Scale provides a highly available clustering solution that augments the strengths of Linux on z by helping the z data center control costs and achieve higher levels of quality of service. Spectrum Scale, based on IBM General Parallel File System (GPFS) technology, is a high performance shared-disk file management solution that provides fast, reliable access to data from multiple nodes in a cluster environment. Spectrum Scale also allows data sharing in a mixed platform environment, which can provide benefits in cloud or analytics environments by eliminating the need of transferring data across platforms. When it comes to the DeepFlash 150 IBM is thinking about hyperscale data centers.

Hyperscale data centers can’t absorb the costs of constructing, managing, maintaining, and cooling massive hyperscale environments that use conventional mechanical storage, according to IBM. Those costs are driving the search for storage with a smaller physical footprint, lower costs, greater density, and, of course, much higher performance.

Enter DeepFlash 150, which introduces what IBM considers breakthrough economics for active data sets. The basic DeepFlash 150 hardware platform is priced under $1/GB. For big data deployments IBM recommends IBM Spectrum Scale with DeepFlash 150, providing customers with the overlying storage services and functionality critical for optimization of their big data workloads.

But even at $1/GB DeepFlash 150 isn’t going to come cheap. For starters consider how many gigabytes are in the terabytes or petabytes you will want to install. You can do the math. Even at $1/GB this is going to cost. Then you will need IBM Spectrum Scale. With DeepFlash 150 IBM did achieve extreme density of up to 170TB per rack unit, which adds up to a maximum 7PB of flash in a single rack enclosure.
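Here is that math, with two loudly labeled assumptions: decimal storage units (as vendors count them) and a standard 42U rack.

```python
# What $1/GB and 170 TB per rack unit imply for a full rack.
TB_PER_RU = 170
RACK_UNITS = 42            # assumption: standard 42U rack
GB_PER_TB = 1000           # assumption: decimal (vendor) units
PRICE_PER_GB = 1.00        # IBM's stated entry price

total_tb = TB_PER_RU * RACK_UNITS
print(f"{total_tb / 1000:.1f} PB per rack")            # ~7.1 PB
print(f"${total_tb * GB_PER_TB * PRICE_PER_GB:,.0f}")  # $7,140,000
```

That lines up with IBM’s maximum of roughly 7PB per rack, and it makes the point: even breakthrough economics on a petabyte scale is a seven-figure purchase before Spectrum Scale licensing.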

IBM Spectrum Scale and the DeepFlash 150 are intended to support a wide range of file, object, and Hadoop Distributed File System (HDFS) analytics workloads. According to IBM, as a true SDS solution IBM Spectrum Scale can utilize any appropriate hardware and is designed specifically to maximize the benefits of hyperscale storage systems like DeepFlash 150. Using a scale-out architecture, IBM Spectrum Scale can add servers or multiple storage types and incorporate them automatically into a single managed resource to maximize performance, efficiency, and data protection.

Although DeepFlash 150 can be used with a private cloud, IBM seems to be thinking more in terms of hybrid clouds. To address today’s need for seamlessly integrating high-performance enterprise storage such as DeepFlash 150 with the nearly unlimited resources and capabilities of the cloud, IBM Spectrum Scale offers transparent cloud tiering to place data on cloud-based object storage or in a public cloud service. As IBM explains, the transparent cloud tiering feature of IBM Spectrum Scale can connect on-premises storage such as DeepFlash 150 directly to object storage or a commodity-priced cloud service. This allows enterprises to simultaneously leverage the economic, collaboration, and scale benefits of both on-premises and cloud storage while providing a single, powerful view of all data assets.

A Tech Target report on enterprise flash storage profiled 15 flash storage product lines. In general, the products claim maximum read IOPS ranging from 200,000 to 9 million, peak read throughput from 2.4 GBps to 46 GBps, and read latencies from 50 microseconds to 1 millisecond. The guide comes packed with a ton of caveats. And that’s why DancingDinosaur doesn’t think the DeepFlash 150 is the end of IBM’s flash efforts. Stay tuned.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.