Posts Tagged ‘LinuxONE’

Hybrid Cloud to Streamline IBM Z

June 27, 2018

2020 is the year, according to IDC, when combined IT infrastructure spending on private and public clouds will eclipse spending on traditional data centers. The researcher predicts the public cloud will account for 31.68 percent of IT infrastructure spending in 2020 and private clouds will take a 19.82 percent slice, together totaling more than half (51.5 percent) of all infrastructure spending for the first time, with the rest going to traditional data centers.

Source: courtesy of IBM

There is no going back. By 2021 IDC expects the balance to tilt further toward the cloud, with combined public and private cloud dollars making up 53.15 percent of infrastructure spending. Enterprise spending on cloud, according to IDC, will grow to over $530 billion as more than 90 percent of enterprises use a mix of multiple cloud services and platforms, both on and off premises.

Technology customers want choices. They want to choose their access device, interface, deployment options, cost and even their speed of change. Luckily, today’s hybrid age enables choices. Hybrid clouds and multi-cloud IT offer the most efficient way of delivering the widest range of customer choices.

For Z shops, this shouldn't come as a complete surprise. IBM has been preaching the hybrid gospel for years, at least since x86 machines began making significant inroads into its platform business. The basic message has always been the same: center the core of your business on the mainframe and then build around it, using x86 if you must, but now also LinuxONE and hybrid clouds, both public and on premises.

For many organizations a multi-cloud strategy using two or more different clouds, public or on premises, offers the fastest and most efficient way of delivering maximum choice. For example, one cloud might handle compute while another handles storage. Or an organization might dedicate different clouds to different functions: one for finance, another for R&D, and yet another for DevOps.

The reasoning behind a multi-cloud strategy can also vary, ranging from risk mitigation to the need for specialized functionality to cost management, analytics, security, flexible access, and more.

Another reason for a hybrid cloud strategy, which should resonate with DancingDinosaur readers, is modernizing legacy systems. According to Gartner, by 2020, every dollar invested in digital business innovation will require enterprises to spend at least three times that to continuously modernize the legacy application portfolio. In the past, such legacy application portfolios have often been viewed as a problem subjected to large-scale rip-and-replace efforts in desperate, often unsuccessful attempts to salvage them.

With the growth of hybrid clouds, data center managers instead can manage their legacy portfolio as an asset by mixing and matching capabilities from various cloud offerings to execute business-driven modernization. This will typically include microservices, containers, and APIs to leverage maximum value from the legacy apps, which will no longer be an albatross but a valuable asset.

While the advent of multi-clouds or hybrid clouds may appear to complicate an already muddled situation, they actually provide more options and choices as organizations seek the best solution for their needs at their price and terms.

With the Z this may be easier done than it initially sounds. “Companies have lots of records on Z, and the way to get to these records is through APIs, particularly REST APIs,” explains Juliet Candee, IBM Systems Business Continuity Architecture. Start with the IBM Z Hybrid Cloud Architecture. Then, begin assembling catalogs of APIs and leverage z/OS Connect to access popular IBM middleware like CICS. By using z/OS Connect and APIs through microservices, you can break monolithic systems into smaller, more composable and flexible pieces that contain business functions.
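Since Candee points to REST APIs as the doorway to Z records, here is a minimal sketch of what a consumer of such an API looks like, in Python. The host name, path, and credentials are hypothetical placeholders; the real endpoint shape depends on how your z/OS Connect service maps to CICS.

```python
# Minimal sketch: calling a CICS-backed service exposed as a REST API
# through z/OS Connect. Host, path, and credentials are hypothetical.
import requests

ZCONNECT_BASE = "https://zconnect.example.com:9443"  # hypothetical endpoint

def get_customer_record(customer_id: str) -> dict:
    """Fetch a customer record held in CICS via its z/OS Connect API."""
    resp = requests.get(
        f"{ZCONNECT_BASE}/customers/{customer_id}",
        auth=("apiuser", "apipass"),  # placeholder credentials
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    print(get_customer_record("0001234"))
```

The point of the microservices approach is that each such call touches one narrow business function, so the monolith behind it can be decomposed piece by piece.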

Don’t forget LinuxONE, another Z but optimized for Linux and available at a lower cost. With the LinuxONE Rockhopper II, the latest slimmed down model, you can run 240 concurrent MongoDB databases executing a total of 58 billion database transactions per day on a single server. Accelerate delivery of your new applications through containers and cloud-native development tools, with up to 330,000 Docker containers on a single Rockhopper II server. Similarly, lower TCO and achieve a faster ROI with up to 65 percent cost savings over x86. And the new Rockhopper II’s industry-standard 19-inch rack uses 40 percent less space than the previous Rockhopper while delivering up to 60 percent more Linux capacity.

This results in what Candee describes as a new style of building IT that involves much smaller components, which are easier to monitor and debug. Then, connect it all to IBM Cloud on Z using secure Linux containers. This could be a hybrid cloud combining IBM Cloud Private and an assortment of public clouds along with secure zLinux containers as desired.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.

Please note: DancingDinosaur will be away for the first 2 weeks of July. The next piece should appear the week of July 16 unless the weather is unusually bad.

IBM Expands and Enhances its Cloud Offerings

June 15, 2018

IBM announced 18 new availability zones in North America, Europe, and Asia Pacific to bolster its IBM Cloud business and try to keep pace with AWS, the public cloud leader, and Microsoft. The new availability zones are located in Europe (Germany and UK), Asia-Pacific (Tokyo and Sydney), and North America (Washington, DC and Dallas).

IBM cloud availability zone, Dallas

In addition, organizations will be able to deploy multi-zone Kubernetes clusters across the availability zones via the IBM Cloud Kubernetes Service. This will simplify how they deploy and manage containerized applications and add further consistency to their cloud experience. Furthermore, deploying multi-zone clusters will have minimal impact on performance, about 2 ms latency between availability zones.
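As a rough illustration of what multi-zone awareness looks like from the outside, the sketch below uses the official Kubernetes Python client to count worker nodes per zone. It assumes your kubectl context already points at the cluster and reads the standard zone label Kubernetes applied to nodes at the time.

```python
# Sketch: check that a cluster's worker nodes span multiple
# availability zones. Assumes ~/.kube/config is already set up.
from collections import Counter
from kubernetes import client, config

ZONE_LABEL = "failure-domain.beta.kubernetes.io/zone"  # standard node label

def zones_in_cluster() -> Counter:
    config.load_kube_config()
    nodes = client.CoreV1Api().list_node().items
    return Counter(n.metadata.labels.get(ZONE_LABEL, "unknown") for n in nodes)

if __name__ == "__main__":
    for zone, count in zones_in_cluster().items():
        print(f"{zone}: {count} node(s)")
```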

An availability zone, according to IBM, is an isolated instance of a cloud inside a data center region. Each zone brings independent power, cooling, and networking to strengthen fault tolerance. While IBM Cloud already operates in nearly 60 locations, the new zones add even more capacity and capability in these key centers. This global cloud footprint becomes especially critical as clients look to gain greater control of their data in the face of tightening data regulations, such as the European Union’s new General Data Protection Regulation (GDPR). See DancingDinosaur June 1, IBM preps z world for GDPR.

In its Q1 earnings IBM reported cloud revenue of $17.7 billion over the past year, up 22 percent over the previous year, but that includes two quarters of outstanding Z revenue that is unlikely to be sustained, at least until the next Z comes out, which is at least a few quarters away. AWS meanwhile reported quarterly revenue up 49 percent to $5.4 billion, while Microsoft recently reported 93 percent growth for Azure revenue.

That leaves IBM trying to catch up the old-fashioned way: by adding new cloud capabilities, enhancing existing ones, and attracting more clients to its cloud however it is delivered. For example, IBM announced it is the first cloud provider to let developers run managed Kubernetes containers directly on bare metal servers with direct access to GPUs, which improves the performance of machine learning applications, critical to any AI effort. Along the same lines, IBM will extend IBM Cloud Private and IBM Cloud Private for Data and middleware to Red Hat's OpenShift Container Platform and Certified Containers. Red Hat already is a leading provider of enterprise Linux to Z shops.

IBM has also expanded its cloud offerings to support the widest range of platforms. Not just Z, LinuxONE, and Power9 for Watson, but also x86 and a variety of non-IBM architectures and platforms. Similarly, notes IBM, users have gotten accustomed to accessing corporate databases wherever they reside, but proximity to cloud data centers still remains important. Distance to data centers can have an impact on network performance, resulting in slow uploads or downloads.

Contrary to simplifying things, the propagation of more and different types of clouds and cloud strategies complicates an organization's cloud approach. Companies today already manage complex, hybrid public-private cloud environments, while 80 percent of the world's data sits on private servers. It just is not practical, or even permissible in some cases, to move all that data to the public cloud. Other organizations run very traditional workloads that they're looking to modernize over time as they acquire new cloud-native skills. The new IBM cloud centers can host data in multiple formats and databases, including DB2, SQLBase, PostgreSQL, or NoSQL, all exposed as cloud services if desired.

The IBM cloud centers, the company continues, also promise common logging and services between the on-prem environment and IBM’s public cloud environment. In fact, IBM will make all its cloud services, including the Watson AI service, consistent across all its availability zones, and offer multi-cluster support, in effect enabling the ability to run workloads and do backups across availability zones.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.

IBM Preps Z World for GDPR

June 1, 2018

Remember Y2K? That was when calendars rolled over from 1999 to 2000. It was hyped as an event that would screw up computers worldwide. Sorry, planes did not fall out of the sky overnight (or at all), elevators didn't plummet to the basement, and hospitals and banks did not cease functioning. DancingDinosaur did OK writing white papers on preparing for Y2K. Maybe nothing bad happened because companies read papers like those and worked on changing their date fields.

Starting May 25, 2018, GDPR became the new Y2K. GDPR, the European Union's General Data Protection Regulation, an overhaul of existing EU data protection rules, promises to strengthen and unify those laws for EU citizens and for organizations anywhere that collect and exchange data involving those citizens. That is probably most of the readers of DancingDinosaur. GDPR went into effect at the end of May and generated a firestorm of trade and business press, but nothing near what Y2K did. The primary GDPR objectives are to give citizens control over their personal data and simplify the regulatory environment for international business.

According to Bob Yelland, author of How it Works: GDPR, a Little Bee Book, 50% of global companies say they will struggle to meet the rules set out by Europe unless they make significant changes to how they operate, and this may lead many companies to appoint a Data Protection Officer, which the rules recommend. Doesn't it feel a little like Y2K again?

The Economist in April wrote: “After years of deliberation on how best to protect personal data, the EC is imposing a set of tough rules. These are designed to improve how data are stored and used by giving more control to individuals over their information and by obliging companies to handle what data they have more carefully.”

As you would expect, IBM created a GDPR framework with five phases to help organizations achieve readiness: Assess, Design, Transform, Operate, and Conform. The goal of the framework is to help organizations manage security and privacy effectively in order to reduce risks and therefore avoid incidents.

DancingDinosaur is not an expert on GDPR in any sense, but from reading GDPR documents, the Z with its pervasive encryption and automated secure key management should eliminate many concerns. The rest probably can be handled by following good Z data center policy and practices.

There is only one area of GDPR, however, that may be foreign to North American organizations—the parts about respecting and protecting the private data of individuals.

As The Economist wrote: GDPR obliges organizations to create an inventory of the personal data they hold. With digital storage becoming ever cheaper, companies often keep hundreds of databases, many of which are long forgotten. To comply with the new regulation, firms have to think harder about data hygiene. This is something North American companies probably have not thought enough about.

IBM recommends you start by assessing your current data privacy situation under all of the GDPR provisions. In particular, discover where protected information is located in your enterprise. Under GDPR, individuals have rights to consent to access, correct, delete, and transfer personal data. This will be new to most North American data centers, even the best managed Z data centers.
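The discovery step lends itself to automation. Below is a toy sketch of the idea in Python: scan free-text fields for likely personal data. Real discovery tools are far more thorough; the patterns and sample rows here are purely hypothetical.

```python
# Toy illustration of the GDPR discovery step: flag fields that look
# like personal data. Patterns and sample data are hypothetical.
import re

PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\+?\d[\d\s()-]{7,}\d\b"),
}

def scan_rows(rows):
    """Yield (row_index, pii_type, match) for every suspected hit."""
    for i, text in enumerate(rows):
        for pii_type, pattern in PII_PATTERNS.items():
            for match in pattern.findall(text):
                yield i, pii_type, match

sample = ["order 42 shipped", "contact jane.doe@example.com or +44 20 7946 0958"]
for row, kind, value in scan_rows(sample):
    print(f"row {row}: {kind} -> {value}")
```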

Then, IBM advises, assess the current state of your security practices, identify gaps, and design security controls to plug those gaps. In the process find and prioritize security vulnerabilities, as well as any personal data assets and affected systems. Again, you will want to design appropriate controls. If this starts sounding a little too complicated just turn it over to IBM or any of the handful of other vendors who are racing GDPR readiness services into the market. IBM offers Data Privacy Consulting Services along with a GDPR readiness assessment.

Of course, you can just outsource the whole effort to IBM or others. Here again IBM points to its five-phase framework, aimed at helping organizations subject to GDPR manage security and privacy, reduce risk, and avoid problems.

GDPR is not going to be fun, especially the obligation to comply with each individual’s rights regarding their data. DancingDinosaur suspects it could even get downright ugly.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

IBM Eager to Move Blockchain to Next Level

May 24, 2018

Blockchain, as IBM sees it, has moved past the days of proof of concept and is now focused on platform industrialization and best practices. As for industries, financial systems remain its primary target although areas like digital identity, provenance, and asset management are gaining interest fast. To that end the company is offering the IBM Blockchain Platform on a free trial basis. The Platform promises to make any developer a blockchain developer fast. According to IBM, hundreds of companies on dozens of blockchain networks are solving business challenges across varied industries, not just financial services.

IBM has been a blockchain pioneer, especially in the enterprise segment, but that segment is getting more crowded. Specifically, the Enterprise Ethereum Alliance (EEA), which lists over 500 members, recently made public the release of its Enterprise Ethereum Architecture Stack. According to EEA, the stack defines the building blocks needed to drive the Web 3.0 era of decentralized, connective intelligence that will work anywhere and is capable of facilitating smart contracts without intermediaries. The stack, available as a free public document to download here, incorporates components developed by the EEA.

IBM is not saying much about EEA, preferring instead to focus on Hyperledger. This much, however, an IBMer was willing to say: Our main POV is that Hyperledger is the only blockchain platform designed for enterprises from the ground up. Ethereum is better suited for cryptocurrencies and ICOs. But we know they are working to move into the enterprise space. Interestingly, the 500 companies listed as EEA members include many of IBM's consulting and vendor peers, but not IBM.

The IBMer continued: we have had many clients start with Ethereum and discover that it does not meet their business needs … and then they move to Hyperledger, which as a permissioned blockchain network does meet enterprise business needs. Permissioned means you have to be approved to join the network, presumably making it more secure.

IBM clearly has locked onto the idea of a permissioned blockchain network as important, at least among enterprise customers. The IBMer continued: we state that you can have a public permissioned blockchain network—and you’ll start to see more of these around payments.

The IBMer noted: it's unclear what Ethereum's permissioning method is or what its governance model is, and there is a lack of agreement around new types of consensus.

In case you have missed the blockchain buzz of the past few years: blockchain is a shared, immutable ledger for recording the history of transactions. Immutable means that once a transaction has been accepted by the other nodes on the blockchain it cannot be removed or deleted. It sits there as a permanent record for the life of the blockchain. Of course, corrections can be made, but they are added as another immutable record. As such, depending on the number of nodes in the blockchain, you can end up with a long chain of immutable records, sort of an auditor's dream or nightmare.
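For the conceptually minded, here is a toy model of that immutability in Python: each record embeds the hash of its predecessor, so altering any past entry breaks the chain. This illustrates the idea only; it is not Hyperledger itself.

```python
# Toy append-only ledger: each block carries the hash of the previous
# block, so any tampering with history is detectable.
import hashlib, json, time

class Ledger:
    def __init__(self):
        self.chain = [{"index": 0, "data": "genesis", "prev": "0" * 64}]

    def _hash(self, block) -> str:
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    def append(self, data) -> None:
        # Corrections are added as new records; nothing is rewritten in place.
        self.chain.append({"index": len(self.chain), "time": time.time(),
                           "data": data, "prev": self._hash(self.chain[-1])})

    def verify(self) -> bool:
        return all(blk["prev"] == self._hash(self.chain[i])
                   for i, blk in enumerate(self.chain[1:]))

ledger = Ledger()
ledger.append({"order": 17, "status": "delivered"})
ledger.append({"order": 17, "status": "correction: partial delivery"})
print(ledger.verify())  # True; edit chain[1] by hand and it turns False
```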

A business blockchain, such as IBM Blockchain and the Linux Foundation’s Hyperledger Project, provides a permissioned network with known identities. And unlike Bitcoin, there is no need for cryptocurrency exchange. Where any payments are required, you make them as you would in any transaction.  Except with blockchain the transaction exists forever as an immutable record. From initial order to delivery acceptance to payment it all exists as a permanent record. Warning: sleazy business partners should avoid blockchain transactions since you won’t be able to weasel out so easily.

To make blockchain easy, IBM is developing what it describes as enterprise-ready solutions. For example, to address the provenance of diamonds and avoid buying what are called blood diamonds, IBM has backed TrustChain, which tracks six styles of diamond and gold engagement rings on a blockchain network, enabling dealers to know who handled the diamond from the mine to processor to the buyer’s finger.

Similarly, IBM’s Food Trust blockchain includes Dole, Driscoll’s, Golden State Foods, Kroger, McCormick and Company, McLane Company, Nestle, Tyson Foods, Unilever, and Walmart to address food safety. So the next time you eat something and get a stomach ache, just turn to Food Trust to track down the culprit.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

High Cost of Ignoring Z’s Pervasive Encryption

May 17, 2018

That cost was spelled out at IBM's Think conference this past spring. David Bruce, who leads IBM's strategies for security on IBM Z and LinuxONE, writes that data breaches are expensive, costing $3.6 million on average, and hoping to avoid one by doing business as usual is a bad bet. Bruce reports breaches are increasingly likely: an organization has a 28 percent chance of being breached in the next 24 months. You can find Bruce's comments on security and pervasive encryption here.

9 million data records were compromised in 2015

Were any of those 9 million records from your organization? Did you end up on the front page of the newspaper? To stay out of the data breach headlines, organizations require security solutions that protect enterprise and customer data at minimal cost and effort, Bruce observes.

Encryption is the preferred solution, but it is costly, cumbersome, labor-intensive, and hit-or-miss. It is hit-or-miss because the overhead involved forces organizations to choose what to encrypt and what to skip. You have to painstakingly classify the data in terms of risk, which takes time and only adds to the costs. Outside of critical revenue transactions or key intellectual property—no brainers—you will invariably choose wrong and miss something you will regret when it shows up on the front page of the New York Times.

Adding to the cost is the compliance runaround. Auditors are scheduled to visit or maybe they aren’t even scheduled and just drop in; you now have to drop whatever your staff was hoping to do and gather the necessary documentation to prove your data is safe and secure.  Do you really need this? Life is too short as it is.

You really want to put an end to the entire security compliance runaround and all the headaches it entails. But more than that, you want protected, secure data; all data, all the time.  When someone from a ransomware operation calls asking for hundreds or thousands of dollars to get your data back you can laugh and hang up the phone. That’s what Bruce means when he talks about pervasive encryption. All your data is safely encrypted with its keys protected from the moment it is created until the moment it is destroyed by you. And you don’t have to lift a finger; the Z does it all.

That embarrassing news item about a data breach? It won't happen to you either. Most important of all, customers will never see one and get upset.

In fact, at Think, Forrester discussed the customer-obsessed approach leading organizations are adopting to spur growth. To obsess over customers, explained Bruce, means taking great care to protect the customer's sensitive data, which forms the cornerstone of Forrester's customer-obsessed zero trust security framework. The framework includes, among other security elements, encryption of all data across the enterprise. Enable the Z's built-in pervasive encryption and automatic key protection and you can ignore the rest of Forrester's framework.

Pervasive encryption, unique to Z, addresses the security challenges while helping you thrive in this age of the customer. At Think, Michael Jordan, IBM Distinguished Engineer for IBM Z Security, detailed how pervasive encryption represents a paradigm shift in security, reported Bruce. Previously, selective field-level encryption was the only feasible way to secure data, but it was time-, cost-, and resource-intensive – and it left large portions of data unsecured.

Pervasive encryption, however, offers a solution capable of encrypting data in bulk, making it possible and practical to encrypt all data associated with an application, database, and cloud service – whether on premises or in the cloud, at-rest or in-flight. This approach also simplifies compliance by eliminating the need to demonstrate compliance at the field level. Multiple layers of encryption – from disk and tape up through applications – provide the strongest possible defense against security breaches. The high levels of security enabled by pervasive encryption help you promote customer confidence by protecting their data and privacy.
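To make the protected-key idea concrete, here is a conceptual sketch in Python of envelope encryption: data is bulk-encrypted under a data key, and the data key itself is stored only in wrapped (encrypted) form under a master key. On Z the master key never leaves the crypto hardware; here it is an ordinary variable, so treat this strictly as an illustration of the pattern.

```python
# Conceptual sketch of protected keys via envelope encryption.
# On Z the master key stays inside tamper-responding hardware;
# in this toy version it is just a variable.
from cryptography.fernet import Fernet

master_key = Fernet.generate_key()   # stands in for the hardware master key
data_key = Fernet.generate_key()     # per-dataset bulk encryption key

# Only the wrapped form of the data key is ever stored.
wrapped_data_key = Fernet(master_key).encrypt(data_key)

ciphertext = Fernet(data_key).encrypt(b"customer record 0001234")

# To read: unwrap the data key under the master key, then decrypt.
unwrapped = Fernet(master_key).decrypt(wrapped_data_key)
print(Fernet(unwrapped).decrypt(ciphertext))  # b'customer record 0001234'
```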

If you have a Z and have not enabled pervasive encryption, you are putting your customers and your organization at risk. I am curious; please drop me a note explaining why.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

Is Your Enterprise Ready for AI?

May 11, 2018

According to IBM's gospel of AI, “we are in the midst of a global transformation and it is touching every aspect of our world, our lives, and our businesses.” IBM has been preaching this gospel for the past year or longer, but most of its clients haven't jumped fully aboard. “For most of our clients, AI will be a journey. This is demonstrated by the fact that most organizations are still in the early phases of AI adoption.”

AC922 with NVIDIA Tesla V100 and Enhanced NVLink GPUs

The company's latest announcements earlier this week focus POWER9 squarely on AI. Said Tim Burke, Engineering Vice President, Cloud and Operating System Infrastructure, at Red Hat: “POWER9-based servers, running Red Hat's leading open technologies offer a more stable and performance optimized foundation for machine learning and AI frameworks, which is required for production deployments… including PowerAI, IBM's software platform for deep learning with IBM Power Systems that includes popular frameworks like Tensorflow and Caffe, as the first commercially supported AI software offering for [the Red Hat] platform.”

IBM insists this is not just about POWER9, and they may have a point; GPUs and other assist processors are taking on more importance as companies try to emulate the hyperscalers in their efforts to drive server efficiency while boosting power in the wake of the decline of Moore's Law. “GPUs are at the foundation of major advances in AI and deep learning around the world,” said Paresh Kharya, group product marketing manager of Accelerated Computing at NVIDIA. [Through] “the tight integration of IBM POWER9 processors and NVIDIA V100 GPUs made possible by NVIDIA NVLink, enterprises can experience incredible increases in performance for compute-intensive workloads.”

To create an AI-optimized infrastructure, IBM announced the latest additions to its POWER9 lineup, the IBM Power Systems LC922 and LC921. IBM characterizes them as balanced servers offering both compute capability and up to 120 terabytes of data storage, with NVMe for rapid access to vast amounts of data. IBM included HDD in the announcement, but any serious AI workload will choke without ample SSD.

The announcement also brings an updated version of the AC922 server, which now features the recently announced 32GB NVIDIA V100 GPUs and larger system memory, enabling bigger deep learning models that improve the accuracy of AI workloads.

IBM characterizes the new LC922 and LC921 as data-intensive, AI-intensive systems built on POWER9 processors. The AC922 arrived last fall, designed for what IBM calls the post-CPU era. It was the first to embed PCI-Express 4.0, next-generation NVIDIA NVLink, and OpenCAPI, three interface accelerators that together can move data 9.5x faster than PCIe 3.0 based x86 systems. The AC922 was designed to drive demonstrable performance improvements across popular AI frameworks such as TensorFlow and Caffe.

In the post-CPU era, where Moore's Law no longer rules, you need to pay as much attention to the GPU and other assist processors as to the CPU itself, maybe more. For example, the coherence and high speed of NVLink enable hash tables, critical for fast analytics, on GPUs. As IBM noted at the introduction of the new machines this week: hash tables are a fundamental data structure for analytics over large datasets, and they demand large memory; small GPU memory limits hash table size and analytic performance. CPU-GPU NVLink2 solves two key problems. Its speed lets the full hash table live in CPU memory while pieces are transferred to the GPU for fast operations, and its coherence ensures new inserts in CPU memory get updated in GPU memory; without coherence, modifications to data in CPU memory do not propagate to GPU memory.
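To see why hash tables matter here, consider a hash join, which builds a table over one dataset and probes it with another, turning an O(n×m) comparison into roughly O(n + m). The plain-Python sketch below shows the data structure at work; on an AC922 a very large version of the build table could sit in CPU memory with the GPU probing it across NVLink.

```python
# Hash join: the core analytics pattern that hash tables enable.
def hash_join(orders, customers):
    # Build phase: index one dataset by key (the hash table).
    by_id = {c["id"]: c for c in customers}
    # Probe phase: stream the other dataset with O(1) lookups.
    return [(o, by_id[o["cust"]]) for o in orders if o["cust"] in by_id]

customers = [{"id": 1, "name": "Acme"}, {"id": 2, "name": "Globex"}]
orders = [{"order": 10, "cust": 2}, {"order": 11, "cust": 1}]
for order, cust in hash_join(orders, customers):
    print(order["order"], "->", cust["name"])
```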

IBM has started referring to the LC922 and LC921 as big data crushers. The LC921 brings two POWER9 sockets in a 1U form factor; for I/O it comes with both PCIe 4.0 and CAPI 2.0; and it offers up to 40 cores (160 threads) and 2TB RAM, ideal for environments requiring dense computing.

The LC922 is considerably bigger. It offers balanced compute capability delivered with the POWER9 processor, up to 120TB of storage capacity, the same advanced I/O through PCIe 4.0/CAPI 2.0, and up to 44 cores (176 threads) with 2TB RAM. The list price, IBM notes, is roughly 30 percent less.

If your organization is not thinking about AI, your organization is probably in the minority, according to IDC:

  • 31 percent of organizations are in [AI] discovery/evaluation
  • 22 percent of organizations plan to implement AI in next 1-2 years
  • 22 percent of organizations are running AI trials
  • 4 percent of organizations have already deployed AI

Underpinning both servers is the IBM POWER9 CPU. POWER9 enjoys nearly 5.6x more CPU-to-GPU bandwidth than x86, which can improve deep learning training times by nearly 4x. Even today companies struggle to cobble together the different pieces and make them work. IBM learned that lesson and now offers a unified AI infrastructure in PowerAI and POWER9 that you can use today.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

Dinosaurs Strike Back in IBM Business Value Survey

March 2, 2018

IBM's Institute for Business Value (IBV) recently completed a massive study based on 12,000 interviews with c-suite executives at legacy companies. Not just the CEO and CIO but the COO, CFO, CMO, and more, including the CHO, the Chief Happiness Officer. Not sure what a CHO actually does, but if one had been around when DancingDinosaur was looking for a corporate job he might have stayed on the corporate track instead of pursuing the independent analyst/writer dream.

(unattributed IBM graphic)

IBV actually referred to the study as “Incumbents strike back.” The incumbents being the legacy businesses the c-suite members represent. In a previous c-suite IBV study two years ago, the respondents expressed concern about being overwhelmed and overrun by new upstart companies, the born-on-the-web newcomers. In many ways the execs at that time felt they were under attack.

Spurred by fear, the execs in many cases turned to a new strategy that takes advantage of what has always been their source of strength, even if they often lacked the means to exploit it: the huge amounts of data they have gathered and stored, for decades in some cases. With new cognitive systems now able to extract and analyze this legacy data and combine it with new data, they could actually beat some of the upstarts. Finally, they could respond like nimble, agile operations, not the lumbering dinosaurs they were often portrayed as.

“Incumbents have become smarter about leveraging valuable data, honing their employees’ skills, and in some cases, acquired possible disruptors to compete in today’s digital age,” the study finds, according to CIO Magazine, which published excerpts from the study here. The report reveals 72 percent of surveyed CxOs claimed the next wave of disruptive innovation will be led by the incumbents who pose a significant competitive threat to new entrants and digital players. By comparison, the survey found only 22 percent of respondents believe smaller companies and start-ups are leading disruptive change. This presents a dramatic reversal from a similar but smaller IBV survey two years ago.

Making possible this reversal is not only growing awareness among c-level execs of the value of their organizations’ data and the need to use it to counter the upstarts, but new technologies, approaches like DevOps, easier-to-use dev tools, the increasing adoption of Linux, and mainframes like the z13, z14, and LinuxONE, which have been optimized for hybrid and cloud computing.  Also driving this is the emergence of platform options as a business strategy.

The platform option may be the most interesting decision right now. To paraphrase Hamlet: to be (a platform for your industry) or not to be. That indeed is a question many legacy businesses will need to confront. When you look at platform business models, what is right for your organization? Will you create a platform for your industry or piggyback on another company's platform? To decide, you first need to understand the dynamics of building and operating a platform.

The IBV survey team explored that question and found the respondents pretty evenly divided with 54% reporting they won’t while the rest expect to build and operate a platform. This is not a question that you can ruminate over endlessly like Hamlet.  The advantage goes to those who can get there first in their industry segment. Noted IBV, only a few will survive in any one industry segment. It may come down to how finely you can segment the market for your platform and still maintain a distinct advantage. As CIO reported, the IBV survey found 57 percent of disruptive organizations are adopting a platform business model.

Also rising in importance is the people-talent-skills issue. C-level execs have always given lip service to the importance of people, as in the cliché that people are our greatest asset. Based on the latest survey, it turns out skills are necessary but not sufficient; skills must be accompanied by the right culture. As the survey found, companies that have the right culture in place are more successful; in that case, the skills are just an added adrenalin shot. Still, the execs put people skills in the top three. The IBV analysts conclude: people and talent are coming back. Guess we're not all going to be replaced soon by AI or cognitive computing, at least not yet.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.

Value and Power of LinuxONE Emperor II

February 4, 2018

There is much value in the mainframe, but it doesn't become clear until you do a full TCO analysis. When you talk to an IBMer about the cost of a mainframe, the conversation immediately shifts to TCO, usually in the form of how many x86 systems you would have to deploy to handle a comparable workload with similar quality of service. The LinuxONE Emperor II, introduced in September, can win those comparisons.

LinuxONE Emperor II

Proponents of x86 boast about the low acquisition cost of x86 systems. They are right if you are only thinking about a low initial acquisition cost. But you also have to think about the cost of software for each low-cost core you purchase, and for many enterprise workloads you will need to acquire a lot of cores. This is where costs can mount quickly.

As a result, software will likely become the highest TCO item because many software products are priced per core.  Often the amount charged for cores is determined by the server’s maximum number of physical cores, regardless of whether they actually are activated. In addition, some architectures require more cores per workload. Ouch! An inexpensive device suddenly becomes a pricy machine when all those cores are tallied and priced.

Finally, x86-to-IBM Z core ratios differ per workload, but x86 almost invariably requires more cores than a Z-based workload; remember, any LinuxONE is a Z system. For example, the same WebSphere workload that requires 10-12 cores on x86 may require only one IFL on the Z. The lesson here: whether you're talking about system software or middleware, you have to consider the impact of software on TCO.
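The arithmetic is easy to sketch. In the back-of-envelope calculation below, the 12:1 core ratio comes from the WebSphere example above, but the per-core annual license price is purely hypothetical; plug in your own vendor's numbers.

```python
# Back-of-envelope software TCO for the core-ratio point above.
LICENSE_PER_CORE = 10_000   # $/core/year -- hypothetical figure
YEARS = 3

x86_cores = 12              # cores the workload needs on x86 (from the text)
ifl_cores = 1               # the same workload on one LinuxONE IFL

x86_sw = x86_cores * LICENSE_PER_CORE * YEARS
z_sw = ifl_cores * LICENSE_PER_CORE * YEARS
print(f"x86 software: ${x86_sw:,}; IFL software: ${z_sw:,}; "
      f"3-year delta: ${x86_sw - z_sw:,}")
# -> x86 software: $360,000; IFL software: $30,000; 3-year delta: $330,000
```

And that is before counting cores that are priced because they physically exist in the box, activated or not.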

The Emperor II delivers stunning specs. The machine can be packed with up to 170 cores, as much as 32 TB of memory, and 160 PCIe slots. And it is flexible; use this capacity, for instance, to add more system resources—cores or memory—to service an existing Linux instance or clone more Linux instances. Think of it as scale-out capabilities on steroids, taking you far beyond what you can achieve in the x86 world and do it with just a few keystrokes. As IBM puts it, you might:

  • Dynamically add cores, memory, I/O adapters, devices, and network cards without disruption.
  • Grow horizontally by adding Linux instances or grow vertically by adding resources (memory, cores, slots) to existing Linux guests.
  • Provision for peak utilization.
  • After the peak subsides automatically return unused resources to the resource pool for reallocation to another workload.

So, what does this mean for the typical enterprise Linux data center? IBM often cites a large insurance firm. The insurer needed fast and flexible provisioning for its database workloads, and its existing approach was to deploy more x86 servers to address growth. Unfortunately, managing the software for all those cores had become time consuming and costly. The company was running 32 x86 servers with 768 cores and 384 competitor database licenses.

By leveraging elastic pricing on the Emperor II, it needed only one machine running 63 IFLs serving 64 competitor database licenses, for estimated savings of $15.6 million over 5 years just by eliminating charges for unused cores. (Full disclosure: these figures are provided by IBM; DancingDinosaur did not interview the insurer to verify them.) Also note there are many variables at play here around workloads and architecture, usage patterns, labor costs, and more. As IBM warns: your results may vary.

And then there is security. Since the Emperor II is a Z it delivers all the security of the newest z14, although in a slightly different form. Specifically, it provides:

  • Ultimate workload isolation and pervasive encryption through Secure Service Containers
  • Encryption of data at rest without application change and with better performance than x86
  • Protection of data in flight over the network with full end-to-end network security
  • Use of Protected Keys to secure data without giving up performance
  • Industry-leading secure Java performance via TLS (2-3x faster than Intel)

BTW, the Emperor II also anchors IBM's Blockchain cloud service, which calls for security to the max. In the end, the Emperor II is unlike any x86 Linux system, offering:

  • EAL 5+ isolation, best in class crypto key protection, and Secure Service Containers
  • 640 Power cores in its I/O channels (not included in the core count)
  • Leading I/O capacity and performance in the industry
  • IBM’s shared memory vertical scale architecture with a better architecture for stateful workloads like databases and systems of record
  • Hardware designed to give good response time even with 100% utilization, which simplifies the solution and reduces the extra costs x86 users assume are necessary because they’re used to keeping a utilization safety margin.

This goes far beyond TCO.  Just remember all the things the Emperor II brings: scalability, reliability, container-based security and flexibility, and more.

…and Go Pats!

DancingDinosaur is Alan Radding, a Boston-based veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

Meltdown and Spectre Attacks Require IBM Mitigation

January 12, 2018

The chip security threats dubbed Meltdown and Spectre revealed last month apparently will require IBM threat mitigation in the form of code and patching. IBM has been reticent to make a major public announcement, but word finally is starting to percolate publicly.

Courtesy: Preparis Inc.

The Internet has been buzzing since January 3, when researchers disclosed the Meltdown and Spectre attack methods against Intel, AMD, and ARM processors. As Eduard Kovacs reported on Wednesday, January 10, IBM informed customers on January 4 that it had started analyzing the impact on its own products; the day before Kovacs' report, IBM revealed that its POWER processors are affected.

A January 11 report from Virendra Soni covered the Consumer Electronics Show (CES) 2018 in Las Vegas, where Nvidia CEO Jensen Huang described how technology leaders are scrambling to find patches for the Spectre and Meltdown attacks. These attacks enable hackers to steal private information from machines running processors from Intel, AMD, and ARM.

For DancingDinosaur readers, that puts the latest POWER chips and systems at risk. At this point, it is not clear how far beyond POWER systems the problem reaches. “We believe our GPU hardware is immune. As for our driver software, we are providing updates to help mitigate the CPU security issue,” Nvidia wrote in its security bulletin.

Nvidia also reports releasing updates for its software drivers that interact with vulnerable CPUs and operating systems. The vulnerabilities come in three variants: Variant 1, Variant 2, and Variant 3. Nvidia has released driver updates for Variants 1 and 2 and notes none of its software is vulnerable to Variant 3. Nvidia reported providing security updates for these products: GeForce, Quadro, NVS, Tesla, and GRID driver software.

IBM has made no public comment on which of its systems are affected, but Red Hat is talking. According to Soni: “Red Hat last week reported that IBM's System Z and POWER platforms are exploited by Spectre and Meltdown.”

So what is a data center manager with a major investment in these systems to do? Meltdown and Spectre “obviously are a very big problem,” reports Timothy Prickett Morgan, a leading analyst at The Next Platform, an authoritative website following the server industry. “Chip suppliers and operating systems and hypervisor makers have known about these exploits since last June, and have been working behind the scenes to provide corrective countermeasures to block them… but rumors about the speculative execution threats forced the hands of the industry, and last week Google put out a notice about the bugs and then followed up with details about how it has fixed them in its own code.” Read it here.

Chipmakers AMD and ARM put out statements saying they are affected only by Variant 1 of the speculative execution exploits (one of the Spectre variety, known as bounds check bypass) and by Variant 2 (also a Spectre exploit, known as branch target injection). AMD, reports Morgan, also emphasized that it has absolutely no vulnerability to Variant 3, a speculative execution exploit called rogue data cache load and known colloquially as Meltdown. This is due, he noted, to architectural differences between Intel's X86 processors and AMD's clones.

As for IBM, Morgan noted its Power chips are affected, at least back to the Power7 from 2010 and continuing forward to the brand new Power9. In its statement, IBM said that it would have firmware patches out for Power machines using Power7+, Power8, Power8+, and Power9 chips on January 9 (a date that has since passed), along with Linux patches for those machines; patches for the company's own AIX Unix and proprietary IBM i operating systems will not be available until February 12. The System z mainframe processors also do speculative execution, so they should, in theory, be susceptible to Spectre but maybe not Meltdown.

That still leaves a question about the vulnerability of the IBM LinuxONE and the processors spread throughout the z systems. Ask your IBM rep when you can expect mitigation for those too.

Just patching these costly systems should not be sufficiently satisfying. There is a performance price data centers will pay. Google noted a negligible impact on performance after it deployed one fix on its millions of Linux systems, said Morgan. There has been speculation, Google continued, that the deployment of KPTI (a mitigation fix) causes significant performance slowdowns. As far as is known, there is no fix for Spectre Variant 1 attacks, which have to be fixed on a binary-by-binary basis, according to Google.

Red Hat went further and actually ran benchmarks. The company tested its Enterprise Linux 7 release on servers using Intel’s “Haswell” Xeon E5 v3, “Broadwell” Xeon E5 v4, and “Skylake,” the upcoming Xeon SP processors, and showed impacts that ranged from 1-19 percent. You can demand these impacts be reflected in reduced system prices.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

Under the Covers of Z Container Pricing

December 1, 2017

Along with the announcement of the z14 (now called simply Z) last July, IBM also introduced container pricing as an upcoming capability of the machine intended to make it both flexible and price competitive. This is expected to arrive by the end of this year.

A peek into the IBM z14

Container pricing implied overall cost savings and also simplified deployment. At the announcement IBM suggested competitive economics too, especially when benchmarked against public clouds and on-premises x86 environments.

By now you should realize that IBM has difficulty talking about price. The company has lots of excuses relating to its global footprint and such. Funny, other systems vendors that sell globally don't seem to have that problem. After two decades of covering IBM and the mainframe as a reporter, analyst, and blogger, I've finally figured out the reason for the reticence: the company's pricing is almost always high, overpriced compared to the competition.

If you haven’t realized it yet, the only way IBM will talk price is around a 3-year TCO cost analysis. (Full disclosure: as an analyst, I have developed such TCO analyses and am quite familiar with how to manipulate them.) And even then you will have to swallow a number of assumptions and caveats to get the numbers to work.

For example, there is no doubt that IBM is targeting the x86 (Intel) platform with its LinuxONE lineup, especially its newest machine, the Emperor II. IBM reports it can scale a single MongoDB database to 17TB on the Emperor II while running it at scale with less than 1ms response time, saving up to 37 percent compared to x86 on a 3-year TCO analysis. The TCO analysis gets even better when you look at priced-per-core data serving infrastructure. IBM reports it can consolidate thousands of x86 cores on a single LinuxONE server and reduce costs by up to 40 percent.

So, let's see what the Z's container pricing can do for you. Container pricing is being introduced to allow new workloads to be added to z/OS without impacting an organization's rolling four-hour average, while supporting the deployment options that make the most sense for the organization's architecture and facilitating competitive pricing for that workload.

For example, one of the initial use cases for container pricing revolves around payments workloads, particularly instant payments. That workload will be charged not against any capacity marker but by the number of payments processed. The payment workload pricing grid promises to be highly competitive, with the price per payment starting at $0.0021 and dropping to $0.001 with volume. “That's a very predictable, very aggressive price,” says Ray Jones, vice president, IBM Z Software and Hybrid Cloud. You can do the math and decide how competitive this is for your organization.
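Doing that math is straightforward once you posit a tier schedule. In the sketch below, only the starting price ($0.0021) and the volume floor ($0.001) come from IBM; the tier boundaries and the middle rate are hypothetical, purely to show the shape of the calculation.

```python
# Sketch of tiered per-payment pricing. Tier boundaries and the middle
# rate are hypothetical; only the $0.0021 start and $0.001 floor are IBM's.
TIERS = [
    (10_000_000, 0.0021),        # first 10M payments
    (50_000_000, 0.0015),        # next 40M payments (hypothetical rate)
    (float("inf"), 0.001),       # everything beyond
]

def charge(payments: int) -> float:
    total, prev_cap = 0.0, 0
    for cap, price in TIERS:
        in_tier = max(0, min(payments, cap) - prev_cap)
        total += in_tier * price
        prev_cap = cap
        if payments <= cap:
            break
    return total

print(f"${charge(60_000_000):,.2f}")  # 60M payments -> $91,000.00
```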

Container pricing applies to various deployment options—including co-located workloads in an existing LPAR—that present line-of-sight pricing to a solution. The new pricing promises simplified software pricing for qualified solutions. It even offers the possibility, IBM adds, of different pricing metrics within the same LPAR.

Container pricing, however, requires the use of IBM’s software for payments, Financial Transaction Manager (FTM). FTM counts the number of payments processed, which drives the billing from IBM.

To understand container pricing you must realize IBM is not talking about Docker containers. A container to IBM simply is an address space, or group of address spaces, in support of a particular workload. An organization can have multiple containers in an LPAR, have as many containers as it wants, and change the size of containers as needed. This is where the flexibility comes in.

The fundamental advantage of IBM’s container pricing comes from the co-location of workloads to get improved performance and lower latency. The new pricing eliminates what goes on in containers from consideration in the MLC calculations.

To get container pricing, however, you have to qualify. The company is setting up pricing agents around the world. Present your container plans and an agent will determine if you qualify and at what price. IBM isn’t saying anything about how you should present your container plans to qualify for the best deal. Just be prepared to negotiate as hard as you would with any IBM deal.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

