Posts Tagged ‘hybrid computing’

IBM’s z13 Redefines Mainframe Performance, Economics, and Versatility

January 14, 2015

With the introduction of the new IBM z13, the latest rev of the 50-year-old mainframe product line introduced today, it will be hard for IT people to persist in the mistaken belief that the mainframe can’t handle today’s workloads or that it is too expensive. Built around an eight-core, 22nm processor, the IBM z13’s 141 configurable cores (any mix of CP, IFL, zIIP, ICF, and SAP) deliver a 40% total capacity improvement over the zEC12.


The z13 looks like the zEC12 but under the hood it’s far more powerful

The IBM z13 will handle up to 8,000 virtual enterprise-grade Linux servers per system, more than 50 per core. Remember when Nationwide Insurance consolidated 3,000 x86 servers, mainly running Linux, on a System z and saved $15 million over three years, a figure later revised considerably higher? They got a lot of press out of that, including from DancingDinosaur as recently as last May. With the IBM z13, Nationwide could consolidate more than twice the number of Linux servers at a lower cost, and the resulting savings would be higher still.
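The consolidation math behind those claims is easy to sketch. Here is a back-of-the-envelope calculation using only the figures cited above; actual sizing, of course, depends heavily on the workload:

```python
# Figures from the z13 announcement cited above; real capacity planning
# depends on workload characteristics, not simple division.
max_linux_vms = 8_000       # virtual enterprise-grade Linux servers per z13
configurable_cores = 141    # any mix of CP, IFL, zIIP, ICF, SAP

vms_per_core = max_linux_vms / configurable_cores
print(f"{vms_per_core:.1f} virtual servers per core")  # 56.7 -- "more than 50"

# Nationwide consolidated roughly 3,000 x86 servers on an earlier System z;
# the z13's ceiling would cover more than twice that number.
nationwide_servers = 3_000
print(max_linux_vms // nationwide_servers)  # 2
```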

If you consider Linux VMs synonymous with cloud services, the new machine will enable superior cloud services at up to 32% lower cost than an x86-based cloud. It also will cost up to 60% less than a public cloud over three years. In almost every metric, the IBM z13 delivers more capacity or performance at lower cost.

IBM delivered an almost constant stream of innovations that work to optimize performance and reduce cost. For example, it boosted single thread capacity by 10% over the zEC12. It also delivers 3x more memory to help both z/OS and Linux workloads. The added memory, combined with a new cache design, improved I/O bandwidth, and compression, will boost analytics on the machine. In fact, with the z13 you can do in-memory analytics if you want it.

The one thing it doesn’t boast is the fastest commercial processor in terms of sheer speed. The zEC12 processor still is the fastest, but with all the optimizations and enhancements IBM has built in, the z13 should beat the zEC12 in handling the workloads organizations most want to run. For instance, the z13 performs 2x faster than the most common server processors, offers 300 percent more memory and 100 percent more bandwidth, and delivers vector processing analytics to speed mobile transactions. As a result, the z13 transaction engine is capable of analyzing transactions in real time.

Similarly, simultaneous multi-threading delivers more throughput for Linux and zIIP-eligible workloads, while larger caches optimize data serving. IBM also improved on-chip hardware compression, which saves disk space and cuts data transfer time. Also new are workload container pricing and multiplex pricing, both of which again will save money.

In addition, IBM optimized this machine for both mobile and analytics, as well as for cloud. This is the new versatility of this redefined mainframe. Last year, IBM discounted the cost of mobile transactions on the z. The new machine continues to optimize for mobile with consolidated REST APIs for all z/OS transactions through z/OS Connect while seamlessly channeling z/OS transactions to mobile devices with the MobileFirst Platform. It also ensures end-to-end security from mobile device to mainframe with z/OS, RACF, and MobileFirst products.
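The point of z/OS Connect is that a mobile or web client reaches z/OS transactions through ordinary REST and JSON rather than proprietary protocols. A minimal client-side sketch, assuming a purely hypothetical endpoint path and payload schema (both are defined by each installation’s z/OS Connect configuration, not by this post):

```python
import json
import urllib.request

# Hypothetical z/OS Connect endpoint and payload -- illustrative only;
# the real path and JSON shape depend on each shop's configuration.
url = "https://mainframe.example.com/zosConnect/services/accountInquiry"
payload = {"accountId": "12345678"}

req = urllib.request.Request(
    url,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer <token>",  # RACF-backed credentials in practice
    },
    method="POST",
)

# urllib.request.urlopen(req) would actually send the request; it is
# omitted here because the endpoint above does not exist.
print(req.get_method(), req.full_url)
```

The mobile app never knows it is driving CICS or IMS; it just sees REST.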

For analytics, IBM continues to optimize Hadoop and expand the analytics portfolio on the z13. Specifically, the massive memory capability, up to 10TB, opens new opportunities for in-memory computing. The ability to combine data from different data sources and analyze it in-memory and in real time within the platform drives efficiencies, such as eliminating the need for ETL and for moving data between platforms, as had often been the case previously. Now, just use Hadoop on z to explore data there, within the secure zone of the mainframe. This opens up a wide variety of analytics workloads, anything from fraud prevention to customer retention.

In addition to improved price/performance overall, IBM announced Technology Update Pricing for the z13, including AWLC price reductions that deliver a 5% price/performance improvement on average, on top of performance gains from software exploitation of the z13. DancingDinosaur will dig deeper into the new z13 software pricing in a subsequent post.

And the list of new and improved capabilities with the z13 just keeps going on and on. For security, IBM has accelerated the speed of encryption up to 2x over the zEC12 to help protect the privacy of data throughout its life cycle. It also extended enhanced public key support for constrained digital environments using Elliptic Curve Cryptography (ECC), which helps applications like Chrome, Firefox, and Apple’s iMessage. In addition, the z13 sports a few I/O enhancements, making it the first system to use a standards-based approach to enabling Forward Error Correction for a complete end-to-end solution.

Finally, IBM has not abandoned hybrid computing, where you can mix a variety of blades, including x86 Windows blades and others in the zBX extension cabinet. With the z13 IBM introduced the new Mod 004 zBX cabinet, an upgrade from the previous Mod 002 and 003.

DancingDinosaur expects the introduction of the z13, along with structural organization changes, to drive System z quarterly financial performance back into the black as soon as deliveries roll. And if IBM stays consistent with past behavior, within a year or so you can expect a scaled-down, lower-cost business class version of the z13, although it may not be called business class. Stay tuned; it should be an exciting year.

DancingDinosaur is Alan Radding, a long-time IT analyst and writer. You can follow him on Twitter, @mainframeblog, or check out more of his writing and analysis at Technologywriter.com or here.

IBM Ends 2014 with Flurry of Outsourcing, Cloud Activity

January 4, 2015

Happy New Year. There is much to look forward to in 2015. At the least it probably is time for IBM to rev the System z. The zEnterprise EC12 was introduced in Aug. 2012. You should expect a new machine this year.


IBM ended the year with a flurry of deals involving outsourcing in various forms, hybrid clouds, and the global expansion of its cloud centers. The company made it clear throughout this past difficult year that its focus would be on cloud computing, analytics, and mobile, and that’s what it did. DancingDinosaur will leave to the Wall St. analysts the question of whether the deals represent enough action at a sufficient margin.

IBM believes its future rides on the cloud. To that end it writes: Enterprise cloud deployments, specifically hybrid cloud, are growing at a significant rate.  According to Gartner, nearly half of all enterprises will have a hybrid cloud deployed by 2017.  Chief among the driving forces behind the adoption of cloud computing worldwide, including hybrid cloud, are requirements for businesses and governments to store certain data locally to comply with data residency regulations, as well as a growing desire for startups to expand their businesses globally.  IBM estimates about 100 nations and territories have adopted laws that dictate how governments and private enterprises handle personal data.

The expansion of the company’s global footprint of its cloud centers, now up to 40 locations, represents an effort to capitalize on cloud interest. Since the start of November, the company announced more than $4 billion worth of cloud agreements with major enterprises around the world. These include Lufthansa, ABN AMRO, WPP, Woox Innovations, Dow Water, and Thomson Reuters. Some of these, you will notice, are mainframe shops. DancingDinosaur is assuming they are augmenting their z with a hybrid cloud, not replacing it.

In addition, there are new organizations, referred to by IBM as born-on-the-web innovators, which are building their business on the IBM Cloud. Since November, IBM has announced wins with Diabetizer and Preveniomed, Hancom, Musimundo, and Nubity. Collectively these wins reflect IBM’s ability to deliver a full range of services through the cloud. Some of these are analytics-driven wins.

An interesting recently announced win was Westfield Insurance, which began working with IBM to transform their claims operations. To this end, Westfield is looking at business analytics to increase flexibility, operational efficiency, and effectiveness while enabling the company to keep pace with its evolving customer base and business growth. When DancingDinosaur last checked, Westfield was a z196 shop running DB2.

As IBM reports, leading insurers are leveraging cloud, analytics and social technologies to stay ahead of their competition. Specifically, more than 60% of identified leading insurers are focused on advanced analytics to improve their claims handling in order to streamline processes and increase customer satisfaction. Westfield’s multi-year claims handling transformation initiatives, including process, organizational and technology changes, focus on using data and analytics to better serve customers.

For Westfield, IBM developed a new protocol to migrate data for use with predictive models, built simulation models to evaluate bottlenecks in the claims process, and designed a strategy for expedited workflow. This simulation helped expedite organizational changes. The new claims system will also utilize a suite of IBM counter-fraud capabilities to detect suspicious activity.

In addition, IBM helped Westfield optimize its current claims handling process to provide a seamless, fully-integrated customer experience. Westfield’s claims system with Guidewire is now consolidated to ensure efficient operations across its network.

To further drive its cloud business IBM simplified its cloud contract with a goal of reducing the complexity and speeding the signing of cloud agreements. The result is a standard, two-page agreement that replaces the previous longer, more complex contracts, which typically entailed long negotiations and reviews before a deal was signed. By comparison, its cloud competitors require customers to review and commit to more complex contracts that commonly are at least five times longer and also incorporate terms and conditions from other websites, IBM reports.

Citing leading industry analyst firms, IBM claims global leadership in cloud computing with a diverse portfolio of open cloud solutions designed to enable clients for the hybrid cloud era. IBM boasts of having helped more than 30,000 cloud clients around the world, has invested more than $7 billion since 2007 in 17 acquisitions to accelerate its cloud initiatives, and holds more than 1,560 cloud patents. IBM also processes more than 5.5 million client transactions daily through its public cloud.

IBM’s initiatives in cloud computing will not diminish its interest in the System z enterprise cloud platform. To the contrary, a recent IBM analysis shows the z enhancing the economic advantages of the cloud: a business scaling up to about 200 virtual machines (VMs) gets far more efficient and economical results by using the Enterprise Linux Server as an enterprise cloud than with a virtualized x86 or public-cloud model. And the deal gets even better if you acquire the Enterprise Linux Server under either the Solution Edition program for Enterprise Linux or the System z Solution Edition for Cloud Computing.

DancingDinosaur is Alan Radding, a longtime IT analyst/writer. Follow DancingDinosaur on Twitter, @mainframeblog. Check out more of his work at Technologywriter.com and here.

Compuware Aims for Mainframe Literacy in CIOs

November 13, 2014

Many IT professionals, especially younger ones, are clueless about the mainframe. Chris O’Malley, president of the mainframe business at Compuware, has met CIOs who are versed in everything about IT and have seemingly done everything there is to do with computers, but “they are not literate about the mainframe.” That means the mainframe never comes to mind. IBM could give away a zEnterprise for free, which it comes close to doing today through the System z Solution Edition program, and these CIOs would ignore it. O’Malley wants to address that.


In response, Compuware is following the path of the IBM System z Academic Initiative, but without the extensive global involvement of colleges and universities, with a program called Mainframe Excellence 2025, which it describes as a generational call for strategic platform stewardship. “We’re also trying to debunk a lot of issues around the mainframe,” O’Malley continues.


Chris O’Malley, Pres. Mainframe, Compuware

Compuware refers to Mainframe Excellence 2025 as a manifesto, something of a call to arms for millennials to storm the IT gates and liberate IT management from enslavement to x86 computing. Somehow DancingDinosaur doesn’t see it happening exactly that way; it envisions coexistence and synergy.

Most of the Mainframe Excellence document goes over ground DancingDinosaur and many others have covered before. It is delightful, however, to see others refreshing the arguments. And, the document adds some interesting data. For instance, over 1.15 million CICS transactions are executed on System z every second of every day! That’s more than all Google searches, YouTube views, Facebook likes, and Twitter tweets combined.
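A quick calculation puts that CICS figure in perspective:

```python
# 1.15 million CICS transactions per second, per the Compuware document.
cics_tps = 1_150_000
seconds_per_day = 24 * 60 * 60   # 86,400

per_day = cics_tps * seconds_per_day
print(f"{per_day / 1e9:.1f} billion CICS transactions per day")  # 99.4 billion
```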

It also pays homage to what it refers to as the mainframe’s culture of excellence. It characterizes this culture by rigorous adherence to a standard of excellence demonstrably higher than that associated with other platforms, notably x86. IT organizations actually expect, accept, and plan for problems and patches in other platforms (think Microsoft Patch Tuesday). Mainframe professionals, on the other hand, have zero-tolerance for downtime and system failures and the mainframe generally lives up to those high expectations.

Ironically, the document points out that the culture of excellence has created a certain chasm between mainframe professionals and the rest of IT. In fact, this ingrained zero-failure culture of the mainframe community—including both vendors and enterprise IT staffs—can sometimes put it at odds with the very spirit of innovation that allows the mainframe to deliver the repeated advances in price/performance and new capabilities that consistently produce tremendous value.

Compuware concludes its report with an action checklist:

  • Fully inventory existing mainframe data, applications (including business rules), capacity, utilization/MSUs, and management tools, a veritable trove of value embedded in mainframe code and business rules.
  • Build a fact-based skills plan with a realistic timeline.
  • Ramp up current and road-mapped mainframe capabilities.
  • Rightsize investments in mainframe application stewardship.
  • Institute an immediate moratorium on short-term cost-cutting that carries long-term negative consequences.
  • Combat denial and hype regarding non-mainframe platform capabilities, costs, and risks.

And Compuware’s final thought should give encouragement to all those who must respond to the mainframe-costs-too-much complaint: IT has a long history of underestimating real TCO and marginal costs for new platforms while overestimating their benefits. A more sober assessment of these platforms will make the strategic value and economic advantages of the mainframe much more evident in comparison.

Compuware certainly is on the right track with Mainframe Excellence 2025. DancingDinosaur would like, however, to see the company coordinate its efforts with the System z Academic Initiative, the Master the Mainframe effort, and the like.

DancingDinosaur is Alan Radding, a veteran IT writer/analyst. You can follow DancingDinosaur on Twitter, @mainframeblog. Also check out my other IT writing at Technologywriter.com and here.

IBM Creates Comprehensive Cloud Security Portfolio

November 6, 2014

On Wednesday IBM introduced what it describes as the industry’s first intelligent security portfolio for protecting people, data, and applications in the cloud. It is not a single product but a set of products that taps a wide range of IBM’s cloud security, analytics, and services offerings. The portfolio dovetails with IBM’s end-to-end mainframe security solution as described at Enterprise2014 last month.

Cloud security certainly is needed. In a recent IBM CISO survey, 44% of security leaders said they expect a major cloud provider to suffer a significant security breach in the future, one that will drive a high percentage of customers to switch providers, not to mention the risks to their data and applications. Cloud security fears have long been one of the biggest impediments to organizations moving more data, applications, and processes to the cloud. These fears are further complicated by the fact that IT managers feel that much of what their cloud providers do is beyond their control. An SLA only gets you so far.


The same survey found 86% of the leaders surveyed say their organizations are now moving to the cloud; of those, three-fourths see their cloud security budget increasing over the next 3-5 years.

As is typical of IBM when it identifies an issue and feels it has an edge, the company has assembled a structured portfolio of tools, a handful of which were offered Wednesday. The portfolio includes versions of IBM’s own tools optimized for the cloud as well as tools and technologies IBM has acquired. Expect more cloud security tools to follow. Together the tools aim to manage access, protect data and applications, and enable visibility in the cloud.

For example, for access management IBM is bringing out Cloud Identity Services, which onboards and handles users through IBM-hosted infrastructure. To safeguard access to cloud-deployed apps it is bringing out a Cloud Sign-On service used with Bluemix. Through Cloud Sign-On, developers can quickly add single sign-on to web and mobile apps via APIs. Another product, Cloud Access Manager, works with SoftLayer to protect cloud applications with pattern-based security, multi-factor authentication, and context-based access control. IBM even has a tool to handle privileged users like DBAs and cloud admins, the Cloud Privileged Identity Manager.
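Adding single sign-on via an API generally boils down to the app delegating token validation to the sign-on service. A toy sketch of that pattern follows; the decorator and in-memory token store here are hypothetical stand-ins, not the IBM Single Sign On API, which would be consulted over HTTPS in practice:

```python
import functools

# Hypothetical token store standing in for the SSO service's validation
# endpoint -- purely illustrative, not IBM's API.
VALID_TOKENS = {"abc123": "alice"}

def require_sso(handler):
    """Reject requests whose bearer token the SSO service does not recognize."""
    @functools.wraps(handler)
    def wrapper(token, *args, **kwargs):
        user = VALID_TOKENS.get(token)
        if user is None:
            return {"status": 401, "error": "not authenticated"}
        return handler(user, *args, **kwargs)
    return wrapper

@require_sso
def account_summary(user):
    # Application logic never sees unauthenticated requests.
    return {"status": 200, "user": user}

print(account_summary("abc123"))    # authenticated
print(account_summary("badtoken"))  # rejected with 401
```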

Here is a run-down of what was announced Wednesday. Expect it to grow.

  • Cloud Identity Services — IBM Cloud Identity Services
  • Cloud Sign-On Service — IBM Single Sign On
  • Cloud Access Manager — IBM Security Access Manager
  • Cloud Privileged Identity Manager — IBM Security Privileged Identity Manager (v2.0)
  • Cloud Data Activity Monitoring — IBM InfoSphere Guardium Data Activity Monitoring
  • Cloud Mobile App Analyzer Service — IBM AppScan Mobile Analyzer
  • Cloud Web App Analyzer Service — IBM AppScan Dynamic Analyzer
  • Cloud Security Intelligence — IBM QRadar Security Intelligence (v7.2.4)
  • Cloud Security Managed Services — IBM Cloud Security Managed Services

Now let’s see how these map to what the z data center already can get with IBM’s End-to-End Security Solution for the Mainframe. For starters, security is built into every level of the System z structure: processor, hypervisor, operating system, communications, and storage.

In terms of security analytics, zSecure, Guardium, AppScan, and QRadar improve your security intelligence. Some of these tools are included in the new cloud security portfolio. Intelligence is collected from z/OS, RACF, CA ACF2, CA Top Secret, CICS, and DB2. The zSecure suite also helps address compliance challenges. In addition, InfoSphere Guardium Real-time Activity Monitoring handles activity monitoring, blocking and masking, and vulnerability assessment.

Of course the z brings its crypto coprocessor, Crypto Express4S, which complements the cryptographic capabilities of CPACF. There also is a new zEC12 coprocessor, the EP11 processor, amounting to a Crypto Express adapter configured with the Enterprise PKCS #11 (EP11) firmware, also called the CEX4P adapter. It provides hardware-accelerated support for crypto operations that are based on RSA’s PKCS #11 Cryptographic Token Interface Standard. Finally, the z supports the necessary industry standards, like FIPS 140-2 Level 4, to ensure multi-tenanted public and private cloud workloads remain securely isolated. So the cloud, at least, is handled to some extent.

The mainframe has long been considered the gold standard for systems security. Now it is being asked to take on cloud-oriented and cloud-based workloads while delivering the same level of unassailable security. Between IBM’s end-to-end mainframe security solution and the new intelligent (analytics-driven) security portfolio for the cloud enterprise shops now have the tools to do the job right.

And you will want all those tools because security presents a complex, multi-dimensional puzzle requiring different layers of integrated defense. It involves not only people, data, applications, and infrastructure but also mobility, on premise and off premise, structured, unstructured, and big data. This used to be called defense in depth, but with the cloud and mobility the industry is moving far beyond that.

DancingDinosaur is Alan Radding, a veteran IT analyst with well over 20 years covering IT and the System z. You can find more of my writing at Technologywriter.com and here. Also follow DancingDinosaur on Twitter, @mainframeblog.

Real-Time Analytics on z Lead at IBM Enterprise2014 Opening Day

October 8, 2014

Users have always been demanding about performance. But does the 5-minute rule noted by Tom Rosamilia in the opening keynote at IBM Enterprise2014 go too far? It now seems users expect companies to respond, or at least acknowledge, their comments, questions, or problems in five minutes. That means companies need to monitor and analyze social media in real-time and respond appropriately.

Building on client demand to integrate real-time analytics with consumer transactions, IBM yesterday announced new capabilities for its System z. Specifically, IBM is combining the transactional virtues of the z with big data analytic capabilities into a single, streamlined, end-to-end data system. This real-time integration of analytics and transaction processing can allow businesses to increase the value of a customer information profile with every interaction the customer makes.  It also promises one way to meet the 5-minute rule, especially when a customer posts a negative comment on social media.

With the new integrated capability you can apply analytics to social sentiment and customer engagement data almost as the transactions are occurring. The goal is to gain real-time insights, which you can do on the mainframe because the data already is there and now the real-time analytics will be there too. There is no moving of data or logic. The mainframe already does this when it is used for fraud prevention. This becomes another case where the mainframe can enable organizations to achieve real-time insights and respond within five minutes. Compared to fraud analysis, the 5-minute expectation seems a luxury.

By incorporating social media into the real time analytic analysis on the mainframe you can gain an indication of how the business is performing in the moment, how you stack up to your competitors, and most importantly, meet the 5-minute response expectation.  Since we’re talking about pretty public social sentiment data, you also could monitor your competitors’ social sentiment and analyze that to see how well they are responding.
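The monitor-and-respond loop described above can be sketched as a toy. This is only an illustration of the 5-minute rule, nothing like the z analytics stack; a real deployment would use a trained sentiment model rather than a word list:

```python
from datetime import datetime, timedelta

FIVE_MINUTES = timedelta(minutes=5)

# Toy sentiment scoring -- a hypothetical stand-in for real-time analytics.
NEGATIVE_WORDS = {"broken", "terrible", "refund", "worst", "angry"}

def needs_response(post_text, posted_at, now):
    """Flag negative posts whose 5-minute response window is still open."""
    words = set(post_text.lower().split())
    is_negative = bool(words & NEGATIVE_WORDS)
    within_window = (now - posted_at) <= FIVE_MINUTES
    return is_negative and within_window

now = datetime(2014, 10, 8, 12, 0, 0)
print(needs_response("This is the worst service ever", now - timedelta(minutes=3), now))  # True
print(needs_response("Love the new app!", now - timedelta(minutes=1), now))               # False
```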

And then there are the more traditional things you can do with the integration of analytics with transactional data to provide real-time, actionable insights on commercial transactions as they occur. For example you could take advantage of new opportunities to increase sales or prevent customer churn.

According to IBM this is being driven by the rise of mobile and smartphones, numbering in the billions in a few years. The combination of massive amounts of data and consumers who are empowered with mobile access is creating a difficult challenge for businesses, IBM noted in the announcement. Consumers now expect an immediate response—the 5-minute rule—to any interaction, at any time, and through their own preferred channel of communication. Unfortunately, many businesses are trying to meet this challenge and deliver instantaneous, on-demand customer service with outdated IT systems that can only provide after-the-fact intelligence.

Said Ross Mauri, General Manager, System z, IBM Systems & Technology Group: “Off-loading operational data in order to perform analytics increases cost and complexity while limiting the ability of businesses to use the insights in a timely manner.” The better approach, he continued, is to turn to an end-to-end solution that makes analytics a part of the flow of transactions and allows companies to gain real time insights while improving their business performance with every transaction.

Of course, Mauri was referring specifically to the System z. However, Power Systems, and especially the new POWER8 machines, which have a strong presence here at IBM Enterprise2014, can do it too. Speaker after speaker emphasized that the Power machines are optimized for lightning-fast analytics, particularly real-time analytics.

Still, this was a z announcement, so IBM piled on a few more goodies for the z. These include new analytics capabilities for the mainframe to enable better data security and provide companies with the ability to integrate Hadoop big data with the z. Specifically, IBM is delivering:

  • IBM InfoSphere BigInsights for Linux on System z – Combines open-source Apache Hadoop with IBM innovations to deliver enterprise-grade Hadoop for System z clients;
  • IBM DB2 Analytics Accelerator – Enhances data security while delivering 2,000 times the response time for complex data queries;
  • New capabilities in Linux and the cloud for System z, such as IBM Elastic Storage for Linux on System z, which extends the benefits of Elastic Storage to the Linux environment on z servers, and IBM Cloud Manager with OpenStack for System z, which enables heterogeneous cloud management across System z, Power, and x86 environments.

Many of these pieces are available now.  You can meet the 5-minute rule sooner than you may think.

Alan Radding is DancingDinosaur. Follow him on Twitter, @mainframeblog, or check out his website, Technologywriter.com

 

SoftLayer Direct Link Brings Hybrid Cloud to System z and Power

June 26, 2014

Back in February, IBM announced that SoftLayer was integrating IBM Power Systems into its cloud infrastructure, a move that promised to deliver a level and breadth of services beyond what has traditionally been available over the cloud. Combined with new services and tools announced at the same time, this would help organizations deploy hybrid and private cloud environments.

Back then IBM included the System z in the announcement as well by bolstering its System z cloud portfolio with IBM Wave for z/VM. IBM Wave promises to provide rapid insight into an organization’s virtualized infrastructure with intelligent visualization, simplified monitoring and unified management. Specifically, Wave helps the organization more easily manage large numbers of virtual machines.

Now it is June, the snow has finally melted, and IBM’s SoftLayer is introducing Direct Link to the computing public. Direct Link had previously been available to only a select few customers. Direct Link, in effect, is a specialized content delivery network for creating hybrid clouds. Organizations connect their private IT infrastructure to public cloud resources by going directly to the SoftLayer platform, which streamlines delivery over the network. Direct Link users avoid the need to traverse the public Internet.

The focus here is on hybrid clouds. When an organization with a private cloud, say a mainframe hosting a large amount of IT resources and services behind the firewall, needs resources such as extra capacity or services it doesn’t have, it can turn to the public cloud for those extra resources or services. The combination of the private cloud and tightly connected public cloud resources form a hybrid cloud.  If you’re attending a webinar on hybrid clouds at this point the speaker usually says …and then you just punch out to the public cloud to get x, y, or z resource or service. It always sounds so simple, right?

As far as the System z goes, SoftLayer was not actually integrated with the z in the February announcement, although DancingDinosaur expects it will be eventually if IBM is serious about enterprise cloud computing. For now, the z sits in the on-premise data center, a private cloud so to speak. It runs CICS and DB2 and all the systems it is known for and, especially, security. From there, however, it can connect to an application server, dedicated or virtual, on the SoftLayer Cloud Server to form a Hybrid System z-Enterprise Cloud. As presented at SHARE this past spring, the resulting Hybrid System z-Cloud Enterprise Architecture (slides 46-49) provides the best of both worlds, secure transactions combined with the dynamics of the cloud.

Direct Link itself consists of a physical, dedicated network connection from your data center, on-premise private cloud, office, or co-location facility to SoftLayer’s data centers and private network through one of the company’s 18 network Points of Presence (PoPs) around the world. These PoPs reside within facilities operated by SoftLayer partners including Equinix, Telx, Coresite, Terremark, Pacnet, InterXion and TelecityGroup, which provide access for SoftLayer customers, especially those with infrastructure co-located in the same facilities.

Direct Link, essentially an appliance, eliminates the need to traverse the public Internet to connect to the SoftLayer private network. Direct Link enables organizations to completely control access to their infrastructure and services, the speed of their connection to SoftLayer, and how data is routed. In the process, IBM promises:

  • Higher network performance consistency and predictability
  • Streamlined and accelerated workload and data migration
  • Improved data and operational security

If you are not co-located in any of the above facilities operated by one of SoftLayer’s PoP partners, then it appears you will have to set up an arrangement with one of them. SoftLayer promises to hold your hand and walk you through the setup process.

When you do have it set up, Direct Link pricing appears quite reasonable. Available immediately, Direct Link pricing starts at $147/month for a 1Gbps network connection and $997/month for a 10Gbps network connection.

According to Trevor Jones, writing for TechTarget, IBM’s pricing undercuts AWS’s slightly and Microsoft’s by far. Next month Microsoft, at a discounted rate for its comparable ExpressRoute service, will charge $600 per month for 1 Gbps and $10,000 per month for 10 Gbps. Amazon prices its Direct Connect service at $0.30 per hour for 1 Gbps and $2.25 per hour for 10 Gbps.
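Converting the per-hour providers to a monthly figure makes the comparison concrete. This is a sketch assuming a 720-hour month; actual billing periods and data-transfer charges vary:

```python
# Monthly cost comparison using the published figures cited above.
HOURS_PER_MONTH = 720  # assumption: 30 days x 24 hours

ibm_direct_link = {"1Gbps": 147, "10Gbps": 997}              # flat $/month
ms_express_route = {"1Gbps": 600, "10Gbps": 10_000}          # $/month, discounted
aws_direct_connect_hourly = {"1Gbps": 0.30, "10Gbps": 2.25}  # $/hour

aws_monthly = {tier: rate * HOURS_PER_MONTH
               for tier, rate in aws_direct_connect_hourly.items()}

for tier in ("1Gbps", "10Gbps"):
    print(f"{tier}: IBM ${ibm_direct_link[tier]}, "
          f"AWS ${aws_monthly[tier]:.0f}, Microsoft ${ms_express_route[tier]}")
# IBM undercuts AWS slightly ($147 vs ~$216 at 1 Gbps)
# and Microsoft by far ($997 vs $10,000 at 10 Gbps).
```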

Your System z or new Power server integrated with SoftLayer can provide a solid foundation for hybrid cloud nirvana. Just add Direct Link and make arrangements with public cloud resources and services. Presto, you have a hybrid cloud.

BTW, IBM Enterprise 2014 is coming in Oct. to Las Vegas. DancingDinosaur expects to hear a lot of the z and Power, SoftLayer, and hybrid clouds there.

DancingDinosaur is Alan Radding. Follow him on Twitter, @mainframeblog and at Technologywriter.com

The Future of IBM Lies in the Cloud

March 13, 2014

In her annual letter to stockholders, IBM CEO Virginia Rometty made it clear that the world is being forever altered by the explosion of digital data and by the advent of the cloud. So, she intends IBM to “remake the enterprise IT infrastructure for the era of cloud.” This is where she is leading IBM.

DancingDinosaur thinks she has it right. But where does that leave this blog, which was built on the System z, Power Systems, and IBM’s enterprise systems? Hmm.

Rometty has an answer for that, buried far down in her letter: “We are accelerating the move of our Systems product portfolio—in particular, Power and storage—to growth opportunities and to Linux, following the lead of our successful mainframe business.”

The rapidly emerging imperatives of big data, cloud computing, and mobile/social require enterprise-scale computing in terms of processing power, capacity, availability, security, and all the other -ilities that have long been the hallmark of the mainframe and IBM’s other enterprise-class systems. She goes so far as to emphasize the point: “Let me be clear—we are not exiting hardware. IBM will remain a leader in high-performance and high-end systems, storage and cognitive computing, and we will continue to invest in R&D for advanced semiconductor technology.”

You can bet that theme will be continued at the upcoming Edge 2014 conference May 19-23 in Las Vegas. The conference will include an Executive program, a Technical program with 550 expert technical sessions across 14 tracks, and a partner program. It’s being billed as an infrastructure innovation event and promises a big storage component too. Expect to see a lot of FlashSystems and XIV, which has a new pay-as-you-go pricing program that will make it easy to get into XIV and scale it fast as you need it. You’ll probably also encounter some other new go-to-market strategies for storage.

As far as getting to the cloud, IBM has been dropping billions to build out about as complete a cloud stack as you can get. SoftLayer, the key piece, was just the start. BlueMix, an implementation of IBM’s Open Cloud Architecture, leverages Cloud Foundry to enable developers to rapidly build, deploy, and manage their cloud applications while tapping a growing ecosystem of available services and runtime frameworks, many of which are open source. IBM will provide services and runtimes into the ecosystem based on its already extensive and rapidly expanding software portfolio. BlueMix is the IBM PaaS offering that complements SoftLayer, its IaaS offering. Cloudant, the most recent acquisition, brings database as a service (DBaaS) to the stack. And don’t forget IBM Wave for z/VM, which virtualizes and manages Linux VMs, a critical cloud operation for sure. With this conglomeration of capabilities IBM is poised to offer something cloud-like to just about any organization. Plus, tying WebSphere and its other middleware products to SoftLayer bolsters the cloud stack that much more.

And don’t think IBM is going to stop here. DancingDinosaur expects to see more acquisitions, particularly when it comes to hybrid clouds and what IBM calls systems of engagement. Hybrid clouds, for IBM, link systems of engagement—built on mobile and social technologies where consumers are engaging with organizations—with systems of record, the main workloads of the System z and Power Systems, where data and transactions are processed.

DancingDinosaur intends to be at Edge 2014 where it expects to see IBM detailing a lot of its new infrastructure and demonstrating how to use it. You can register for Edge 2014 here until April 20 and grab a discount.

Follow DancingDinosaur on Twitter: @mainframeblog

February 25, 2014

How the 50 Year-Old Mainframe Remains Relevant

The mainframe turns 50 years old this year, and the many pundits and experts who predicted it would be long gone by now must be scratching their heads. Yes, it is still around and has acquired over 260 new accounts just since the zEnterprise launch. It also has shipped over 320 hybrid computing units (not to be confused with zBX chassis only) since the zBX was introduced and kicked off hybrid mainframe computing.

As for MIPS, although IBM experienced a MIPS decline last quarter, it follows the largest MIPS shipment in mainframe history a year ago, resulting in a two-year compound growth rate of +11%. (Mainframe sales follow the new product release cycle in a predictable pattern.) IBM brought out the latest System z release, the zEC12, faster than the mainframe’s historic release cycle. Let’s hope IBM repeats the quick turnaround with the next release.

Here’s what IBM is doing to keep the mainframe relevant:

  • Delivered steady price/performance improvements with each release. With entry-level BC-class pricing and the System z Solution Edition programs you can end up with a mainframe system that is as competitive as, or better than, x86-based systems while being more secure and more reliable out of the box.
  • Adopted Linux early, before it had gained the widespread acceptance it has today. Last year over three-quarters of the top 100 enterprises had IFLs installed. This year IBM reports a 31% increase in IFL MIPS. In at least two cases where DancingDinosaur recently interviewed IT managers, Linux on z was instrumental in bringing their shops to the mainframe.
  • Supported SOA, Java, Web services, and cloud, mobile, and social computing, which continues to put the System z at the front of the hot trends. It also plays prominently in big data and analytics. Who ever thought that the mainframe would be interacting with RESTful APIs? Certainly not DancingDinosaur’s computer teacher back in the dark ages.
  • Continued delivery of unprecedented scalability, reliability, and security at a time when the volumes of transactions, data, workloads, and users are skyrocketing.  (IDC predicts millions of apps, billions of users, and trillions of things connected by 2020.)
  • Built a global System z ecosystem of tools and technologies to support cloud, mobile, big data/analytics, social and non-traditional mainframe workloads. This includes acquisitions like SoftLayer and CSL Wave to deliver IBM Wave for z/VM, a simplified and cost effective way to harness the consolidation capabilities of the IBM System z platform along with its ability to host the workloads of tens of thousands of commodity servers. The mainframe today can truly be a fully fledged cloud player.
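To make the RESTful API point above concrete, here is a minimal sketch of a distributed client calling a service that fronts a mainframe transaction. The hostname, port, and URL shape are hypothetical stand-ins, not any actual IBM interface; real shops would expose CICS or IMS assets as JSON over HTTP through a gateway layer:

```python
# Sketch: a distributed client consuming a REST endpoint that
# fronts a mainframe transaction. The base URL and path below are
# hypothetical illustrations only.
import json
import urllib.request

def balance_url(base_url, account_id):
    """Build the (hypothetical) endpoint URL for an account balance."""
    return f"{base_url}/accounts/{account_id}/balance"

def get_account_balance(base_url, account_id, timeout=10):
    """GET and decode the JSON payload from the mainframe-hosted endpoint."""
    with urllib.request.urlopen(balance_url(base_url, account_id),
                                timeout=timeout) as resp:
        return json.load(resp)

# Usage (requires a live endpoint, so it stays commented out):
# print(get_account_balance("https://zhost.example.com:9443/api", "12345"))
```

The point is simply that to the consuming application the mainframe looks like any other HTTP service.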

And that just touches on the mainframe platform advantages. While others boast of virtualization capabilities, the mainframe comes 100% virtualized out of the box with virtualization at every level.  It also comes with a no-fail redundant architecture and built-in networking. 

Hybrid computing is another aspect of the mainframe that organizations are just beginning to tap.  Today’s multi-platform compound workloads are inherently hybrid, and the System z can manage the entire multi-platform workload from a single console.

The mainframe anniversary celebration, called Mainframe50, officially kicks off in April, but a report from this week’s Pulse 2014 conference suggests that interest already is ramping up, with IBM jumping the gun by emphasizing how the z provides new ways to innovate while tackling challenges previously out of reach.

Pulse 2014, it turns out, offered 38 sessions on System z topics, of which 27 featured analysts or IBM clients. These sessions addressed key opportunities and challenges for today’s mainframe environments and the latest technology solutions for meeting them, including OMEGAMON, System Automation, NetView, GDPS, Workload Automation, Tivoli Asset Discovery for z/OS, and cloud.

One session featured analyst Phil Murphy, Vice President and Principal Analyst at Forrester Research, discussing the critical importance of a robust infrastructure in a mixed mainframe/distributed cloud environment—which is probably the future most DancingDinosaur readers face—and how it can help deliver on the promise of cloud value in real time.

Another featured mainframe analyst Dot Alexander from Wintergreen Research who looked at how mainframe shops view executing cloud workloads on System z. The session focused on the opportunities and challenges, private and hybrid cloud workload environments, and the impact of scalability, standards, and security.

But the big celebration is planned for April 8 in NYC. There IBM promises to make new announcements, launch new research projects, and generally focus on the mainframe’s future.  A highlight promises to be Showcase 20, which will focus on 20 breakthrough areas referred to by IBM as engines of progress.  The event promises to be a sellout; you should probably talk to your System z rep if you want to attend. And it won’t stop on April 8. IBM expects to continue the Mainframe50 drumbeat all year with new announcements, deliverables, and initiatives. Already in February alone IBM has made a slew of acquisitions and cloud announcements that will touch every mainframe shop with any cloud interests (which should be every mainframe shop at one point or another).

In coming weeks stay tuned to DancingDinosaur for more on Mainframe50. Also watch this space for details of the upcoming Edge 2014 conference, with an emphasis on infrastructure innovation coming to Las Vegas in May.

Please follow DancingDinosaur on Twitter, @mainframeblog

A Maturity Model for the New Mainframe Normal

February 3, 2014

Last week Compuware introduced its new mainframe maturity model designed to address what is emerging as the new mainframe normal. DancingDinosaur played a central role in the creation of this model.

A new mainframe maturity model is needed because the world of the mainframe is changing rapidly. Did your data center team ever think they would be processing mainframe transactions from mobile phones? Your development team probably never imagined they would be architecting compound workloads across the mainframe and multiple distributed systems running both Windows and Linux. What about the prospect of your mainframe serving up millions or even billions of customer-facing transactions a day? But that’s the mainframe story today.

Even IBM, the most stalwart of the mainframe vendors, repeats the driving trends—cloud, mobile, social, big data, analytics, Internet of things—like a mantra. As the mainframe celebrates its 50th anniversary year, it is fitting that a new maturity model be introduced because there is, indeed, a new mainframe normal rapidly evolving.

Things certainly are changing in ways most mainframe data center managers wouldn’t have anticipated 10 years ago, probably not even five years ago. Of those, perhaps the most disconcerting change for traditional mainframe shops is the need to accommodate distributed, open systems (systems of engagement) alongside the traditional mainframe environment (systems of record).

Since the rise of distributed systems two decades ago, there has existed both a technical and a cultural gap between the mainframe and distributed teams. The emergence of technologies like hybrid computing, middleware, and the cloud has gone far to alleviate the technical gap. The cultural gap is not so amenable to immediate fixes. Still, navigating that divide is no longer optional; it has become a business imperative. Crossing the gap is what the new maturity model addresses.

Many factors contribute to the gap, the largest of which appears to be that most organizations still approach the mainframe and distributed environments as separate worlds. One large financial company, for example, recently reported that to its distributed developers the mainframe is simply a source of MQ messages.

The new mainframe maturity model can be used as a guide to bridging both the technical and cultural gaps.  Specifically, the new model defines five levels of maturity. In the process, it incorporates distributed systems alongside the mainframe and recognizes the new workloads, processes and challenges that will be encountered. The five levels are:

  1. Ad-hoc:  The mainframe runs core systems and applications; these represent the traditional mainframe workloads and the green-screen approach to mainframe computing.
  2. Technology-centric:  An advanced mainframe is focused on ever-increasing volumes, higher capacity, and complex workload and transaction processing while keeping a close watch on MIPS consumption.
  3. Internal services-centric:  The focus shifts to mainframe-based services through a service delivery approach that strives to meet internal service level agreements (SLAs).
  4. External services-centric:  Mainframe and non-mainframe systems interoperate through a services approach that encompasses end-user expectations and tracks external SLAs.
  5. Business revenue-centric:  Business needs and the end-user experience are addressed through interoperability with cloud and mobile systems, services- and API-driven interactions, and real-time analytics to support revenue initiatives revolving around complex, multi-platform workloads.

Complicating things is the fact that most IT organizations will likely find themselves straddling different maturity levels. For example, although many have achieved levels 4 and 5 when it comes to technology, the IT culture remains at level 1 or 2. Such disconnects mean IT still faces many obstacles preventing it from reaching optimal levels of service delivery and cost management. And this doesn’t just impact IT; there can be ramifications for the business itself, such as decreased customer satisfaction and slower revenue growth.
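That straddling effect can be pictured as a simple per-dimension self-assessment. The sketch below borrows the five level names from the model; the assessment dimensions (technology, process, culture) and the gap logic are illustrative assumptions of this blogger, not part of Compuware's model:

```python
# Sketch: per-dimension maturity self-assessment against the
# five-level model. Level names come from the post; the dimensions
# and gap-finding logic are illustrative assumptions.

LEVELS = {
    1: "Ad-hoc",
    2: "Technology-centric",
    3: "Internal services-centric",
    4: "External services-centric",
    5: "Business revenue-centric",
}

def maturity_gaps(assessment):
    """Return the dimensions lagging behind the organization's best level."""
    best = max(assessment.values())
    return {dim: lvl for dim, lvl in assessment.items() if lvl < best}

# Example: the disconnect described above -- technology at level 4
# while the culture lags at level 2.
shop = {"technology": 4, "process": 3, "culture": 2}
for dim, lvl in maturity_gaps(shop).items():
    print(f"{dim} lags at level {lvl} ({LEVELS[lvl]})")
```

Even a crude scorecard like this makes the disconnects visible enough to prioritize.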

DancingDinosaur’s hope is that as the technical cultures come closer through technologies like Java, Linux, SOA, REST, hybrid computing, and mobile, organizations will begin to close the cultural gap too.

Follow DancingDinosaur on Twitter: @mainframeblog

The zEnterprise as a Hybrid Data Center

November 21, 2013

There is no doubt that the zEnterprise enables hybrid computing. Just attach a zBX to it and start plugging in Linux and x86 blades; presto, you’ve got hybrid computing.  You can manage this entire hybrid infrastructure via the Unified Resource Manager.

The zEnterprise also has a sister hybrid computing platform, IBM PureSystems. Here, too, you can add in System x and Linux or even Power and System i and do hybrid computing. You can also manage the hybrid environment through a single console, albeit a different console—the Flex System Manager—and manage this second IBM hybrid platform as a unified environment.  DancingDinosaur has noted the irony of IBM having two different, incompatible hybrid systems; IBM has reassured this blogger several times that it is trying to converge the two. Whenever it happens DancingDinosaur will be the first to report it.

The zEnterprise or even PureSystems as a hybrid computing platform, however, is not the same as a hybrid data center.  Apparently there is no definition of a hybrid data center despite all the talk about hybrid computing, hybrid clouds, and hybrid systems.  As best DancingDinosaur can piece it together, the hybrid data center is multiplatform like the zEnterprise, but it also is multi-location, often using co-location facilities or factory-built containerized data centers (IBM calls them Portable Modular Data Centers, PMDC). More often, however, hybrid data centers are associated with cloud computing as the third of the three flavors of cloud (private, public, hybrid).

Gartner recently described some architecture options for a hybrid data center. In one case you could have a zEnterprise acting as, say, a private cloud using a co-location facility as a DMZ between the private cloud and a public cloud like Amazon. Not sure, however, you would need the DMZ if your private cloud was running on the highly secure zEnterprise but Gartner included it. Go figure.

Hybrid showed up in numerous Enterprise 2013 sessions this past October. You can catch some video highlights from it here. The conference made frequent mention of hybrid in numerous sessions, some noted in previous DancingDinosaur posts, such as Exploring the World of zEnterprise Hybrid: How Does It Work and What’s the Point? The session introduced the Unified Resource Manager and described how it would allow an IT shop to manage a collection of one or more zEnterprise nodes including any optionally attached zBX cabinets as a single logical virtualized system through a Hardware Management Console (HMC). In short, it was about providing a single point of control through which data center personnel can deploy, configure, monitor, manage and maintain the integrated System z and zBX blades based on heterogeneous architectures in a unified manner. But it wasn’t talking about the hybrid enterprise data center described in the previous paragraph.

Similarly, Application Performance Management and Capacity Planning for the IBM zEnterprise Hybrid Workload focused on extending the Unified Resource Manager to goal-oriented performance management for both traditional System z and BladeCenter applications. It was about applying WLM, RMF, and Platform Performance Management to cross-platform hybrid applications. Again, this really wasn’t about the hybrid data center described above.

BTW, plans apparently already are underway for Enterprise 2014. Looks like it will be Oct. 6-10 at the Venetian in Las Vegas. It should be quite an event given that IBM will be celebrating the 50th anniversary of the mainframe in 2014.

And there is much more on z hybrid computing and hybrid clouds. The zEnterprise has its own page on cloud computing here, and last month the zEnterprise zBC12 won the CRN Tech Innovator Award for most innovative cloud solution. You can also click here to see how a dozen IBM customers used various IBM platforms to build hybrid clouds.

IBM has already used the zEnterprise to consolidate over 30,000 servers around the world for an 84% improvement in data center efficiency and a 50% reduction in power and cooling. This effectively freed $1 billion to spend on innovative new projects that drive business growth across the company. And IBM is about as hybrid a data center as you can find.

Follow DancingDinosaur on Twitter, @mainframeblog

