Posts Tagged ‘Fit for Purpose’

IBM Applies Fit-For-Purpose to Accelerating Hybrid Cloud Adoption

September 16, 2016

Almost every company is using the cloud, but not for everything, according to a recently announced study from IBM’s Institute for Business Value (IBV). Although 78 percent of the executives surveyed reported their cloud initiatives as coordinated or fully integrated, almost half of computing workloads are expected to remain on dedicated, on-premises servers. This forces IT managers to consider applying IBM’s old server fit-for-purpose methodology to the cloud era, something mainframe shops haven’t done since the arrival of hybrid mainframes in 2009-2010.


Given cloud evolution, note the study researchers, it is imperative that organizations determine and regularly re-assess which combination of traditional IT, public cloud, and private cloud best suits their needs. This sounds eerily similar to IBM’s fit-for-purpose mantra that it used to steer buyers between the company’s various server offerings: enterprise mainframes, midrange machines, and entry-level PC servers. Only in today’s cloud era, the choices are on-premises private clouds, off-premises public clouds, and hybrid clouds.

Picking up the fit-for-purpose mantra, IBM’s Marie Wieck said:  “Enterprises are moving to the cloud, especially hybrid cloud, faster than anyone expected to support their digital transformation, drive business model innovation and fuel growth.” She concludes by rephrasing the old server fit-for-purpose pitch for the cloud era: “Successful clients have integrated plans to place workloads where they fit best, and IBM’s hybrid cloud strategy provides this on-ramp for flexibility and growth.”

DancingDinosaur has no problem with the fit-for-purpose approach.  It has proven immeasurably useful to me over the years when pressed to explain differences between IBM platforms. That the company is starting to apply it to various ways of deploying cloud computing seems both fitting and ironic. It was the emergence of the Internet in the first place as a forerunner to the cloud that saved IBM from the incompatibilities of its myriad platforms. This was after IBM’s internal effort to make its systems at least interoperate in some rudimentary way, no matter how kludgy, failed. This happened under an initiative called Systems Application Architecture (SAA), which started in the late 1980s and was gone and forgotten by 2000. Vestiges of SAA actually may still linger in some systems. Read about it here.

It should be no surprise that the cloud has emerged as a strategic imperative for IBM. In addition to being where the revenue is heading, it is doing good things for its customers. The latest study finds that the top reasons executives cite for adopting hybrid cloud solutions are: lowering total cost of ownership (54 percent), facilitating innovation (42 percent), enhancing operational efficiencies (42 percent), and enabling them to more readily meet customer expectations (40 percent).

Furthermore, the researchers found that organizations are steadily increasing their use of cloud technologies to address wide-ranging requirements. Specifically companies reported undertaking cloud initiatives that enabled them to expand into new industries (76 percent), create new revenue sources (71 percent), and create and support new business models (69 percent).

Given the widely varying starting points from which any company might begin its cloud journey, the fit-for-purpose approach makes even more sense. As the researchers noted: The particular needs and business conditions of each enterprise help define its optimal hybrid solution: most often, a blend of public cloud, private cloud, and traditional IT services. Finding the right cloud technology mix or deployment approach starts with deciding what to move to the cloud and addressing the challenges affecting migration. In the study, executives achieved the strongest results by integrating cloud initiatives company-wide, and by tapping external resources for access to reliable skills and greater efficiency.

Cloud computing undeniably is hot. According to IDC, worldwide spending on public cloud services is expected to grow from $96.5 billion this year to more than $195 billion in 2020. Even as cloud adoption matures and expands, organizations surveyed expect that about 45 percent of their workloads will continue to need on-premises, dedicated servers – nearly the same percentage as both today and two years ago. Clearly organizations are reluctant to give up the control and security they retain by keeping certain systems on-premises, behind their own firewalls.
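As a rough check on those IDC figures, the implied compound annual growth rate can be worked out directly. This is a back-of-the-envelope sketch: the $96.5 billion and $195 billion endpoints come from the study cited above, while treating the span as four years (2016 to 2020) is an assumption on my part.

```python
# Implied compound annual growth rate (CAGR) of public cloud spending,
# using IDC's figures cited above: $96.5B in 2016 to $195B in 2020.
# The four-year span is an assumption; IDC may define the window differently.
start, end, years = 96.5, 195.0, 4

cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 19% per year
```

That annual growth rate, sustained for four years, slightly more than doubles the market, consistent with the headline numbers.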

Hybrid cloud solutions promise a way to move forward incrementally. Hybrid clouds by definition include a tailored mix of on-premises and public cloud services intended to work in unison and are expected to be widely useful across industries. Each organization’s unique business conditions and requirements will define its optimal hybrid technology landscape. Each organization’s managers will have to balance cost, risk, and performance considerations with customer demands for innovation, change, and price along with its own skills and readiness for change. Sounds like fit-for-purpose all over again.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.


zEnterprise and the Private Cloud

May 20, 2011

At a dinner following an IBM Systems & Technology Group (STG) analyst briefing a few weeks ago, IBM Senior Vice President Rod Adkins responded to a question from DancingDinosaur about IBM’s Tuned for the Task strategy, which supplants Fit for Purpose. Tuned for the Task, he suggested, better addresses the new workloads and new challenges of Smarter Computing.

As Adkins noted, with IBM’s rich capabilities in systems, middleware, and analytics, enterprises can build a roadmap for Smarter Computing infrastructures that are tuned to the task, designed for data, and managed in the cloud. More often than not that cloud will be a private cloud.

At the STG analyst briefing itself private clouds drew considerable attention with one study noting that 60% of enterprises planned to implement private clouds. In his session, Andy Wachs, IBM System Software Manager, gave a presentation here laying out a simple progression for any company’s journey to a private cloud. It starts with server, storage, and network virtualization. To achieve the efficiency and flexibility inherent in a private cloud those IT resources must be virtualized. Without that you can’t move forward.

Wachs focused primarily on IBM’s Power platform, but the System z, particularly the zEnterprise (z196 and zBX), are ideal for private clouds. As IBM puts it, the mainframe’s leading virtualization capabilities makes it an obvious choice for cloud computing workloads, especially with the ability to rapidly provision thousands of virtual Linux servers and share resources across the entire system. One zEnterprise can encompass the capacity of an entire multi-platform data center in a single system.

In his presentation, Wachs makes the point that server/storage/network virtualization is the starting point of any private cloud. Without being fully virtualized, you go nowhere with the cloud. Well, the System z is fully virtualized from the start, and its share-all design principle for system components enables component reductions of as much as 90% for massive simplification, which significantly reduces TCO. Add to that the security and reliability of the z, which make it a secure platform for multi-tenant business workloads from the application layer to the data source and all points between.

In the end, the z is a trusted repository both for highly secure information and as an open platform supporting anything from Web 2.0 agile development environments with REST interfaces to enterprise middleware. It also can deliver the variety of management capabilities—provisioning, monitoring, workflow orchestration, tracking/metering resource usage—Wachs identifies as essential. More importantly, all this management must be automated. Here again, the z already has the right tools in Systems Director and the Tivoli suite of management tools along with the Unified Resource Manager for the hardware.

Properly managed and automated private clouds can enable efficient self-service, on-demand IT provisioning. Requested resources, ideally, can be selected from a catalog with the click of a browser and, after automatic governance review, materialize in the private cloud properly configured and ready for use within hours if not minutes.
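The self-service flow just described—pick a resource from a catalog, pass an automated governance review, then have it materialize already configured—can be sketched in a few lines. This is purely illustrative: the catalog entries, the quota rule, and the function names are hypothetical stand-ins, not any IBM provisioning API.

```python
# Illustrative sketch of self-service private cloud provisioning:
# catalog selection -> automated governance review -> provision.
# Catalog entries, quota, and names are hypothetical examples.
CATALOG = {
    "linux-small":  {"vcpus": 2,  "memory_gb": 4},
    "linux-large":  {"vcpus": 16, "memory_gb": 64},
    "zlinux-guest": {"vcpus": 4,  "memory_gb": 8},
}

VCPU_QUOTA = 32  # per-requester cap enforced by the governance step

def governance_review(request, vcpus_in_use):
    """Automated policy check run before any resources materialize."""
    spec = CATALOG[request]
    return vcpus_in_use + spec["vcpus"] <= VCPU_QUOTA

def provision(request, vcpus_in_use=0):
    if not governance_review(request, vcpus_in_use):
        return None  # request denied; nothing is provisioned
    # In a real private cloud this step would call the virtualization layer.
    return dict(CATALOG[request], name=request, state="running")

print(provision("zlinux-guest"))
print(provision("linux-large", vcpus_in_use=20))  # over quota: denied
```

The point of the sketch is the ordering: governance runs automatically between the catalog click and the actual provisioning, which is what makes minutes-not-weeks turnaround safe.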

In a recent report, IDC observes that private clouds present an opportunity to accelerate the shift to this kind of more automated, self-service form of computing. It not only enables organizations to reduce costs and boost IT utilization but to better match the IT resources provisioning process with the speed at which businesses need to move these days. Click here and scroll down to access the IDC report.

When Adkins talks about Smarter Computing, he’s not talking only about System z and zEnterprise private clouds. That is, however, a good place to start with Smarter Computing, and when lower cost zEnterprise machines roll out later this year even more organizations will be able to join this party.

IBM zEnterprise 196 changes fit-for-purpose

October 6, 2010

In June 2009, an IBM executive delivered a System z presentation that provided a simple prescription for fit-for-purpose decisions. The new hybrid zEnterprise 196, a mainframe that can incorporate x86 and POWER7 blades running Linux and AIX along with the usual System z assist processors (zAAP, zIIP, and IFL) under unified virtual platform management (the Unified Resource Manager), complicates the fit-for-purpose calculation considerably.

Fit-for-purpose refers to the decision process one goes through in determining which workloads should run on which platforms. In June 2009, the choices as presented were pretty straightforward. You chose mainframes and high-end UNIX servers for high volume transaction processing and big databases. UNIX platforms also were the recommended choice for business critical production applications, analytics, and high performance computing. Windows servers were left to handle web applications, web infrastructure, collaboration, and, by the same thinking, workgroup productivity applications.

In IBM’s new fit-for-purpose scheme the four basic platform choice categories essentially remain but the application workloads are labeled Type 1, 2, 3, 4.

Type 1—mixed workloads updating shared data or queues

Type 2—highly threaded applications

Type 3—parallel data structures with analytics

Type 4—small discrete applications
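One way to read the four types above is as keys into a platform-recommendation table, in the spirit of the 2009 chart described earlier. A minimal sketch follows; the type-to-platform mapping paraphrases the fit-for-purpose guidance discussed in this post, and the table itself is my illustration, not an official IBM artifact.

```python
# Sketch: IBM's four fit-for-purpose workload types mapped to the kind
# of platform each was generally associated with. The mapping paraphrases
# the guidance described in the post; it is illustrative, not official.
WORKLOAD_TYPES = {
    1: ("mixed workloads updating shared data or queues", "scale-up / mainframe"),
    2: ("highly threaded applications", "large SMP UNIX servers"),
    3: ("parallel data structures with analytics", "scale-out clusters"),
    4: ("small discrete applications", "x86 servers"),
}

def recommend(workload_type):
    description, platform = WORKLOAD_TYPES[workload_type]
    return f"Type {workload_type} ({description}) -> {platform}"

for t in WORKLOAD_TYPES:
    print(recommend(t))
```

The trouble the post goes on to describe is that the zEnterprise 196 blurs exactly this table: a single machine now spans several of the platform columns at once.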

As a hybrid machine—one able to cross the different IBM platforms—the zEnterprise 196 complicates the decision of where to put workloads because some workloads can be run on different platforms and in different ways within the zEnterprise. For example, you can run Linux using an IFL assist processor with or without z/VM, or you can run Linux on a POWER7 blade or x86 blade within the zEnterprise extension cabinet, the zBX.

Similarly, you can run Cognos on Linux using any of those options. The same goes for WebSphere. In the usual fit-for-purpose decision you consider data volumes and data proximity, the number of users, the number of transactions, usage patterns, and service and security levels.

But now you especially need to look at price and performance. At this point, IBM has been coy about the price of the various zEnterprise components. It also has been silent about the real-world performance of the different components. In terms of performance, are organizations to assume that Cognos running on Linux on z with data stored in DB2 on the same z will deliver the best performance?

What happens when you consider price/performance rather than performance alone? Will Cognos running on Linux on an x86 blade in the zBX extension deliver slightly less yet still good enough performance but at a considerably lower cost? Who knows? Until IBM provides more information about both pricing and relative platform performance there is no way to evaluate the tradeoffs involved in each fit-for-purpose choice. Same with WebSphere or any other workload. (Further complicating the Cognos decision is the Smart Analytics Optimizer, a special blade for the zBX.)
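The price/performance question can at least be framed numerically even before the real numbers exist: divide measured throughput by total cost and compare work per dollar across deployment options. A sketch follows with invented placeholder figures; since IBM had published neither component pricing nor benchmarks at the time, every number below is hypothetical.

```python
# Framing the price/performance tradeoff discussed above. All figures
# are placeholders -- IBM had published neither component pricing nor
# real-world benchmarks for these options when this was written.
options = {
    "Cognos on Linux on z (IFL)":    {"cost": 100_000, "throughput": 1200},
    "Cognos on Linux on x86 in zBX": {"cost": 40_000,  "throughput": 900},
}

for name, o in options.items():
    ratio = o["throughput"] / o["cost"]  # work per dollar: higher is better
    print(f"{name}: {ratio * 1000:.1f} units per $1000")
```

With these made-up numbers the x86 blade wins on work per dollar despite lower raw throughput, which is exactly the "good enough at lower cost" scenario the post speculates about; real figures could easily flip the result.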

Since some of the pieces aren’t actually shipping yet, IBM may not yet have settled on specific pricing. As for real world performance, we will have to wait for companies to begin deploying the new zEnterprise 196 in production situations that generate meaningful performance metrics. Only then can organizations make good fit-for-purpose decisions with the new machine. As has been noted here before, the zEnterprise is a work in progress. Stay tuned.

The future of IBM System z and mainframes

January 5, 2010

What is the future of the IBM System z? That turned out to be the inadvertent topic of a long LinkedIn Mainframe Experts discussion thread—50 comments and still growing—that began by talking about the fight shaping up between IBM and NEON over zPrime and then wandered back and forth between that initial topic and discussions of old IBM platforms and the future of the System z.

It is coming on 10 years since IBM renamed the System/390 as the zSeries. There has been buzz on the Internet for the past year about what the next mainframe will look like. Software Strategies references an upcoming z11 several times in a CPU-focused comparison of HP servers with the System z. IT Knowledge Exchange posted a blog item in November about IBM ending z9 marketing in June and referenced a z11 coming a few months later. Others talk about the z11 offering a water cooled option. There is, however, nothing official to be found on IBM’s website even remotely confirming any of this, except the end of marketing the z9.

Still, IBM does need something to boost the System z offering in the face of the increased performance and low price of the newest x86-based systems. Short of rolling out a new mainframe architecture powered by a hot new chip IBM appears to be starting 2010 pursuing two mainframe strategies: 1) slash the cost of mainframe acquisition through System z Solution Editions and 2) adopt a more accommodating product marketing approach called Fit for Purpose.

The IBM System z Solution Editions certainly are welcome. They lower the cost of acquiring a System z by bundling the necessary hardware, software, and middleware to do real business work into a reduced price package with several years of maintenance included. IBM offers a handful of Solution Editions for BI, CRM, cloud, data warehousing, and more. Additional Solution Editions will likely be forthcoming. If you can qualify, this gets you a System z on the cheap.

The Fit for Purpose strategy addresses situations where the mainframe may not be right for a given situation. Rather it guides organizations to choose the (IBM) platform—mainframe, Unix, blade, x86—best suited to the workload, situation, skills, and budget.

Linux on z greatly expands the workloads that can be run on the z, including Web apps, BI, and even, in theory anyway, Microsoft .NET apps (via SUSE Linux on z with Mono). OpenSolaris on z can expand the z’s workload universe even more. Mantissa’s v/OS promises to put Windows apps on the z eventually.

In a Fit for Purpose briefing early in the summer, however, IBM emphasized the mainframe for large scale transaction processing and databases. Other workload categories, such as business applications, analytics, and the Web, it steered to UNIX/AIX and Windows. Huh?

Granted, IBM sells x86 and POWER-based platforms too, but this seems to limit the z unnecessarily. I’ve already profiled organizations that are using their System z to support social networking, BI, Web applications, and mobile computing. Each has built a strong business case around it. Not every workload is suitable for the z, but surely when IBMers talk about Fit for Purpose they can look beyond large scale transaction processing. The mainframe in the future will do a lot more than that. It already does.

