Posts Tagged ‘IBM’

IBM Enterprise2014 to Drive Advanced Mainframe Capabilities

August 27, 2014

The summer is winding down, and IBM Enterprise2014 (October 6-10, 2014 at the Venetian in Las Vegas) will be here in a little over a month. It combines the IBM System z Technical University and the IBM Power Systems Technical University at one North American location. The advanced capabilities featured at Enterprise2014 include cloud, big data, and much more. This post samples the z-oriented cloud and big data sessions; subsequent posts will look at POWER and other topics.

The event also will include announcing the winner of the Mainframe Mobile App Throwdown, details here. Mobile is hot and poised to drive a lot of activity through the mainframe. The next generation of mobile apps will need to integrate with core applications running on the mainframe; DancingDinosaur readers know how to do that. Top prize for the Throwdown is an iPad, a pass to the IBM Enterprise2014 conference in Las Vegas, and even a week with IBM experts to help turn the app from concept to reality. DancingDinosaur will publicize the winners here. But the competition closes Sept. 17, so sign up soon.

For Mainframe Mobile App Throwdown ideas, check out the session details at Enterprise2014. For example, Taking Analytics Mobile with DB2 Web Query and More! by Doug Mack digs into the mobile features added to DB2 Web Query. He discusses how to sync a mobile device with your favorite dashboards, or use the mobile app to organize and access reports offline. Attendees will also learn how to leverage REST-based web services and application extensions to customize the user interface for reporting functions or schedule reports to run in the background.

Now, let’s look at a sampling of the cloud and big data sessions.

How Companies Are Using IBM System z for Cloud: Fehmina Merchant describes how organizations are building secure and robust private clouds on System z to deliver their critical IT services with agility and at lower costs. The session will examine the unique capabilities of zEnterprise as a platform for private cloud computing, in effect providing the ultimate in virtualization, security, scalability, and reliability. It also will cover how the newest IBM SmartCloud technologies can automate and optimize the deployment and management of services in the cloud. In addition, the session will offer specific real-life examples and use cases to illustrate how a private cloud built on zEnterprise and SmartCloud provides flexible IT service delivery at the lowest cost. The session will end with a live demonstration of the latest IBM SmartCloud tools.

Should mainframe shops even care about cloud computing? That’s a question DancingDinosaur gets asked frequently. Glenn Anderson answers it in zEnterprise—Cutting Through the Hype: Straight Talk About the Mainframe and Cloud Computing. In this session he promises to explain why the cloud is relevant to a System z enterprise and helps z data center managers cut through the marketing hype.

For zLinux there is The Elephant on the Mainframe—Using Hadoop to Analyze IBM System z Data by Christopher Spaight. He describes the zEnterprise portfolio as including a rich set of options for the analysis of structured, relational data. But what, he asks, if the business needs to analyze data that is unstructured or semi-structured or a mix of relational and non-relational records? Many are looking to Hadoop in these situations. This session lays out the mainframer’s options for using Hadoop both on and off platform, and walks through several use cases for when it makes sense to use Hadoop. BTW, Hadoop on z is called zDoop.

Finally, HDFS, Hive and All That Big Data “Stuff” for IBM System z by Karen Durward looks at how the System z participates in the world of HDFS, Hive, and the rest of the big data stack. This session focuses not only on why z/OS data should be integrated into a big data environment but also on the various ways to do it. She will describe the latest on z/OS data integration with big data, Linux on System z as a big data platform, and more.

Then, when you have absorbed all the technology you can, enjoy three evenings of live performances: two country-rock groups, Delta Rae and The Wild Feathers, and then Rock of Ages. Check ‘em out here.

Alan Radding is DancingDinosaur. Look for me at Enterprise2014. You can follow this blog and more on Twitter, @mainframeblog. Find Alan Radding on Technologywriter.com.

IBM POWER8 CAPI for Efficient Top Performance

August 21, 2014

IBM’s Power Systems POWER8 Coherent Accelerator Processor Interface (CAPI) is not for every IT shop running Power Systems. However, for those that aim to attach devices to their POWER8 systems over the PCIe interface and want fast, efficient performance, CAPI will be unbeatable. Steve Fields, IBM Distinguished Engineer and Director of Power Systems Design, introduces it here. Some of it gets pretty geeky, but slides #12-17 make the key points.

DancingDinosaur first covered CAPI here, in April, shortly after its introduction. At that point it looked like CAPI would be a game changer, and nothing since suggests otherwise. As we described it then, CAPI sits directly on the POWER8 board and works with the same memory addresses that the processor uses; pointers de-reference the same as in the host application. CAPI, in effect, removes OS and device driver overhead by presenting an efficient, robust, durable and, most importantly, direct interface. In the process, it offloads complexity.

In short, CAPI provides:

  • An SMP coherence protocol transported over the PCI Express interface
  • Isolation and filtering through a support unit in the processor (the “CAPP”)
  • Caching and address translation through the standard POWER Service Layer in the accelerator device
  • Accelerator functional units that operate as part of the application at the user (direct) level, just like a CPU

What you end up with is a coherently connected accelerator for just a fraction of the development effort otherwise required. As such, CAPI enables more efficient accelerator development. It can reduce the typical seven-step I/O model flow (1-Device Driver Call, 2-Copy or Pin Source Data, 3-MMIO Notify Accelerator, 4-Acceleration, 5-Poll/Int Completion, 6-Copy or Unpin Result Data, 7-Return From Device Driver Completion) to just three steps (1-shared memory/notify accelerator, 2-acceleration, and 3-shared memory completion). The result is an easier, more natural programming model with traditional thread-level programming and no need to restructure the application to accommodate long-latency I/O. Finally, it enables apps otherwise not possible, such as those requiring pointer chasing (e.g., Java garbage collection).
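
To make the three-step model concrete, here is a rough analogy sketched in Python. This is not the CAPI programming interface itself; an ordinary shared buffer and a worker thread stand in for the coherently attached accelerator, and every name is illustrative.

    # Analogy only: CAPI's three-step flow (shared memory/notify,
    # accelerate, shared memory completion) modeled with a plain
    # shared buffer and a thread standing in for the FPGA accelerator.
    import threading

    def accelerator(buf, start, done):
        start.wait()                           # sees the notify
        buf[:] = bytes(b ^ 0xFF for b in buf)  # 2. acceleration, in place on shared data
        done.set()                             # 3. completion signaled through shared state

    buf = bytearray(b"payload")                # one shared address space; nothing copied or pinned
    start, done = threading.Event(), threading.Event()
    threading.Thread(target=accelerator, args=(buf, start, done)).start()

    start.set()   # 1. shared memory / notify accelerator
    done.wait()   # 3. shared memory completion -- no device-driver call,
                  #    no copy/pin, no MMIO in the application's path
    print(buf)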

Other advantages include an open ecosystem for accelerators built using Field Programmable Gate Arrays (FPGA). The number and size of FPGAs can be based on application requirements, and FPGAs can attach to other components, such as private DRAM, flash memory, or a high-speed network.

Driving the need for CAPI is the insatiable demand for performance.  For that, acceleration is required, which is complicated and resource-intensive to build. So IBM created CAPI, not just for pure compute but for any network-attached or storage-attached I/O. In the end it eliminates the overhead of the I/O subsystem, allowing the focus to be on the workload.

In one example IBM reported it was able to attach an IBM Flash appliance to POWER8 via the CAPI interface. As a result it could generate Read/Write commands from applications and eliminate 97% of code path length, a savings of 20-30 cores per 1M IOPS. In another test IBM reported being able to leverage CAPI to integrate flash into a server; the memory-like semantics allowed the flash to replace DRAM for many in-memory workloads. The result: 5x cost savings plus large density and energy improvements. Furthermore, by eliminating the I/O subsystem overhead from high IOPS flash access, it freed the CPU to focus on the application workload.

Finally, in a Monte Carlo simulation of 1 million iterations, a POWER8 core with FPGA and CAPI ran a full execution of the Heston pricing model for a single security 250x faster than the POWER8 core alone. It also proved easier to code, reducing the lines of C code required by 40x compared to a non-CAPI FPGA.
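
To give a feel for that workload class, here is a minimal Monte Carlo pricing sketch in Python. It uses plain geometric Brownian motion rather than the full Heston stochastic-volatility model IBM benchmarked, but the shape of the job, a million independent simulated paths reduced to a single price, is the same, and it is exactly the kind of loop an FPGA absorbs well.

    # Minimal Monte Carlo option pricer -- plain GBM, not the Heston
    # model from IBM's benchmark, but the same 1-million-path structure.
    import math, random

    def mc_call_price(s0, strike, rate, vol, t, n_paths=1_000_000):
        drift = (rate - 0.5 * vol * vol) * t
        disc = math.exp(-rate * t)
        total = 0.0
        for _ in range(n_paths):               # the loop an accelerator would take over
            st = s0 * math.exp(drift + vol * math.sqrt(t) * random.gauss(0, 1))
            total += max(st - strike, 0.0)     # European call payoff
        return disc * total / n_paths

    print(round(mc_call_price(100, 105, 0.02, 0.3, 1.0), 2))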

IBM is just getting started with CAPI. Coming up next will be CAPI working with Linux, mainly for use with analytics. Once Linux comes into the picture, expect more PCIe card vendors to deliver products that leverage CAPI. AIX too comes into the picture down the road.

Plan to attend IBM Enterprise2014 in Las Vegas, Oct. 6-10. Here is one intriguing CAPI presentation on the agenda: Light up performance of your LAMP apps with a stack optimized for Power, by Alise Spence, Andi Gutmans, and Antonio Rosales. It will discuss how to leverage CAPI with POWER8 to create what they call a “killer stack” that brings together continuous delivery with exceptional performance at a competitive price. Other CAPI sessions also are in the works for Enterprise2014.

DancingDinosaur (Alan Radding) definitely is attending IBM Enterprise2014. You can follow DancingDinosaur on Twitter, @mainframeblog, or check out Technologywriter.com. Upcoming posts will look more closely at Enterprise2014 and explore some session content.

IBM Mainframe Tweet-Up with Ross Mauri Generates Action

August 15, 2014

DancingDinosaur has participated in numerous Mainframe Tweet-Ups before, most recently at Enterprise2013 and Edge2014. The Tweet-Up last Tuesday (8/12) might have been the biggest yet, generating numerous questions and responses (over 120 in one hour by DancingDinosaur’s count) on a range of topics including Linux on the mainframe, mobile on the mainframe, and more.

A Tweet-Up is a Twitter event where a panel of experts responds to questions from an audience and interactive discussions revolve around the questions. Think of the Mainframe Tweet-Up as a very mini IBM Enterprise2014. But instead of one expert panel and 100+ participants, Enterprise2014 will offer over 600 expert sessions, an army of IBM experts to present and respond to questions, and over 50 case studies where you can talk directly to the users and get the real nitty-gritty.

The central attraction of Tuesday’s Mainframe Tweet-Up was Ross Mauri, General Manager of the IBM System z business.  Mauri is a veteran of enterprise servers and systems, having previously held a similar position with Power Systems. Of course he is a strong proponent of the mainframe, but he also is a big advocate for mobile on the System z.

In a recent post Mauri notes that enterprise mobility will be a $30 billion market next year, with twice as many corporate employees using their own mobile devices as do today. According to Gartner, by 2017, 25% of all enterprises will have a mobile app store. Check out Mauri’s post, Mobility made possible with the mainframe, here.

Mauri really sees the System z as an essential platform for mobile: “Given IBM System z’s unprecedented enterprise scale, availability, cloud, analytics, and mobile capabilities, we (the IBM mainframe team) are poised to deliver value to clients’ enterprise mobility needs. The marketplace demands mobile capabilities and has for years because their customers demand it of them.  Across industries, consumers mandate immediate, any time access to their accounts and information.  Consider what’s possible when IBM System z delivers enterprise mobility to these institutions,” he wrote.

Africa stands to gain the most from the mobile mainframe, especially when it comes to banking, Mauri continued. Nearly 80% of Africa’s population, 326 million people, is unbanked, denying them the ability to get education and business loans or support their families. First National Bank (FNB) and the mainframe are changing that. Using System z mobile bank-in-a-box solutions, FNB brings secure banking to customers in ways they’re familiar with, to the tune of 234 million monthly mobile banking transactions. IBM’s System z bank-in-a-box solutions eliminate the need for FNB’s customers to rely on couriers. Families have their funds in seconds instead of days and save sizable courier fees. For the people who now use this solution, their lives have been changed forever.

DancingDinosaur has been on top of the mobile mainframe since IBM first began talking about it in the spring of 2010, and most recently here and here. The mainframe, especially with the new discounted z/OS pricing, makes an ideal cost-efficient platform for mobile computing. The z is a particularly good choice since much of the processing resulting from mobile activity will be handled right on the z, probably even the same z.

Mobile certainly was a top topic in the Mainframe Tweet-Up. One discussion addressed whether mobile would increase mainframe workloads or just shift them to different devices. Instead of using an ATM to check your balance, for example, you would use the bank’s mobile app. The responses varied: everyone agreed that mobile would increase transaction volume overall, but the transactions would follow a different cycle, a predominantly read cycle. If you have an opinion, feel welcome to weigh in with a comment.

Another discussion focused on mainframe simplification and looked at z/OSMF and CICS Explorer as two simplification/GUI tools, along with z/OS Health Checker, RTD, and PFA. A different discussion turned to APIs and the z, concluding that the z has the APIs to work effectively with SoftLayer and also connect with APIM. Another participant added that the z works with RESTful APIs. And not surprisingly, there was an active discussion on Linux on z. The expert panelists and participants overall kept things very lively.

The Mainframe Tweet-Up was a small taste of what is coming at IBM Enterprise2014, Oct. 6-10 at the Venetian in Las Vegas. Register now; last year’s event sold out. IBM is expecting over 3000 attendees. DancingDinosaur certainly will be there.

DancingDinosaur is Alan Radding. Follow DancingDinosaur on Twitter, @mainframeblog. You also can find him at Technologywriter.com.

Put the Mainframe at the Heart of the Internet of Things

August 4, 2014

Does the Internet of things (IoT) sound familiar? It should. Remember massive networks of ATMs connecting back to the mainframe?

The mainframe is poised to take on the IoT challenge writes Advanced Software Products Group, Inc. (ASPG), a company specializing in mainframe software, in an interesting document called the Future of the Mainframe.  Part of that future is the IoT, which IBM refers to in a Redbook Point of View as the Interconnecting of Everything.

In that Redbook the IoT is defined as a network of Internet-enabled, real-world objects (things) ranging from nanotechnology objects to consumer electronics, home appliances, sensors of all kinds, embedded systems, and personal mobile devices. The IoT also will encompass enabling network and communication technologies, such as IPv6 (for its vastly expanded address capacity), web services, RFID, and 4G networks.

The IoT Redbook cites industry predictions of upwards of 50 billion connected devices by 2020, a number 10x that of all current Internet hosts, including connected mobile phones. Based on that the Redbook authors note two primary IoT scalability issues:

  1. The sheer number of connected devices, mainly the number of concurrent connections (throughput) a system can support and the quality of service (QoS) that can be delivered. Here, the authors note, Internet scalability is a critical factor. Currently, most Internet-connected devices use IPv4, which is based on a 32-bit addressing scheme. Clearly, the industry has to speed the transition to IPv6, which implements a 128-bit addressing scheme that can support up to 2^128 addresses, roughly 3.4 x 10^38, although some tweaking of the IPv6 standard is being proposed for IoT. (See the quick arithmetic after this list.)
  2. The volume of generated data and the performance issues associated with data collection, processing, storage, query, and display. IoT systems need to handle both device and data scalability issues. From a data standpoint, this is big data on steroids.
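
The address arithmetic behind those numbers is easy to verify:

    # Quick check of the IPv4 vs. IPv6 address math cited above.
    ipv4 = 2 ** 32                   # ~4.3 billion addresses, nearly exhausted
    ipv6 = 2 ** 128                  # ~3.4 x 10^38 addresses
    devices_2020 = 50_000_000_000    # the predicted 50 billion connected devices

    print(f"IPv4 addresses: {ipv4:.2e}")    # 4.29e+09
    print(f"IPv6 addresses: {ipv6:.2e}")    # 3.40e+38
    print(f"IPv6 addresses per predicted device: {ipv6 / devices_2020:.1e}")  # 6.8e+27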

As ASPG noted in its paper cited above, the mainframe is well suited to provide a central platform for IoT. The zEnterprise has the power to connect large dispersed networks, capture and process the mountains of data produced every minute, and provide the security and privacy companies and individuals demand. In addition, it can accept, process, and interpret all that data in a useful way. In short, it may be the only general commercial computing platform powerful enough today to crunch vast quantities of data very quickly and is already proven to perform millions of transactions per second and do it securely.

Even with a top-end zEC12 configured to the hilt and proven to handle maximum transactions per second, you are not quite ready to handle the IoT as it is currently being envisioned. This IoT vision is much more heterogeneous in all dimensions than the massive reservation, POS, or ATM networks the mainframe has proven itself with.

At least one major piece is still needed: an industry-wide standard that defines how the various devices capture myriad information for a diverse set of applications involving numerous vendors, and that ensures everything can communicate and exchange information in a meaningful way. Not surprisingly, the industry already is working on it.

Actually, maybe too many groups are working on it. The IEEE points to a number of standards, projects, and activities it is involved with that address the creation of what it considers a vibrant IoT. The Open Interconnect Consortium, consisting of a slew of tech-industry heavyweights like Intel, Broadcom, and Samsung, hopes to develop standards and certification for devices involved in the IoT. Another group, the AllSeen Alliance, is promoting an open standard called AllJoyn with the goal of enabling ubiquitously connected devices. Even Google is getting into the act by opening up its Nest acquisition so developers can connect various home devices (thermostats, security alarm controllers, garage door openers, and such) via a home IoT.

This will likely shake out the way IT standards usually do with several competing groups fighting it out. Probably too early to start placing bets. But you can be sure IBM will be right there. The company already has put an IoT stake in the ground here (as if the z wasn’t enough).  Whatever eventually shakes out, System z shops should be right in the middle of the IoT action.

Expect this to be a subject of discussion at the upcoming IBM Enterprise2014 conference, Oct. 6-10 in Las Vegas. Your blogger expects to be there. DancingDinosaur is Alan Radding. Follow him on Twitter, @mainframeblog, or at Technologywriter.com.

IBM-Apple Deal Enhanced by New z/OS Pricing Discounts

July 25, 2014

In the spring, IBM announced, almost as an aside, new pricing discounts for z/OS mobile transactions. At the time, it didn’t seem like a big deal. But IBM’s more recent announcement of its exclusive mobile partnership with Apple, covered by DancingDinosaur here, suddenly gives it much bigger potential.

The plan is to create apps that can transform specific aspects of how businesses and employees work using iPhone and iPad, allowing companies to achieve new levels of efficiency, effectiveness and customer satisfaction. At the backend will be the mainframe.

Already zEnterprise shops, especially banks and financial services firms, are reporting growth in the volume of transactions that originate from mobile devices. The volume of these mobile-originated transactions in some cases is getting large enough to impact the four-hour peak loads that are used in calculating monthly costs.

Here’s the problem: you put out a mobile app and want people to use it. They do, but much of the workload being generated does not directly produce revenue. Rather, users are requesting data or checking invoices and balances. Kind of a bummer to drive up monthly charges with non-revenue-producing work.

That’s where the new pricing discounts for z/OS mobile workloads come in. The new pricing reduces the impact of these mobile transactions on reported LPAR MSUs. Specifically, the Mobile Workload Pricing Reporting Tool (MWRT) will subtract 60% of the reported Mobile MSUs from a given LPAR in each hour, adjusting the total LPAR MSU value for that hour. Think of this as just a standard SCRT report with a discount built in to adjust for mobile workload impact.
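
A worked example with made-up numbers shows how the adjustment plays out. Suppose an LPAR reports 500 MSUs in a given hour, 200 of them driven by mobile transactions; MWRT subtracts 60% of the 200, so the hour goes into the report at 380 MSUs. A minimal sketch:

    # Hypothetical illustration of the MWRT adjustment: 60% of each
    # hour's mobile MSUs are subtracted from the reported LPAR total.
    MOBILE_DISCOUNT = 0.60

    def adjusted_msus(total_msus, mobile_msus):
        return total_msus - MOBILE_DISCOUNT * mobile_msus

    # (total reported, mobile portion) per hour -- invented sample data
    hours = [(500, 200), (520, 150), (480, 300), (510, 100)]
    adjusted = [adjusted_msus(t, m) for t, m in hours]
    print(adjusted)       # [380.0, 430.0, 300.0, 450.0]
    print(max(adjusted))  # 450.0 -- the peak that feeds the monthly bill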

So, what does that translate into in terms of hard dollar savings? DancingDinosaur had a private briefing with two IBMers who helped build the tool and asked that question. They are only in the earliest stages of getting actual numbers from users in the field; the tool only became available June 30.  Clearly the results depend on how many mobile transactions you are handling in each reporting hour and how you are handling the workloads.

There is a little work involved, but the process won’t seem intimidating to mainframe shops accustomed to IBM’s monthly reporting process. Simply record mobile program transaction data, including CPU seconds, on an hourly basis per LPAR; load the resulting data file into the new tool, MWRT, each month using the IBM-specified CSV format; and run MWRT, submitting the results to IBM each month. It replaces the SCRT process.

The MWRT will function like a partial off-load from a software pricing perspective. When an LPAR value is adjusted, all software running in the LPAR will benefit from lower MSUs. The tool will calculate the monthly MSU peak for a given machine using the adjusted MSU values.

This brings us back to the hard-dollar savings question. The answer: probably not much initially, unless your mobile apps already generate a sizeable proportion of your peak transaction volume. But jump ahead six months or a year, when the IBM-Apple partnership’s new made-for-business iOS apps are gaining traction, and your mobile transaction volume could be climbing substantially each month. At that point, savings of hundreds of thousands of dollars or more seem quite possible.

Of course, the new applications or the entire partnership could be a bust. In that case, you will have burned some admin time on a one-time setup. You’ll still experience whatever normal transaction growth your current mobile apps generate and collect your discounted MSU charges. Unless the big IT analysis firms are dead wrong, however, mobile transactions are not going away. To the contrary, they will only increase. The bottom line: negligible downside risk while the upside gain could be huge.

Hope to see you at IBM Enterprise2014 in Las Vegas, Oct. 6-10. DancingDinosaur is Alan Radding. Follow him on Twitter, @mainframeblog, and at Technologywriter.com.


System z Takes BackOffice Role in IBM-Apple Deal

July 21, 2014

DancingDinosaur didn’t have to cut short his vacation and race back last week to cover the IBM-Apple agreement. Yes, it’s a big deal, but as far as System z shops go it won’t have much impact on their data center operations until late this year or 2015 when new mobile enterprise applications apparently will begin to roll out.

The deal, announced last Tuesday, promises “a new class of made-for-business apps targeting specific industry issues or opportunities in retail, healthcare, banking, travel and transportation, telecommunications, and insurance among others,” according to IBM. The mainframe’s role will continue to be what it has been for decades, the backoffice processing workhorse. IBM is not porting iOS to the z or Power or i or any enterprise platform.

Rather, the z will handle transaction processing, security, and data management as it always has. With this deal, however, analytics appears to be assuming a larger role. IBM’s big data and analytics capability is one of the jewels it is bringing to the party, to be fused with Apple’s legendary consumer experience. IBM expects this combination of big data analytics and consumer experience to produce apps that can transform specific aspects of how businesses and employees work using iPhone and iPad devices and ultimately, as IBM puts it, enable companies to achieve new levels of efficiency, effectiveness, and customer satisfaction faster and easier than ever before.

In case you missed the point, this deal, or alliance as IBM seems to prefer, is about software and services. If any hardware gets sold as a result, it will be iPhones and iPads. Of course, IBM’s MobileFirst constellation of products and services stand to gain. Mainframe shops have been reporting a steady uptick in transactions originating from mobile devices for several years. This deal won’t slow that trend and might even accelerate it. The IBM-Apple alliance also should streamline and simplify working with and managing Apple’s mobile devices on an enterprise-wide basis.

According to IBM its MobileFirst Platform for iOS will deliver the services required for an end-to-end enterprise capability, from analytics, workflow and cloud storage to enterprise-scale device management, security and integration. Enhanced mobile management includes a private app catalog, data and transaction security services, and a productivity suite for all IBM MobileFirst for iOS offerings. In addition to on premise software solutions, all these services will be available on Bluemix—IBM’s development platform available through the IBM Cloud Marketplace.

One hope from this deal is that IBM will learn from Apple how to design user-friendly software and apply those lessons to the software it subsequently develops for the z and Power Systems. It would be interesting to see what Apple software designers might do to simplify using CICS.

Given the increasing acceptance of BYOD when it comes to mobile, data centers will still have to cope with the proliferation of operating systems and devices in the mobile sphere. Nobody is predicting that Android, Amazon, Google, or Microsoft will be exiting the mobile arena as a result, at least not anytime soon.

Finally, a lot of commentators weighed in on who wins or loses in the mobile market. Among IBM’s primary enterprise IT competitors, Oracle offers the Oracle Mobile Platform, which includes mobile versions of Siebel CRM, JD Edwards, PeopleSoft, and a few more. HP offers mobile app development and testing and a set of mobile application services that include planning, architecture, design, build, integration, and testing.

But if you are thinking in terms of enterprise platform winners and losers, IBM is the clear winner; the relationship with Apple is an IBM-exclusive partnership. No matter how good HP, Oracle, or any of IBM’s other enterprise rivals might be at mobile computing, without the tight Apple connection they are at a distinct disadvantage. And that’s before you even consider Bluemix, SoftLayer, MobileFirst, and IBM’s other mobile assets.

BTW, it’s not too early to start planning for IBM Enterprise2014. Mark your calendar: Oct. 6-10 at the Venetian in Las Vegas. This event should be heavy on z and Power.

DancingDinosaur is Alan Radding. Follow him on Twitter @mainframeblog or at Technologywriter.com.

Bringing the System z into the Cloud via OpenStack

July 11, 2014

Last week DancingDinosaur looked at how organizations can extend the System z into the cloud, especially hybrid clouds. One key component, IBM SmartCloud Entry for z, remained a bit unclear: DancingDinosaur had turned up conflicting reports as to whether the z was supported by SmartCloud Entry or not.

As you read last week: The easiest way to get started should be through IBM’s SmartCloud Entry and Linux on z. Good idea, but just one catch: in the spring, IBM SmartCloud Entry for z was still only a statement of direction: “IBM intends to update IBM SmartCloud Entry to support the System z platform…” The product apparently didn’t exist. Or did it? DancingDinosaur found a starter kit for IBM SmartCloud Entry for IBM System z. Go figure. (Two years ago DancingDinosaur wrote that SmartCloud Entry for z was imminent based on an IBM announcement that was later pulled.)

IBM just got back to DancingDinosaur with a clarification. It turns out IBM rebranded the product. The rebranded product family is now IBM Cloud Manager with OpenStack, announced in May. It provides support for Icehouse, the latest OpenStack release, and full access to the complete core OpenStack API set to help clients ensure application portability and avoid vendor lock-in.

Most importantly to DancingDinosaur readers, it unequivocally extends cloud management support to System z, in addition to Power Systems, PureFlex/Flex Systems, System x, and any other x86 environment. The new solution also supports IBM z/VM on System z, as well as PowerVC for PowerVM on Power Systems, to add more scalability and security to Linux environments. As of this writing, the starter kit for IBM SmartCloud Entry for IBM System z was still live at the link above, but don’t expect it to stay up for long.

IBM goes on to explain that the rebranded product is built on the foundation of IBM SmartCloud Entry. It offers a modular, flexible design that enables rapid innovation, vendor interoperability, and faster time-to-value. According to IBM it is an easy-to-deploy, simple-to-use cloud management offering that can deliver improved private cloud and service provider solutions with features like security, automation, usage tracking and metering, and multi-architecture management. You can access the technology through the OpenStack Marketplace here.

Expect to hear more about the z, hybrid clouds, and OpenStack at IBM Enterprise 2014 this coming October in Las Vegas.

DancingDinosaur is Alan Radding. Follow him on Twitter, @mainframeblog, and at Technologywriter.com.

Extend the System z to the Cloud via IBM Bluemix

July 2, 2014

The System z offers an increasing number of cloud options. At a SHARE conference this past spring, Erich Amrehn, IBM Distinguished Engineer, elaborated on Cloud Computing with IBM System z. In his presentation, Amrehn focused on five cloud options: Solution Edition for Computing and Data Cloud, SAP Cloud, CICS Cloud, Hybrid Cloud, and Mobile solution for z. And that’s not even mentioning the z-based IBM Enterprise Cloud System.

Why should a z data center care?  In short, you risk being left behind. The next architecture will encompass traditional systems of record and the new systems of engagement. Both, according to Amrehn, are headed to the cloud.

From the cloud your data center can deliver on-demand self-service, ubiquitous network access, location-independent resource pooling, rapid elasticity (for storage, compute, and network), and pay-per-use. To get there, Amrehn identifies five steps, starting with virtualization. At his last step, patterns, however, many z shops drop the ball. All they have is Rational Programming Patterns via Rational Developer for System z.

Patterns become critical when the organization wants to capitalize on the agility, efficiency, orchestration, and optimization that are essential for gaining maximum value from clouds, especially hybrid clouds.

The easiest way to get started should be through IBM’s SmartCloud Entry and Linux on z. Amrehn notes just one catch: in the spring, IBM SmartCloud Entry for z was still only a statement of direction: “IBM intends to update IBM SmartCloud Entry to support the System z platform…” DancingDinosaur, however, found a starter kit for IBM SmartCloud Entry for IBM System z. Go figure. Still awaiting clarification from IBM. (Two years ago DancingDinosaur wrote that SmartCloud Entry for z was imminent based on an IBM announcement that has since been pulled.)

The hybrid cloud is emerging as IBM’s preferred cloud solution. The company suggests a 2-step path to the hybrid cloud: 1) select an automated cloud application platform and 2) capture the desired application(s) into a pattern. IBM’s PureSystems, particularly PureApplication, directly enable hybrid cloud innovation, especially with the IBM Pattern Engine and its support for a variety of containers.  Notice the evolution in IBM’s thinking around PureApplication. What started as integrated hardware with built-in expertise in the form of patterns is morphing into the PureApp software system and service with a cloud component.

For best results, you want expert-driven automation at the infrastructure, application, and deployment tiers. Through patterns, especially IBM patterns, you avoid any need to re-architect when shifting from on premise to off premise (and back, if needed). Without patterns, you must do everything manually, an inefficient and costly approach. You can find a selection of patterns at  the IBM Cloud Marketplace.

To capitalize on your hybrid cloud environment you eventually will want to augment it with new software—mobile apps, customer-driven innovations, whatever—apps that tap the capabilities of the latest devices and integrate with mobile and social environments. That’s why IBM is rolling out Bluemix, an integrated application development and deployment environment.

Bluemix is not your standard IBM licensed technology. IBM has adopted distinctly different pricing for Bluemix. Runtimes are charged by the GB-hours your app runs, with some free GB-hours included each month. For IBM this truly is innovative pricing, and IBMers suggest it is a work in progress. Right now, pricing varies with each Bluemix service. Whatever mix of services you end up with, they will be tallied monthly and charged to your credit card.

The current charges are laid out in IBM’s Bluemix pricing chart (courtesy of IBM).
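
As a rough illustration of how GB-hour metering works (the rate and free allowance below are invented placeholders, not IBM’s actual Bluemix prices):

    # Hypothetical GB-hour runtime charge -- the rate and free tier
    # here are placeholders, not IBM's actual Bluemix prices.
    def runtime_charge(instances, gb_per_instance, hours, free_gb_hours, rate):
        gb_hours = instances * gb_per_instance * hours
        return max(gb_hours - free_gb_hours, 0) * rate

    # Two 512 MB instances running a full 730-hour month, with an
    # assumed 375 free GB-hours and an assumed $0.07 per GB-hour:
    print(round(runtime_charge(2, 0.5, 730, 375, 0.07), 2))   # 24.85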

The goal is rapid app development: to go from idea to app in minutes, with no coding. Instead, you assemble new apps using APIs and existing systems. Bluemix handles the heavy lifting (via Cast Iron) behind the scenes, including integrating with legacy systems.

And it works. A demo by San Francisco’s BART showed how they used Bluemix to build a mobile app in 15 days. EyeQ reduced operations costs by 30% by focusing on the apps and code while leaving Bluemix to handle the infrastructure. aPersona, which provides multi-factor authentication, used Bluemix to reduce the time to deploy a new customer from 2 days to 30 seconds.

Bluemix speeds development and deployment through instant access to IBM’s SoftLayer cloud infrastructure, IBM software, runtimes, third party services, and IBM DevOps services.  Now IBM needs to get the z completely wired in.

Expect to hear more about the z, Bluemix, SoftLayer, and hybrid clouds at IBM Enterprise 2014 this coming October in Las Vegas.

DancingDinosaur is Alan Radding. Follow him on Twitter, @mainframeblog, and at Technologywriter.com.

SoftLayer Direct Link Brings Hybrid Cloud to System z and Power

June 26, 2014

Back in February, IBM announced that SoftLayer was integrating IBM Power Systems into its cloud infrastructure, a move that promised to deliver a level and breadth of services beyond what has traditionally been available over the cloud. Combined with new services and tools announced at the same time, this would help organizations deploy hybrid and private cloud environments.

Back then IBM included the System z in the announcement as well by bolstering its System z cloud portfolio with IBM Wave for z/VM. IBM Wave promises to provide rapid insight into an organization’s virtualized infrastructure with intelligent visualization, simplified monitoring and unified management. Specifically, Wave helps the organization more easily manage large numbers of virtual machines.

Now it is June, the snow has finally melted, and IBM’s SoftLayer is introducing Direct Link to the computing public. Direct Link had previously been available to only a select few customers. Direct Link, in effect, is a dedicated private connection for creating hybrid clouds. Organizations connect their private IT infrastructure to public cloud resources by going directly to the SoftLayer platform, which streamlines delivery over the network; Direct Link users avoid the need to traverse the public Internet.

The focus here is on hybrid clouds. When an organization with a private cloud, say a mainframe hosting a large amount of IT resources and services behind the firewall, needs resources such as extra capacity or services it doesn’t have, it can turn to the public cloud for those extra resources or services. The combination of the private cloud and tightly connected public cloud resources form a hybrid cloud.  If you’re attending a webinar on hybrid clouds at this point the speaker usually says …and then you just punch out to the public cloud to get x, y, or z resource or service. It always sounds so simple, right?

As far as the System z goes, SoftLayer was not actually integrated with the z in the February announcement, although DancingDinosaur expects it will be eventually if IBM is serious about enterprise cloud computing. For now, the z sits in the on-premise data center, a private cloud so to speak. It runs CICS and DB2 and all the systems it is known for and, especially, security. From there, however, it can connect to an application server, dedicated or virtual, on the SoftLayer Cloud Server to form a Hybrid System z-Enterprise Cloud. As presented at SHARE this past spring, the resulting Hybrid System z-Cloud Enterprise Architecture (slides 46-49) provides the best of both worlds, secure transactions combined with the dynamics of the cloud.

Direct Link itself consists of a physical, dedicated network connection from your data center, on-premise private cloud, office, or co-location facility to SoftLayer’s data centers and private network through one of the company’s 18 network Points of Presence (PoPs) around the world. These PoPs reside within facilities operated by SoftLayer partners including Equinix, Telx, Coresite, Terremark, Pacnet, InterXion and TelecityGroup, which provide access for SoftLayer customers, especially those with infrastructure co-located in the same facilities.

Direct Link, essentially an appliance, eliminates the need to traverse the public Internet to connect to the SoftLayer private network. Direct Link enables organizations to completely control access to their infrastructure and services, the speed of their connection to SoftLayer, and how data is routed. In the process, IBM promises:

  • Higher network performance consistency and predictability
  • Streamlined and accelerated workload and data migration
  • Improved data and operational security

If you are not co-located in any of the above facilities operated by one of SoftLayer’s PoP partners, then it appears you will have to set up an arrangement with one of them. SoftLayer promises to hold your hand and walk you through the setup process.

Once you do have it set up, Direct Link pricing appears quite reasonable. Available immediately, Direct Link pricing starts at $147/month for a 1 Gbps network connection and $997/month for a 10 Gbps network connection.

According to Trevor Jones, writing for TechTarget, IBM’s pricing undercuts AWS’s slightly and Microsoft’s by far. Next month Microsoft, on a discounted rate for its comparable ExpressRoute service, will charge $600 per month for 1 Gbps and $10,000 per month for 10 Gbps. Amazon prices its comparable Direct Connect service at $0.30 per hour for 1 Gbps and $2.25 per hour for 10 Gbps.
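
Normalizing the hourly rate to a month makes the 1 Gbps comparison plain (assuming a 730-hour month and counting only the link charges cited above):

    # Monthly comparison of the 1 Gbps dedicated-link prices cited above,
    # normalizing AWS's hourly Direct Connect rate to a 730-hour month.
    HOURS_PER_MONTH = 730

    softlayer = 147                  # $/month, SoftLayer Direct Link
    azure = 600                      # $/month, discounted ExpressRoute rate
    aws = 0.30 * HOURS_PER_MONTH     # $0.30/hour Direct Connect -> ~$219/month

    print(f"SoftLayer ${softlayer}/mo, Azure ${azure}/mo, AWS ${aws:.0f}/mo")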

Your System z or new Power server integrated with SoftLayer can provide a solid foundation for hybrid cloud nirvana. Just add Direct Link and make arrangements with public cloud resources and services. Presto, you have a hybrid cloud.

BTW, IBM Enterprise2014 is coming in October to Las Vegas. DancingDinosaur expects to hear a lot about the z and Power, SoftLayer, and hybrid clouds there.

DancingDinosaur is Alan Radding. Follow him on Twitter, @mainframeblog, and at Technologywriter.com.

Automated System z Workload Capping Can Save Big Bucks

June 20, 2014

IBM’s Monthly License Charge (MLC) pricing can be a powerful tool to significantly lower the cost of software licensing for a mainframe shop. The problem: it is frightfully complicated. DancingDinosaur has attended conferences that scheduled multi-part sessions just to cover the basic material. Figuring out which pricing program you qualify for is itself a challenge and you probably want a lawyer looking over your shoulder. Find IBM’s System z pricing page here.

One particularly galling challenge is estimating and capping the four-hour utilization for each LPAR. You can easily find yourself in a situation where you exceed the cap on one LPAR, resulting in a surcharge, while you have headroom to spare on other LPARs. The trick is to stay on top of this by constantly monitoring workloads and shifting activity among LPARs to ensure you don’t exceed a cap.
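
The number being capped is the rolling four-hour average (R4HA) of each LPAR’s MSU consumption. Here is a minimal sketch of that calculation with invented hourly samples; real SCRT reporting works from finer-grained interval data, but the idea is the same.

    # Minimal sketch of the rolling four-hour average (R4HA) check that
    # drives MLC charges -- the hourly MSU samples are invented.
    def r4ha_peak(hourly_msus):
        windows = [hourly_msus[i:i + 4] for i in range(len(hourly_msus) - 3)]
        return max(sum(w) / 4 for w in windows)

    lpar_a = [300, 420, 510, 480, 390, 350]   # hourly MSU readings
    cap = 450
    peak = r4ha_peak(lpar_a)
    print(peak)         # 450.0 -- right at the cap
    print(peak > cap)   # False: no overage this interval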

This requires a skilled mainframe staffer with both a high level of z/OS skill and familiarity with z workloads and LPARs. While you’re at it throw in knowledge of data center operations and the organization’s overall business direction. Finding such an expert is costly and not easily spared for constant monitoring. It’s a task that lends itself to automation.

And that’s exactly what BMC did earlier this week when it introduced Intelligent Capping (iCap) for zSeries mainframes. On average, according to BMC, companies that actively manage and effectively prioritize their mainframe workloads save 10-15 percent more on their monthly license charges than those that take a more passive approach. Furthermore, instead of assigning a costly mainframe workload guru to monitor and manage this manually, BMC promises that its iCap software, which understands workloads, makes dynamic adjustments, and automates workload capping, can reduce costs while also diminishing risk to the business.

The savings, according to BMC, can add up fast. In one example, BMC cited saving 161 MSUs, which translated for that organization to over $55k that month. Given that a mainframe shop spends anywhere from a few hundred thousand to millions of dollars per month on MLC charges, savings of just a few percent can be significant. One BMC customer reportedly expects intelligent capping to save it 12% each month. Caveat: DancingDinosaur has not yet been able to speak with any BMC iCap customer to verify these claims.
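
Taking BMC’s numbers at face value, the implied software cost per MSU is easy to back out:

    # Implied cost per MSU from BMC's example (161 MSUs ~ $55k in a month).
    saved_msus = 161
    saved_dollars = 55_000
    per_msu = saved_dollars / saved_msus
    print(round(per_msu))   # ~342 dollars per MSU per month
    # At that rate, trimming even 50 MSUs off the peak saves ~$17k/month.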

But assuming they are true, iCap is a no-brainer for any mainframe shop paying anything but the most minimal MLC. BMC charges for iCap based on the customer’s capacity. It is willing to discuss a shared-gain model by which the iCap charges are based on how much is saved, but apparently none of those deals has been finalized.

This seems like a straightforward challenge for a mainframe management tool vendor, but DancingDinosaur has found only a few actually doing it: BMC, Softwareonz, and IBM. Softwareonz offers AutoSoftCapping, which promises to maximize software cost efficiency for the IBM zSeries platform, specifically z/OS. It does so by automatically adjusting defined capacity by LPAR based upon workload while maintaining a consistent overall defined capacity for your CPC.

Softwareonz, based in Seattle, estimates it saves 2% on monthly charges at the low end; at the high end, it has run simulations suggesting 20% savings. AutoSoftCapping only works for data centers running their z on the VWLC pricing model. Customers realistically can save 8-10%. Again, DancingDinosaur has not yet validated any savings with an actual Softwareonz customer.

Without automation, you have to do this manually, adjusting defined capacity based on actual workloads. Too often that leaves the organization with the choice of constraining workloads, thereby inhibiting performance, or over-provisioning the cap, thereby driving up costs through wasted capacity.

So, if automatic MLC capping is a no-brainer, why isn’t everybody doing it? Softwareonz sees several reasons, the primary one being fear of the cap negatively impacting the VWLC four-hour rolling average. Nobody wants to impact production workloads. Of course, the whole reason to apply intelligence to the automation is to reduce software costs without impacting production workloads. BMC offers several ways to ease the organization into this as it becomes more comfortable and confident with the tool.

Another suggested reason is that the System z operational team is protecting its turf from the inroads of automation. A large z shop might use a team of half a dozen or more people dedicated to monitoring and managing workloads manually. Bring in automation like iCap or AutoSoftCapping, and they expect pink slips to follow.

Of course, IBM offers the z/OS Capacity Provisioning tool (z/OS V1.9 and above), which can be used to add and remove capacity through a Capacity Provisioning Manager (CPM) policy. This can be used to automatically control the defined capacity limit or the group capacity limits. The user interface for defining CPM policies is z/OSMF.

If you are subject to MLC pricing, consider an automated tool. BTW, there also are consultants who will do this for you.

A note: IBM Enterprise Cloud System, covered by DancingDinosaur a few weeks ago here, is now generally available. It is an OpenStack-based converged offering that includes compute, storage, software, and services built around the zBC12. Check out the most recent details here.

Also take note: IBM Enterprise2014 is coming to Las Vegas in early October; details here. The conference combines the System z Technical University and the Power Systems Technical University, plus more. You can bet there will be multiple sessions on MLC pricing in its various permutations and on workload capping.

DancingDinosaur is Alan Radding. You can follow him on Twitter, @mainframeblog, or visit his website, www.technologywriter.com.

 

