Posts Tagged ‘hadoop’

New IBM Initiatives Speed System z to Hybrid Cloud and IoT

November 20, 2014

Cloud computing, especially hybrid cloud computing, is going mainstream, and the same is happening with the Internet of Things (IoT). For mainframe shops unsure of how to get there, IBM promises to speed the journey with two recent initiatives.

Let’s start with hybrid clouds and the z. As IBM describes it, enterprises will continue to derive value from their existing investments in IT infrastructure while looking to the cloud to bolster business agility. The upshot: organizations increasingly are turning to hybrid clouds to obtain the best of both worlds by linking on-premises IT infrastructure to the public cloud.

To that end, IBM has designed and tested various use cases around enterprise hybrid architecture involving System z and SoftLayer. These use cases focus on the relevant issues of security, application performance, and potential business cost.

One scenario introduces the cloud as an opportunity to enrich enterprise business services running on the z with external functionality delivered from the cloud.

[Figure: hybrid cloud use case]

Here a retail payment system is enriched with global functionality from a loyalty program that allows the consumer to accumulate points. It involves the z and its payment system, a cloud-based loyalty program, and the consumer using a mobile phone.

The hybrid cloud allows the z data center to maintain control of key applications and data in order to meet critical business service level agreements and compliance requirements while tapping the public cloud for new capabilities, business agility, or rapid innovation and shifting expenditure from CAPEX to OPEX.

Since the z serves as the data backbone for many critical applications, it makes sense to connect on-premises System z infrastructure with an off-premises cloud environment. In its paper IBM suggests the hybrid architecture should be designed in a way that gives businesses the flexibility to put their workloads and data where they make the most sense, mixing the right blend of public and private cloud services. And, of course, it also must ensure data security and performance. That’s why you want the z there.

To get started, check out the use cases IBM provides, like the one above. Already a number of organizations are trying the IBM hybrid cloud: Macy’s, Whirlpool, Daimler, and Sicoss Group. Overall, nearly half of IBM’s top 100 strategic outsourcing clients are already implementing cloud solutions with IBM as they transition to a hybrid cloud model.

And if hybrid cloud isn’t enough to keep you busy, it also is time to start thinking about the IoT. To make it easier, last month the company announced the IBM Internet of Things Foundation, an extension of Bluemix. Like Bluemix, this is a cloud service that, as IBM describes it, makes it possible for a developer to quickly extend an Internet-connected device such as a sensor or controller into the cloud, build an application alongside the device to collect the data, and send real-time insights back to the developer’s business. That data can be analyzed on the z too, using Hadoop on zLinux, which you read about here a few weeks ago.
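To make the device-to-cloud flow a little more concrete, here is a minimal sketch of a device publishing a sensor reading over MQTT with the Eclipse Paho Java client. The organization ID, device type, device ID, and token are placeholders, and the broker hostname, client-ID format, and topic convention reflect this blogger’s understanding of the MQTT-based service, so verify them against the IoT Foundation documentation before relying on any of it.

```java
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
import org.eclipse.paho.client.mqttv3.MqttMessage;

public class SensorPublisher {
    public static void main(String[] args) throws Exception {
        // Placeholder values -- substitute your own organization, device type,
        // device ID, and auth token from the IoT Foundation dashboard.
        String org = "myorg";
        String deviceType = "temperature-sensor";
        String deviceId = "sensor001";
        String authToken = "device-auth-token";

        // Assumed conventions: devices connect with a client ID of the form d:<org>:<type>:<id>
        String broker = "tcp://" + org + ".messaging.internetofthings.ibmcloud.com:1883";
        MqttClient client = new MqttClient(broker, "d:" + org + ":" + deviceType + ":" + deviceId);

        MqttConnectOptions opts = new MqttConnectOptions();
        opts.setUserName("use-token-auth");
        opts.setPassword(authToken.toCharArray());
        client.connect(opts);

        // Publish a JSON event; back-end applications (or analytics on the z)
        // subscribe to these events and act on them in near real time.
        String payload = "{\"d\":{\"temperature\":21.7}}";
        client.publish("iot-2/evt/status/fmt/json", new MqttMessage(payload.getBytes("UTF-8")));

        client.disconnect();
    }
}
```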

IoT should be nothing new to System z shops. DancingDinosaur discussed it this past summer here. Basically it’s the POS or ATM network on steroids, with orders of magnitude more complexity. IDC estimates that by 2020 there will be as many as 28 billion autonomous IoT devices installed. Today it estimates there are nine billion.

Between the cloud, hybrid clouds, and IoT, z data centers will have a lot to keep them busy. But with IBM’s new initiatives in both areas you can get simple, highly secure and powerful application access to the cloud, IoT devices, and data. With the IoT Foundation you can rapidly compose applications, visualization dashboards and mobile apps that can generate valuable insights when linked with back office enterprise applications like those on the z.

DancingDinosaur is Alan Radding, a veteran IT writer/analyst. You can follow DancingDinosaur on Twitter, @mainframeblog. Also check out my other IT writing at Technologywriter.com and here.

Compuware Aims for Mainframe Literacy in CIOs

November 13, 2014

Many IT professionals, especially younger ones, are clueless about the mainframe. Chris O’Malley, president of the mainframe business at Compuware, has met CIOs who are versed in everything about IT and have seemingly done everything there is with computers, but “they are not literate about the mainframe.” That means the mainframe never comes to mind. IBM could give away a zEnterprise for free, which it comes close to doing today through the System z Solution Edition program, and these CIOs would still ignore it. O’Malley wants to address that.

[Figure: Compuware Mainframe Excellence 2025 report cover]

In response, Compuware is following the path of the IBM System z Academic Initiative, though without its extensive global involvement of colleges and universities, with a program called Mainframe Excellence 2025, which it describes as a generational call for strategic platform stewardship. “We’re also trying to debunk a lot of issues around the mainframe,” O’Malley continues.

[Photo: Chris O’Malley, Pres. Mainframe, Compuware]

Compuware refers to Mainframe Excellence 2025 as a manifesto, something of a call to arms for millennials to storm the IT gates and liberate IT management from enslavement to x86 computing. Somehow DancingDinosaur doesn’t see it happening exactly that way; it envisions coexistence and synergy.

Most of the Mainframe Excellence document goes over ground DancingDinosaur and many others have covered before. It is delightful, however, to see others refreshing the arguments. And, the document adds some interesting data. For instance, over 1.15 million CICS transactions are executed on System z every second of every day! That’s more than all Google searches, YouTube views, Facebook likes, and Twitter tweets combined.

It also pays homage to what it refers to as the mainframe’s culture of excellence. It characterizes this culture by rigorous adherence to a standard of excellence demonstrably higher than that associated with other platforms, notably x86. IT organizations actually expect, accept, and plan for problems and patches in other platforms (think Microsoft Patch Tuesday). Mainframe professionals, on the other hand, have zero-tolerance for downtime and system failures and the mainframe generally lives up to those high expectations.

Ironically, the document points out that the culture of excellence has created a certain chasm between mainframe professionals and the rest of IT. In fact, this ingrained zero-failure culture of the mainframe community—including both vendors and enterprise IT staffs—can sometimes put it at odds with the very spirit of innovation that allows the mainframe to deliver the repeated advances in price/performance and new capabilities that consistently produce tremendous value.

Compuware concludes its report with an action checklist:

  • Fully inventory existing mainframe data, applications (including business rules), capacity, utilization/MSUs, and management tools, a veritable trove of value embedded in mainframe code and business rules.
  • Build a fact-based skills plan with a realistic timeline.
  • Ramp up current and road-mapped mainframe capabilities.
  • Rightsize investments in mainframe application stewardship.
  • Institute an immediate moratorium on short-term cost-cutting that carries long-term negative consequences.
  • Combat denial and hype in regards to non-mainframe platform capabilities, costs and risks.

And Compuware’s final thought should give encouragement to all those who must respond to the mainframe-costs-too-much complaint:  IT has a long history of under-estimating real TCO and marginal costs for new platforms while over-estimating their benefits. A more sober assessment of these platforms will make the strategic value and economic advantages of the mainframe much more evident in comparison.

Compuware certainly is on the right track with Mainframe Excellence 2025. DancingDinosaur would like, however, to see the company coordinate its efforts with the System z Academic Initiative, the Master the Mainframe contest, and similar programs.

DancingDinosaur is Alan Radding, a veteran IT writer/analyst. You can follow DancingDinosaur on Twitter, @mainframeblog. Also check out my other IT writing at Technologywriter.com and here.

Mainframe Appeal Continues in 9th BMC Survey

October 30, 2014

With most of the over 1,100 respondents (91%) reporting that the mainframe remains a viable long-term platform for them, and a clear majority (60%) expecting to increase MIPS due to the normal growth of legacy applications and new application workloads, the z remains well entrenched. Check out the results for yourself here.

Maybe even more reassurance comes from almost half the respondents, who reported that they expect the mainframe to attract and grow new workloads. Most likely these will be Java and Linux workloads, but one-third of the respondents listed cloud as a priority, jumping it up to sixth on the list of mainframe priorities. Mobile was cited as a priority by 27% of the respondents, followed by big data with 26%.

[Figure: IBM zEC12]

Apparently IBM’s steady promotion of cloud, mobile, and big data for the z over the past year is working. At Enterprise2014 IBM even made big news with real-time analytics and Hadoop on the z, along with a slew of related announcements.

That new workloads like cloud, mobile, and big data made it into the respondents’ top 10 IT priorities for the year didn’t surprise Jonathan Adams, BMC vice president/general manager for z solutions.  The ease of developing in Java and its portability make it a natural for new workloads today, he noted.

In the survey IT cost reduction/optimization tops the list of IT priorities for 2014 by a large margin, 70% of respondents, followed by application availability, 52%.  Rounding out the top five are application modernization with 48%, data privacy, 47%, and business/IT alignment, 44%. Outsourcing finished out the top 10 priorities with 16%.

When asked to look ahead in terms of MIPS growth, the large majority of respondents expected growth to continue or at least remain steady. Only 9% expected MIPS to decline and 6% expected to eliminate the mainframe.  This number has remained consistent for years, noted Adams. DancingDinosaur periodically checks in with shops that announce plans to eliminate their mainframe and finds that a year later many have barely made any progress.

The top mainframe advantages shouldn’t surprise you:  availability (53%); security (51%); centralized data serving (47%) and transaction throughput (42%). More interesting results emerged when the respondents addressed new workloads. The mainframe’s cloud role includes data access (33%), cloud management from Linux on z (22%) and dynamic test environments via self-service (15%). Surprisingly, when it comes to big data analytics, 34% report that the mainframe acts as their analytics engine. This wasn’t supposed to be the case, at least not until BigInsights and Hadoop on z gained more traction.

Meanwhile, 28% say they move data off platform for analytics, and 14% report they federate mainframe data to an off-platform analytics engine. Yet, more than 81% now incorporate the mainframe into their Big Data strategy, up from 70% previously. The non-finance industries are somewhat more likely to use the mainframe as the big data engine, BMC noted. Those concerned with cost should seriously consider doing their analytics on the z, where the data is. It is costly to keep moving data around.

In terms of mobility, making existing applications accessible for mobile ranked as the top issue, followed by developing new mobile applications and securing corporate data on mobile devices. Mobile processing increases for transaction volume came in at the bottom of mobility issues, but that will likely change when mobile transactions start impacting peak workload volumes and trigger increased costs. Again, those concerned about costs should consider IBM’s mobile transaction discount, which DancingDinosaur covered here in the spring.

Since cost reduction is such a big topic again, the survey respondents offered their cost reduction priorities.  Reducing resource usage during peak led the list.  Other cost reduction priorities included consolidating mainframe software vendors, exploiting zIIP and specialty engines (which have distinctly lower cost/MIPS), and moving workloads to Linux on z.

So, judging from the latest BMC survey, the mainframe is far from dead. At least one IT consultant and commentator, John Appleby, recently suggested otherwise. That prediction has proven wrong so often that DancingDinosaur has stopped bothering to refute it.

BTW, change came to BMC last year  in the form of an acquisition by a venture capital group. Adams reports that the new owners have already demonstrated a commitment to continued investment in mainframe technology products, and plans already are underway for next year’s survey.

DancingDinosaur is Alan Radding. You can follow him on Twitter, @mainframeblog. Or see more of his writing at Technologywriter.com or in wide-ranging blogs here.

Hadoop Brings Big Data Analytics to the IBM System z

October 16, 2014

In a previous blog, DancingDinosaur reported on IBM’s initial announcement of Hadoop and other analytic products, like InfoSphere BigInsights, coming to the z. The IBM announcement itself can be found here.

Subsequent sessions at IBM Enterprise2014 delved more deeply into big data, analytics, and real-time analytics. A particularly good series of sessions was offered by Karen Durward, an IBM InfoSphere software product manager specializing in System z data integration. As Durward noted, BigInsights is Apache Hadoop wrapped up to make it easier to use for general IT and business managers.

Specifically, the real-time analytics package for z includes IBM InfoSphere BigInsights for Linux on System z, which combines open-source Apache Hadoop with enhancements to make Hadoop System z enterprise-ready. The solution also includes IBM DB2 Analytics Accelerator (IDAA), which improves data security while delivering a 2000x faster response time for complex data queries.

In her Hadoop on z session, Durward started with the Hadoop framework, which consists of four components:

  1. Common Core—the basic modules (libraries and utilities) on which all components are built
  2. Hadoop Distributed File System (HDFS)—stores data on multiple machines to provide very high aggregate bandwidth across a cluster of machines
  3. MapReduce—the programming model to support high-volume data processing by the cluster
  4. YARN (Yet Another Resource Negotiator)—the platform used to manage the cluster’s compute resources including scheduling users’ applications. In effect, YARN decouples Hadoop workload and resource management.

The typical Hadoop process sounds deceptively straightforward.  Simply load data into an HDFS cluster, analyze the data in the cluster using MapReduce, write the resulting analysis back into the HDFS cluster. Then just read it.

Sounds easy enough until you try it. Then you need to deal with client nodes and name nodes, exchange metadata, and more. In addition, Hadoop is an evolving technology. Apache continues to add pieces to the environment in an effort to simplify it. For instance, Hive provides the Apache data warehouse framework, accessible using HiveQL, and HBase brings Apache’s Hadoop database. Writing MapReduce code is a challenge, so there is Pig, Apache’s high-level platform for scripting Hadoop data flows without hand-coding MapReduce, and the list goes on. In short, Hadoop is not easy, especially for IT groups accustomed to relational databases and SQL. That’s why you need tools like BigInsights. The table below is how Durward sees the Hadoop tool landscape.
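To appreciate why hiding MapReduce is so welcome, consider what writing it by hand looks like. Below is a minimal sketch of the canonical word count job using the standard Hadoop Java API; the input and output paths are whatever you pass on the command line, and nothing here is specific to BigInsights or the z.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {
    // Map step: emit (word, 1) for every word in the input split
    public static class TokenMapper extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();
        public void map(Object key, Text value, Context context)
                throws java.io.IOException, InterruptedException {
            for (String token : value.toString().split("\\s+")) {
                if (token.isEmpty()) continue;
                word.set(token);
                context.write(word, ONE);
            }
        }
    }

    // Reduce step: sum the counts for each word
    public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws java.io.IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) sum += v.get();
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenMapper.class);
        job.setCombinerClass(SumReducer.class);
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // data already loaded into HDFS
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // results written back to HDFS
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```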

Software Needs                           Other Hadoop Products   BigInsights
Open source Apache Hadoop                Y                       Y
Rich SQL on Hadoop (Big SQL)             some                    Y
Tools for business users (BigSheets)     NA                      Y
Advanced text analytics                  NA                      Y
In-Hadoop analytics                      NA                      Y
Rich developer tools                     NA                      Y
Enterprise workload & storage mgt.       NA                      Y
Comprehensive suite                      NA                      Y

In fact, you need more than BigInsights. “We don’t know how to look at unstructured data,” said Durward. That’s why IBM layers on tools like Big SQL, which helps you query Hadoop’s HBase using industry-standard SQL. You can migrate a relational table to HBase using Big SQL or connect Big SQL via JDBC to run business intelligence and reporting tools, such as Cognos, which also runs on Linux on z. Similarly, IBM offers BigSheets, a cloud application that performs ad hoc analytics at web scale on unstructured and structured content using the familiar spreadsheet format.
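As a rough illustration of the JDBC route, the sketch below assumes Big SQL is reachable through IBM’s DB2 JDBC (JCC) driver; the host, port, database name, credentials, and the sales table are all placeholders, so treat the connection details as assumptions to check against your own BigInsights installation.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class BigSqlQuery {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details for a Big SQL server on Linux on z;
        // the DB2 JCC driver is assumed to be on the classpath.
        String url = "jdbc:db2://bigsql-host.example.com:51000/bigsql";

        try (Connection conn = DriverManager.getConnection(url, "bigsql", "password");
             Statement stmt = conn.createStatement();
             // Industry-standard SQL against a hypothetical table surfaced from HBase/Hadoop
             ResultSet rs = stmt.executeQuery(
                     "SELECT region, SUM(amount) AS total FROM sales GROUP BY region")) {
            while (rs.next()) {
                System.out.println(rs.getString("region") + "\t" + rs.getLong("total"));
            }
        }
    }
}
```

From here the same connection could just as easily feed a reporting tool such as Cognos rather than a hand-written query.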

Lastly, Hadoop queries often produce free-form text, which requires text analytics to make sense of the results. Not surprisingly, IBM offers BigInsights Text Analytics, a fast, declarative rule-based information extraction (IE) system that extracts insights from unstructured content. This system consists of a fast, efficient runtime that exploits numerous optimization techniques across extraction programs written in Annotation Query Language (AQL), an English-like declarative language for rule-based information extraction.

Hadoop for the z is more flexible than z data center managers may think. You can merge Hadoop data with z transactional data sources and analyze it all together through BigInsights.

So how big will big data be on the z? DancingDinosaur thought it could scale to hundreds of terabytes, even petabytes. Not so. You should limit Hadoop on the z to moderate volumes—from hundreds of gigabytes to tens of terabytes, Durward advises, adding “after that it gets expensive.”

Still, there are many advantages to running Hadoop on the z. To begin, the z brings rock-solid security, is fast to deploy, and, through BigInsights, brings an easy-to-use data ingestion process. It also has proven easy to set up and run, taking just a few hours, with conversions handled automatically. Lastly, the data never leaves the platform, which avoids the expense and delay of moving data between platforms. But maybe most importantly, by wrapping Hadoop in a set of familiar, comfortable tools and burying its awkwardness out of sight, BigInsights makes Hadoop something every z shop can leverage.

DancingDinosaur is Alan Radding. Follow this blog on Twitter, @mainframeblog. Check out my work at Technologywriter.com

Software Licensing for IBM System z Distributed Linux Middleware

October 10, 2014

DancingDinosaur can’t attend a mainframe conference without checking out at least one session on mainframe software pricing by David Chase, IBM’s mainframe pricing guru. At IBM Enterprise2014, which wraps up today, the topic of choice was software licensing for Linux middleware. It’s sufficiently complicated to merit an entire session.

In case you think Linux on z is not in your future, maybe you should think again. Linux is gaining momentum in even the largest z data centers. Start with IBM bringing new offerings like InfoSphere, BigInsights (Hadoop), and OpenStack to z. Then there are ISVs that were never going to bring their offerings to z/OS. Together it is a telltale sign that something is happening with Linux on z. And the queasiness managers used to have about the open source nature of Linux has long been put to rest.

At some point, you will need to think about IBM’s software pricing for Linux middleware. Should you find yourself getting too lost in the topic, check out these links recommended by Chase:

To begin, software for Linux on z is treated differently than traditional mainframe software in terms of pricing. With Linux on z you think in terms of IFLs. The quantity of IFLs represents the number of Linux engines subject to IBM’s IPLA-based pricing.

Also think in terms of Processor Value Units (PVUs) rather than MSUs. For pricing purposes, PVUs are analogous to MSUs although the values are different. A key point to keep in mind: distributed PVUs for Linux are not related to the System z IPLA value units used for z/VM products. As is typical of IBM, those two different kinds of value units are NOT interchangeable.

Chase, however, provides a few ground rules:

  • Dedicated partition:
    • Processors are always allocated in whole increments
    • Resources are only moved between partitions “explicitly” (e.g., by an operator or a scheduled job)
  • Shared pool:
    • Pool of processors shared by partitions (including virtual machines)
    • System automatically dispatches processor resources between partitions as needed
  • Maximum license requirements:
    • Customer does not have to purchase more licenses for a product than the number of processors on the machine (e.g., maximum DB2 UDB licenses on a 12-way machine is 12)
    • Customer does not have to purchase more “shared pool” licenses for a product than the number of processors assigned to the shared pool (e.g., maximum of 7 MQSeries licenses for a shared pool with 7 processors). Note: This limit does not affect the additional licenses that might be required for dedicated partitions.

With that, as Chase explains it, Linux middleware pricing turns out to be relatively straightforward, determined by:

  • Processor Value Unit (PVU) rating for each kind of core
  • Any difference for different processor technologies (p, i, x, z, Sun, HP, AMD, etc.); note that the z is just one of many choices, not handled differently from the others
  • Number of processor cores that must be licensed (z calls them IFLs)
  • Price per PVU (constant per product, not different based upon technology)

Then it becomes a case of doing the basic arithmetic. The formula: PVU rating per core x the number of cores (IFLs) required x the price per PVU = your total cost for a given product. Given this formula it is to your advantage to plan your Linux use to minimize IFLs and cores. You can’t do anything about the price per PVU.

Distributed PVUs are the basis for licensing middleware on IFLs and are determined by the type of machine processor. The zEC12, z196, and z10 are rated at 120 PVUs per core. All others are rated at 100 PVUs. For distributed middleware running on Linux on z, this works out to, for example (a small worked sketch follows the list):

  • z114—1 IFL, 100 PVUs
  • z196—4 IFLs, 480 PVUs
  • zEC12—8 IFLs, 960 PVUs
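Here is that arithmetic spelled out as a small sketch. The per-PVU price is a made-up placeholder, since actual prices vary by product; only the PVU math follows the rules above.

```java
public class PvuCost {
    public static void main(String[] args) {
        // PVU rating per IFL by machine type (zEC12, z196, z10 = 120; others = 100)
        int pvuPerIfl = 120;        // e.g., a zEC12
        int ifls = 8;               // IFLs running the Linux middleware
        double pricePerPvu = 50.0;  // hypothetical per-PVU price for one product

        int totalPvus = pvuPerIfl * ifls;             // 8 IFLs x 120 PVUs = 960 PVUs
        double licenseCost = totalPvus * pricePerPvu; // PVUs x price per PVU

        System.out.printf("%d PVUs -> $%,.2f for this product%n", totalPvus, licenseCost);
    }
}
```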

Also, distributed systems Linux middleware offerings are eligible for sub-capacity licensing. Specifically, sub-capacity licensing is available for all PVU-priced software offerings that run on:

  • UNIX (AIX, HP-UX, and Sun Solaris)
  • i5/OS, OS/400
  • Linux (System i, System p, System z)
  • x86 (VMware ESX Server, VMware GSX Server, Microsoft Virtual Server)

IBM’s virtualization technologies also are covered by the Passport Advantage sub-capacity licensing offering, including LPAR, z/VM virtual machines in an LPAR, the CPU Pooling support introduced in z/VM 6.3 APAR VM65418, and native z/VM (on machines that still support basic mode).

And in true z style, since this can seem more complicated than it should be, there are tools available to do the job. In fact, Chase advises against doing this without a tool. The current tool is the IBM License Metric Tool V9.0.1. You can find more details on it here.

If you are considering distributed Linux middleware software or are already wrestling with the pricing process, DancingDinosaur recommends you check out Chase’s links at the top of this piece. Good luck.

DancingDinosaur is Alan Radding. Follow DancingDinosaur on Twitter, @mainframeblog. You can check out more of my work at Technologywriter.com

Real-Time Analytics on z Lead at IBM Enterprise2014 Opening Day

October 8, 2014

Users have always been demanding about performance. But does the 5-minute rule noted by Tom Rosamilia in the opening keynote at IBM Enterprise2014 go too far? It now seems users expect companies to respond to, or at least acknowledge, their comments, questions, or problems within five minutes. That means companies need to monitor and analyze social media in real time and respond appropriately.

Building on client demand to integrate real-time analytics with consumer transactions, IBM yesterday announced new capabilities for its System z. Specifically, IBM is combining the transactional virtues of the z with big data analytic capabilities into a single, streamlined, end-to-end data system. This real-time integration of analytics and transaction processing can allow businesses to increase the value of a customer information profile with every interaction the customer makes.  It also promises one way to meet the 5-minute rule, especially when a customer posts a negative comment on social media.

With the new integrated capability you can apply analytics to social sentiment and customer engagement data almost as the transactions are occurring. The goal is to gain real-time insights, which you can do on the mainframe because the data already is there and now the real-time analytics will be there too. There is no moving of data or logic. The mainframe already does this when it is used for fraud prevention. This becomes another case where the mainframe can enable organizations to achieve real-time insights and respond within five minutes. Compared to fraud analysis, the 5-minute expectation seems a luxury.

By incorporating social media into the real time analytic analysis on the mainframe you can gain an indication of how the business is performing in the moment, how you stack up to your competitors, and most importantly, meet the 5-minute response expectation.  Since we’re talking about pretty public social sentiment data, you also could monitor your competitors’ social sentiment and analyze that to see how well they are responding.

And then there are the more traditional things you can do with the integration of analytics with transactional data to provide real-time, actionable insights on commercial transactions as they occur. For example you could take advantage of new opportunities to increase sales or prevent customer churn.

According to IBM this is being driven by the rise of mobile and smartphones, numbering in the billions in a few years. The combination of massive amounts of data and consumers who are empowered with mobile access is creating a difficult challenge for businesses, IBM noted in the announcement. Consumers now expect an immediate response—the 5 minute rule—to any interaction, at any time, and through their own preferred channel of communication. Unfortunately, many businesses are trying to meet this challenge and deliver instantaneous, on-demand customer service with outdated IT systems that can only provide after-the-fact intelligence.

Said Ross Mauri, General Manager, System z, IBM Systems & Technology Group: “Off-loading operational data in order to perform analytics increases cost and complexity while limiting the ability of businesses to use the insights in a timely manner.” The better approach, he continued, is to turn to an end-to-end solution that makes analytics a part of the flow of transactions and allows companies to gain real time insights while improving their business performance with every transaction.

Of course,  Mauri was referring specifically to the System z.  However, Power Systems and especially the new POWER8 machines, which have a strong presence here at IBM Enterprise2014, can do it too. Speaker after speaker emphasized that the Power machines are optimized for lightning fast analytics, particularly real time analytics.

Still, this was a z announcement so IBM piled on a few more goodies for the z. These include new analytics capabilities for the mainframe to enable better data security and provide companies with the ability to integrate Hadoop big data with the z. Specifically, IBM is delivering:

  • IBM InfoSphere BigInsights for Linux on System z – Combines open-source Apache Hadoop with IBM innovations to deliver enterprise grade Hadoop for System z clients;
  • IBM DB2 Analytics Accelerator – Enhances data security while delivering 2000 times the response time for complex data queries.
  • New capabilities in Linux and the cloud for system z, such as IBM Elastic Storage for Linux on System z, which extends the benefits of Elastic Storage to the Linux environment on z servers, and IBM Cloud Manager with OpenStack for System z, which enables heterogeneous cloud management across System z, Power and x86 environments.

Many of these pieces are available now.  You can meet the 5-minute rule sooner than you may think.

Alan Radding is DancingDinosaur. Follow him on Twitter, @mainframeblog, or check out his website, Technologywriter.com

 

IBM Enterprise2014 to Drive Advanced Mainframe Capabilities

August 27, 2014

The summer is winding down, and IBM Enterprise2014 (October 6-10, 2014 at the Venetian in Las Vegas) will be here in a little over a month. It combines the IBM System z Technical University and the IBM Power Systems Technical University at one North American location. The advanced capabilities being featured at Enterprise2014 include cloud, big data, and much more. Let’s look at a sampling of the z-oriented cloud and big data sessions. Subsequent posts will look at POWER and other topics.

The event also will include announcing the winner of the Mainframe Mobile App Throwdown, details here. Mobile is hot and poised to drive a lot of activity through the mainframe. The next generation of mobile apps will need to integrate with core applications running on the mainframe. DancingDinosaur readers know how to do that. Top prize for the Throwdown is an iPad, a pass to the IBM Enterprise2014 conference in Las Vegas, and even a week with IBM experts to help turn the app from a concept to reality. DancingDinosaur will be there to publicize the winners here. But the competition closes Sept. 17 so sign up soon.

For Mainframe Mobile App Throwdown ideas check out the session details at Enterprise2014. For example, Taking Analytics Mobile with DB2 Web Query and More! by Doug Mack digs into mobile features added to DB2 Web Query. He discusses how to sync a mobile device up with your favorite dashboards, or use the mobile app to organize and access reports offline. Leverage REST-based Web Services and application extensions to customize the user interface for reporting functions or schedule the reports to run in the background.

Now, let’s look at a sampling of the cloud and big data sessions.

How Companies Are Using IBM System z for Cloud—Fehmina Merchant describes how organizations are building secure and robust private clouds on System z to deliver their critical IT services with agility and at lower cost. The session will examine the unique capabilities of zEnterprise as a platform for private cloud computing, in effect providing the ultimate in virtualization, security, scalability, and reliability. It also will cover how the newest IBM SmartCloud technologies can automate and optimize the deployment and management of services in the cloud. In addition, the session will offer some specific real-life examples and use cases to illustrate how a private cloud built on zEnterprise and SmartCloud provides flexible IT service delivery at the lowest cost. The session will end with live demonstrations of the latest IBM SmartCloud tools.

Should mainframe shops even care about cloud computing? That’s a question DancingDinosaur gets asked frequently. Glenn Anderson answers it in zEnterprise—Cutting Through the Hype: Straight Talk About the Mainframe and Cloud Computing. In this session he promises to explain why the cloud is relevant to a System z enterprise and helps z data center managers cut through the marketing hype.

For zLinux there is The Elephant on the Mainframe—Using Hadoop to Analyze IBM System z Data by Christopher Spaight. He describes the zEnterprise portfolio as including a rich set of options for the analysis of structured, relational data. But what, he asks, if the business needs to analyze data that is unstructured or semi-structured or a mix of relational and non-relational records? Many are looking to Hadoop in these situations. This session lays out the mainframer’s options for using Hadoop both on and off platform, and walks through several use cases for when it makes sense to use Hadoop. BTW, Hadoop on z is called zDoop.

Finally, HDFS, Hive and All That Big Data “Stuff” for IBM System z by Karen Durward looks at how the System z participates in the world of HDFS, Hive and more Big Data stuff. This session focuses on not only why z/OS data should be integrated into a Big Data environment but the various ways to do it. She will describe the latest on z/OS data integration with Big Data, Linux on System z as a Big Data platform, and more.

Then, when you have absorbed all the technology you can, enjoy three evenings of live performances: two country-rock groups, Delta Rae and The Wild Feathers, and then Rock of Ages. Check ’em out here.

Alan Radding is DancingDinosaur. Look for me at Enterprise2014. You can follow this blog and more on Twitter, @mainframeblog. Find Alan Radding on Technologywriter.com.

Industrial Strength SDS for the Cloud

June 12, 2014

The hottest thing in storage today is software defined storage (SDS). Every storage vendor is jumping on the SDS bandwagon.

The presentation titled Industrial-Strength SDS for the Cloud, by Sven Oehme, IBM Senior Research Scientist, drew a packed audience at Edge 2014 and touched on many of the sexiest acronyms in IBM’s storage portfolio. These included not just GPFS but also GSS (also called GPFS Storage Server), GNR, and LROC (local read-only cache), and even worked in the Linear Tape File System (LTFS).

The session promised to outline the customer problems SDS solves and show how to deploy it in large scale OpenStack environments with IBM GPFS.  Industrial strength generally refers to large-scale, highly secure and available multi-platform environments.

The session abstract explained that the session would show how GPFS enables resilient, robust, reliable, storage deployed on low-cost industry standard hardware delivering limitless scalability, high performance, and automatic policy-based storage tiering from flash to disk to tape, further lowering costs. It also promised to provide examples of how GPFS provides a single, unified, scale-out data plane for cloud developers across multiple data centers worldwide. GPFS unifies OpenStack VM images, block devices, objects, and files with support for Nova, Cinder, Swift and Glance (OpenStack components), along with POSIX interfaces for integrating legacy applications. C’mon, if you have even a bit of IT geekiness, doesn’t that sound tantalizing?

One disclaimer before jumping into some of the details: despite having written white papers on SDS and cloud, your blogger can only hope to approximate the rich context provided at the session.

Let’s start with the simple stuff, the expectations and requirements for cloud storage:

  • Elasticity, within and across sites
  • Secure isolation between tenants
  • Non-disruptive operations
  • No degradation by failing parts as components fail at scale
  • Different tiers for different workloads
  • Converged platform to handle boot volumes as well as file/object workload
  • Locality awareness and acceleration for exceptional performance
  • Multiple forms of data protection

Of course, affordable hardware and maintenance is expected as is quota/usage and workload accounting.

Things start getting serious with IBM’s General Parallel File System (GPFS). This is what IBMers really mean when they refer to Elastic Storage: a single namespace provided across individual storage resources, platforms, and operating systems. Add in different classes of storage devices (fast or slow disk, SSD, flash, even LTFS tape), storage pools, and policies to control data placement and you’ve got the ability to do storage tiering. You can even geographically distribute the data through IBM’s Active Cloud Engine, initially a SONAS capability sometimes referred to as Active File Manager. Now you have a situation where users can access data by the same name regardless of where it is located. And since the system keeps distributed copies of the latest data, it can handle a temporary loss of connectivity between sites.

To protect the data add in declustered software RAID, aka GNR or even GSS (GPFS Storage Server). The beauty of this is it reduces the space overhead of replication through declustered parity (80% vs. 33% utilization) while delivering extremely fast rebuild.  In the process you can remove hardware storage controllers from the picture by doing the migration and RAID management in software on your commodity servers.

[Figure: industrial-strength SDS for the cloud, architecture overview]

In the above graphic, focus on everything below the elongated blue triangle. Since it is being done in software, you can add an Object API for object storage. Throw in encryption software. Want Hadoop? Add that too. The power of SDS. Sweet.

The architecture Oehme lays out utilizes generic servers with direct-attached switched JBOD (SBOD). It also makes ample use of LROC, which provides a large read cache that benefits many workloads, including SPECsfs, VMware, OpenStack, other virtualization, and database workloads.

A key element in Oehme’s SDS for the cloud is OpenStack. From a storage standpoint OpenStack Cinder, which provides access to block storage as if it were local, enables the efficient sharing of data between services. Cinder supports advanced features, such as snapshots, cloning, and backup. On the back end, Cinder supports Linux servers with iSCSI and LVM; storage controllers; shared filesystems like GPFS, NFS, GlusterFS; and more.

Since Oehme’s goal is to produce industrial-strength SDS for the cloud, it needs to protect data. Data protection is delivered through backups, snapshots, cloning, replication, file-level encryption, and declustered RAID, which spans all disks in the declustered array and results in faster RAID rebuilds (because there are more disks available for the rebuild).

The result is highly virtualized, industrial strength SDS for deployment in the cloud. Can you bear one more small image that promises to put this all together? Will try to leave it as big as can fit. Notice it includes a lot of OpenStack components connecting storage elements. Here it is.

[Figure: industrial-strength SDS for the cloud, with OpenStack components connecting the storage elements]

DancingDinosaur is Alan Radding. Follow DancingDinosaur on Twitter @mainframeblog

Learn more about Alan Radding at technologywriter.com

Happy 50th System z

April 11, 2014

IBM threw a delightful anniversary party for the mainframe in NYC last Tuesday, April 8. You can watch video from the event here.

About 500 people showed up to meet the next generation of mainframers, the top winners of the global Master the Mainframe competition. First place went to Yong-Sian Shih, Taiwan; followed by Rijnard van Tonder, South Africa; and Philipp Egli, United Kingdom. DancingDinosaur wouldn’t be surprised if these and the other finalists at the event had job offers before they walked out of the room.

The System z may be built on 50-year old technology but IBM is rapidly driving the mainframe forward into the future. It had a slew of new announcements ready to go at the anniversary event itself and more will be rolling out in the coming months. Check out all the doings around the Mainframe50 anniversary here.

IBM started the new announcements almost immediately with Hadoop on the System z. Called zDoop, the industry’s first commercial Hadoop for Linux on System z puts MapReduce big data analytics directly on the z. IBM also announced flash for the mainframe, consisting of the latest generation of flash storage on the IBM DS8870, which promises to speed time to insight with up to 30x the performance over HDD. Put the two together and the System z should become a potent big data analytics workhorse.

But there was even more. Mobile is hot and the mainframe is ready to play in the mobile arena too. Here the problem z shops experience is cost containment. Mainframe shops are seeing a concurrent rise in their costs related to integrating new mobile applications. The problem revolves around the fact that many mobile activities use mainframe resources but don’t generate immediate income.

The IBM System z Solution for Mobile Computing addresses this with new pricing for mobile workloads on z/OS by reducing the cost of the growth of mobile transaction volumes that can cause a spike in software charges. This new pricing will provide up to a 60% reduction on the processor capacity reported for Mobile activity, which can help normalize the rate of transaction growth that generates software charges. The upshot: much mobile traffic volume won’t increase your software overhead.
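As a purely hypothetical illustration of how such a reduction might play out (actual terms depend on your agreement with IBM): if mobile transactions contributed 300 MSUs to a 1,000 MSU reported peak, trimming the mobile portion by 60% would bring the reported figure down to 820 MSUs. A back-of-the-envelope sketch:

```java
public class MobilePricingEstimate {
    public static void main(String[] args) {
        // Hypothetical numbers for illustration only
        double peakMsus = 1000.0;       // reported peak processor capacity (MSUs)
        double mobileMsus = 300.0;      // portion attributable to mobile transactions
        double mobileReduction = 0.60;  // up to 60% reduction on reported mobile capacity

        double adjustedMobile = mobileMsus * (1.0 - mobileReduction);
        double adjustedPeak = (peakMsus - mobileMsus) + adjustedMobile;

        System.out.printf("Reported capacity drops from %.0f to %.0f MSUs%n", peakMsus, adjustedPeak);
    }
}
```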

And IBM kept rolling out the new announcements:

  • Continuous Integration for System z – Compresses the application delivery cycle from months to weeks or days.   Beyond this IBM suggested upcoming initiatives to deliver full DevOps capabilities for the z
  • New version of IBM CICS Transaction Server – Delivers enhanced mobile and cloud support for CICS, able to handle more than 1 billion transactions per day
  • IBM WebSphere Liberty z/OS Connect—Rapid and secure enablement of web, cloud, and mobile access to z/OS assets
  • IBM Security zSecure SSE – Helps prevent malicious computer attacks with enhanced security intelligence and compliance reporting that delivers security events to QRadar SIEM for integrated enterprise-wide security intelligence dashboarding

Jeff Frey, an IBM Fellow and the former CTO of System z, observed that “this architecture was invented 50 years ago, but it is not an old platform.” It has evolved over those decades and continues to evolve. For example, Frey expects the z to accommodate 22nm chips and a significant increase in the number of cores per chip. He also expects vector technology, double-precision floating point and integer capabilities, and FPGAs to be built in. In addition, he expects the z to include next generation virtualization technology for the cloud to support software defined environments.

“This is a modern platform,” Frey emphasized. Other IBMers hinted at even more to come, including ongoing research to move beyond silicon to maintain the steady price/performance gains the computing industry has enjoyed the past number of decades.

Finally, IBM took the anniversary event to introduce a number of what IBM calls first-in-the-enterprise z customers. (DancingDinosaur thinks of them as mainframe virgins).  One is Steel ORCA, a managed service provider putting together what it calls the first full service digital utility center.  Based in Princeton, NJ, Phase 1 will offer connections of less than a millisecond to/from New York and Philadelphia. The base design is 300 watts per square foot and can handle ultra-high density configurations. Behind the operation is a zEC12. Originally the company planned to use an x86 system but the costs were too high. “We could cut those costs in half with the z,” said Dave Crocker, Steel ORCA chairman.

Although the Mainframe50 anniversary event has passed, there will be Mainframe50 events and announcements throughout the rest of the year.  Again, you can follow the action here.

Coming up next for DancingDinosaur is Edge2014, a big infrastructure innovation conference. Next week DancingDinosaur will look at a few more of the most interesting sessions, and there are plenty. There still is time to register. Please come—you’ll find DancingDinosaur in the bloggers lounge, at program sessions, and at the Sheryl Crow concert.

Follow DancingDinosaur on Twitter, @mainframeblog

 

Enterprise 2013 Details System z and Power Technology and New Capabilities

October 25, 2013

IBM announced a lot of goodies for z and Power users at Enterprise 2013 wrapping up in Orlando today. There were no blockbuster announcements, like a new z machine—we’re probably 12-18 months away from that and even then the first will likely focus on Power8—but it brought a slew of announcements nonetheless. For a full rundown on what was announced click here.

Cloud and analytics—not surprisingly—loom large. For example, Hadoop and a variety of other capabilities have been newly cobbled together, integrated, optimized, and presented as new big data offerings or as new cloud solutions.  This was exemplified by a new Cognos offering for CFOs needing to create, analyze and manage sophisticated financial plans that can provide greater visibility into enterprise profitability or the lack thereof.

Another announcement featured a new IBM Entry Cloud Configuration for SAP on zEnterprise. This is a cloud-enablement offering combining high-performance technology and services to automate, standardize and accelerate day-to-day SAP operations for reduced operational costs and increased ROI. Services also were big at the conference.

Kicking off the event was a dive into data center economics by Steve Mills, Senior Vice President & Group Executive, IBM Software & Systems. Part of the challenge of optimizing IT economics, he noted, was that the IT environment is cumulative. Enterprises keep picking up more systems, hardware and software, as new needs arise but nothing goes away or gets rationalized in any meaningful way.

Between 2000 and 2010, Mills noted, servers had grown at a 6x rate while storage grew at a 69x rate. Virtual machines, meanwhile, were multiplying at the rate of 42% per year. Does anyone see a potential problem here?

Mills’ suggestion: virtualize and consolidate. Specifically, large servers are better for consolidation. His argument goes like this: Most workloads experience variance in demand. But when you consolidate workloads with variance on a virtualized server the variance of the sum is less due to statistical multiplexing (which fits workloads into the gaps created by the variances). Furthermore, the more workloads you can consolidate, the smaller the variance of the sum. His conclusion: bigger servers with capacity to run more workloads can be driven to higher average utilization levels without violating service level agreements, thereby reducing the cost per workload. Finally, the larger the shared processor pool is the more statistical benefit you get.
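A quick simulation makes Mills’ point concrete: generate a handful of independent, variable workloads and compare the capacity needed to cover each workload’s peak separately against the capacity needed to cover the peak of their combined demand. The demand profile below is invented purely to illustrate the statistics; it is not data from the keynote.

```java
import java.util.Random;

public class StatisticalMultiplexing {
    public static void main(String[] args) {
        Random rnd = new Random(42);
        int workloads = 20;        // workloads to consolidate
        int intervals = 10_000;    // time samples
        double mean = 100.0, stdDev = 40.0;  // invented demand profile per workload

        double[][] demand = new double[workloads][intervals];
        for (int w = 0; w < workloads; w++)
            for (int t = 0; t < intervals; t++)
                demand[w][t] = Math.max(0, mean + stdDev * rnd.nextGaussian());

        // Capacity if each workload gets its own server sized for its own peak
        double separateCapacity = 0;
        for (int w = 0; w < workloads; w++) {
            double peak = 0;
            for (int t = 0; t < intervals; t++) peak = Math.max(peak, demand[w][t]);
            separateCapacity += peak;
        }

        // Capacity if all workloads share one big server sized for the peak of their sum
        double consolidatedCapacity = 0;
        for (int t = 0; t < intervals; t++) {
            double sum = 0;
            for (int w = 0; w < workloads; w++) sum += demand[w][t];
            consolidatedCapacity = Math.max(consolidatedCapacity, sum);
        }

        System.out.printf("Sum of individual peaks:   %.0f%n", separateCapacity);
        System.out.printf("Peak of the combined load: %.0f%n", consolidatedCapacity);
    }
}
```

The gaps created by one workload’s lull absorb another workload’s spike, so the combined peak comes in well under the sum of the individual peaks, which is the statistical case for fewer, bigger servers.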

On the basis of statistical multiplexing, the zEnterprise and the Power 795 are ideal choices for this. Depending on your workloads, just load up the host server, a System z or a big Power box, with as many cores as you can afford and consolidate as many workloads as practical.

Mills’ other cost savings tips: use flash to avoid the cost and complexity of disk storage. Also, eliminate duplicate applications—the fewer you run, the lower the cost. In short, elimination is the clearest path to saving money in the data center.

To illustrate the point, Jim Tussing from Nationwide described how the company virtualized and consolidated 60% of their 10,500 servers onto a few mainframes and saved $46 million over five years. It also allowed the company to delay the need for an additional data center by four years.

See, if DancingDinosaur was an actual data center manager it could have justified attendance at the entire conference based on the economic tips from just one of the opening keynotes and spent the rest of the conference playing golf. Of course, DancingDinosaur doesn’t play golf so it sat in numerous program sessions instead, which you will hear more about in coming weeks.

You can follow DancingDinosaur on twitter, @mainframeblog

