Posts Tagged ‘hybrid computing’

API Economy Comes to the IBM z System

June 11, 2015

What comes to mind when you hear (or read) about a RESTful IBM z System? Hint: it is not a mainframe that is loafing. To the contrary, a RESTful mainframe probably is busier than it has ever been, now running a slew of new apps, most likely mobile or social apps with REST APIs connecting to z/OS-based web services, alongside its usual workloads. Remember web services when SOA first came to the mainframe? They continue today behind the new mobile, cloud, social, and analytical workloads that are putting the spotlight on the mainframe.

Travel and Transportation - Passenger Care

Courtesy of IBM: travel fuels mobile activity (click to enlarge)

A variety of Edge2015 sessions, given by Asit Dan, chief architect, z Service API Management, and Glenn Anderson, IBM Lab Services and Training, put what the industry refers to as the emerging API economy in perspective. The z, it should come as no surprise, lies at the heart of this burgeoning API economy, not only handling transactions but also providing governance and management to the API phenomenon that is exploding. Check out IBM’s APIs for Dummies.

The difference between first generation SOA and today’s API economy lies in the new workloads—especially mobile and cloud—fueling the surging interest. The mobile device certainly is the fastest growing platform and will likely become the largest platform soon if it is not already, surpassing desktop and laptop systems.

SOA efforts initially focused on the capabilities of the providers of services, noted Dan, particularly the development, run-time invocation, and management of services. The API economy, on the other hand, focuses on the consumption of these services. It really aims to facilitate the efforts of application developers (internal developers and external business partners) who must code their apps for access to existing and new API-enabled services.

One goal of an enterprise API effort is to access already deployed services, such as z-based CICS services or those of a partner. Maybe a more important goal, especially where the z is involved, is to drive use of mainframe software assets by customers, particularly mobile customers. The API effort not only improves customer service and satisfaction but could also drive added revenue. (Have you ever fantasized about the z as a direct revenue generator?)

This calls, however, for a new set of interfaces. As Dan notes in a recent piece, APIs for accessing these assets, defined using well known standards such as web services and Representational State Transfer (REST) with JSON (JavaScript Object Notation), and published via an easily accessible catalog, make it efficient to subscribe to APIs for obtaining permissions and building new applications. Access to the APIs now can be controlled and tracked during run-time invocations (and even metered where revenue generation is the goal).
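To make Dan's consumption pattern concrete, here is a minimal sketch of what a developer subscribing to such a published REST/JSON API might write. The host name, resource path, and API-key header are invented for illustration; a real catalog entry would define its own endpoint and credentials.

```python
from urllib import request

# Hypothetical subscriber call to a z/OS-hosted REST/JSON service.
# The base URL, resource path, and API-key header are invented;
# a real API catalog entry defines its own contract.
API_BASE = "https://api.example-bank.com/v1"

def build_balance_request(account_id: str, api_key: str) -> request.Request:
    """Build (but do not send) a GET request for an account balance."""
    url = f"{API_BASE}/accounts/{account_id}/balance"
    return request.Request(url, headers={
        "Accept": "application/json",
        "X-API-Key": api_key,  # lets the gateway track and meter the call
    })

req = build_balance_request("12345678", "demo-key")
# urllib.request.urlopen(req) would invoke the backing z/OS service;
# here we only show the request being assembled.
print(req.full_url)
```

The shape matters more than the names: the developer subscribes, receives a key, and codes against a published REST resource, while the provider controls, tracks, and (where revenue is the goal) meters each run-time invocation.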

Now the API economy can morph into a commercial exchange of business functions, capabilities, and competencies as services using web APIs, noted Glenn Anderson at Edge2015. In-house business functions running on the z can evolve into an API as-a-service delivery vehicle, which amounts to another revenue stream for the mainframe data center.

The API economy often is associated with the concept of containers. Container technology provides a simplified way to make applications more mobile in a hybrid cloud, Anderson explained, and brings some distinct advantages. Specifically, containers are much smaller in size than virtual machines and provide more freedom in the placement of workloads in a cloud (private, public, hybrid) environment. Container technology is being integrated into OpenStack, which is supported on the z through IBM Cloud Manager. Docker is the best known container technology and it works with Linux on z.

With the combination of SOA, web services, REST, JSON, OpenStack, and Docker all z capable, a mainframe data center can fully participate in the mobile, cloud, and API economy. BTW, POWER servers can play the API, OpenStack, Docker game too. Even Watson can participate in the API economy through IBM’s early March acquisition of AlchemyAPI, a provider of scalable cognitive computing API services. The acquisition will drive the API economy into cognitive computing too. Welcome to the mainframe API economy.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing on Technologywriter.com and here.

Legacy Storage vs. Software Defined Storage at IBM Edge2015

May 21, 2015

At Edge2015 software defined storage (SDS) primarily meant IBM Spectrum Storage, the new storage software portfolio designed to address data storage inefficiencies by separating storage functionality from the underlying hardware through an intelligent software layer. To see what DancingDinosaur posted on Spectrum Storage in February when it was unveiled click here. Spectrum became the subject of dozens of sessions at the conference. Check out a general sampling of Edge2015 sessions here.

Jon Toigo, a respected storage consultant and infuriating iconoclast to some, jumped into the discussion of legacy storage vs. SDS at a session provocatively titled 50 Shades of Grey. He started by declaring “true SANs never reached the market.” On the other hand, SDS promises the world—storage flexibility, efficiency, avoidance of vendor lock-in, and on and on.


Courtesy Jon Toigo (click to enlarge)

What the industry actually did as far as storage sharing goes, Toigo explained, was provide serial SCSI over a physical layer fabric and use a physical layer switch to make and break server-storage connections at high speed. Although network-like, there was no management layer (which should be part of any true network model, he believes). Furthermore, the result was limited by the Fibre Channel Protocol and standards designed so that “two vendors could implement switch products that conformed to the letter of the standard…with absolute certainty that they would NOT work together,” said Toigo. iSCSI later enabled storage fabrics using TCP/IP, which made it easier to deploy the fabric since organizations already were deploying TCP/IP networks for other purposes.

Toigo’s key requirement: unified storage management, which means managing the diversity and heterogeneity of the arrays comprising the SAN. The culprits preventing this, as he sees it, are so-called value-add services on array controllers that create islands of storage. You know these services: thin provisioning, on-array tiering, mirroring, replication, dedupe, and more. The same value-add services are the culprits driving the high cost of storage. “Storage hardware components are commoditized, but value-add software sustains pricing.”

Spectrum Storage incorporates more than 700 IBM patents and is designed to help organizations transform to a hybrid cloud business model by managing massive amounts of data where they want it and how they want it, quickly and easily, from a single dashboard. The software helps clients move data to the right location at the right time, whether to flash storage for fast access or to tape and cloud for the lowest cost.

This apparently works for Toigo, with only a few quibbles: vendors make money by adding more software, and inefficiency is added when they implement non-standard commands. IBM, however, is mostly in agreement with Toigo. According to IBM, a new approach is needed to help organizations address [storage] cost and complexity driven by tremendous data growth.  Traditional storage is inefficient in today’s world. However, Spectrum Storage software, IBM continued, helps organizations to more efficiently leverage their hardware investments to extract the full business value of data. Listen closely and you might even hear Toigo mutter Amen.

SDS may or may not be the solution. Toigo titled this session 50 Shades of Grey because the vendors can’t even agree on a definition of what constitutes SDS. Yet it is being presented as a panacea for everything that is wrong with legacy storage.

The key differentiator for Toigo is where a vendor’s storage intelligence resides: on the array controller, in the server hypervisor, or in the software stack. As it turns out, some solutions are hypervisor dedicated or hypervisor dependent. VMware’s Virtual SAN, for instance, only works with its hypervisor. Microsoft’s Clustered Storage Spaces is proprietary to Microsoft, though it promises to share its storage with VMware – simple as pie, just convert your VMware workload into Microsoft VHD format and import it into Hyper-V so you can share the Microsoft SDS infrastructure.

IBM Spectrum passes Toigo’s 50 Shades test. It promises simple, efficient storage without the cost or complexity of dedicated hardware. IBM managers at Edge2015 confirmed Spectrum could run on generic servers and with generic disk arrays. With SDS you want everything agnostic for maximum flexibility.

Toigo’s preferred approach: virtualized SDS with virtual storage pools and centralized select value-add services that can be readily allocated to any workload regardless of the hypervisor. DancingDinosaur will drill down into other interesting Edge2015 sessions in subsequent posts.


IBM Edge Rocks 6000 Strong for Digital Transformation

May 15, 2015

Unless you’ve been doing the Rip Van Winkle thing, you have to have noticed that a profound digital transformation is underway, fueled in this case from the bottom. “This is being driven by people embracing technology,” noted Tom Rosamilia, Senior Vice President, IBM Systems. And it will only get greater with quantum computing, a peek at which was provided at Edge2015 by Arvind Krishna, senior vice president and director, IBM Research.


(Quantum computing, courtesy of IBM, click to enlarge)

Need proof? Just look around. New cars are now hot spots, and it’s not just luxury cars. Retailers are adding GPS inside their stores and are using it to follow and understand the movement of shoppers in real time. Eighty-two percent of millennials do their banking from their mobile phone. As Rosamilia noted, it amounts to “an unprecedented digital disruption” in the way people go about their lives. Dealing with this digital transformation and the challenges and opportunities it presents was what IBM Edge 2015 was about. With luck you can check out much from Edge2015 at the media center here.

The first day began with a flurry of product announcements starting with a combined package of new servers and storage software and solutions aimed to accelerate the development of hybrid cloud computing.  Hybrid cloud computing was big at Edge2015. To further stimulate hybrid computing IBM introduced new flexible software licensing of its middleware to help companies speed their adoption of hybrid cloud environments.

Joining in the announcement was Rocket Software, which sponsored the entertainment, including the outstanding Grace Potter concert. As for Rocket’s actual business, the company announced Rocket Data Access Service on Bluemix for z Systems, intended to provide companies a simplified connection to data on the IBM z Systems mainframe for development of mobile applications through Bluemix. Starting in June, companies can access a free trial of the service, which works with a range of data storage systems, including VSAM, Adabas, IMS, CICS, and DB2, and enables access through common mobile application interfaces, including MongoDB, JDBC, and the REST protocol. Now z shops have no excuse not to connect their systems with mobile and social business.

Hardware also grabbed the spotlight. IBM introduced new systems, including the IBM Power System E850, a four-socket server with flexible capacity and up to 70% guaranteed utilization. The E850 targets cloud service providers and medium or large enterprises looking to securely and efficiently deploy multi-tenancy workloads while speeding access to data through larger in-memory databases with up to 4TB of installed memory.

Also announced were the IBM Power System E880, designed to scale to 192 cores and well suited for IBM DB2 with BLU Acceleration, enhancing the efficiency of cloud deployments; and the PurePOWER System, a converged infrastructure for cloud, intended to help deliver insights and managed with OpenStack.

The company also will be shipping IBM Spectrum Control Storage Insights, a new software-defined storage offering that provides data management as a hybrid cloud service to optimize on-premises storage infrastructures. Storage Insights is designed to simplify storage management by improving storage visibility while applying analytics to ease capacity planning, enhance performance monitoring, and improve storage utilization. It does this by reclaiming under-utilized storage. Thank you, analytics.

Finally for storage, the company announced IBM XIV GEN 3, designed for cloud with real-time compression that enables scaling as demand for data storage capacity expands. You can get more details on all the announcements at Edge 2015 here.

Already announced is IBM Edge 2016, again at the Venetian in Las Vegas in October 2016. That gives IBM 18 months to pack it with even more advances. Doubt there will be a new z by then; a new business class version of the z13 is more likely.

DancingDinosaur will take up specific topics from Edge2015 in the coming week. These will include social business on z, real-time analytics on z, and Jon Toigo sorting through the hype on SDS.


IBM Edge 2015 as All Platforms Infrastructure Innovation Conference

April 2, 2015

Please join DancingDinosaur at IBM Edge2015 at the Venetian in Las Vegas, May 10-15. It will consist of an Executive Edge track and a Technical track. The program is crammed with hundreds of sessions.  You can find the Technical track session list here. Dare you to find even 10 sessions that don’t interest you.


Courtesy of IBM: Marie Wieck, General Manager, Middleware (click to enlarge)

This year Edge2015 merges last year’s two events, IBMEdge and Enterprise 2014, into what IBM calls the Infrastructure Innovation Conference. It is IBM’s only US event covering all IBM platforms—System Storage, IBM z Systems, IBM Power Systems, and IBM Middleware at a single venue.  It includes three Technical Universities: System Storage, z Systems, and Power Systems for those working toward certification.

Executive Edge, which this post will look at a little more closely below, offers an exclusive two-day summit for IT executives and leaders, as IBM explains, featuring the latest innovations and technology announcements, client success stories, and insightful presentations from IBM executives and industry thought leaders. Plus, IBM promises top-tier, one-on-one executive meetings and exclusive networking opportunities.

The IBM Systems and Technology Group (STG) top brass will be there. This IBM Systems lineup includes: Tom Rosamilia, Senior Vice President; Stephen Leonard, General Manager, Sales; Jamie M. Thomas, General Manager, IBM Storage & Software Defined Systems; Ross Mauri, General Manager, z Systems; Doug Balog, General Manager, Power Systems; and Marie Wieck, General Manager, Middleware.

And then there is the free entertainment IBM provides. The headliner act is Penn & Teller, a pair of magicians. DancingDinosaur’s favorite, however, is Grace Potter, who delivers terrific hard rock and roll. If you skip the casinos you can catch both and avoid losing money in the process.

With the Executive track IBM promises to present its most innovative approaches to using IBM Systems and Middleware as a foundation for challenging new areas of information management including:

  • Cloud Infrastructure, especially hybrid clouds
  • Big Data and Analytics
  • Systems of Record
  • Systems of Engagement
  • Mobile and Security Services
  • Flash and Storage Virtualization
  • Software Defined Infrastructure

Cloud and big data/analytics have become accepted pillars of IT business value. Mobile, flash, and software-defined infrastructure are being widely embraced as the next wave of IT value delivery. And security must be a priority for everything. Also included will be dozens of client case studies.

Throughout both the Executive and Technology tracks there will be numerous sessions citing client cases and use cases. Although not the same, both show how to actually deploy technology for business value.

For example, the session (cCV0821) titled Be Hybrid or Die revolves around hybrid clouds. The session promises a clear understanding of the term hybrid and why hybrid has become the next step in IT value creation, extraction, and efficiency gains. Citing use cases, the session will show how to map your business needs to the functional building blocks of hybrid clouds and to the IBM products portfolio that addresses those needs. It concludes by examining where IBM is investing, its long-term view, and how this will increase your IT flexibility.

Speaking of cases, a session (cIT0514) titled How to Create Rock Solid Business Cases to Get IT Projects Approved looks at the subject from the standpoint of the IT person charged with answering the unavoidable ROI question. BTW, DancingDinosaur develops business cases based on various looks at cost of ownership, should you need help. They are time-consuming but necessary. Management requires an understanding of current IT costs and strengths and the expected payback to better assess new ideas and technologies. This session identifies the key elements of an effective IT cost analysis and explores how to build compelling business cases around those costs and, you hope, quantifiable benefits. Concepts discussed include complexity analysis, platform assessment, Fit for Purpose analysis, and financial case structures. Hmmm, definitely one DancingDinosaur will attend.
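The arithmetic at the core of such a business case can be sketched in a few lines. The figures below are invented placeholders, not real platform costs; the structure (acquisition cost, run cost, planning horizon, payback) is what the session is about.

```python
# Toy business-case model: compare current-state cost with a proposed
# platform over a planning horizon. All numbers are illustrative
# placeholders, not actual IT costs.

def total_cost(acquisition: float, annual_run: float, years: int) -> float:
    """Simple undiscounted total cost of ownership."""
    return acquisition + annual_run * years

def payback_years(acquisition: float, annual_saving: float) -> float:
    """Years until cumulative savings cover the up-front spend."""
    return acquisition / annual_saving

current = total_cost(acquisition=0, annual_run=1_200_000, years=3)
proposed = total_cost(acquisition=900_000, annual_run=700_000, years=3)

print(current - proposed)               # 600000 saved over three years
print(payback_years(900_000, 500_000))  # 1.8 years to break even
```

A real analysis layers on the session's other concepts, such as complexity analysis, platform assessment, and discounting, but the skeleton stays the same: quantify the costs, then quantify the hoped-for benefits.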

Another session looks at the first customer experiences using SAP HANA on Power. SAP HANA is the company’s in-memory, column-oriented RDBMS that handles both high volume transactions and complex analytical query processing on the same platform, and does so very fast since all is in-memory. The session, (lBA0464) titled SAP HANA on Power Systems: First Experiences from Customers in the Test and Evaluation Program, reports on the first experiences gathered from the pilot clients. This presentation begins with a short overview of SAP HANA in general, and then covers specific aspects in the deployment of SAP HANA on IBM Power Systems and IBM storage. You’ll hear about the advantages of SAP HANA on Power Systems (vs. x86) and discover how fast and easy it is to implement in a private cloud with full use of PowerVM capabilities.

In about six weeks DancingDinosaur will be heading to IBM Edge2015. Please join me there. You can find me hanging out wherever people gather around available power outlets to recharge mobile devices. DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing on Technologywriter.com and here. And join DancingDinosaur at IBM Edge2015.

IBM’s z13 Redefines Mainframe Performance, Economics, and Versatility

January 14, 2015

With the introduction of the new IBM z13, the latest rev of the 50-year-old mainframe product line introduced today, it will be hard for IT people to persist in the mistaken belief that the mainframe can’t handle today’s workloads or that it is too expensive. Built around an eight-core, 22nm processor, the IBM z13’s 141 configurable cores (any mix of CP, IFL, zIIP, ICF, and SAP) deliver a 40% total capacity improvement over the zEC12.


The z13 looks like the zEC12, but under the hood it’s far more powerful

The IBM z13 will handle up to 8,000 virtual enterprise-grade Linux servers per system, more than 50 per core. Remember when Nationwide Insurance consolidated 3,000 x86 servers, mainly running Linux, on a System z and saved $15 million over three years, a figure later revised considerably higher? They got a lot of press out of that, including from DancingDinosaur as recently as last May. With the IBM z13 Nationwide could consolidate more than twice the number of Linux servers at a lower cost, and the resulting savings would be higher still.

If you consider Linux VMs synonymous with cloud services, the new machine will enable superior cloud services at up to 32% lower cost than an x86-based cloud. It also will cost up to 60% less than a public cloud over three years. In almost every metric, the IBM z13 delivers more capacity or performance at lower cost.

IBM delivered an almost constant stream of innovations that work to optimize performance and reduce cost. For example, it boosted single thread capacity by 10% over the zEC12. It also delivers 3x more memory to help both z/OS and Linux workloads. The more memory combined with a new cache design, improved I/O bandwidth, and compression will boost analytics on the machine. In fact, with the z13 you can do in-memory analytics if you want it.

The one thing it doesn’t do is boast the fastest commercial processor in terms of sheer speed. The zEC12 processor still is the fastest, but with all the optimizations and enhancements IBM has built in, the z13 should beat the zEC12 in handling the workloads organizations most want to run. For instance, the z13 performs 2x faster than the most common server processors, offers 300 percent more memory and 100 percent more bandwidth, and delivers vector processing analytics to speed mobile transactions. As a result, the z13 transaction engine is capable of analyzing transactions in real time.

Similarly, simultaneous multi-threading delivers more throughput for Linux and zIIP-eligible workloads while larger caches optimize data serving. It also improved on-chip hardware compression, which saves disk space and cuts data transfer time.  Also, there is new workload container pricing and new multiplex pricing, both of which again will save money.

In addition, IBM optimized this machine for both mobile and analytics, as well as for cloud. This is the new versatility of this redefined mainframe. Last year, IBM discounted the cost of mobile transactions on the z. The new machine continues to optimize for mobile with consolidated REST APIs for all z/OS transactions through z/OS Connect while seamlessly channeling z/OS transactions to mobile devices with the MobileFirst Platform. It also ensures end-to-end security from mobile device to mainframe with z/OS, RACF, and MobileFirst products.
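As a hedged sketch of that channeling, a mobile back end might drive a z/OS transaction through a REST call along these lines. The host, path, and JSON fields here are invented; a real z/OS Connect deployment defines its own JSON-to-transaction mapping.

```python
import json
from urllib import request

# Hypothetical REST resource fronting a z/OS transaction, in the style
# of z/OS Connect. Host, path, and payload fields are invented for
# illustration only.
payload = json.dumps({"accountId": "12345678", "amount": 25.00}).encode()

req = request.Request(
    "https://zosconnect.example.com/banking/transfer",
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)
# request.urlopen(req) would drive the mapped transaction; the mobile
# app sees only JSON over HTTPS, never green screens or copybooks.
print(req.get_method(), req.full_url)
```

The design point is the consolidation the paragraph describes: every z/OS transaction is reachable through one consistent REST surface, so mobile developers need no mainframe-specific plumbing.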

For analytics, IBM continues to optimize Hadoop and expand the analytics portfolio on the z13. Specifically, the massive memory capability, up to 10TB, opens new opportunities for in-memory computing. The ability to perform analytics by combining data from different data sources and do it in-memory and in real-time within the platform drives more efficiencies, such as eliminating the need for ETL and the need to move data between platforms, as had previously often been the case. Now, just use Hadoop on z to explore data there within the secure zone of the mainframe. This opens a wide variety of analytics workloads, anything from fraud prevention to customer retention.

In addition to improved price/performance overall, IBM announced Technology Update Pricing for z13, including AWLC price reductions for z13 that deliver 5% price/performance on average in addition to performance gains in software exploitation of z13. DancingDinosaur will dig deeper into the new z13 software pricing in a subsequent post.

And the list of new and improved capabilities in the z13 just keeps going on and on. With security, IBM has accelerated the speed of encryption up to 2x over the zEC12 to help protect the privacy of data throughout its life cycle. It also extended enhanced public key support for constrained digital environments using Elliptic Curve Cryptography (ECC), which helps applications like Chrome, Firefox, and Apple’s iMessage. In addition, the z13 sports a few I/O enhancements; for example, it is the first system to use a standards-based approach for enabling Forward Error Correction in a complete end-to-end solution.

Finally, IBM has not abandoned hybrid computing, where you can mix a variety of blades, including x86 Windows blades and others in the zBX extension cabinet. With the z13 IBM introduced the new Mod 004 zBX cabinet, an upgrade from the previous Mod 002 and 003.

DancingDinosaur expects the introduction of the z13, along with structural organization changes, to drive System z quarterly financial performance back into the black as soon as deliveries roll. And if IBM stays consistent with past behavior, within a year or so you can expect a scaled-down, lower-cost business class version of the z13, although it may not be called business class. Stay tuned; it should be an exciting year.

DancingDinosaur is Alan Radding, a long-time IT analyst and writer. You can follow him on Twitter, @mainframeblog, or check out more of his writing and analysis at Technologywriter.com or here.

IBM Ends 2014 with Flurry of Outsourcing, Cloud Activity

January 4, 2015

Happy New Year. There is much to look forward to in 2015. At the least it probably is time for IBM to rev the System z. The zEnterprise EC12 was introduced in Aug. 2012. You should expect a new machine this year.


IBM ended the year with a flurry of deals involving outsourcing in various forms, hybrid clouds, and the global expansion of its cloud centers. The company made it clear throughout this past difficult year that its focus will be on cloud computing, analytics, and mobile, and that is what it did. DancingDinosaur will leave to the Wall St. analysts the question of whether the deals represent enough action at a sufficient margin.

IBM believes its future rides on the cloud. To that end it writes: Enterprise cloud deployments, specifically hybrid cloud, are growing at a significant rate.  According to Gartner, nearly half of all enterprises will have a hybrid cloud deployed by 2017.  Chief among the driving forces behind the adoption of cloud computing worldwide, including hybrid cloud, are requirements for businesses and governments to store certain data locally to comply with data residency regulations, as well as a growing desire for startups to expand their businesses globally.  IBM estimates about 100 nations and territories have adopted laws that dictate how governments and private enterprises handle personal data.

The expansion of the company’s global footprint of its cloud centers, now up to 40 locations, represents an effort to capitalize on cloud interest. Since the start of November, the company announced more than $4 billion worth of cloud agreements with major enterprises around the world. These include Lufthansa, ABN AMRO, WPP, Woox Innovations, Dow Water, and Thomson Reuters. Some of these, you will notice, are mainframe shops. DancingDinosaur is assuming they are augmenting their z with a hybrid cloud, not replacing it.

In addition, there are new organizations, referred to by IBM as born-on-the-web innovators, which are building their business on the IBM Cloud. Since November, IBM has announced wins with Diabetizer and Preveniomed, Hancom, Musimundo, and Nubity. Collectively these wins reflect IBM’s ability to deliver a full range of services through the cloud. Some of these are analytics-driven wins.

An interesting recently announced win was Westfield Insurance, which began working with IBM to transform their claims operations. To this end, Westfield is looking at business analytics to increase flexibility, operational efficiency, and effectiveness while enabling the company to keep pace with its evolving customer base and business growth. When DancingDinosaur last checked, Westfield was a z196 shop running DB2.

As IBM reports, leading insurers are leveraging cloud, analytics and social technologies to stay ahead of their competition. Specifically, more than 60% of identified leading insurers are focused on advanced analytics to improve their claims handling in order to streamline processes and increase customer satisfaction. Westfield’s multi-year claims handling transformation initiatives, including process, organizational and technology changes, focus on using data and analytics to better serve customers.

For Westfield, IBM developed a new protocol to migrate data for use with predictive models, built simulation models to evaluate bottlenecks in the claims process, and designed a strategy for expedited workflow. This simulation helped expedite organizational changes. The new claims system will also utilize a suite of IBM counter-fraud capabilities to detect suspicious activity.

In addition, IBM helped Westfield optimize its current claims handling process to provide a seamless, fully-integrated customer experience. Westfield’s claims system with Guidewire is now consolidated to ensure efficient operations across its network.

To further drive its cloud business IBM simplified its cloud contract with a goal of reducing the complexity and speeding the signing of cloud agreements. The result is a standard, two-page agreement that replaces the previous longer, more complex contracts, which typically entailed long negotiations and reviews before a deal was signed. By comparison, its cloud competitors require customers to review and commit to more complex contracts that commonly are at least five times longer and also incorporate terms and conditions from other websites, IBM reports.

Citing leading industry analyst firms, IBM claims global leadership in cloud computing with a diverse portfolio of open cloud solutions designed to enable clients for the hybrid cloud era. IBM has helped more than 30,000 cloud clients around the world, has invested more than $7 billion since 2007 in 17 acquisitions to accelerate its cloud initiatives, and holds more than 1,560 cloud patents. IBM also processes more than 5.5 million client transactions daily through its public cloud.

IBM’s initiatives in cloud computing will not diminish its interest in the System z enterprise cloud platform. To the contrary, a recent IBM analysis shows the z enhancing the economic advantages of the cloud: a business scaling up to about 200 virtual machines (VMs) gets far more efficient and economical results by using the Enterprise Linux Server as an enterprise cloud than with a virtualized x86 or public-cloud model. And the deal gets even better if you acquire the Enterprise Linux Server under either the Solution Edition program for Enterprise Linux or the System z Solution Edition for Cloud Computing.
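A back-of-the-envelope version of that scaling argument can be written down directly. The cost figures below are invented for illustration; the structural point, that a platform with a high fixed cost but a low per-VM cost eventually undercuts one with a low fixed cost but a higher per-VM cost, is what drives a crossover around some VM count.

```python
# Toy break-even model for platform consolidation. All figures are
# invented placeholders; only the crossover structure matters.

def platform_cost(fixed: float, per_vm: float, vms: int) -> float:
    """Total cost of running `vms` virtual machines on a platform."""
    return fixed + per_vm * vms

def crossover(fixed_a: float, per_vm_a: float,
              fixed_b: float, per_vm_b: float) -> float:
    """VM count at which platform A becomes cheaper than platform B."""
    return (fixed_a - fixed_b) / (per_vm_b - per_vm_a)

# e.g. enterprise server: large fixed cost, cheap incremental VM;
# scale-out x86: small fixed cost, pricier incremental VM.
print(crossover(500_000, 500, 50_000, 3_000))  # crossover near 180 VMs
```

With these made-up inputs the crossover lands near the same order of magnitude as IBM's claimed 200-VM threshold, which is the general shape of the consolidation argument rather than a reproduction of IBM's actual analysis.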

DancingDinosaur is Alan Radding, a longtime IT analyst/writer. Follow DancingDinosaur on Twitter, @mainframeblog. Check out more of his work at Technologywriter.com and here.

Compuware Aims for Mainframe Literacy in CIOs

November 13, 2014

Many IT professionals, especially younger ones, are clueless about the mainframe. Chris O’Malley, president of the mainframe business at Compuware, has met CIOs who are versed in everything about IT and have seemingly done everything there is with computers, but “they are not literate about the mainframe.” That means the mainframe never comes to mind. IBM could give away a zEnterprise for free, which it comes close to doing today through the System z Solution Edition program, and these CIOs would ignore it. O’Malley wants to address that.

Compuware Mainframe Excellence 2025 cover (click to enlarge)

In response, Compuware is following the path of the IBM System z Academic Initiative, though without its extensive global involvement of colleges and universities, with a program called Mainframe Excellence 2025, which it describes as a generational call for strategic platform stewardship. “We’re also trying to debunk a lot of issues around the mainframe,” O’Malley continues.


Chris O’Malley, Pres. Mainframe, Compuware

Compuware refers to Mainframe Excellence 2025 as a manifesto, something of a call to arms for millennials to storm the IT gates and liberate IT management from enslavement to x86 computing. Somehow DancingDinosaur doesn’t see it happening exactly that way; it envisions coexistence and synergy.

Most of the Mainframe Excellence document goes over ground DancingDinosaur and many others have covered before. It is delightful, however, to see others refreshing the arguments. And, the document adds some interesting data. For instance, over 1.15 million CICS transactions are executed on System z every second of every day! That’s more than all Google searches, YouTube views, Facebook likes, and Twitter tweets combined.
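A quick back-of-the-envelope check puts that throughput claim in perspective. The 1.15 million transactions per second figure comes from the Compuware document; the daily comparison volumes below are rough 2014-era public estimates, included only as illustrative assumptions:

```python
# Scale of the claimed CICS throughput, normalized to a day.
CICS_PER_SECOND = 1_150_000          # Compuware's figure
SECONDS_PER_DAY = 86_400

cics_per_day = CICS_PER_SECOND * SECONDS_PER_DAY   # ~99.4 billion/day

# Rough 2014-era daily volumes (approximate public estimates, illustrative only):
daily_volumes = {
    "Google searches": 3.5e9,
    "YouTube views": 7.0e9,
    "Facebook likes": 4.5e9,
    "Tweets": 0.5e9,
}
combined = sum(daily_volumes.values())

print(f"CICS/day: {cics_per_day:,} vs. combined web activity: {combined:,.0f}")
```

Even with generous estimates for the web services, the combined total comes in well under the claimed CICS volume.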

It also pays homage to what it refers to as the mainframe’s culture of excellence. It characterizes this culture by rigorous adherence to a standard of excellence demonstrably higher than that associated with other platforms, notably x86. IT organizations actually expect, accept, and plan for problems and patches in other platforms (think Microsoft Patch Tuesday). Mainframe professionals, on the other hand, have zero-tolerance for downtime and system failures and the mainframe generally lives up to those high expectations.

Ironically, the document points out that the culture of excellence has created a certain chasm between mainframe professionals and the rest of IT. In fact, this ingrained zero-failure culture of the mainframe community—including both vendors and enterprise IT staffs—can sometimes put it at odds with the very spirit of innovation that allows the mainframe to deliver the repeated advances in price/performance and new capabilities that consistently produce tremendous value.

Compuware concludes its report with an action checklist:

  • Fully inventory existing mainframe data, applications (including business rules), capacity, utilization/MSUs, and management tools, a veritable trove of value embedded in mainframe code and business rules.
  • Build a fact-based skills plan with a realistic timeline.
  • Ramp up current and road-mapped mainframe capabilities.
  • Rightsize investments in mainframe application stewardship.
  • Institute an immediate moratorium on short-term cost-cutting that carries long-term negative consequences.
  • Combat denial and hype regarding non-mainframe platform capabilities, costs, and risks.

And Compuware’s final thought should give encouragement to all those who must respond to the mainframe-costs-too-much complaint: IT has a long history of underestimating the real TCO and marginal costs of new platforms while overestimating their benefits. A more sober assessment of these platforms will make the strategic value and economic advantages of the mainframe much more evident in comparison.

Compuware certainly is on the right track with Mainframe Excellence 2025. DancingDinosaur would like, however, to see the company coordinate its efforts with the System z Academic Initiative, the Master the Mainframe effort, and similar programs.

DancingDinosaur is Alan Radding, a veteran IT writer/analyst. You can follow DancingDinosaur on Twitter, @mainframeblog. Also check out my other IT writing at Technologywriter.com and here.

IBM Creates Comprehensive Cloud Security Portfolio

November 6, 2014

On Wednesday IBM introduced what it describes as the industry’s first intelligent security portfolio for protecting people, data, and applications in the cloud. It is not a single product but a set of products that taps a wide range of IBM’s cloud security, analytics, and services offerings. The portfolio dovetails with IBM’s end-to-end mainframe security solution as described at Enterprise2014 last month.

Cloud security certainly is needed. In a recent IBM CISO survey, 44% of security leaders said they expect a major cloud provider to suffer a significant security breach in the future, one that will drive a high percentage of customers to switch providers, not to mention the risks to their data and applications. Cloud security fears have long been one of the biggest impediments to organizations moving more data, applications, and processes to the cloud. These fears are further complicated by the fact that IT managers feel much of what their cloud providers do is beyond their control. An SLA only gets you so far.

2014 IBM CISO study (click to enlarge)

The same survey found that 86% of the leaders surveyed say their organizations are now moving to the cloud; of those, three-fourths see their cloud security budgets increasing over the next 3-5 years.

As is typical of IBM when it identifies an issue and feels it has an edge, the company has assembled a structured portfolio of tools, a handful of which were introduced Wednesday. The portfolio includes versions of IBM’s own tools optimized for the cloud as well as tools and technologies IBM has acquired. Expect more cloud security tools to follow. Together the tools aim to manage access, protect data and applications, and enable visibility in the cloud.

For example, for access management IBM is bringing out Cloud Identity Services, which onboards and manages users through IBM-hosted infrastructure. To safeguard access to cloud-deployed apps it is introducing a Cloud Sign-On service used with Bluemix. Through Cloud Sign-On, developers can quickly add single sign-on to web and mobile apps via APIs. Another product, Cloud Access Manager, works with SoftLayer to protect cloud applications with pattern-based security, multi-factor authentication, and context-based access control. IBM even has a tool to handle privileged users like DBAs and cloud admins, the Cloud Privileged Identity Manager.
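IBM’s announcement doesn’t spell out the Single Sign On API itself, but services of this kind typically follow the OAuth 2.0 authorization-code pattern. As a sketch only, here is roughly what “adding single sign-on via APIs” looks like from the developer’s side; the endpoint URL and parameter handling below are generic OAuth conventions, not documented IBM interfaces:

```python
from urllib.parse import urlencode, urlparse, parse_qs

# Hypothetical endpoint; a real SSO service publishes its own URLs.
AUTHORIZE_URL = "https://sso.example.cloud/oauth2/authorize"

def build_login_url(client_id: str, redirect_uri: str, state: str) -> str:
    """Build the URL the app redirects the user to for single sign-on."""
    params = {
        "response_type": "code",   # authorization-code flow
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "state": state,            # CSRF-protection token
    }
    return f"{AUTHORIZE_URL}?{urlencode(params)}"

def extract_auth_code(callback_url: str, expected_state: str) -> str:
    """Pull the one-time authorization code out of the SSO callback URL."""
    query = parse_qs(urlparse(callback_url).query)
    if query.get("state", [None])[0] != expected_state:
        raise ValueError("state mismatch: possible CSRF")
    return query["code"][0]
```

The app would then exchange that one-time code for an access token at the service’s token endpoint, after which the user is signed in without the app ever handling a password.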

Here is a run-down of what was announced Wednesday. Expect it to grow.

  • Cloud Identity Services—IBM Cloud Identity Services
  • Cloud Sign-On Service—IBM Single Sign On
  • Cloud Access Manager—IBM Security Access Manager
  • Cloud Privileged Identity Manager—IBM Security Privileged Identity Manager (v2.0)
  • Cloud Data Activity Monitoring—IBM InfoSphere Guardium Data Activity Monitoring
  • Cloud Mobile App Analyzer Service—IBM AppScan Mobile Analyzer
  • Cloud Web App Analyzer Service—IBM AppScan Dynamic Analyzer
  • Cloud Security Intelligence—IBM QRadar Security Intelligence (v7.2.4)
  • Cloud Security Managed Services—IBM Cloud Security Managed Services

Now let’s see how these map to what the z data center already can get with IBM’s End-to-End Security Solution for the Mainframe. For starters, security is built into every level of the System z structure: processor, hypervisor, operating system, communications, and storage.

In terms of security analytics, zSecure, Guardium, AppScan, and QRadar improve your security intelligence; some of these tools are included in the new cloud security portfolio. Intelligence is collected from z/OS, RACF, CA ACF2, CA Top Secret, CICS, and DB2. The zSecure suite also helps address compliance challenges. In addition, InfoSphere Guardium Real-time Activity Monitoring handles activity monitoring, blocking and masking, and vulnerability assessment.

Of course the z brings its crypto coprocessor, Crypto Express4S, which complements the cryptographic capabilities of CPACF. There is also a new zEC12 coprocessor mode, EP11: a Crypto Express adapter configured with the Enterprise PKCS #11 (EP11) firmware, also called the CEX4P adapter. It provides hardware-accelerated support for crypto operations based on RSA’s PKCS #11 Cryptographic Token Interface Standard. Finally, the z supports the necessary industry standards, like FIPS 140-2 Level 4, to ensure multi-tenant public and private cloud workloads remain securely isolated. So the cloud side, at least, is handled to some extent.

The mainframe has long been considered the gold standard for systems security. Now it is being asked to take on cloud-oriented and cloud-based workloads while delivering the same level of unassailable security. Between IBM’s end-to-end mainframe security solution and the new intelligent (analytics-driven) security portfolio for the cloud, enterprise shops now have the tools to do the job right.

And you will want all those tools because security presents a complex, multi-dimensional puzzle requiring different layers of integrated defense. It involves not only people, data, applications, and infrastructure but also mobility, on-premises and off-premises deployment, and structured, unstructured, and big data. This used to be called defense in depth, but with the cloud and mobility the industry is moving far beyond that.

DancingDinosaur is Alan Radding, a veteran IT analyst with well over 20 years covering IT and the System z. You can find more of my writing at Technologywriter.com and here. Also follow DancingDinosaur on Twitter, @mainframeblog.

Real-Time Analytics on z Lead at IBM Enterprise2014 Opening Day

October 8, 2014

Users have always been demanding about performance. But does the 5-minute rule noted by Tom Rosamilia in the opening keynote at IBM Enterprise2014 go too far? It now seems users expect companies to respond to, or at least acknowledge, their comments, questions, or problems within five minutes. That means companies need to monitor and analyze social media in real time and respond appropriately.

Building on client demand to integrate real-time analytics with consumer transactions, IBM yesterday announced new capabilities for its System z. Specifically, IBM is combining the transactional virtues of the z with big data analytic capabilities into a single, streamlined, end-to-end data system. This real-time integration of analytics and transaction processing can allow businesses to increase the value of a customer information profile with every interaction the customer makes.  It also promises one way to meet the 5-minute rule, especially when a customer posts a negative comment on social media.

With the new integrated capability you can apply analytics to social sentiment and customer engagement data almost as the transactions are occurring. The goal is to gain real-time insights, which you can do on the mainframe because the data already is there and now the real-time analytics will be there too. There is no moving of data or logic. The mainframe already does this when it is used for fraud prevention. This becomes another case where the mainframe can enable organizations to achieve real-time insights and respond within five minutes. Compared to fraud analysis, the 5-minute expectation seems a luxury.

By incorporating social media into real-time analysis on the mainframe you can gauge how the business is performing in the moment, how you stack up against your competitors, and, most importantly, whether you are meeting the 5-minute response expectation. Since we’re talking about fairly public social sentiment data, you could also monitor your competitors’ social sentiment and analyze how well they are responding.
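The flow described above (score each incoming social mention as it arrives and start the 5-minute response clock on anything negative) can be sketched in a few lines. The word lists and scoring here are purely illustrative stand-ins for real sentiment analytics:

```python
from datetime import datetime, timedelta
from typing import Optional

# Illustrative word lists; a production system would use a trained
# sentiment model, not simple keyword matching.
NEGATIVE = {"broken", "terrible", "refund", "worst", "angry"}
POSITIVE = {"great", "love", "fast", "thanks", "excellent"}

def sentiment_score(text: str) -> int:
    """Crude lexicon score: positive word hits minus negative word hits."""
    words = set(text.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

def response_deadline(posted_at: datetime, text: str) -> Optional[datetime]:
    """Negative mentions get a 5-minute acknowledgment deadline; others none."""
    if sentiment_score(text) < 0:
        return posted_at + timedelta(minutes=5)
    return None  # no SLA clock for neutral or positive mentions
```

The point of doing this where the transactions live, per the announcement, is that the customer profile and the sentiment signal are scored together without moving data off the platform.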

And then there are the more traditional things you can do with the integration of analytics with transactional data to provide real-time, actionable insights on commercial transactions as they occur. For example you could take advantage of new opportunities to increase sales or prevent customer churn.

According to IBM this is being driven by the rise of mobile devices and smartphones, which will number in the billions within a few years. The combination of massive amounts of data and consumers empowered with mobile access is creating a difficult challenge for businesses, IBM noted in the announcement. Consumers now expect an immediate response (the 5-minute rule) to any interaction, at any time, and through their own preferred channel of communication. Unfortunately, many businesses are trying to meet this challenge and deliver instantaneous, on-demand customer service with outdated IT systems that can only provide after-the-fact intelligence.

Said Ross Mauri, General Manager, System z, IBM Systems & Technology Group: “Off-loading operational data in order to perform analytics increases cost and complexity while limiting the ability of businesses to use the insights in a timely manner.” The better approach, he continued, is to turn to an end-to-end solution that makes analytics a part of the flow of transactions and allows companies to gain real time insights while improving their business performance with every transaction.

Of course, Mauri was referring specifically to the System z. However, Power Systems, and especially the new POWER8 machines, which have a strong presence here at IBM Enterprise2014, can do it too. Speaker after speaker emphasized that the Power machines are optimized for lightning-fast analytics, particularly real-time analytics.

Still, this was a z announcement, so IBM piled on a few more goodies for the z. These include new analytics capabilities for the mainframe that enable better data security and give companies the ability to integrate Hadoop big data with the z. Specifically, IBM is delivering:

  • IBM InfoSphere BigInsights for Linux on System z—combines open-source Apache Hadoop with IBM innovations to deliver enterprise-grade Hadoop for System z clients;
  • IBM DB2 Analytics Accelerator—enhances data security while delivering response times up to 2,000 times faster for complex data queries;
  • New capabilities in Linux and the cloud for System z, such as IBM Elastic Storage for Linux on System z, which extends the benefits of Elastic Storage to the Linux environment on z servers, and IBM Cloud Manager with OpenStack for System z, which enables heterogeneous cloud management across System z, Power, and x86 environments.

Many of these pieces are available now.  You can meet the 5-minute rule sooner than you may think.

Alan Radding is DancingDinosaur. Follow him on Twitter, @mainframeblog, or check out his website, Technologywriter.com

 

SoftLayer Direct Link Brings Hybrid Cloud to System z and Power

June 26, 2014

Back in February, IBM announced that SoftLayer was integrating IBM Power Systems into its cloud infrastructure, a move that promised to deliver a level and breadth of services beyond what has traditionally been available over the cloud. Combined with new services and tools announced at the same time, this would help organizations deploy hybrid and private cloud environments.

Back then IBM included the System z in the announcement as well by bolstering its System z cloud portfolio with IBM Wave for z/VM. IBM Wave promises to provide rapid insight into an organization’s virtualized infrastructure with intelligent visualization, simplified monitoring and unified management. Specifically, Wave helps the organization more easily manage large numbers of virtual machines.

Now it is June, the snow has finally melted, and IBM’s SoftLayer is introducing Direct Link to the computing public. Direct Link had previously been available to only a select few customers. Direct Link, in effect, is a dedicated network link for creating hybrid clouds. Organizations connect their private IT infrastructure to public cloud resources by going directly to the SoftLayer platform, which streamlines delivery over the network. Direct Link users avoid the need to traverse the public Internet.

The focus here is on hybrid clouds. When an organization with a private cloud, say a mainframe hosting a large amount of IT resources and services behind the firewall, needs resources such as extra capacity or services it doesn’t have, it can turn to the public cloud for them. The combination of the private cloud and tightly connected public cloud resources forms a hybrid cloud. If you’re attending a webinar on hybrid clouds, at this point the speaker usually says …and then you just punch out to the public cloud to get x, y, or z resource or service. It always sounds so simple, right?

As far as the System z goes, SoftLayer was not actually integrated with the z in the February announcement, although DancingDinosaur expects it will be eventually if IBM is serious about enterprise cloud computing. For now, the z sits in the on-premises data center, a private cloud so to speak. It runs CICS, DB2, and all the workloads it is known for, with the security it is especially known for. From there, however, it can connect to an application server, dedicated or virtual, on the SoftLayer Cloud Server to form a hybrid System z-enterprise cloud. As presented at SHARE this past spring, the resulting Hybrid System z-Cloud Enterprise Architecture (slides 46-49) provides the best of both worlds: secure transactions combined with the dynamics of the cloud.

Direct Link itself consists of a physical, dedicated network connection from your data center, on-premise private cloud, office, or co-location facility to SoftLayer’s data centers and private network through one of the company’s 18 network Points of Presence (PoPs) around the world. These PoPs reside within facilities operated by SoftLayer partners including Equinix, Telx, Coresite, Terremark, Pacnet, InterXion and TelecityGroup, which provide access for SoftLayer customers, especially those with infrastructure co-located in the same facilities.

Direct Link, essentially an appliance, eliminates the need to traverse the public Internet to connect to the SoftLayer private network. It enables organizations to completely control access to their infrastructure and services, the speed of their connection to SoftLayer, and how data is routed. In the process, IBM promises:

  • Higher network performance consistency and predictability
  • Streamlined and accelerated workload and data migration
  • Improved data and operational security

If you are not co-located in any of the facilities operated by one of SoftLayer’s PoP partners listed above, it appears you will have to set up an arrangement with one of them. SoftLayer promises to hold your hand and walk you through the setup process.

Once you do have it set up, Direct Link pricing appears quite reasonable. Available immediately, pricing starts at $147/month for a 1 Gbps network connection and $997/month for a 10 Gbps network connection.

According to Trevor Jones, writing for TechTarget, IBM’s pricing undercuts AWS’s slightly and Microsoft’s by far. Next month Microsoft, at a discounted rate for its comparable ExpressRoute service, will charge $600 per month for 1 Gbps and $10,000 per month for 10 Gbps. Amazon prices its Direct Connect service at $0.30 per hour for 1 Gbps and $2.25 per hour for 10 Gbps.
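Since Amazon bills hourly while IBM and Microsoft charge flat monthly rates, the comparison is easier to see after normalizing everything to a month. A quick calculation using the prices quoted above, assuming a 730-hour average month (an assumption for illustration, not a vendor figure):

```python
HOURS_PER_MONTH = 730  # 8,760 hours per year divided by 12 (assumed)

# Prices as quoted in the article, per connection tier.
ibm_direct_link = {"1 Gbps": 147.00, "10 Gbps": 997.00}        # flat $/month
msft_express_route = {"1 Gbps": 600.00, "10 Gbps": 10_000.00}  # flat $/month, discounted
aws_direct_connect_hourly = {"1 Gbps": 0.30, "10 Gbps": 2.25}  # $/hour

# Normalize AWS's hourly billing to a full month for an apples-to-apples view.
aws_monthly = {tier: rate * HOURS_PER_MONTH
               for tier, rate in aws_direct_connect_hourly.items()}

for tier in ("1 Gbps", "10 Gbps"):
    print(f"{tier}: IBM ${ibm_direct_link[tier]:,.2f}  "
          f"AWS ~${aws_monthly[tier]:,.2f}  MSFT ${msft_express_route[tier]:,.2f}")
```

At these rates IBM’s 1 Gbps link comes in roughly a third below AWS’s normalized monthly cost (about $219) and well below Microsoft’s discounted rate, though AWS’s hourly billing can still win for connections that are not kept up around the clock.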

Your System z or new Power server integrated with SoftLayer can provide a solid foundation for hybrid cloud nirvana. Just add Direct Link and make arrangements with public cloud resources and services. Presto, you have a hybrid cloud.

BTW, IBM Enterprise 2014 is coming in October to Las Vegas. DancingDinosaur expects to hear a lot about the z and Power, SoftLayer, and hybrid clouds there.

DancingDinosaur is Alan Radding. Follow him on Twitter, @mainframeblog and at Technologywriter.com

