Posts Tagged ‘CA Technologies’

Attract Young Techies to the Z

September 14, 2018

A decade ago DancingDinosaur attended a major IBM mainframe event, looked around at the analysts milling about, and noticed all the gray hair and balding heads, very few women, and, worse, few attendees who appeared to be under 40; not exactly a crowd that would excite young male computer geeks. At the IBM introduction of the Z it was even worse: more gray or balding heads, mine included, and none of the few Z professional female analysts I knew under 40 were there at all.

millions of young eager to join the workforce (Image by © Reuters/CORBIS)

An IBM analyst relations person agreed, noting that she was under pressure from IBM to get some young techies at Z events.  Sounded like Mission Impossible to me. But my thinking has changed in the last couple of weeks. A couple of discussions with 20-something techies suggested that Zowe has the potential to be a game changer as far as young techies are concerned.

DancingDinosaur covered Zowe two weeks ago here. It represents the first open source framework for z/OS. As such it provides solutions for development and operations teams to securely manage, control, script, and develop on the mainframe like any other cloud platform.

Or, to put it another way, with Zowe IBM and partners CA Technologies and Rocket Software are enabling users to access z/OS through a new open-source framework. Zowe, more than anything before it, brings together generations of systems that were never designed to handle global networks of sensors and devices. Now, decades after IBM brought Linux to the mainframe, IBM, CA, and Rocket Software are introducing Zowe, a new open-source software framework that bridges the divide between modern challenges like IoT and the mainframe.

Says Sean Grady, a young (under 30) software engineer at Rocket Software: “Zowe to me is really cool, the first time I could have a sustained mainframe conversation with my peers. Their first reactions were really cynical,” he recalls. Zowe changed that. “My peers know Linux tools really well,” he notes.

The mainframe is perceived as a separate thing, something my peers couldn’t touch, he added. But Linux is something his peers know really well, so through Zowe the mainframe has tools they know and like. Suddenly, the mainframe is no longer a separate, alien world but a familiar place. They can do the kind of work they like to do, in the way they like to do it, using familiar tools.

And they are well paid, much better than they can get coding here-and-gone mobile apps for some startup. Grady reports his starting offers ran up to $85k, not bad for a guy just out of college. And with a few years of experience now you can bet he’s doing a lot better than that.

The point of Zowe is to enable any developer, but especially new developers who don’t know or care about the mainframe, to manage, control, script, and develop on the mainframe like any other cloud platform. Additionally, Zowe allows teams to use the same familiar, industry-standard, open-source tools they already know to access mainframe resources and services.

The mainframe is older than many of the programmers IBM hopes Zowe will attract. But Zowe opens new possibilities for next-generation applications at mainframe shops desperately needing the new mission-critical applications their customers are clamoring for. Already it appears ready to radically reduce the learning curve for the next generation.

Initial open source Zowe modules will include an extensible z/OS framework that provides new APIs and z/OS REST services to transform enterprise tools and DevOps processes that can incorporate new technology, languages, and workflows. It also will include a unifying workspace providing a browser-based desktop app container that can host both traditional and modern user experiences and is extensible via the latest web toolkits. The framework will also incorporate an interactive and scriptable command-line interface that enables new ways to integrate z/OS in cloud and distributed environments.
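To make that concrete, here is a hedged sketch, not Zowe itself, of constructing the kind of z/OSMF REST request that Zowe's framework and CLI build on. The host name, credentials, and dataset qualifier are placeholder assumptions, and the snippet only builds the request; actually sending it would require a live z/OSMF instance.

```python
# A sketch of building a z/OSMF-style REST request of the kind Zowe wraps.
# Host, credentials, and dataset qualifier below are invented placeholders.
from base64 import b64encode

def build_list_datasets_request(host: str, user: str, password: str, dslevel: str):
    """Return URL and headers for a z/OSMF 'list data sets' REST call."""
    url = f"https://{host}/zosmf/restfiles/ds?dslevel={dslevel}"
    credentials = b64encode(f"{user}:{password}".encode()).decode()
    headers = {
        "Authorization": f"Basic {credentials}",
        "X-CSRF-ZOSMF-HEADER": "",  # z/OSMF REST services require this header
    }
    return url, headers

# No live system here, so just build and inspect the request.
url, headers = build_list_datasets_request("mf.example.com", "ibmuser", "secret", "IBMUSER")
print(url)
```

The point for a Linux-trained developer is that nothing above looks mainframe-specific; it is ordinary HTTPS plus a header, which is exactly the familiarity Zowe trades on.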

These modules represent just the start. More will be developed over time, enabling development teams to manage and develop on the mainframe like any other cloud platform. Additionally, the modules reduce risk and cost by allowing teams to use familiar, industry-standard, open source tools that can accelerate mainframe integration into their enterprise DevOps initiatives. Just use Zowe to entice new mainframe talent.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com.

CA Technologies Joins System z and Cloud for Cheaper Storage

December 2, 2013

A pair of announcements at the AWS re:Invent conference in mid-November aimed to combine System z with the cloud. The first addressed how to unite the z with the cloud through new tools that support storage, virtualized environments, and application delivery, the goal being to meet the management demands of what CA calls dynamic data centers by blending mainframe and cloud capabilities.

The idea here is to blend the z with cloud infrastructures that offer greater flexibility to manage enterprise data centers and balance workloads across platforms. CA cited a Forrester Consulting study, commissioned by IBM, which noted that organizations can enable a broader mix of infrastructure service options by including the mainframe in cloud infrastructures. For example, a shop can mix Linux virtual machines from the mainframe for data that must meet high SLAs with virtual machines from commodity infrastructure when SLA requirements are less stringent. The study also pointed out that the z can better accommodate high densities of very small workloads with resource guarantees, something very difficult to achieve on commodity resources. CA is supporting System z and the cloud with several new software releases that bring improved efficiencies and cost savings.

The second announcement is similar to the first except it looks specifically at cloud storage for the z, particularly when backing up data through Amazon Web Services and Riverbed Technology. The promise here is to streamline storage management operations while cutting storage costs to pennies per gigabyte. Essentially, z shops can use the CA tools to back up their data and archive it very cheaply in the cloud.

CA Cloud Storage for System z, when used with Amazon Web Services (AWS) cloud storage and the Riverbed Whitewater cloud storage appliance, gives mainframe data centers greater storage agility at lower storage cost. The upshot: disaster recovery readiness is improved, and AWS cloud storage is accessed without changing the existing backup infrastructure.
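Some back-of-envelope arithmetic shows what "pennies per gigabyte" means at backup scale. The per-gigabyte prices and data volume below are illustrative assumptions, not actual AWS or CA figures.

```python
# Rough cost of parking mainframe backup data in cloud storage.
# Prices and volume are illustrative assumptions, not vendor quotes.
S3_PER_GB_MONTH = 0.03       # assumed S3 price, $/GB-month
GLACIER_PER_GB_MONTH = 0.01  # assumed Glacier price, $/GB-month

backup_tb = 50               # assume 50 TB of backup and archive data
backup_gb = backup_tb * 1024

s3_monthly = backup_gb * S3_PER_GB_MONTH
glacier_monthly = backup_gb * GLACIER_PER_GB_MONTH
print(f"S3: ${s3_monthly:,.0f}/month, Glacier: ${glacier_monthly:,.0f}/month")
```

Even at these made-up rates, the monthly bill lands in the hundreds to low thousands of dollars, which is the comparison z shops would make against tape libraries and offsite vaulting.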

The product not only lets organizations reduce data storage costs by taking advantage of low cloud storage prices but also delivers elastic capacity and flexibility. CA also insists the product eliminates purpose-built robots and disks, but that doesn’t seem to be entirely the case.

Rather, it incorporates Riverbed Whitewater, itself a purpose-built storage appliance that helps integrate cloud storage infrastructures to securely deliver instant recovery and cost-effective storage for backup and data archiving. By using CA Cloud Storage for System z and the Riverbed appliance, z shops can back up IBM System z storage data to Amazon S3, a storage infrastructure designed for mission-critical and primary data storage or to Amazon Glacier, an extremely low-cost storage service for which retrieval times of several hours are suitable. Both services are highly secure and scalable and designed for 99.999999999 percent durability, according to CA.
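That string of nines is easier to appreciate with a quick calculation. The object count below is an arbitrary assumption chosen for illustration.

```python
# What 99.999999999% ("eleven nines") annual durability implies.
durability = 0.99999999999
p_loss = 1 - durability           # annual loss probability per object

objects = 10_000_000              # assume ten million archived backup objects
expected_losses_per_year = objects * p_loss
print(f"Expected objects lost per year: {expected_losses_per_year:.4f}")
```

In other words, a shop storing ten million objects would expect to lose one object roughly every ten thousand years, which is why CA can pitch the services for backup and archive data.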

Apparently CA is deploying the software with AWS and Riverbed for itself. The company expects to achieve scalable storage while reducing the cost of its own backups. In addition, it picks up the benefits of elastic storage, which should improve its disaster recovery and ensure faster response to business needs without depending on offsite tape recalls, according to the company.

Both CA offerings, in effect, blend the System z with the cloud to increase flexibility and reduce cost. “The growth of System z and the increased adoption of Linux on the mainframe make it an optimal platform for reliably and cost-effectively delivering IT services that support opportunities around cloud, big data, and mobile,” said Joe Clabby, president, Clabby Analytics commenting on the CA announcements. In short, he noted the product combination enables IT workers to bridge the IT on-premise/cloud gap and manage the cross-enterprise and cross-platform operations of today’s dynamic data center.

Of course, for z data centers there are other ways to bridge the gap. IBM, for example, has been nudging the z toward the cloud for some time, as DancingDinosaur reported here. IBM also has its own relationship involving the Riverbed Whitewater appliance and Tivoli Storage Manager. Whatever approach you choose, it is time for z shops to explore how they can leverage the cloud.

You can follow DancingDinosaur on Twitter, @mainframeblog.

Next Generation zEnterprise Developers

April 19, 2013

Mainframe development keeps getting more complicated.  The latest complication can be seen in Doug Balog’s reference to mobile and social business on the zEnterprise, reported by DancingDinosaur here a few weeks ago. That is what the next generation of z developers face.

Forget talk about shortages of System z talent due to the retirement of mainframe veterans.  The bigger complication comes from the need for non-traditional mainframe development skills required to take advantage of mobile and social business as well as other recent areas of interest such as big data and analytics. These areas entail combining new skills like JSON, Atom, REST, Hadoop, Java, SOA, Linux, and hybrid computing with traditional mainframe development skills like CICS, COBOL, z/VM, SQL, VSAM, and IMS. This combination is next to impossible to find in one individual. Even assembling a coherent team encompassing all those skills presents a serious challenge.

The mainframe industry has been scrambling to address this in various ways. CA Technologies added GUIs to its various tools, and BMC has similarly modernized its management and DB2 tools. IBM, of course, has been steadily bolstering the Rational RDz tool set. RDz is a z/OS Eclipse-based software IDE. RDz streamlines and refactors z/OS development processes into structured analysis, editing, and testing operations with modern GUI tools, wizards, and menus that, IBM notes, are perfect for new-to-the-mainframe twenty- and thirty-something developers, the next generation of z developers.

Compuware brings its mainframe workbench, described as a modernized interactive developer environment that introduces a new graphical user interface for managing mainframe application development activities.  The interactive toolset addresses every phase of the application lifecycle.

Most recently, Micro Focus announced the release of its new Enterprise Developer for IBM zEnterprise. The product enables customers to optimize all aspects of mainframe application delivery and promises to drive down costs, increase productivity, and accelerate innovation. Specifically, it enables both on- and off-mainframe development, the latter without consuming mainframe resources, providing a flexible approach to the delivery of new business functions. In addition, it allows full and flexible customization of the IDE to support unique development processes and provides deep integration with mainframe configuration management and tooling for a more comprehensive development environment. It also boasts improved application quality, with measurable improvement in delivery times. Together these capabilities promise faster developer adoption.

Said Greg Lotko, Vice President and Business Line Executive, IBM System z, about the new Micro Focus offering: “We are continually working with our technology partners to help our clients maximize the value in their IBM mainframes, and this latest innovation from Micro Focus is a great example of that commitment.”

Behind all of this development innovation is an industry effort to cultivate the next generation of mainframe developers. Using a combination of trusted technology (COBOL and the mainframe) and new innovation (zEnterprise, hybrid computing, expert systems, and Eclipse), these new developers, having been raised on GUIs, mobile, and social, can leverage what they learned growing up to build the multi-platform, multi-device mainframe applications that organizations will need going forward.

As these people come on board as mainframe-enabled developers, organizations will have more confidence in continuing to invest in their mainframe software assets, which currently amount to an estimated 200-300 billion lines of source code and may even be growing as mainframes are added in developing markets, considered a growth market by IBM. It only makes sense to leverage this proven code base rather than try to replace it.

This was confirmed in a CA Technologies survey of mainframe users a year ago, which found that 1) the mainframe is playing an increasingly strategic role in managing the evolving needs of the enterprise; 2) the machine is viewed as an enabler of innovation as big data and cloud computing transform the face of enterprise IT—now add mobile; and 3) companies are seeking candidates with cross-disciplinary skill sets to fill critical mainframe workforce needs in the new enterprise IT thinking.

Similarly, a recent study by the Standish Group showed that 70 percent of CIOs saw their organizations’ mainframes as having a central and strategic role in their overall business success.  Using the new tools noted above organizations can maximize the value of the mainframe asset and cultivate the next generation mainframe developers.

Getting the Payback from System z Outsourcing

February 1, 2013

A survey from Compuware Corporation on attitudes of CIOs toward mainframe outsourcing showed a significant level of dissatisfaction with one or another aspect of mainframe outsourcing. Check out the survey here.

Mainframe outsourcing has been a fixture of mainframe computing since the outset. The topic is particularly interesting in light of the recent piece DancingDinosaur posted on winning the talent war a couple of weeks ago. Organizations intending to succeed are scrambling to find and retain the talent they need for all their IT systems, mainframe and otherwise. In short, they need skills in all the new areas, like cloud computing, mobile access, and, most urgently, big data analytics. In addition, there is the ongoing need for Java, Linux, WebSphere, and CICS skills in growing System z data centers. The rise of z-based hybrid computing and expert integrated hybrid PureSystems to some extent broadens the potential talent pool while reducing the number of skilled experts required. Still, mainframe outsourcing remains a popular option.

The new Compuware survey found that reducing costs is a major driver for outsourcing mainframe application development, maintenance, and infrastructure. Yet the associated costs are frustrating 71% of CIOs. These costs result from increases in MIPS consumption as well as higher investments in testing and troubleshooting, due mainly to poor application quality and performance. In fact, two-thirds (67%) of respondents reported overall dissatisfaction with the quality of new applications or services provided by their outsourcer. The source of the problem: a widening in-house skills gap and difficulties with knowledge transfer and staff churn within outsource vendors.

Compuware has published a related white paper titled, Mainframe Outsourcing: Removing the Hidden Costs, which expands on the findings from the study. The company’s recommendations to remove the costs amount to reverse engineering the problems revealed in the initial survey. These include:

  • Utilize MIPS better
  • Explore pricing alternatives to CPU-based pricing
  • Improve the quality of new applications
  • Boost knowledge transfer between outsourcers and staff
  • Measure and improve code efficiency at the application level
  • Take advantage of baseline measurement to objectively analyze outsourcer performance

The System z offers numerous tools to monitor and manage usage and efficiency, and vendors like Compuware, CA, BMC, and others bring even more.

The MIPS consumption problem is typical. As Compuware reports: mainframes are being used more than ever, meaning consumption is naturally on the rise. This is not a bad thing.

However, where consumption is escalating due to inefficient coding, it adds unnecessary costs. For example, MIPS costs are increasing on average by 21% year over year, with 40% of survey respondents claiming that consumption is getting out of control. Meanwhile, 88% of respondents using pay structures based on CPU consumption (approximately 42% of those surveyed) think their outsourcer could manage CPU costs better, and 57% of all respondents believe outsourcers do not worry about the efficiency of the applications they write.
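A quick sketch shows how fast a 21% annual increase compounds. The $1 million starting bill is an illustrative assumption, not a survey figure.

```python
# Compounding effect of the reported 21% year-over-year MIPS cost growth.
# The starting annual bill is an invented illustration.
annual_growth = 0.21
cost = 1_000_000.0            # assumed annual MIPS bill in year 0
for year in range(1, 4):
    cost *= 1 + annual_growth
    print(f"Year {year}: ${cost:,.0f}")
```

At that rate the bill nearly doubles in under four years, which is why CIOs start pressing outsourcers on coding efficiency rather than just negotiating rates.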

New workloads also are driving costs. For example, 60% of survey respondents believe that the increase in applications like mobile banking is driving higher MIPS usage and creating additional costs. Just think what they’d report when big data analytics applications start kicking in, although some of that processing should be offloaded to assist processors.

The Compuware study is interesting and informative. Yes, outsourcers should be pressed to utilize MIPS more efficiently. At a minimum, they should shift workloads to assist processors, which have a lower cost per MIPS. Similarly, developers should be pressed to boost the efficiency of their code. But this will require an investment in tools to measure and benchmark that code, and likely in QA staff as well.

A bigger picture view, however, suggests that focusing just on MIPS is counterproductive. You want to encourage more workloads on the z even if they use more MIPS because the z can run at near 100% utilization and still perform reliably. Higher utilization translates into lower costs per workload. And with the cost per MIPS decreasing with each rev of the zEnterprise the cost per workload keeps improving.  Measure, monitor, and benchmark and do whatever else you can to drive efficient operation, but aim to leverage the zEnterprise to the max for your best overall payback.
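The utilization argument is simple arithmetic. The fixed monthly cost and workload capacity below are invented for illustration, not IBM pricing.

```python
# Why utilization matters: the same fixed box cost spread over more workloads.
# Monthly cost and capacity figures are illustrative assumptions.
monthly_box_cost = 100_000.0      # assumed fixed monthly cost of the z

def cost_per_workload(utilization: float, workloads_at_full_util: int = 200) -> float:
    """Fixed cost divided across the workloads actually running."""
    workloads = workloads_at_full_util * utilization
    return monthly_box_cost / workloads

print(f"At 50% utilization: ${cost_per_workload(0.50):,.0f} per workload")
print(f"At 95% utilization: ${cost_per_workload(0.95):,.0f} per workload")
```

Since the z can run near 100% utilization without falling over, the per-workload figure keeps dropping as work is added, which is the payback argument for consolidating rather than capping MIPS.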

Winning the Talent War with the System z

January 17, 2013

The next frontier in the ongoing talent war, according to McKinsey, will be deep analytics, a critical weapon required to probe big data in the competition underpinning new waves of productivity, growth, and innovation. Are you ready to compete and win in this technical talent war?

Similarly, Information Week contends that data expertise is called for to take advantage of data mining, text mining, forecasting, and machine learning techniques. The System z data center is ideally positioned to win if you can attract the right talent.

Finding, hiring, and keeping good talent within the technology realm is the number one concern cited by 41% of senior executives, hiring managers, and team leaders responding to the latest Harris Allied Tech Hiring and Retention Survey. Retention of existing talent was the next biggest concern, cited by 19.1%.

This past fall, CA published the results of its latest mainframe survey that came to similar conclusions. It found three major trends on the current and future role of the mainframe:

  1. The mainframe is playing an increasingly strategic role in managing the evolving needs of the enterprise
  2. The mainframe as an enabler of innovation as big data and cloud computing transform the face of enterprise IT
  3. Demand for tech talent with cross-disciplinary skills to fill critical mainframe workforce needs in this new view of enterprise IT

Among the respondents to the CA survey, 76% of global respondents believe their organizations will face a shortage of mainframe skills in the future, yet almost all respondents, 98%, felt their organizations were moderately or highly prepared to ensure the continuity of their mainframe workforce. In contrast, only 8% indicated having great difficulty finding qualified mainframe talent while 61% reported having some difficulty in doing so.

The Harris survey was conducted in September and October 2012. Its message is clear: don’t be fooled by the national unemployment figures, currently hovering above 8%. “In the technology space in particular, concerns over the ability to attract game-changing talent have become institutional and are keeping all levels of management awake at night,” notes Harris Allied Managing Director Kathy Harris.

The reason, as suggested in recent IBM studies, is that success with critical new technologies around big data, analytics, cloud computing, social business, virtualization, and mobile increasingly gives top-performing organizations their competitive advantage. The lingering recession, however, has taken its toll; unless your data center has been charged to proactively keep up, it probably is saddled with five-year-old skills at best, ten-year-old skills more likely.

The Harris study picked up on this. When asking respondents the primary reason they thought people left their organization, 20% said people left for more exciting job opportunities or the chance to get their hands on some hot new technology.

Some companies recognize the problem and belatedly are trying to get back into the tech talent race. As Harris found when asking what companies are doing to attract this kind of top talent, 38% said they now were offering great opportunities for career growth. Others, 28%, were offering opportunities for professional development to recruit top tech pros. Fewer, 24.5%, were offering competitive compensation packages, while fewer still, 9%, were offering competitive benefits packages.

To retain the top tech talent they already had, 33.6% were offering opportunities for professional development, the single most important strategy they leveraged to retain employees. Others, 24.5%, offered opportunities for career advancement, while 23.6% offered competitive salaries. Still a few hoped a telecommuting option or competitive bonuses would do the trick.

Clearly mainframe shops, like IT in general, are facing a transition as Linux, Java, SOA, cloud computing, analytics, big data, mobile, and social play increasing roles in the organization, and the mainframe gains the capabilities to play in all these arenas. Traditional mainframe skills like CICS are great but it’s just a start. At the same time, hybrid systems and expert integrated systems like IBM PureSystems and zEnterprise/zBX give shops the ability to tap a broader array of tech talent.

Gamification Comes to the zEnterprise

February 20, 2012

You can tell that gamification is coming to the zEnterprise when IBM, BMC, and CA Technologies are all exploring it at roughly the same time. It won’t be too long before gamification starts being applied to zEnterprise tools and applications, probably starting with administrative tools.

Gamification refers to the process of applying gaming software techniques to non-game applications. The objective is to make the software and/or business process more engaging and compelling. Through gamification, software should become easier and more intuitive to use.

Some of the first aspects of gaming to be applied mimic the scoring and rewards aspects of game playing. For a management intent on measurement, gamification should be welcome, opening up a new dimension of metrics. At this point, however, gamification is talked about most frequently in reference to social networking and its associated rewards and incentives. DancingDinosaur’s sister blog, BottomlineIT, initially referenced it here.

IBM researchers looked at gamification in a recent paper here. The researchers noted that the goal of gamification is to incent repeat usage of social networks, increase contributions, and establish user reputations. They rely on incentives in the form of points, badges, and leveling that can help a player advance in status. In the workplace, game-like systems have been employed to collect information about employees and incent contribution within enterprise social software. Gamification also aims to create a sense of playfulness in non-game environments, which creates engagement or at least stickiness.

Based on their study, the researchers concluded that the removal of the points system (key to the incentives) resulted in a significant negative impact on the user activity of the site, and the contribution of content significantly decreased after the deactivation of the points system. This suggests that such extrinsic rewards did influence a segment of the user population to participate more intensely while the point system was in place. No big surprise there.
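The points-and-badges mechanics the researchers describe can be sketched in a few lines. The thresholds and badge names below are invented for illustration, not taken from the IBM paper.

```python
# Minimal sketch of a points-and-badges incentive system of the kind
# the researchers studied. Thresholds and badge names are invented.
BADGE_THRESHOLDS = [(100, "Contributor"), (500, "Expert"), (1000, "Guru")]

class Player:
    def __init__(self, name: str):
        self.name = name
        self.points = 0
        self.badges = []

    def award(self, points: int) -> None:
        """Add points and grant any newly reached badges."""
        self.points += points
        for threshold, badge in BADGE_THRESHOLDS:
            if self.points >= threshold and badge not in self.badges:
                self.badges.append(badge)

p = Player("ops_tech")
for contribution in [60, 60, 400]:   # e.g., answers posted, tickets closed
    p.award(contribution)
print(p.points, p.badges)
```

The study's finding follows directly from this structure: turn off the `award` calls and the visible status ladder disappears, taking the extrinsic motivation with it.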

Gamification is being driven primarily by the smartphone and social networking crowd. Over a decade ago, DancingDinosaur published a book on knowledge management. A major obstacle then was getting knowledge experts to share their knowledge. The only solutions at the time appeared to be either bribing people (rewards, incentives) or threatening to fire them for not sharing. A few copies of that book, apparently, are still available on Amazon. The popularity of social networking along with gamification apparently has resolved this to some extent through social rewards and incentives.

For zEnterprise shops, the real question is: where does gamification add value? A few places come to mind, such as system operations, administration, and the help desk. With vendors making z management tools increasingly accessible via devices like iPhones and iPads, gamification could have real impact. DancingDinosaur first wrote about that in 2010 here. R. Wang and Insider Associates just published a survey of the processes gamification impacts. Check out their results here.

Trevor Eddolls, Toolbox.com, noted: Wouldn’t it be great to have software on your smartphone that not only identifies what you’re looking at (web server or z/Linux LPAR, or whatever) but also provides current performance information, and then makes it fun to resolve any problems that might have been identified? Perhaps the only green screens you’ll ever see will mean ‘game over’!

zEnterprise shops are unlikely to build gamification into the tools and processes on their own. Software vendors starting with BMC and CA, however, just might. At that point, gamification will come into the z data center through the tools they acquire. Who knows, maybe gamification will make job scheduling fun?

El Al Lowers System z Costs with GUI Tools

February 3, 2012

When DancingDinosaur looked at the cost per workload based on comparative workload analysis by IBM’s John Shedletsky the zEnterprise came out very well in most cases.  It wasn’t, however, the zEnterprise’s better price performance that necessarily made the difference. As Shedletsky noted, it was the lower cost of labor due to the more efficient management capabilities of the z, much of it resulting from the efficiency enabled by the Unified Resource Manager, which comes with the zEnterprise.

CA Technologies latched onto the idea that companies could lower their cost of mainframe computing by simplifying management tasks through a GUI and automation of routine tasks. To that end it launched the Mainframe 2.0 initiative a couple of years ago and has been steadily revamping its mainframe management tools for simplicity and efficiency. Back in May 2009, DancingDinosaur first looked at the CA initiative.

That, in fact, is what brought El Al, the Israeli airline, to CA Mainframe Chorus. “Cutbacks in budget and manpower make it hard for us to manage our growing processing power and cope with the intensifying complexity of our computing environment,” according to Arieh Berger, Operation System and Information Security Manager at EL AL. Like almost every other IT shop, the El Al system group is under pressure to do more with less.

For example, El Al deploys most of its core systems—ticket sales, reservations, the cargo system, financials, and more—on DB2 v8 running on a pair of z10 machines, each with a zIIP. The company plans to upgrade to DB2 10 in six months, noted Ita Israeli, El Al’s DB2 administrator. A long-time El Al database manager, Israeli was trained by her predecessor over a considerable period of time. With Chorus’ new GUI and the information it makes easily available, such a lengthy orientation to the job won’t be necessary, even if the replacement is a mainframe newcomer.

“CA Chorus sits above all the other CA products we have and gathers information from them. It keeps a history in a way I couldn’t do before, at least not without a lot of extra work,” Israeli explained. Without Chorus, the DBA would have to search through each of the products, culling the information she wanted. “Now I can see it all in one screen through a browser. We don’t even have to install any code,” she added.

According to CA, EL AL has improved its IT operations by leveraging CA Mainframe Chorus’ enhanced reporting tools, more efficient handling of z/OS events, and more flexible automated monitoring of storage levels. The airline has also taken advantage of faster problem detection and troubleshooting, accelerated and simplified through a graphical display of complex business data, automated analysis of historical trends, and the generation of comparative diagrams to aid problem-solving.

“Now I would like our developers to start using it. For example, they will be able to see everything that connects to a table they are working on—all the rules—without having to navigate through a lot of screens,” Israeli said. The airline has only had Chorus for a few months, so uptake is just starting. At this point only Israeli and one other person work with the tool daily.

Although the tool enables the DBA team to function more efficiently every day it may deliver its biggest payback when veteran DBAs like Israeli retire in a few years. “It makes it very easy for people who don’t know the mainframe, who don’t know green screen, to see what’s going on,” she noted.

As the zEnterprise delivers increasingly better price/performance the cost of labor for ongoing operations and management becomes a bigger and more noticeable concern. Labor costs steadily increase, and the only way to rein in the cost is through automated, simplified tools that make possible greater efficiency and higher productivity.

zEnterprise Private Cloud ROI

December 20, 2011

Many mainframe veterans think the System z has long acted as a private cloud, at least since SOA appeared on the System z, allowing users to access data and logic residing on the System z through nothing more than their browser. And they are right.

The distributed world, unlike the mainframe world, sees private clouds as something new and radical because it is not straightforward there to virtualize, service-enable, and integrate all the piece parts that make up a private cloud. The System z learned these tricks years ago, and the zEnterprise with x86 and p-blades in an attached zBX makes it even easier.

With the z114 and the System z Solution Edition for Cloud Computing program a mainframe-based private cloud becomes that much less expensive to acquire, especially since most of the piece parts already are included and optimized from the start. The System z Solution Edition for Cloud includes the z hardware, Tivoli software, and IBM services to deliver the foundation for the private cloud.

A private cloud, whether distributed or mainframe-based, does not come cheap. The payback, however, still is there; it just comes in a different form. The private cloud restructures IT around a services delivery model. Applications and users tap IT-based data and business logic as services. Cost savings are generated from the ensuing operational efficiency enabled through the standardization, automation and virtualization of IT services. When the organization progresses to the point where users can self-provision and self-configure the needed IT services through private cloud automation and management, the real efficiencies kick in.

According to IDC many of today’s private cloud business cases are being anchored by savings from application rationalization and IT staff productivity improvements in addition to expected optimization of hardware assets. But unlike the public cloud, which promises to shift IT spending from CAPEX to OPEX, private clouds actually drive increases in CAPEX since the organization is likely to invest in new hardware and software optimized for virtualized cloud services delivery and management automation.

With a mainframe private cloud, much of the investment in virtualized, optimized, and integrated hardware assets has already been made. The private cloud initially becomes more of an exercise in partitioning and reallocating those assets as a private cloud. Still, given the appeal of the IT services model, it is likely that the organization will boost its hardware assets to accommodate increasing demand and new services.

The greatest ROI of the private cloud, whether mainframe-based or distributed, comes from the business agility it enables. The virtualized pool of IT resources that makes up the private cloud can be easily reallocated as services to meet changing business needs. Instead of requiring weeks if not months to assemble and deploy the IT hardware and software resources necessary to support a new business initiative, those resources can be allocated from the pooled virtual resources in minutes or hours (provided, of course, sufficient resources are available). With a private cloud you can, in effect, change the business almost on the fly and with no additional investment.

As CIO, how are you going to put a value on this sudden agility? If it lets the organization effectively counter competitive challenges, seize new business opportunities, or satisfy new customer demands it could deliver astounding value. It all depends on the business leadership. If they aren’t terribly agile thinkers, however, the value might be minimal.

Other benefits from a private cloud include increased IT productivity and efficiency, the ability of business users to self-provision the desired IT resources (with appropriate policy-based automation controlling the provisioning behind the scenes), and an increased ability to monitor and measure IT consumption for purposes of chargeback or, as is more likely, showback. Such monitoring and measurement of IT consumption has long been a hallmark of the mainframe, private cloud or not.
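The mechanics of showback are simple: meter consumption per consumer, multiply by a unit rate, and report the notional cost. A minimal sketch follows; the department names, metrics, and rates are invented for illustration, not taken from any Tivoli, CA, or BMC product.

```python
# Minimal showback sketch: roll up measured consumption into a
# per-department report. All names and rates here are assumptions.

usage = {                     # measured IT service consumption
    "finance":   {"cpu_hours": 120, "gb_months": 500},
    "marketing": {"cpu_hours":  40, "gb_months": 200},
}
rates = {"cpu_hours": 0.90, "gb_months": 0.10}   # assumed unit rates ($)

def showback(usage, rates):
    """Return the notional cost of each consumer's metered usage."""
    return {
        dept: sum(qty * rates[metric] for metric, qty in metrics.items())
        for dept, metrics in usage.items()
    }

report = showback(usage, rates)
for dept, cost in report.items():
    print(f"{dept:10s} ${cost:8.2f}")
```

Chargeback uses the same calculation; the only difference is that the resulting figure is actually billed back to the consuming department rather than merely reported.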

Even with a mainframe-based private cloud the organization will likely make additional investments, particularly in management automation to ensure efficient service delivery, monitoring, measurement, chargeback, self-provisioning, and orchestration. IBM Tivoli along with other mainframe ISVs like CA and BMC provide tools to do this.

In the end, the value of private cloud agility when matched with agile thinking business leadership should more than offset the additional investments required. And with a zEnterprise-based private hybrid cloud, which comes highly virtualized already, you have a head start on any distributed private cloud.

CA Drives Linux on the System z

August 15, 2011

Does anyone not think Linux on the mainframe is here for the long run? IBM has been promoting Linux on the System z, and now the zEnterprise, for a decade. DancingDinosaur has long argued that Linux on z saved the mainframe from becoming a niche product, albeit a big, pricey one.

Last week CA Technologies, which jumped onto the Linux on z bandwagon early with a slew of management products, expanded its portfolio of Linux on z software management tools along with announcing new partnerships.

Linux on z is a consolidation play aimed at saving money by eliminating rampant distributed Linux server sprawl while providing better management and reliability. The savings generally come from reducing the number of physical servers. Mainframe shops also experience savings by shifting workloads from general processors to lower cost-per-MIPS System z specialty processors. In short, the combination of lower cost specialty processors, namely IFLs, and the massive virtualization supported by z/VM drives down the cost per workload. As a result, over 1,300 mainframe shops are using Linux on z to one degree or another.
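The consolidation arithmetic behind that claim is easy to sketch. The model below is purely illustrative: the server counts, IFL costs, and workload counts are assumptions, not IBM pricing, but they show how packing many z/VM guests onto a few IFLs drives down the cost per workload.

```python
# Hypothetical consolidation model; every figure is an illustrative
# assumption, not vendor pricing.

def cost_per_workload(total_annual_cost, workloads):
    """Average annual platform cost per hosted workload."""
    return total_annual_cost / workloads

# Distributed scenario: many lightly loaded x86 Linux servers.
x86_servers = 200
x86_annual_cost_each = 5_000    # hw, power, floor space, admin (assumed)
x86_workloads = 200             # roughly one workload per server

# Consolidated scenario: a few IFLs under z/VM hosting many guests.
ifls = 4
ifl_annual_cost_each = 60_000   # amortized IFL + z/VM share (assumed)
zvm_guests = 200                # the same workloads, now virtualized

distributed = cost_per_workload(x86_servers * x86_annual_cost_each, x86_workloads)
consolidated = cost_per_workload(ifls * ifl_annual_cost_each, zvm_guests)

print(f"distributed:  ${distributed:,.0f} per workload")
print(f"consolidated: ${consolidated:,.0f} per workload")
```

Under these assumed numbers the consolidated cost per workload is a fraction of the distributed one; the real ratio at any given shop depends on actual utilization, software licensing, and how densely z/VM guests can be packed.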

Specifically, CA introduced four sets of Linux on z management products:

  • CA VM:Manager Suite for Linux on System z: includes feature enhancements such as new support for managing tapes under Linux on z, plus other enhancements that help z shops install, deploy, and service their CA z/VM products more quickly and effectively
  • Velocity zVPS Performance Suite: brings real-time access to detailed data from z/VM and Linux on System z platforms for optimized performance, capacity planning, and cost chargeback
  • UPSTREAM for Linux on System z and UPSTREAM for z/OS UNIX: extend CA data protection capabilities with file-level backup for Linux on System z and z/OS UNIX files
  • CA Mainframe Connector for Linux on System z: enables CA z/OS-based automation products to receive event information from Linux on System z environments

Two of the new product sets come through CA partners, whose products CA will resell: INNOVATION Data Processing and Velocity Software.

INNOVATION brings UPSTREAM for Linux on System z and UPSTREAM for z/OS UNIX. Both address data protection by allowing z/OS storage managers to centrally manage storage in hybrid mainframe and distributed environments without becoming UNIX file system experts.

Velocity Software brings its zVPS Performance Suite. This allows IT organizations to optimize performance and reduce costs via graphical, real-time access to detailed performance data and analysis of z/VM and Linux on System z performance. And it works: “We can attest to the time and cost savings of implementing CA Technologies and Velocity Software suites to manage our Linux environment,” Jerry Whitteridge, design engineer at Safeway, confirmed to DancingDinosaur.

CA makes it clear that its new tools do not compete with IBM’s zManager. “The Unified Resource Manager handles the z hardware complex. Our software operates at a layer above that,” explained Mark Combs, a CA senior VP. These are the same issues that IBM addresses with its Tivoli products. Having competitors can only be good for mainframe shops looking to lower the cost of software tools.

A major ISV like CA that boosts its Linux on z portfolio suggests that the future of Linux on z and of the z itself looks rosy. In CA’s case, it’s following customer demand for more and better System z tools that can handle Linux at the application layer, which only makes the picture rosier.

New CA and BMC System z tools deliver payback

June 1, 2010

CA Technologies (formerly CA and, before that, Computer Associates) and BMC each introduced new System z tools. The timing might seem odd, coming just months before IBM's widely expected refresh of the System z itself, but May is prime season for big user conferences, from which these kinds of announcements typically emerge.

The CA announcements focus on mainframe software management and database management for DB2 for z/OS. BMC announced zIIP support for some of its DB2 management tools. Both announcements will lead to cost reductions.

The most interesting of the new CA announcements is the release of Mainframe Software Manager v3. It adds a GUI, which, as has been written about here previously, improves mainframe management productivity. It also provides wizards (guided workflows in CA-speak), amounting to the equivalent of InstallShield for the mainframe. DancingDinosaur has been cheerleading the adoption of GUIs on the System z as essential for cultivating the next generation of System z staff and for lowering costs, so this is welcome news.

To bolster database management CA also introduced CA Mainframe Chorus, described as a graphical workspace. CA refers to it as role-based, interactive visualization that integrates features across multiple products and disciplines for the purpose of facilitating collaboration and knowledge sharing between expert and novice mainframe staff.

You can build an attractive business case for GUI-based mainframe tools based on increased admin and operator productivity and the opportunity to use less skilled (meaning lower paid) people. Better yet, the ability to take advantage of System z assist processors (zIIP, zAAP, and IFL) provides an immediate payback by shifting workloads off the z’s general processor. This brings immediate licensing cost advantages.

BMC’s new DB2 for z/OS products take advantage of IBM’s zIIP, enabling mainframe shops to move more of their DB2 workloads to the lower-cost processors, effectively reducing mainframe operational costs. As BMC describes it, “this new zIIP offloading capability, along with previously-introduced BMC MainView zIIP exploitation efforts represent a significant step to reduce the costs of MIPS” (millions of instructions per second). MIPS are considered a primary cost driver in mainframe environments.

BMC takes particular pains to note that this approach has been blessed by IBM. “We use the IBM-approved API,” noted Jay Lipovich, BMC director of mainframe product development. The company clearly does not want its use of the assist processor to be confused with NEON’s zPrime, which allows workloads not blessed by IBM to run on assist processors. NEON currently is entangled in lawsuits and counter-lawsuits with IBM.

One BMC customer reports offloading 30% of its MainView workload to the zIIP environment. As BMC reports, hardware plus software costs for a zIIP processor run $150 to $200 per MIPS, compared with $2,200 to $3,400 for a general purpose processor. In addition, BMC’s recent global mainframe survey found mainframe capacity has continued to grow, which puts more pressure on the budgets of organizations that rely on the mainframe. In large shops running more than 10,000 MIPS, over 47 percent of survey respondents said MIPS utilization is a top priority.
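A quick back-of-the-envelope check shows why those per-MIPS figures matter. The cost ranges below come from the article; the 1,000-MIPS workload size is an assumption chosen only to make the arithmetic concrete.

```python
# Back-of-the-envelope zIIP offload savings. The per-MIPS cost
# ranges are from the article; the workload size is assumed.

GCP_COST_PER_MIPS = (2_200 + 3_400) / 2   # general purpose processor, midpoint
ZIIP_COST_PER_MIPS = (150 + 200) / 2      # zIIP hw + sw, midpoint

workload_mips = 1_000      # assumed size of a MainView-style workload
offload_fraction = 0.30    # the 30% one BMC customer reported

offloaded = workload_mips * offload_fraction
savings = offloaded * (GCP_COST_PER_MIPS - ZIIP_COST_PER_MIPS)

print(f"{offloaded:.0f} MIPS offloaded saves roughly ${savings:,.0f}")
```

Even at the low end of both ranges the spread between general-purpose and zIIP cost per MIPS is more than an order of magnitude, which is why zIIP eligibility is such a selling point for these tools.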

Mainframe computing often is more expensive than it needs to be. Yet there are ways to substantially cut mainframe computing costs, and the use of assist processors and GUI tools represents an easy place to start.
