Posts Tagged ‘virtualization’

Enterprise 2013 Details System z and Power Technology and New Capabilities

October 25, 2013

IBM announced a lot of goodies for z and Power users at Enterprise 2013, wrapping up in Orlando today. There were no blockbuster announcements, like a new z machine—we’re probably 12-18 months away from that, and even then the first will likely focus on Power8—but it brought a slew of announcements nonetheless. For a full rundown on what was announced, click here.

Cloud and analytics—not surprisingly—loom large. For example, Hadoop and a variety of other capabilities have been newly cobbled together, integrated, optimized, and presented as new big data offerings or as new cloud solutions.  This was exemplified by a new Cognos offering for CFOs needing to create, analyze and manage sophisticated financial plans that can provide greater visibility into enterprise profitability or the lack thereof.

Another announcement featured a new IBM Entry Cloud Configuration for SAP on zEnterprise. This is a cloud-enablement offering combining high-performance technology and services to automate, standardize and accelerate day-to-day SAP operations for reduced operational costs and increased ROI. Services also were big at the conference.

Kicking off the event was a dive into data center economics by Steve Mills, Senior Vice President & Group Executive, IBM Software & Systems. Part of the challenge of optimizing IT economics, he noted, is that the IT environment is cumulative. Enterprises keep picking up more systems, hardware, and software as new needs arise, but nothing goes away or gets rationalized in any meaningful way.

Between 2000 and 2010, Mills noted, servers had grown at a 6x rate while storage grew at a 69x rate. Virtual machines, meanwhile, were multiplying at the rate of 42% per year. Does anyone see a potential problem here?

Mills’ suggestion: virtualize and consolidate, and consolidate onto large servers in particular. His argument goes like this: most workloads experience variance in demand. But when you consolidate workloads with variable demand on a virtualized server, the variance of the combined load, relative to its mean, shrinks due to statistical multiplexing (peaks in one workload fit into the troughs created by the variances of the others). Furthermore, the more workloads you consolidate, the smaller that relative variance becomes. His conclusion: bigger servers with the capacity to run more workloads can be driven to higher average utilization levels without violating service level agreements, thereby reducing the cost per workload. Finally, the larger the shared processor pool, the greater the statistical benefit.
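The statistical-multiplexing effect Mills describes is easy to see in a quick simulation. This sketch is purely illustrative—the uniform demand distribution and workload counts are hypothetical stand-ins, not anything IBM presented:

```python
import random
import statistics

def relative_variability(n_workloads, samples=5000):
    """Coefficient of variation (stdev/mean) of total demand when
    n_workloads independent workloads share one server.

    Each workload's demand is drawn independently from a hypothetical
    uniform distribution; real workloads are messier, but the
    multiplexing effect is the same.
    """
    totals = [sum(random.uniform(10, 100) for _ in range(n_workloads))
              for _ in range(samples)]
    return statistics.stdev(totals) / statistics.mean(totals)

random.seed(42)
# Relative variability shrinks as more workloads are consolidated,
# which is why a bigger box can safely run at higher average utilization.
for n in (1, 10, 100):
    print(f"{n:3d} workloads: CV = {relative_variability(n):.3f}")
```

Because independent peaks and troughs cancel, the standard deviation of the sum grows only as the square root of the number of workloads while the mean grows linearly, so the headroom you must reserve per workload keeps falling as the pool gets bigger.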

On the basis of statistical multiplexing, the zEnterprise and the Power 795 are ideal choices for this. Depending on your workloads, just load up the host server, a System z or a big Power box, with as many cores as you can afford and consolidate as many workloads as practical.

Mills’ other cost savings tips: use flash to avoid the cost and complexity of disk storage. Also, eliminate duplicate applications—the fewer you run, the lower the cost. In short, elimination is the clearest path to saving money in the data center.

To illustrate the point, Jim Tussing from Nationwide described how the company virtualized and consolidated 60% of its 10,500 servers onto a few mainframes, saving $46 million over five years. It also allowed the company to delay the need for an additional data center by four years.

See, if DancingDinosaur was an actual data center manager it could have justified attendance at the entire conference based on the economic tips from just one of the opening keynotes and spent the rest of the conference playing golf. Of course, DancingDinosaur doesn’t play golf so it sat in numerous program sessions instead, which you will hear more about in coming weeks.

You can follow DancingDinosaur on twitter, @mainframeblog

Data and Analytics—Key to Security at IBM Edge 2013

June 13, 2013

A well planned global bank robbery netted $45 million in one day without anyone setting foot in a bank. As reported in the New York Times, the robbers manipulated financial information with the stroke of a few keys and used that information to loot automated teller machines. To stop this kind of crime you need data security and analytics, not cops with guns drawn.

Forty-five million dollars disappeared, said Bernie Meyerson, IBM Fellow and VP of Innovation, and nobody even noticed! Eventually somebody did notice, and a few people (those visiting the ATMs) were arrested, but they weren’t the brains of the operation.

The lesson from this caper: you have to monitor data patterns and recognize abnormalities. No system picked up the fact that a handful of ATM access numbers were being used at the same time at widely dispersed locations. This goes beyond IT perimeter defense.

The solution, said Meyerson, is an agile defense based on real-time analytics. Then you can look for a variety of behaviors and attributes and stop an attack almost as soon as it is underway. Better yet, you might predict an attack before it starts.

For example, you can baseline normal behavior and use analytics to identify behavior outside the baseline. Or, you can profile an individual based on the files he or she normally views and the websites the person usually visits. Activities outside normal behavior would trigger an alarm that merits further exploration.
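A minimal sketch of that baselining idea, using a simple standard-deviation threshold. The activity counts and the three-sigma cutoff are hypothetical choices for illustration, not the actual IBM analytics:

```python
import statistics

def build_baseline(history):
    """Summarize a user's normal behavior from historical activity
    counts (e.g., files viewed per day)."""
    return statistics.mean(history), statistics.stdev(history)

def is_anomalous(observation, baseline, threshold=3.0):
    """Flag activity more than `threshold` standard deviations
    away from the user's normal behavior."""
    mean, stdev = baseline
    return abs(observation - mean) > threshold * stdev

# A user who normally views 20-30 files a day (hypothetical data)...
baseline = build_baseline([22, 25, 19, 28, 24, 26, 21, 27])
print(is_anomalous(24, baseline))   # → False: a typical day
print(is_anomalous(250, baseline))  # → True: triggers an alarm for review
```

Real systems profile many attributes at once (files, sites, locations, times of day) and update the baseline continuously, but the principle is the same: model normal, then alarm on the outliers.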

This kind of IT security goes beyond today’s standard IT security best practices built around perimeter protection, user identification and authorization, anti-virus, intrusion detection, and such. Rather, it is based on collecting a wide range of data in real-time and analyzing it to determine if it is outside the norm for that person. A bit Big Brother, yes, but it’s a dangerous world out there.

IBM brings a slew of products and services to this battle, including cognitive computing. In this case Watson, the Jeopardy-winning IBM system, represents your biggest gun. Watson, which is smart and fast to begin with, also can learn. What, asks Meyerson, if cognitive systems like Watson could see the big picture and understand the context? Well, you could be pretty confident that $45 million wouldn’t disappear globally in hours without being noticed.

Although most IT managers aren’t worrying about losing $45 million to theft in a single day, that doesn’t mean they don’t face complex and demanding security challenges. Cloud computing and mobile, especially with the added demands of BYOD, are complicating what many previously considered good security blocking and tackling. Add to that advanced persistent threats (APTs), which develop slowly over an extended duration; spearphishing (a targeted variant of phishing); zero-day attacks; SQL injection; and attackers operating under the auspices of nation states, and IT managers simply cannot relax their guard.

At Edge 2013 the IBM security team laid out its comprehensive security program that entails tools to automate security hygiene, manage security incidents through analytics, combat mobile threats, control network access, manage identity and third-party security, and address cloud and virtualization security. In short, the IBM program offers defense in depth.

IBM security experts at Edge 2013 also offered a variety of security tips: Begin with the assumption your organization has experienced attacks and already is infected to some extent, whether aware of it or not. A vulnerability analysis is a good place to start.

Understand how attackers work, such as by using social media to identify the weak points they can use to lure individually targeted managers—especially those who may have higher levels of system authorization—into opening an infected email. Pause before you click anything new.

Also avoid, replace, or update unpatched legacy systems and software, along with systems running insecure configurations. That’s inviting trouble.

Finally, once the bad guy is in, you need a response strategy to move fast to isolate the problem and stop any spread while trying not to tip the bad guys off that you’re on to them. Resist the mad scramble to recover because it can ruin the evidence. Instead, follow your methodical response plan. You do have one, right?

IBM Expands Red Hat Partnership

May 9, 2011

IBM further embraced open source last week at Red Hat’s user conference in Boston, with initiatives around virtualization and the cloud. The relationship, however, has been growing for over a decade as Red Hat Enterprise Linux (RHEL) becomes increasingly popular on the System z. The arrival of x and Power blades for the zBX should only increase the presence of RHEL on the System z.

Last year IBM selected Red Hat Enterprise Virtualization (RHEV) as a platform option for its development and test cloud service. Dev and test has emerged as a natural fit for cloud computing, given its demand for quick setup and teardown.

Although there weren’t any major System z-specific announcements, almost all System z shops run a mix of platforms, including System x for Linux and Windows and the Power platform for AIX and Linux, and are making forays into private, public, and hybrid clouds. So there was plenty coming out of the conference to interest mainframe shops even though it wasn’t System z-specific.

With that in mind, here are three new Red Hat initiatives that will be of interest to mainframe shops:

First, open virtualization based on Red Hat’s open source KVM hypervisor, which enables an organization to create multiple virtual Linux and Windows environments on the same server. This helps save money through the consolidation of IT resources, without the expense and limitations of proprietary technology. RHEV delivers datacenter virtualization by combining a centralized virtualization management system with the KVM hypervisor, which has emerged as a top hypervisor behind VMware.

According to Red Hat, RHEV delivered 45% better consolidation capacity than its competitors in a recent SPECvirt benchmark, and it brings architectural support for up to 4,096 processor cores and up to 64TB of memory in the host, 32 virtual CPUs in the guest, and 1TB of guest RAM, exceeding the abilities of proprietary hypervisors for Linux and Windows. Red Hat also reports that RHEV Virtualization Manager can deliver savings of up to 80% relative to comparable proprietary virtualization products in the first year (initial acquisition cost) and up to 66% over the course of three years. Finally, support for security capabilities such as multi-tenancy, combined with its scalability, makes it a natural for cloud computing.

Second, Red Hat introduced a platform-as-a-service (PaaS) initiative, called OpenShift, to simplify cloud development and deployment and reduce risk. It is aimed at open source developers and provides them with a flexible platform for developing cloud applications using a choice of development frameworks for Java, Python, PHP and Ruby, including Spring, Seam, Weld, CDI, Rails, Rack, Symfony, Zend Framework, Twisted, Django and Java EE. It is based on a cloud interoperability standard, Deltacloud, and promises to end PaaS lock-in, allowing developers to choose not only the languages and frameworks they use but the cloud provider upon which their application will run.

By building on the Deltacloud cloud interoperability standard, OpenShift allows developers to run their applications on any supported Red Hat Certified Public Cloud Provider, eliminating the lock-in associated with first-generation PaaS vendors. In addition it brings the JBoss middleware services to the PaaS experience, such as the MongoDB services and other RHEL services.

Third, Red Hat introduced CloudForms, a product for creating and managing IaaS in private and hybrid clouds. It allows users to create integrated clouds consisting of a variety of computing resources and still be portable across physical, virtual and cloud computing resources.  CloudForms addresses key problems encountered in first-generation cloud products: the cost and complexity of virtual server sprawl, compliance nightmares and security concerns.

What will make CloudForms of particular interest to heterogeneous mainframe shops is its ability to create hybrid clouds using existing computing resources: virtual servers from different vendors, such as Red Hat and VMware; different cloud vendors, such as IBM and Amazon; and conventional in-house or hosted physical servers, both racks and blades. This level of choice helps to eliminate lock-in and the need to undergo migration from physical to virtual servers in order to obtain the benefits of cloud.

Open source is not generally a mainframe consideration, but open source looms large in the cloud. It may be time for System z shops to add some of Red Hat’s new technologies to their System z RHEL, virtualization, and cloud strategies as they move forward.

US House kills mainframe for $750K

January 29, 2010

The US is facing multi-trillion-dollar deficits for years to come. A dysfunctional Congress can’t pass even the simplest of legislation without tacking on millions of dollars in earmarks. Yet, the House shut down its last mainframe computer, supposedly saving the taxpayers a whopping $750,000 a year in maintenance, fees, and energy costs. Watch the video of them pulling the plug here.

They have moved the applications to a variety of other servers and platforms, specifically x86 and UNIX servers running virtualization, according to Computerworld UK. Of course, the mainframe has the most robust, industrial-strength virtualization of them all, but whatever.

The final mainframe was 13 years old. Computerworld UK quotes the House director of facilities as saying “It wasn’t the fastest box in the world. Some of our blades and some of our standard servers have more capability than that entire 8-cubic-foot box has. Technology-wise, it’s obviously been surpassed.”

No kidding. The mainframe has gotten considerably smaller, faster, cheaper, more flexible, and far more energy efficient in the past 13 years. Heck, they could have picked up one of IBM’s latest System z Solution Editions, which would give them a BC-class z10, software, middleware, three years of maintenance, and more for a bit over $200,000. Hmm, wonder what all those distributed x86 and UNIX systems and their upkeep are costing us taxpayers.
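The back-of-envelope arithmetic, using the figures cited in this post, makes the point starkly. Note the comparison ignores the distributed replacement’s own running costs, which nobody has published:

```python
# Figures from the post: claimed annual savings from retiring the old
# mainframe, and the approximate price of a System z Solution Edition
# package (z10 BC, software, middleware, three years of maintenance).
annual_mainframe_savings = 750_000
solution_edition_price = 200_000

# Years of those claimed savings needed to pay for a modern replacement
break_even_years = solution_edition_price / annual_mainframe_savings
print(f"{break_even_years:.2f} years")  # → 0.27 years: about three months
```

In other words, a modern z package would pay for itself out of the claimed savings in about a quarter of a year, before even counting the cost of the x86 and UNIX sprawl that replaced it.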

Driving this change appears to be the House’s intense desire for green computing. Remember, these are the people who seem to be having trouble passing legislation that would cut our dependence on foreign oil. With the new platforms and virtualization, the House has virtualized a couple of hundred servers, which indeed saves energy. A System z would have done as well or better.

The real issue, however, was not energy but the decline of IT skills in the House. As Computerworld UK reported, quoting the House facilities director: “We really don’t have those [mainframe] skill sets in-house anymore. We try not to maintain architecture that we can’t support ourselves.”

I see messages daily from experienced mainframe people who are unemployed and looking for work, some of whom probably live in the DC area. All these years and the House couldn’t find any skilled mainframe people? Who are they kidding?

The House is just the latest and possibly the most visible recent manifestation of a problem that has the potential to seriously undermine the future of the mainframe no matter how much IBM enhances its capabilities, dumbs down its operations, and cuts prices—the unwillingness of management to invest in cultivating people with mainframe interests.

This issue, which came up in the discussions around the Union Pacific Railroad’s decision to try to migrate off the mainframe—at which it apparently has not yet succeeded—appeared first in Network World and was taken up in this blog and elsewhere. It also was discussed at some length in this Mainframe Zone discussion.

If current IT managers perceive the mainframe as a costly, obsolete dinosaur heading to extinction they will not invest in cultivating mainframe skills no matter how wrong that perception is. This more than anything else will undermine the mainframe’s future and become a self-fulfilling prophecy.
