Posts Tagged ‘System z’

Automated System z Workload Capping Can Save Big Bucks

June 20, 2014

IBM’s Monthly License Charge (MLC) pricing can be a powerful tool to significantly lower the cost of software licensing for a mainframe shop. The problem: it is frightfully complicated. DancingDinosaur has attended conferences that scheduled multi-part sessions just to cover the basic material. Figuring out which pricing program you qualify for is itself a challenge and you probably want a lawyer looking over your shoulder. Find IBM’s System z pricing page here.

One particularly galling challenge is estimating and capping the 4-hour utilization for each LPAR. You can easily find yourself in a situation where you exceed the cap on one LPAR, resulting in a surcharge, while you have headroom to spare on other LPARs. The trick is to stay on top of this by constantly monitoring workloads and shifting activity among LPARs to ensure you don’t exceed a cap.
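To make the mechanics concrete, here is a minimal sketch of the 4-hour rolling average (R4HA) that drives sub-capacity MLC charges, checked against an LPAR’s defined capacity. The MSU readings, sampling interval, and cap below are illustrative numbers, not anyone’s actual workload.

```python
# Illustrative only: rolling 4-hour average MSU utilization vs. a defined capacity cap.
from collections import deque

def rolling_4hr_average(msu_readings, interval_minutes=5):
    """Yield the rolling 4-hour average MSU after each new reading."""
    window_size = (4 * 60) // interval_minutes   # readings that fit in a 4-hour window
    window = deque(maxlen=window_size)
    for msu in msu_readings:
        window.append(msu)
        yield sum(window) / len(window)

defined_capacity = 400                       # LPAR cap in MSUs (made up)
readings = [350, 380, 420, 450, 430, 390]    # 5-minute MSU samples (made up)

for avg in rolling_4hr_average(readings):
    status = "over cap" if avg > defined_capacity else "within cap"
    print(f"R4HA = {avg:.1f} MSU ({status})")
```

The point of the exercise: charges track the rolling average peak, not instantaneous spikes, which is exactly why shifting work among LPARs before the average crosses a cap pays off.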

That kind of constant monitoring requires a skilled mainframe staffer with a high level of z/OS expertise and familiarity with z workloads and LPARs. While you’re at it, throw in knowledge of data center operations and the organization’s overall business direction. Such an expert is costly to find and not easily spared for round-the-clock monitoring. It’s a task that lends itself to automation.

And that’s exactly what BMC did earlier this week when it introduced Intelligent Capping (iCap) for zSeries mainframes. On average, according to BMC, companies that actively manage and effectively prioritize their mainframe workloads save 10-15 percent more on their monthly license charges than those who take a more passive approach. Furthermore, instead of assigning a costly mainframe workload guru to manually monitor and manage this, BMC promises that its iCap software—which understands workloads, makes dynamic adjustments, and automates workload capping—can reduce costs while also diminishing risk to the business.

The savings, according to BMC, can add up fast. In one example, BMC cited saving 161 MSUs, which translated for that organization to over $55k that month. Given that a mainframe shop spends anywhere from a few hundred thousand to millions of dollars per month on MLC charges, savings of just a few percent can be significant. One BMC customer reportedly expects intelligent capping to save it 12% each month. Caveat: DancingDinosaur has not yet been able to speak with any BMC iCap customer to verify these claims.

But assuming they are true, iCap is a no-brainer for any mainframe shop paying anything but the most minimal MLC. BMC charges for iCap based on the customer’s capacity. It is willing to discuss a shared-gain model in which the iCap charges are based on how much is saved, but apparently none of those deals has been finalized.

This seems like a straightforward challenge for a mainframe management tool vendor, but DancingDinosaur has found only a few actually doing it—BMC, Softwareonz, and IBM. Softwareonz offers AutoSoftCapping, which promises to maximize software cost efficiency for IBM zSeries platforms, specifically z/OS. It does so by automatically adjusting defined capacity by LPAR based upon workload while maintaining a consistent overall defined capacity for your CPC.

Softwareonz, based in Seattle, estimates savings of 2% on monthly charges at the low end; at the high end it has run simulations suggesting 20% savings, with customers realistically able to save 8-10%. Note that AutoSoftCapping only works for data centers running their z under the VWLC pricing model. Again, DancingDinosaur has not yet validated any savings with an actual Softwareonz customer.
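The core idea both vendors describe, shifting defined capacity toward the LPARs that need it while leaving the CPC-wide total untouched, can be sketched in a few lines. This is a hypothetical illustration, not BMC’s or Softwareonz’s actual algorithm; the LPAR names, MSU figures, and proportional-share policy are all made up.

```python
# Hypothetical rebalancing sketch: redistribute defined capacity in proportion
# to each LPAR's rolling 4-hour average while keeping the CPC total constant.

def rebalance(lpars, total_capacity):
    """Assign each LPAR a share of total_capacity proportional to its R4HA."""
    total_demand = sum(lpar["r4ha"] for lpar in lpars)
    for lpar in lpars:
        lpar["defined_capacity"] = round(total_capacity * lpar["r4ha"] / total_demand)
    return lpars

lpars = [
    {"name": "PROD1", "r4ha": 450, "defined_capacity": 400},  # bumping against its cap
    {"name": "TEST1", "r4ha": 150, "defined_capacity": 300},  # plenty of headroom
]
print(rebalance(lpars, total_capacity=700))
# PROD1 ends up with ~525 MSUs, TEST1 with ~175; the 700 MSU total never changes.
```

A real product obviously has to weigh workload priorities, WLM policy, and the business calendar before moving capacity around, which is where the intelligence comes in.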

Without automation you have to do this manually, adjusting defined capacity based on actual workloads. Too often that leaves the organization with a choice between constraining workloads, and thereby inhibiting performance, or over-provisioning the cap, and thereby driving up costs through wasted capacity.

So, if automatic MLC capping is a no-brainer, why isn’t everybody doing it? Softwareonz sees several reasons, the primary one being fear of the cap negatively impacting the VWLC four-hour rolling average. Nobody wants to impact their production workloads. Of course, the whole point of applying intelligence to the automation is to reduce software costs without impacting production workloads. BMC offers several ways to ease the organization into this as it becomes more comfortable and confident in the tool.

Another reason suggested is that the System z operational team is protecting its turf from the inroads of automation. A large z shop might have a team of half a dozen or more people dedicated to monitoring and managing workloads manually. Bring in automation like iCap or AutoSoftCapping and they expect pink slips to follow.

Of course, IBM offers the Capacity Provisioning tool for z/OS (V1.9 and above), which can be used to add and remove capacity through a Capacity Provisioning Manager (CPM) policy. This can be used to automatically control the defined capacity limit or the group capacity limits. CPM policies are defined through the z/OSMF user interface.

If you are subject to MLC pricing, consider an automated tool. BTW, there also are consultants who will do this for you.

A note: IBM Enterprise Cloud System, covered by DancingDinosaur a few weeks ago here, is now generally available. It is an OpenStack-based converged offering that includes compute, storage, software, and services built around the zBC12. Check out the most recent details here.

Also take note: IBM Enterprise2014 is coming to Las Vegas in early October. Details here. The conference combines System z University and Power Systems University, plus more. You can bet there will be multiple sessions on MLC pricing in its various permutations and workload capping.

DancingDinosaur is Alan Radding. You can follow him on Twitter, @mainframeblog. Or visit his website, www.technologywriter.com

 

Expanding Mainframe Linux and Cloud Computing

June 9, 2014

In case you wondered whether IBM is seriously committed to both mainframe Linux and cloud computing on the System z platform, you need only look at the June 2 announcement that the company is opening the first dedicated System z Linux and cloud computing competency center in Beijing. According to the announcement, the new center is specifically intended to help organizations there take advantage of Linux and cloud computing solutions on the mainframe and to help accelerate adoption of Linux on System z in China.

This is just the most recent of a number of developments that have boosted the System z profile. Even the recent IBM Edge 2014 conference, which was not about the System z at all (a System z and Power conference, Enterprise 2014, is coming up in October), managed to slip in some System z sessions and content, including one about protecting DB2 data on z/OS using tape and other sessions that brought the System z and Power enterprise servers into discussions of various aspects of cloud computing or the use of flash.

Following the Mainframe50 announcement earlier in the spring, IBM introduced more System z enhancements, including the IBM Enterprise Cloud System, an OpenStack-based converged offering that includes compute, storage, software, and services built around the zBC12; IBM Wave for z/VM, which simplifies z/VM virtualization management and expedites an organization’s path to the cloud; and a new IBM Cloud Management Suite for System z, which handles dynamic provisioning and performance monitoring.

An interesting aspect of this announcement is IBM’s focus on Linux. It has taken a decade for Linux to gain traction in System z data centers, but patience is finally paying off. Linux has proven instrumental in bringing new mainframe users to the platform (DancingDinosaur previously reported on Algar, a Brazilian telco); according to IBM, more than 50% of all new mainframe accounts since 2010 run Linux. To that end, DancingDinosaur has long recommended the Enterprise Linux Server Solution Edition program, a deeply discounted package of hardware, middleware, and software. It represents the best and maybe the only bargain IBM regularly offers.

Linux itself has proven remarkably robust and has achieved widespread acceptance among enterprises running a variety of platforms. According to IDC, Linux server demand is rising on the strength of cloud infrastructure deployments, and the researcher expects that demand to continue. In the first quarter of 2014, Linux server revenue accounted for 30 percent of overall server revenue, an increase of 15.4 percent.

Along with cloud computing, collaborative development appears to be contributing to the continued growth and adoption of Linux. According to the Linux Foundation, a new business model has emerged in which companies are joining together across industries to share development resources and build common open source code bases on which they can differentiate their own products and services. This collaborative approach promises to transform a number of industries, especially those involved with cloud computing, social and mobile. Apparently it provides a fast way to create the next generation of technology products.

In its latest survey, the Linux Foundation identified three drivers of the recent Linux growth:

  1. Collaborative software development—91 percent of business managers and executives surveyed rated collaborative software development somewhat to very important to their business, while nearly 80 percent said collaborative development practices have become more strategic to their organization over the past three years.
  2. Growing investments in collaborative software development—44 percent of business managers said they would increase their investments in collaborative software development in the next six months.
  3. The benefits of collaboration—more than 77 percent of managers said collaborative development practices have benefited their organizations through a shorter product development cycle and faster time to market.

The bulk of the world’s critical transaction processing and production data continue to reside on the mainframe, around 70 percent, according to IBM. Similarly, 71% of all Fortune 500 companies have their core businesses on a mainframe. And this has remained remarkably steady over the past decade despite the rise of cloud computing. Of course, all these organizations have extensive multi-platform data centers and are adding growing numbers of on-premise and increasingly hybrid cloud systems.

Far from relying on core production processing to carry the mainframe forever, IBM is using the new Beijing mainframe Linux-cloud center to advance the mainframe platform in new markets. It is opening the mainframe up in a variety of ways, from z/OS in the cloud to Hadoop for z to new cloud-like pay-for-use pricing models. Watch DancingDinosaur for an upcoming post on the new pricing discounts for mobile transactions on z/OS.

DancingDinosaur is Alan Radding and can be followed on Twitter, @mainframeblog

New IBM Flash Storage for the Mainframe

June 2, 2014

IBM is serious about flash storage and is enabling just about everything for flash—the DS8000 family, SAN Volume Controller, EasyTier, Real-time Compression (RtC), and more. Of particular interest to DancingDinosaur readers should be the recently announced DS8870 all-flash enclosure.

Storage in general is changing fast. Riding Moore’s Law for the past two decades, storage users could assume annual drops in the cost per gigabyte. It was as predictable as passing Go in Monopoly and collecting $200. But with that ride coming to an end, companies like IBM are looking elsewhere to engineer the continued improvements everyone has assumed and benefited from. For example, IBM is combining SVC, RtC, and flash to get significantly more performance out of less actual storage capacity.

The DS8870 is particularly interesting. In terms of reliability, for instance, it delivers not five-nines (99.999%) availability but six-nines (99.9999%) availability, which works out to about 30 seconds of downtime each year. It works with all IBM servers, not just the z, and it protects data through full disk encryption and advanced access control. With the new flash enclosure packed with IBM’s enhanced flash, the DS8870 delivers 4x faster flash performance in 50% less space, which translates into a 3.2x improvement in database performance.
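The six-nines figure is easy to sanity check; the quick calculation below simply converts the availability percentage into downtime per year.

```python
# Back-of-the-envelope check of the six-nines availability claim.
seconds_per_year = 365 * 24 * 3600
downtime_seconds = seconds_per_year * (1 - 0.999999)
print(f"{downtime_seconds:.0f} seconds of downtime per year")   # about 32 seconds
```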

Flash is not cheap when viewed through the traditional cost/gigabyte metric, but the performance data above suggests a different way to gauge the cost of flash, which continues to fall steadily in price. The 3.2x increase in database performance, for example, means you can handle more than three times as many transactions.

Let’s start with the assumption that more transactions ultimately translate into more revenue. The same for that extra 9 in availability. The high-performance all flash DS8870 configuration with the High Performance Flash Enclosure also reduces the footprint by 50% and reduces power consumption by 12%, which means lower space and energy costs. It also enables you to shrink batch times by 10%, according to IBM. DancingDinosaur will be happy to help you pull together a TCO analysis for an all-flash DS8870 investment.

The sheer specs of the new system are impressive. IBM reports that up to 8 PCIe flash enclosures populated with 400 GB flash cards provide 73.6TB of usable capacity. For I/O capacity, the 8 I/O bays installed in the base frame provide up to 128 8Gb FC ports. Depending on the internal server you install in the DS8870, you can also get up to 1TB of cache.

DS8870 all-flash rack enclosure

The Flash Enclosure itself is a 1U drawer that can take up to 30 flash cards. Opting for thirty 400GB flash cards yields 9.2TB usable (12TB raw). Since the high-performance all-flash DS8870 can take up to 8 Flash Enclosures, you can get 96TB raw (73.6TB usable) flash capacity per system.
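The capacity arithmetic checks out; here is a minimal sketch that reproduces the raw and usable figures quoted above (the usable-per-enclosure number is taken from IBM’s figures, not derived).

```python
# Reproducing the DS8870 flash capacity figures cited above.
cards_per_enclosure = 30
card_size_gb = 400
raw_per_enclosure_tb = cards_per_enclosure * card_size_gb / 1000   # 12 TB raw
usable_per_enclosure_tb = 9.2                                      # per IBM, after formatting/protection

enclosures = 8                                                     # max in the all-flash DS8870
print(raw_per_enclosure_tb * enclosures)      # 96.0 TB raw per system
print(usable_per_enclosure_tb * enclosures)   # 73.6 TB usable per system
```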

A hybrid DS8870 system, as opposed to the high-performance all-flash version, will allow up to 120 flash cards in 4 Flash Enclosures for 48TB raw (36.8TB usable), along with 1,536 2.5” HDDs/SSDs. Then connect it all to the DS8870 internal PCIe fabric for impressive performance: 200,000 IOPS (100% read) and 130,000 IOPS (100% write). From there, you can connect it to flash-enabled SVC and EasyTier.

Later this year, reports Clod Barrera, IBM’s storage CTO, you will be able to add 4 more enclosures in hybrid configurations for boosting flash capacity up to 96TB raw.  Together you can combine the DS8870, flash, SVC, RtC, and EasyTier for a lightning fast and efficient storage infrastructure.

Even the most traditional System z shop will soon find itself confronting mixed workloads consisting of traditional and non-traditional work. You probably already are, as mobile devices initiate requests for mainframe data. Pretty soon you will be faced with incorporating traditional and new workloads side by side, and when that happens you will want a fast, efficient, flexible infrastructure like the DS8870.

DancingDinosaur is Alan Radding. Follow him on Twitter, @mainframeblog

IBM Edge2014: It’s All About the Storage

May 22, 2014

When your blogger, then a newbie programmer, published his first desktop application in the prehistoric desktop computing era, it had to be distributed on consumer cassette tape. When buyers complained that it didn’t work, the problem was quickly traced to imprecise and inconsistent consumer cassette storage. Since the dawn of the computer era, it has always been about storage.

It still is. Almost every session at IBM Edge2014, whether on cloud or analytics or mobile, seemed to touch on storage in one way or another. Kicking it all off was Tom Rosamilia, Senior Vice President, IBM Systems & Technology Group, who elaborated on IBM’s main theme not just for Edge2014 but for IBM at large: Infrastructure Matters Because Business Outcomes Matter. And by infrastructure, IBM mainly is referring to storage.

To reinforce his infrastructure-matters point, Rosamilia cited a recent IBM study showing that 70% of top executives now recognize infrastructure as an enabler. However, just 10% reported their infrastructure was ready for the challenge. As an interesting aside, the study found 91% of the respondents’ customer-facing applications were using the System z, which only emphasizes another theme at IBM Edge2014—that companies need to connect systems of record with systems of engagement if they want to be successful.

In fact, IBM wants to speed up computing overall, starting with flash and storage. A study by the Aberdeen Group found that a 1-second delay in page load resulted in a 7% loss in conversions, 11% fewer page views, and a 16% decrease in customer satisfaction. IBM’s conclusion: in dollar terms, this means that if your site typically earns $100,000 a day, this year you could lose $2.5 million in sales. Expect all IBM storage to be enabled for some form of flash going forward.
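A quick back-of-the-envelope check shows how that dollar figure follows from the conversion-loss number; the daily revenue is the hypothetical used in the study’s framing, not a real customer’s.

```python
# Sanity check of the Aberdeen-derived loss estimate cited above.
daily_revenue = 100000          # hypothetical site revenue per day
conversion_loss = 0.07          # 7% loss in conversions from a 1-second delay
annual_loss = daily_revenue * 365 * conversion_loss
print(f"${annual_loss:,.0f} per year")   # roughly $2.6 million, in line with the $2.5M cited
```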

First announced at IBM Edge2014 were the FlashSystem 840 and the IBM FlashSystem V840, which includes integrated data virtualization through IBM’s SVC and its various components. It also boasts a more powerful controller capable of rich capabilities like compression, replication, tiering, thin provisioning, and more. Check out the details here.

Also at Edge2014 there was considerable talk about Elastic Storage. This is the storage you have always imagined. You can manage mixed storage pools of any device. Integrate with any OS. Write policies to it. It seems infinitely scalable. Acts as a universal cloud gateway. And even works with tape.

Sounds magical, doesn’t it? According to IBM, Elastic Storage provides automated tiering to move data across different storage media types. Infrequently accessed files can be migrated to tape and automatically recalled back to disk when required—sounds like EasyTier built in. Unlike traditional storage, it allows you to smoothly grow or shrink your storage infrastructure without application disruption or outages. And it can run on a cluster of x86 and POWER-based servers and can be used with internal disk, commodity storage, or advanced storage systems from IBM or other vendors. Half the speakers at the conference glowed about Elastic Storage. Obviously the technology exists, but it is not yet a named product. Watch for it, though it is going to have a different name when finally released, probably later this year. No hint yet at what that name will be.

IBM, at the conference, identified the enhanced XIV as the ideal cloud infrastructure. XIV eliminates complexity. It enables high levels of resiliency and ensures service levels. As one speaker said: “It populates LUNs and spreads the workload evenly. You don’t even have to load balance it.” Basically, it is grid storage that is ideal for the cloud.

LTFS (Linear Tape File System) was another storage technology that came up surprisingly frequently. Don’t assume that tape has no future, not judging from IBM Edge2014. LTFS provides a GUI that enables you to automatically move infrequently accessed data from disk to tape without the need for proprietary tape applications. Implementing LTFS Enterprise Edition allows you to replace disk with tape for tiered storage and lower your storage TCO by over 50%. Jon Toigo, a leading storage analyst, has some good numbers on tape economics that may surprise you.

Another sometimes overlooked technology is EasyTier, IBM’s storage tiering tool.  EasyTier has evolved into a main way for IBM storage users to capitalize on the benefits of Flash. EasyTier already has emerged as an effective tool for both the DS8000 and the Storwize V7000.  With EasyTier small amounts of Flash can deliver big performance improvements.

In the coming weeks DancingDinosaur will look at other IBM Edge 2014 topics.  It also is time to start thinking about IBM Enterprise 2014, which combines the System z and Power platforms. It will be at the Venetian in Las Vegas, Oct 6-10. IBM Enterprise 2014 is being billed as the premier enterprise infrastructure event.

BTW, we never effectively solved the challenge of distributing desktop programs until the industry came out with 5.25” floppy disks. Years later my children used the unsold floppies as little Frisbees.

Follow Alan Radding and DancingDinosaur on Twitter, @mainframeblog

IBM Edge2014 as Coming out Party for OpenStack

May 7, 2014

IBM didn’t invent OpenStack (Rackspace and NASA did), but IBM’s embrace of OpenStack in March 2013 as its standard for cloud computing made it a legit standard for enterprise computing. Since then IBM has made clear its intention to enable its product line, from the System z on down, for the OpenStack set of open source technologies. Judging from the number of sessions at IBM Edge 2014 (Las Vegas, May 19-23 at the Venetian) that address one or another aspect of OpenStack, you might think of IBM Edge2014 almost as a coming out celebration for OpenStack and enterprise cloud computing.

OpenStack is a collection of open source technologies, the goal of which is to provide a scalable computing infrastructure for both public and private clouds. As such it has become the foundation of IBM’s cloud strategy, which is another way of saying it has become what IBM sees as its future. An excellent mini-tutorial on OpenStack, IBM, and the System z can be found at mainframe-watch-Belgium here.

At IBM Edge2014 OpenStack is frequently included in sessions on storage, cloud, and storage management.  Let’s take a closer look at a few of those sessions.

IBM Storage and Cloud Technologies

Presenter Christopher Vollmar offers an overview of the IBM storage platforms that contain cloud technologies or provide a foundation for creating a private storage cloud for block and file workloads. This overview includes IBM’s SmartCloud Virtual Storage Center, SmartCloud Storage Access, Active Cloud Engine, and XIV’s Hyper-Scale as well as IBM storage products’ integration with OpenStack.

OpenStack and IBM Storage

Presenters Michael Factor and Funda Eceral explain how OpenStack is rapidly emerging as the de facto platform for Infrastructure as a Service. IBM is working fast to pin down the integration of its storage products with OpenStack. This talk presents a high-level overview of OpenStack, with a focus on Cinder, the OpenStack block storage manager. They also explain how IBM is leading the evolution of Cinder by improving the common base with features such as volume migration and the ability to change the SLAs associated with a volume in the OpenStack cloud. Already IBM storage products—Storwize, XIV, DS8000, GPFS, and TSM—are integrated with OpenStack, enabling self-provisioning access to features such as EasyTier or Real-time Compression via standard OpenStack interfaces. Eventually, you should expect virtually all IBM products, capabilities, and services to work with and through OpenStack.
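To make the self-provisioning point concrete, here is a hypothetical sketch of what that looks like from the consumer side using the python-cinderclient of that era: the caller asks Cinder for a volume of a given type, and the operator-defined mapping from that type to an IBM backend (Storwize/SVC, XIV, DS8000) does the rest. The endpoint, credentials, and the ‘storwize-gold’ type name are invented for illustration, and the client call details are an assumption rather than a vetted IBM example.

```python
# Hypothetical illustration of self-provisioning block storage through Cinder.
# Assumes an operator has already created a volume type mapped to an IBM backend.
from cinderclient import client

cinder = client.Client('2', 'demo', 'secret', 'demo',
                       'http://controller.example.com:5000/v2.0')

# Request 100 GB from whatever backend the operator tied to this volume type;
# the requester never touches the storage array directly.
volume = cinder.volumes.create(size=100, volume_type='storwize-gold')
print(volume.id, volume.status)
```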

IBM XIV and VMware: Best Practices for Your Cloud

Presenters Peter Kisich and Carlos Lizarralde argue that IBM Storage continues to lead in OpenStack integration and development. They then introduce the core services of OpenStack while focusing on how IBM storage provides open source integration with Cinder drivers for Storwize, DS8000, and XIV. They also include key examples and a demonstration of the automation and management IBM Storage offers through the OpenStack cloud platform.

IBM OpenStack Hybrid Cloud on IBM PureFlex and SoftLayer

Presenter Eric Kern explains how IBM’s latest version of OpenStack is used to showcase a hybrid cloud environment. A pair of SoftLayer servers running in IBM’s public cloud are matched with a PureFlex environment locally hosting the OpenStack controller. He covers the architecture used to set up this environment before diving into the details around deploying workloads.

Even if you never get to IBM Edge2014 it should be increasingly clear that OpenStack is quickly gaining traction and destined to emerge as central to Enterprise IT, any style of cloud computing, and IBM. OpenStack will be essential for any private, public, and hybrid cloud deployments. Come to Edge2014 and get up to speed fast on OpenStack.

Alan Radding/DancingDinosaur will be there. Look for me in the bloggers lounge between and after sessions. Also watch for upcoming posts on DancingDinosaur about OpenStack and the System z and on OpenStack on Power Systems.

Please follow DancingDinosaur on Twitter, @mainframeblog.

Best System z TCO in Cloud and Virtualization

May 1, 2014

IBM recently analyzed various likely customer workload scenarios and found that the System z as an enterprise Linux server could consistently beat x86 machines in terms of TCO.  The analysis, which DancingDinosaur will dig into below, was reasonably evenhanded although, like automobile mileage ratings, your actual results may vary.

DancingDinosaur has long contended that the z Enterprise Linux Server acquired under the deeply discounted IBM System z Solution Edition program could beat comparable x86 based systems not only in terms of TCO but even TCA. Algar, a Brazilian telecom, acquired its initial z Enterprise Linux server to consolidate a slew of x86 systems and lay a foundation for scalable growth. It reports cutting data center costs by 70%. Nationwide Insurance, no newcomer to mainframe computing, used the zEnterprise to consolidate Linux servers, achieving $46 million in savings.

The point: the latest IBM TCO analyses confirm what IBM and the few IT analysts who talk to z customers have been saying for some time. The TCO advantage, IBM found, switches to the z Enterprise Linux Server at around 200 virtual machines compared to the public cloud, and at a somewhat higher VM count compared to x86 machines.

IBM further advanced its cause in the TCO/TCA battle with the recent introduction of the IBM Enterprise Cloud System. This is a factory-built and integrated system—processor, memory, network, IFLs, virtualization management, cloud management, hypervisor, disk orchestration, Linux OS—priced (discounted) as a single solution. IBM promises to deliver it in 45 days and have it production ready within hours of hitting the organization’s loading dock. Of course, it comes with the scalability, availability, security, manageability, etc. long associated with the z, and IBM reports it can scale to 6000 VMs. Not sure how this compares in price to a Solution Edition Enterprise Linux Server.

The IBM TCO analysis compared the public cloud, an x86 cloud, and the Enterprise Cloud System in terms of power and space, labor, software/middleware, and hardware costs when running 48 diverse (a range of low, medium, and high I/O) workloads. In general it found an advantage for the z Enterprise Cloud System of 34-73%. The z cost considerably more in terms of hardware but more than made up for it in terms of software, labor, and power. Overall, the TCO analysis examined more than 30 cost variables, ranging from blade/IFL/memory/storage amounts to hypervisor/cloud management/middleware maintenance. View the IBM z TCO presentation here.

In terms of hardware, the z included the Enterprise Linux Server, storage, z/VM, and IBM Wave for z/VM. Software included WebSphere Application Server middleware, Cloud Management Suite for z, and Tivoli for z/VM. The x86 cloud included HP hardware with a hypervisor, WebSphere Application Server, SmartCloud Orchestrator, SmartCloud Monitoring, and Tivoli Storage Manager EE. Both analyses included labor to manage both hardware and VMs, power and space costs, and SUSE Linux.

The public cloud assumptions were a little different. Each workload was deployed as a separate instance. The pricing model was for AWS reserved instances. Hardware costs were based on instances in the US East region with SUSE, EBS volumes, data in/out, enterprise support, and free and reserved tier discounts applied. Software costs included WebSphere Application Server ND (middleware) costs for the instances. A labor cost was included for managing instances.

When IBM applied its analysis to 398 I/O-diverse workloads the results were similar: 49-75% lower cost with the Cloud System on z. Again, z hardware was considerably more costly than either x86 or the public cloud, but z software and labor cost far less than the others. In terms of 3-year TCO, the public cloud was the highest at $37M, x86 came in at $18.3M, and the Cloud System on z cost $9.4M. With 48 workloads, the z again came in with the lowest TCO, at $1M compared to $1.6M for x86 systems and $3.9M for the public cloud.
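Those 3-year figures line up with the quoted 49-75% range; the quick check below just computes the savings of the z option against each alternative for the 398-workload case.

```python
# Checking the quoted savings range against the 3-year TCO figures (398 workloads, $M).
tco = {"public cloud": 37.0, "x86 cloud": 18.3, "cloud on z": 9.4}
for platform in ("public cloud", "x86 cloud"):
    saving = 1 - tco["cloud on z"] / tco[platform]
    print(f"z vs {platform}: {saving:.0%} lower")   # about 75% and 49% respectively
```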

IBM kept the assumptions equivalent across the platforms. If you make different software and middleware choices or a different mix of high-mid-low I/O workloads your results will be different but the overall comparative rankings probably won’t change all that much.

Still time to register for IBM Edge2014 in Las Vegas, May 19-23. This blogger will be there hanging around the bloggers lounge when not attending sessions. Please join me there.

Follow Alan Radding/DancingDinosaur on Twitter: @mainframeblog

Happy 50th System z

April 11, 2014

IBM threw a delightful anniversary party for the mainframe in NYC last Tuesday, April 8. You can watch video from the event here.

About 500 people showed up to meet the next generation of mainframers, the top winners of the global Master the Mainframe competition. First place went to Yong-Sian Shih, Taiwan; followed by Rijnard van Tonder, South Africa; and Philipp Egli, United Kingdom. Wouldn’t be surprised if these and the other finalists at the event didn’t have job offers before they walked out of the room.

The System z may be built on 50-year old technology but IBM is rapidly driving the mainframe forward into the future. It had a slew of new announcements ready to go at the anniversary event itself and more will be rolling out in the coming months. Check out all the doings around the Mainframe50 anniversary here.

IBM started the new announcements almost immediately with Hadoop on the System z. Called zDoop, the industry’s first commercial Hadoop for Linux on System z puts MapReduce big data analytics directly on the z. IBM also announced flash for the mainframe, consisting of the latest generation of flash storage on the IBM DS8870, which promises to speed time to insight with up to 30X the performance of HDD. Put the two together and the System z should become a potent big data analytics workhorse.

But there was even more. Mobile is hot, and the mainframe is ready to play in the mobile arena too. Here the problem z shops experience is cost containment: mainframe shops are seeing a concurrent rise in costs related to integrating new mobile applications. The problem revolves around the fact that many mobile activities use mainframe resources but don’t generate immediate income.

The IBM System z Solution for Mobile Computing addresses this with new pricing for mobile workloads on z/OS by reducing the cost of the growth of mobile transaction volumes that can cause a spike in software charges. This new pricing will provide up to a 60% reduction on the processor capacity reported for Mobile activity, which can help normalize the rate of transaction growth that generates software charges. The upshot: much mobile traffic volume won’t increase your software overhead.
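A simple illustration of how that pricing works in practice, using made-up numbers: if mobile transactions contribute a chunk of the MSUs behind the reported peak, up to 60% of that mobile contribution is removed before software charges are calculated.

```python
# Illustrative arithmetic for the mobile workload pricing; the workload split is made up.
general_msu = 500        # non-mobile contribution to the reported peak
mobile_msu = 200         # MSUs attributable to mobile transactions
mobile_discount = 0.60   # up to 60% of mobile capacity is excluded from the report

reported_msu = general_msu + mobile_msu * (1 - mobile_discount)
print(reported_msu)      # 580 MSUs reported instead of 700
```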

And IBM kept rolling out the new announcements:

  • Continuous Integration for System z – Compresses the application delivery cycle from months to weeks or days. Beyond this, IBM suggested upcoming initiatives to deliver full DevOps capabilities for the z.
  • New version of IBM CICS Transaction Server – Delivers enhanced mobile and cloud support for CICS, able to handle more than 1 billion transactions per day.
  • IBM WebSphere Liberty z/OS Connect – Rapid and secure enablement of web, cloud, and mobile access to z/OS assets.
  • IBM Security zSecure SSE – Helps prevent malicious computer attacks with enhanced security intelligence and compliance reporting that delivers security events to QRadar SIEM for integrated enterprise-wide security intelligence dashboarding.

Jeff Frey, an IBM Fellow and the former CTO of System z, observed that “this architecture was invented 50 years ago, but it is not an old platform.” It has evolved over those decades and continues to evolve. For example, Frey expects the z to accommodate 22nm chips and a significant increase in the number of cores per chip. He also expects vector technology, double-precision floating point and integer capabilities, and FPGAs to be built in. In addition, he expects the z to include next-generation virtualization technology for the cloud to support software defined environments.

“This is a modern platform,” Frey emphasized. Other IBMers hinted at even more to come, including ongoing research to move beyond silicon to maintain the steady price/performance gains the computing industry has enjoyed the past number of decades.

Finally, IBM used the anniversary event to introduce a number of what IBM calls first-in-the-enterprise z customers. (DancingDinosaur thinks of them as mainframe virgins.) One is Steel ORCA, a managed service provider putting together what it calls the first full-service digital utility center. Based in Princeton, NJ, the facility’s Phase 1 will offer connections of less than a millisecond to/from New York and Philadelphia. The base design is 300 watts per square foot and can handle ultra-high density configurations. Behind the operation is a zEC12. Originally the company planned to use an x86 system, but the costs were too high. “We could cut those costs in half with the z,” said Dave Crocker, Steel ORCA chairman.

Although the Mainframe50 anniversary event has passed, there will be Mainframe50 events and announcements throughout the rest of the year.  Again, you can follow the action here.

Coming up next for DancingDinosaur is Edge2014, a big infrastructure innovation conference. Next week DancingDinosaur will look at a few more of the most interesting sessions, and there are plenty. There still is time to register. Please come—you’ll find DancingDinosaur in the bloggers lounge, at program sessions, and at the Sheryl Crow concert.

Follow DancingDinosaur on Twitter, @mainframeblog

 

One week to Mainframe50—Be There Virtually

April 1, 2014

Back in February, DancingDinosaur started writing about the upcoming Mainframe50 celebration. Now we’re just one week away from what will be a nearly year-long celebration, introductions of new mainframe advances, and more. It all starts on Tues., April 8 in New York City.

You can join the event and news briefing through Livestream. Just click here and join in virtually from wherever you are.

Or you can register to attend the event by clicking here. DancingDinosaur will be there and plans to file a report later that day on this blog and also be tweeting throughout all the Mainframe50 events. Follow it all on Twitter, @mainframeblog.

Later this week, DancingDinosaur will be posting the latest in a series of reports from Edge 2014, being held in Las Vegas, May 19-23. There is still time to register and get a discount. You can find DancingDinosaur there in the Bloggers Lounge after sessions, keynotes, and the Sheryl Crow concert.

And please follow DancingDinosaur on Twitter, @mainframeblog

The Future of IBM Lies in the Cloud

March 13, 2014

In her annual letter to stockholders, IBM CEO Virginia Rometty made it clear that the world is being forever altered by the explosion of digital data and by the advent of the cloud. So, she intends IBM to “remake the enterprise IT infrastructure for the era of cloud.” This is where she is leading IBM.

DancingDinosaur thinks she has it right. But where does that leave this blog, which was built on the System z, Power Systems, and IBM’s enterprise systems? Hmm.

Rometty has an answer for that buried far down in her letter: “We are accelerating the move of our Systems product portfolio—in particular, Power and storage—to growth opportunities and to Linux, following the lead of our successful mainframe business.”

The rapidly emerging imperatives of big data, cloud computing, and mobile/social require enterprise-scale computing in terms of processing power, capacity, availability, security, and all the other ities that have long been the hallmark of the mainframe and IBM’s other enterprise class systems. She goes so far as to emphasize that point:  “Let me be clear—we are not exiting hardware. IBM will remain a leader in high-performance and high-end systems, storage and cognitive computing, and we will continue to invest in R&D for advanced semiconductor technology.”

You can bet that theme will be continued at the upcoming Edge 2014 conference May 19-23 in Las Vegas. The conference will include an Executive program, a Technical program with 550 expert technical sessions across 14 tracks, and a partner program. It’s being billed as an infrastructure innovation event and promises a big storage component too. Expect to see a lot of FlashSystems and XIV, which has a new pay-as-you-go pricing program that will make it easy to get into XIV and scale it fast as you need it. You’ll probably also encounter some other new go-to-market strategies for storage.

As far as getting to the cloud, IBM has been dropping billions to build out about as complete a cloud stack as you can get. SoftLayer, the key piece, was just the start. BlueMix, an implementation of IBM’s Open Cloud Architecture, leverages Cloud Foundry to enable developers to rapidly build, deploy, and manage their cloud applications while tapping a growing ecosystem of available services and runtime frameworks, many of which are open source. IBM will provide services and runtimes into the ecosystem based on its already extensive and rapidly expanding software portfolio. BlueMix is the IBM PaaS offering that complements SoftLayer, its IaaS offering. Cloudant, the most recent acquisition, brings database as a service (DBaaS) to the stack. And don’t forget IBM Wave for z/VM, which virtualizes and manages Linux VMs, a critical cloud operation for sure. With this conglomeration of capabilities IBM is poised to offer something cloud-like to just about any organization. Plus, tying WebSphere and its other middleware products to SoftLayer bolsters the cloud stack that much more.

And don’t think IBM is going to stop here. DancingDinosaur expects to see more acquisitions, particularly when it comes to hybrid clouds and what IBM calls systems of engagement. Hybrid clouds, for IBM, link systems of engagement—built on mobile and social technologies where consumers are engaging with organizations—with systems of record, the main workloads of the System z and Power Systems, where data and transactions are processed.

DancingDinosaur intends to be at Edge 2014 where it expects to see IBM detailing a lot of its new infrastructure and demonstrating how to use it. You can register for Edge 2014 here until April 20 and grab a discount.

Follow DancingDinosaur on Twitter: @mainframeblog

The Next Generation of Mainframers

March 6, 2014

With seemingly every young person with any technology inclination aiming to become the next WhatsApp and walk away with some of Facebook’s millions, it is fair to wonder: Where is the next generation of mainframers going to come from, and who are they going to be?

The answer: IBM is lining them up now. As the mainframe turns 50 you’ll have a chance to meet some of these up-and-coming mainframers as part of IBM’s 50th Mainframe Anniversary celebration in New York on April 8, when IBM announces the winners of the World Championship round of its popular Master the Mainframe competition.

According to IBM, the Championship is designed to assemble the best university students from around the globe who have demonstrated superior technical skills through participation in their regional IBM Master the Mainframe Contests. Out of the 20,000 students who have engaged in country-level Master the Mainframe Contests over the last three years, the top 44 students from 22 countries have been invited to participate in the inaugural IBM Master the Mainframe World Championship.

These students will spend the month of March working through the Systems of Engagement concept, an expansion of the traditional Systems of Record—core transaction systems—that have been the primary workload of mainframe computing. The students will deploy Systems of Record mainframe business applications written with Java and COBOL using DB2 for z/OS APIs to demonstrate how the Systems of Engagement concept takes full advantage of the mainframe’s advanced capabilities. In short, the mainframe is designed to support tomorrow’s most demanding complex workloads—big data, cloud, and mobile computing—and to do them all with the most effective enterprise-class security. The students will showcase their applications on April 7, 2014 in New York City, where judges will determine which student earns the distinction of “Master the Mainframe World Champion.”

Representing the United States are Mugdha Kadam from the University of Florida, Elton Cheng from the University of California San Diego, and Rudolfs Dambis from the University of Nevada Las Vegas. You can follow the progress of the competitors here.  After March 17 the site will include a leaderboard so you can follow your favorites. No rumors of betting pools being formed yet but it wouldn’t surprise DancingDinosaur.  Win or not, each competitor should be a prime candidate if your organization needs mainframe talent.

This is part of IBM’s longstanding System z Academic Initiative, which has been expanding worldwide and now encompasses over 64,000 students at more than 1000 schools across 67 countries.  And now high school students are participating in the Master the Mainframe competition. Over 360 companies are actively recruiting from these students, including Baldor, Dillards, JB Hunt, Wal-mart, Cigna, Compuware, EMC, Fidelity, JP Morgan Chase, and more.

Said Jeff Gill of VISA: “Discovering IBM’s Academic Initiative has been a critical success factor in building a lifeline to our future—a new base of Systems Engineers and Applications Developers who will continue to evolve our mainframe applications into flexible open enterprise solutions while maintaining high volume / high availability demands. Without the IBM Academic Initiative, perhaps we could have found students with aptitude—but participation in the Academic Initiative demonstrates a student’s interest in mainframe technology which, to us, translates to a wise long-term investment.” Gill is one of the judges of the Master the Mainframe World Championship.

Added Martin Kennedy of Citigroup: “IBM’s Master the Mainframe Contest offers a great resource to secure candidates and helps the company get critical skills as quickly as possible.”

The Master the Mainframe World Championship and even the entire 50th Anniversary celebration that will continue all year are not really IBM’s primary mainframe thrust this year. IBM’s real focus is on emphasizing the forward-moving direction of the mainframe. As IBM puts it: “By continually adapting to trends and evolving IT, we’re driving new approaches to cloud, analytics, security and mobile computing to help tackle challenges never before thought possible. The pioneering innovations of the mainframe all serve one mission—deliver game-changing technology that makes the extraordinary possible and improves the way the world works.”

DancingDinosaur covers the mainframe and other enterprise-class technology. Watch this blog for more news on the mainframe and other enterprise systems including Power, enterprise storage, and enterprise-scale cloud computing.

With that noted, please plan to attend Edge 2014, May 19-23 in Las Vegas. Being billed as an infrastructure and storage technology conference, it promises to be an excellent follow-on to last year’s Edge conference.  DancingDinosaur will be there, no doubt hanging out in the blogger’s lounge where everyone is welcome. Watch this blog for upcoming details on the most interesting sessions.

And follow DancingDinosaur on Twitter, @mainframeblog

