Posts Tagged ‘Oracle’

System z Takes BackOffice Role in IBM-Apple Deal

July 21, 2014

DancingDinosaur didn’t have to cut short his vacation and race back last week to cover the IBM-Apple agreement. Yes, it’s a big deal, but as far as System z shops go it won’t have much impact on their data center operations until late this year or 2015 when new mobile enterprise applications apparently will begin to roll out.

The deal, announced last Tuesday, promises “a new class of made-for-business apps targeting specific industry issues or opportunities in retail, healthcare, banking, travel and transportation, telecommunications, and insurance among others,” according to IBM. The mainframe’s role will continue to be what it has been for decades, the backoffice processing workhorse. IBM is not porting iOS to the z or Power or i or any enterprise platform.

Rather, the z will handle transaction processing, security, and data management as it always has. With this deal, however, analytics appears to be assuming a larger role. IBM’s big data and analytics capabilities are among the jewels it is bringing to the party to be fused with Apple’s legendary consumer experience. IBM expects this combination—big data analytics and consumer experience—to produce apps that can transform specific aspects of how businesses and employees work using iPhone and iPad devices and ultimately, as IBM puts it, enable companies to achieve new levels of efficiency, effectiveness, and customer satisfaction—faster and easier than ever before.

In case you missed the point, this deal, or alliance as IBM seems to prefer, is about software and services. If any hardware gets sold as a result, it will be iPhones and iPads. Of course, IBM’s MobileFirst constellation of products and services stand to gain. Mainframe shops have been reporting a steady uptick in transactions originating from mobile devices for several years. This deal won’t slow that trend and might even accelerate it. The IBM-Apple alliance also should streamline and simplify working with and managing Apple’s mobile devices on an enterprise-wide basis.

According to IBM, its MobileFirst Platform for iOS will deliver the services required for an end-to-end enterprise capability, from analytics, workflow, and cloud storage to enterprise-scale device management, security, and integration. Enhanced mobile management includes a private app catalog, data and transaction security services, and a productivity suite for all IBM MobileFirst for iOS offerings. In addition to on-premises software solutions, all these services will be available on Bluemix—IBM’s development platform available through the IBM Cloud Marketplace.

One hope from this deal is that IBM will learn from Apple how to design user-friendly software and apply those lessons to the software it subsequently develops for the z and Power Systems. It would be interesting to see what Apple software designers might do to simplify using CICS.

Given the increasing acceptance of BYOD when it comes to mobile, data centers will still have to cope with the proliferation of operating systems and devices in the mobile sphere. Nobody is predicting that Android, Amazon, Google, or Microsoft will be exiting the mobile arena as a result, at least not anytime soon.

Finally, a lot of commentators weighed in on who wins or loses in the mobile market. Among IBM’s primary enterprise IT competitors, Oracle offers the Oracle Mobile Platform, which includes mobile versions of Siebel CRM, JD Edwards, PeopleSoft, and a few more. HP offers mobile app development and testing and a set of mobile application services that include planning, architecture, design, build, integration, and testing.

But if you are thinking in terms of enterprise platform winners and losers, IBM is the clear winner; the relationship with Apple is an IBM exclusive partnership. No matter how good HP, Oracle, or any of IBM’s other enterprise rivals might be at mobile computing, without the tight Apple connection they are at a distinct disadvantage. And that’s before you even consider Bluemix, SoftLayer, MobileFirst, and IBM’s other mobile assets.

BTW, it’s not too early to start planning for IBM Enterprise 2014. Mark your calendar, Oct 6-10 at the Venetian in Las Vegas. This event should be heavily z and Power.

DancingDinosaur is Alan Radding. Follow him on Twitter @mainframeblog or at Technologywriter.com.

Oracle Partners to Match IBM in the Cloud

June 28, 2013

At IBM Edge 2013 earlier in June the company effectively declared the cloud the face of the computing future, citing a fast compound annual growth rate that would result in 25% of companies using public or private clouds by 2015. Everywhere it seems, noted Clod Barrera, IBM Chief Technical Strategist, in a presentation at the June conference, companies are “re-plumbing IT, making it more cloud-like.”

That means offering user self-service, click-through service, service catalogs, chargeback, and more. These companies, Barrera continued, start with what they have and want to layer the cloud user experience and cloud speed and flexibility on top of it. They’re not intending to throw anything away either, not the System z or Power or anything else.

With that in mind, IBM has been rolling out a stream of SmartCloud offerings for every flavor of organization and making acquisitions to facilitate its cloud strategy, the most recent being SoftLayer. You also read about an even more recent effort targeting the C-Suite last week here in DancingDinosaur.

The SoftLayer acquisition  promises to make it easier and faster for organizations around the world to incorporate cloud computing by marrying the speed and simplicity of SoftLayer’s public cloud services with the enterprise grade reliability, security and openness of the IBM SmartCloud portfolio.

SoftLayer accelerates IBM’s ability to integrate public and private clouds for its clients with flexibility that provides deployment options to enable a faster, broader transformation for small, medium and large businesses. All the while it addresses a range of performance and security models.  The addition of SoftLayer potentially gives IBM a big advantage compared to its enterprise rivals, particularly Oracle and HP.

Suddenly racing to bolster its cloud presence, Oracle has gone on a partnering binge with Salesforce.com, Microsoft, and NetSuite just in the past week. While the details differ in each of the deals they all boil down to Oracle agreeing to play nice in the cloud with former competitors.  The hope is that customers will be able to make the various products they use from each vendor work together in the cloud, never a sure thing.

While Oracle appears to be just getting off Square One in the cloud, IBM already is delivering an increasingly capable set of cloud services that enable organizations to use the cloud to rethink IT and to reinvent the business. Rethinking IT means rapidly delivering IT services and integrating those services across cloud environments for the purpose of increasing efficiency. Reinventing the business means faster time to market for new services, a new focus on differentiated processes, and meeting changing customer expectations through real time access to technology in the cloud. And to make sure it also plays nice, IBM is emphasizing support for a variety of open cloud standards initiatives.

The formula for getting started with cloud is pretty straightforward, and IBM has been reciting it like a mantra for a year or more. Barrera again laid it out at his IBM Edge 2013 session: start with consolidation to bring things together and create as much system homogenization as is reasonable. You can help this by adding a virtualization layer like SVC, which creates the appearance of homogeneity so you can at least manage it as one.

What you end up with, at least, is homogeneous systems behavior, which enables you to more easily automate systems processes. Later on you can add capabilities like automation and the orchestration of entire workflows. Later still you can add capabilities to deal with specific requirements for performance, service levels, multiple tiers, and differentiated services like chargeback. IBM has already locked this down in a set of three cloud offerings dubbed Consolidate and Virtualize, Automate and Manage, and Optimize/Cloud Ready.
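
Chargeback is one of those cloud-like capabilities that sounds abstract until you see the arithmetic. Below is a minimal Python sketch of a metered chargeback calculation; the rates, tenant names, and usage figures are purely hypothetical illustrations, not anything IBM publishes.

    # Minimal chargeback sketch: bill each tenant for metered core-hours and storage.
    # All rates and usage figures are hypothetical, for illustration only.
    RATES = {
        "core_hour": 0.12,   # assumed $ per virtual core-hour
        "gb_month": 0.05,    # assumed $ per GB-month of allocated storage
    }

    usage = [
        {"tenant": "claims-app", "core_hours": 14_400, "gb_months": 2_000},
        {"tenant": "dev-test", "core_hours": 3_600, "gb_months": 500},
    ]

    for u in usage:
        charge = u["core_hours"] * RATES["core_hour"] + u["gb_months"] * RATES["gb_month"]
        print(f'{u["tenant"]}: ${charge:,.2f} for the billing period')

The point is not the numbers but that once usage is metered per workload, differentiated service tiers and chargeback fall out of the same data.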

An upcoming DancingDinosaur post will delve into Barrera’s cloud storage strategy, which begins with storage for workload optimized systems and moves through his taxonomy of storage in the cloud. It also will cover his six classes of cloud storage (as seen from the VM). As more systems are virtualized for private and public clouds and as cloud storage in general becomes ever more critical, few enterprise players are really talking about this the way Barrera does.

zEnterprise vs. Intel Server Farms

May 17, 2013

How many Intel x86 servers do you need to match the performance of a zEnterprise and at what cost for a given workload? That is the central question every IT manager has to answer.

It is a question that deserves some thought and analysis. Yet often IT managers jump to their decision based on a series of gut assumptions that on close analysis are wrong. And the resulting decision more often than not is for the Intel server, although an honest assessment of the data in many instances should point the other way. DancingDinosaur periodically looks at comparative assessments done by IBM. You can find a previous one, lessons from Eagle studies, here.

The first assumption is that the Intel server is cheaper. But is it? IBM benchmarked a database workload on SQL Server running on Intel x86 and compared it to DB2 on z/OS. To support 23,000 users, the Intel system required 128 database cores on four HP servers. The hardware cost $0.34 million and the software cost $1.64 million for a 3-year TCA of $1.98 million. The DB2 system required just 5 cores at a hardware/software combined 3-year TCA of $1.4 million.

What should have killed the Intel deal was the software cost, which has to be licensed based on the number of cores. Sure, the commodity hardware was cheap, but the cost of the database licensing drove up the Intel cost. Do IT managers wonder why they need so many Intel cores to support the same number of users they can support with far fewer z cores? Obviously many don’t.
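
The arithmetic behind that comparison is worth spelling out. Here is a rough Python sketch using only the figures cited above; the per-core software rate is derived from those totals, not a quoted list price.

    # Rough 3-year TCA sketch using the figures cited above.
    # The per-core software rate is derived from the totals, not a quoted list price.
    def three_year_tca(hw_cost, cores, sw_cost_per_core):
        """Hardware plus core-based software licensing over the 3-year term."""
        return hw_cost + cores * sw_cost_per_core

    # Intel/SQL Server case: $0.34M hardware, 128 cores, $1.64M software,
    # which works out to roughly $12.8K of software per core over the term.
    intel_tca = three_year_tca(hw_cost=340_000, cores=128, sw_cost_per_core=1_640_000 / 128)
    print(f"Intel 3-year TCA: ${intel_tca / 1e6:.2f}M")   # ~$1.98M, as cited

    # The DB2 on z/OS configuration needed only 5 cores, which is what pulls its
    # combined 3-year TCA down to the cited $1.4M despite costlier hardware.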

Another area many IT managers overlook is I/O performance and its associated costs. This becomes particularly important as an organization deploys virtual machines.  Increasing the I/O demand on an Intel system uses more of the x86 core for I/O processing, effectively reducing the number of virtual machines that can be deployed per server and raising hardware costs.

The zEnterprise handles I/O differently. It provides 4-16 dedicated system assist processors for the offloading of I/O requests and an I/O subsystem bus speed of 8 GBps.
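
A back-of-the-envelope way to see the I/O effect: if a slice of every x86 core goes to servicing I/O, the capacity left for guest virtual machines shrinks accordingly. The overhead percentages in this Python sketch are illustrative assumptions, not measured figures.

    # Illustrative only: how I/O processing overhead eats into VM density on x86.
    # The overhead fractions are assumptions, not benchmark results.
    def usable_vms(cores, vms_per_core, io_overhead):
        """VMs a server can host once a fraction of each core goes to I/O handling."""
        effective_cores = cores * (1 - io_overhead)
        return int(effective_cores * vms_per_core)

    cores, vms_per_core = 40, 2.0
    for io_overhead in (0.05, 0.20, 0.35):
        print(f"I/O overhead {io_overhead:.0%}: ~{usable_vms(cores, vms_per_core, io_overhead)} VMs")

    # Offloading I/O to dedicated system assist processors, as the z does,
    # keeps the general purpose cores free for guest workloads.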

The z also does well with z/VM for Linux guest workloads. In this case IBM tested three OLTP database production workloads (4 server nodes per cluster, each supporting 6,000 transactions/sec) running Oracle Enterprise Edition and Oracle Real Application Clusters (RAC) on 12 HP DL580 servers (192 cores). This was compared to three Oracle RAC clusters of 4 nodes per cluster, with each node a Linux guest under z/VM. The zEC12 had 27 IFLs. Here the Oracle HP system cost $13.2 million, about twice as much as the zEC12 at $5.7 million. Again, the biggest cost savings came from needing fewer Oracle licenses due to fewer cores.

The z also beats Intel servers when running mixed high- and low-priority workloads on the same box. In one example, IBM compared high priority online banking transaction workloads with low priority discretionary workloads. The workloads running across 3 Intel servers with 40 cores each (120 cores total) cost $13.7 million compared to z/VM on a zEC12 running 32 IFLs, which cost $5.77 million (58% less).

Another comparison demonstrates that core proliferation between Intel and the z is the killer. One large workload test required sixteen 32-way HP Superdome App Production/Dev/Test servers and eight 48-way HP Superdome DB Production/Dev/Test servers for a total of 896 cores. The 5-year TCA came to $180 million. The comparable workload running on a zEC12 41-way production/dev/test system used 41 general purpose processors (38,270 MIPS) with a 5-year TCA of $111 million.

When you look at the things a z can do to keep concurrent operations running that Intel cannot you’d hope non-mainframe IT managers might start to worry. For example, the z handles core sparing transparently; Intel must bring the server down.  The z handles microcode updates while running; Intel can update OS-level drivers but not firmware drivers. Similarly, the z handles memory and bus adapter replacements while running; Intel servers must be brought down to replace either.

Not sure what it will take for the current generation of IT managers to look beyond Intel. Maybe a new business class version of the zEC12 at a stunningly low price. You tell me.

BTW, are you planning to attend IBM Edge 2013 in Las Vegas, Jun 10-14? There will be much there to keep enterprise data center managers occupied. Overall, IBM Edge 2013 will offer over 140 storage sessions, over 50 PureSystems sessions, more than 50 client case studies, and sessions on big data and analytics along with a full cloud track. Look for me in the Social Media Lounge at the conference and in the sessions. You can follow me on Twitter for conference updates @Writer1225. I’ll be using hashtag #IBMEdge to post live Twitter comments from the conference.

Oracle’s Tough 3Q and New SPARC Chip

March 29, 2013

Almost like a good news/bad news joke, Oracle announced dismal financials last week along with the next rev of its SPARC processor. The company clearly is hoping that the new processor will revive its rapidly fading hardware business and pose some sort of challenge to IBM’s zEnterprise and Power Systems.

Hardware systems product revenue was $671 million. That sounds good for a quarter until you realize it was down 23% from the previous year. Ouch. Hardware systems support didn’t do much better, falling to $570 million even as Oracle’s hardware maintenance prices continued to climb, noted Timothy Sipples, who writes a blog called Mainframe. Hardware platforms go through refresh cycles, as DancingDinosaur readers know, but Oracle has been struggling at this with Sun for three years.

Note that these figures include what Oracle calls its engineered systems like Exadata and Exalogic. These types of systems combine Oracle’s Sun hardware with its software in an optimized product. Such systems were expected to provide the synergies necessary to justify the initial Sun acquisition. And maybe they will someday, but Oracle stockholders have to be getting impatient. Along with the engineered systems was Oracle’s SPARC SuperCluster.  During that time IBM has been delivering its own highly optimized systems, hybrid systems, a new generation of  HPC systems, and expert-integrated systems.

Oracle’s 3Q report didn’t even mention its storage business, which consists mainly of StorageTek tape products and Oracle’s Sun ZFS Storage Appliance family.  By comparison, IBM has been advancing its storage offerings with products like Storwize, XIV, Real-time Compression, SSD, and more.

About the only bright spot Oracle could point to was its cloud effort. In the 3Q report it declared: “The Oracle Cloud is the most robust and comprehensive cloud platform available with services at the infrastructure (IaaS), platform (PaaS) and application (SaaS) level. In Q3, our SaaS revenue alone grew well over 100% as lots of new customers adopted our Sales, Service, Marketing and Human Capital Management applications in the Cloud,” according to Oracle President, Mark Hurd. And even here IBM has been busily building out its SmartCloud as-a-service offerings and putting them into a slew of SmarterPlanet initiatives.

From the standpoint of DancingDinosaur readers, who tend to focus on the System z, zEnterprise, and Power Systems, the most interesting part of Oracle’s recent activity is the new SPARC processor, the T5. New T5 servers can have up to eight microprocessors while Oracle’s new M5 system can be configured with up to thirty-two microprocessors. The M5 runs the Oracle database 10 times faster than the M9000 it replaces, according to Oracle. For the record, the top end zEC12 includes 101 cores. The zEC12 chip runs at 5.5 GHz.

Elizabeth Stahl, IBM’s chief technical strategist and benchmark guru, wrote this on her blog about Oracle’s T5 claims: Many of the claims are Oracle’s own benchmarks that are not published and audited. For price claims, Oracle, as they’ve done in the past, only factors in the price of the pizza box – make sure you add in the all-important software and storage. Stahl goes on to directly address Oracle’s benchmark claims here.

DancingDinosaur has been waiting for a rebound of the SPARC platform in the hopes that it might revive the Solaris on z initiative led by David Boyes and others. They actually had it working and at least one serious bank was piloting it. Lack of support from Oracle/Sun and IBM killed it. Solaris on z could have attracted Sun customers to the zEnterprise, mainly those in banking and financial services where Solaris and Sun were strong.  In case you are interested, Oracle still offers Solaris, now Oracle Solaris 11, and touts it as the first cloud OS.

zEnterprise Workload Economics

February 21, 2013

IBM never claims that every workload is suitable for the zEnterprise. However, with the advent of hybrid computing, the low cost z114, and now the expected low cost version of the zEC12 later this year, you could make a case that any workload benefiting from the reliability, security, and efficiency of the z is fair game.

John Shedletsky, VP, IBM Competitive Project Office, did not try to make that case. To the contrary, earlier this week he presented the business case for five workloads that are optimum economically and technically on the zEnterprise. They are: transaction processing, critical data workloads, batch processing, co-located business analytics, and consolidation-on-one-platform. None of these should be a surprise; with the possible exception of analytics and platform consolidation, they represent traditional mainframe workloads. DancingDinosaur covered Shedletsky’s z cost/workload analysis last year here.

This comes at a time when IBM has started making a lot of noise about new and different workloads on the zEnterprise. Doug Balog, head of IBM System z mainframe group, for example, was quoted widely in the press earlier this month talking about bringing mobile computing workloads to the z. Says Balog in Midsize Insider: “I see there’s a trend in the market we haven’t directly connected to z yet, and that’s this mobile-social platform.”

Actually, this isn’t even all that new either. DancingDinosaur was writing about organizations using SOA to connect CICS apps running on the z to users with mobile devices a few years ago here.

What Shedletsky really demonstrated this week was the cost-efficiency of the zEC12. In one example he compared a single workload, app production/dev/test, running on sixteen 32-way HP Superdomes and eight 48-way Superdomes with a zEC12 41-way. The zEC12 delivered the best price/performance by far, $111 million (5yr TCA) for the zEC12 vs. $176 million (5yr TCA) for the Superdome configuration.

For Linux on z workloads, Shedletsky compared three Oracle database workloads (Oracle Enterprise Edition, Oracle RAC, 4 server nodes per cluster) supporting 18K transactions/sec running on 12 HP DL580 servers (192 cores), which priced out at $13.2 million (3yr TCA), with a zEC12 running 3 Oracle RAC clusters (4 nodes per cluster, each node a Linux guest) with 27 IFLs, which priced out at $5.7 million (3yr TCA). The zEC12 came in at less than half the cost.

With analytics such a hot topic these days Shedletsky also presented a comparison of the zEnterprise Analytics System 9700 (zEC12, DB2 v10, z/OS, 1 general processor, 1 zIIP) plus an IDAA against a current Teradata machine. The result: the Teradata system cost $330K per query per hour compared to $10K per query per hour for the zEC12. Workload time for the Teradata was 1,591 seconds for 9.05 queries per hour. That compared to 60.98 seconds and 236 queries per hour on the zEC12. The Teradata total cost was $2.9 million versus $2.3 million for the zEC12.
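
The cost-per-query-per-hour figures follow directly from the totals and throughput Shedletsky cited; a quick Python check of the arithmetic:

    # Recomputing the cited cost-per-query-per-hour figures from the totals above.
    teradata_cost, teradata_qph = 2_900_000, 9.05
    zec12_cost, zec12_qph = 2_300_000, 236

    print(f"Teradata: ${teradata_cost / teradata_qph:,.0f} per query per hour")   # close to the $330K cited above
    print(f"zEC12/IDAA: ${zec12_cost / zec12_qph:,.0f} per query per hour")       # roughly $10K, as cited
    print(f"Throughput advantage: {zec12_qph / teradata_qph:.0f}x more queries per hour on the zEC12")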

None of these are what you would consider new workloads, and Shedletsky has yet to apply his cost analysis to mobile or social business workloads. However, the results shouldn’t be much different. Mobile applications, particularly mobile banking and other mobile transaction-oriented applications, will play right into the zEC12 strengths, especially when they are accessing CICS on the back end.

While transaction processing, critical data workloads, batch processing, co-located business analytics, and consolidation-on-one-platform remain the sweet spot for the zEC12, Balog can continue to make his case for mobile and social business on the z. Maybe in the next set of Shedletsky comparative analyses we’ll see some of those workloads come up.

For social business the use cases aren’t quite clear yet. One use case that is emerging, however, is social business big data analytics. Now you can apply the zEC12 to the analytics processing part at least and the efficiencies should be similar.

IBM PureData Brings New Analytics Platform

October 18, 2012

IBM finally has started to expand its PureSystems family of systems with the introduction of the PureData System.  The system promises to let organizations more efficiently manage and quickly analyze petabytes of data and then intelligently apply those insights in addressing business issues across their organization.

This is not a surprise. From the start, IBM talked about a family of PureSystems beyond the initial PureFlex and PureApplications. When the PureSystems family was introduced last spring, DancingDinosaur expected IBM to quickly add new expert servers starting with something it guessed would be called PureAnalytics and maybe another called PureTransactions.  PureData isn’t that far off. The new systems are being optimized specifically for transactional operations and data analytics workloads.

Specifically, PureData System for Transactions has been integrated and optimized as a ready-to-run database platform designed and tuned for transactional data workloads. It supports both DB2 applications unchanged and Oracle database applications with only minimal changes. The machines come as three workload-specific models optimized for transactional, operational, or big data analytics workloads. They are:

  • PureData System for Transactions: Aimed at retail and credit card processing environments that depend on rapid handling of transactions and interactions. These transactions may be small, but the volume and frequency require fast and efficient processing. The new system provides hardware and software configurations integrated and optimized for flexibility, integrity, availability, and scalability for any transaction workload.
  • PureData System for Analytics: Enables organizations to quickly and easily analyze and explore big data, up to multi petabytes in volume. The new system simplifies and optimizes performance of data warehouse services and analytics applications. Powered by Netezza technology (in-memory analytics), the new system aims to accelerate analytics and boasts what IBM describes as the largest library of in-database analytic functions on the market today. Organizations can use it to predict and avoid customer churn in seconds, create targeted advertising and promotions using predictive and spatial analysis, and prevent fraud.
  • PureData System for Operational Analytics: Here organizations can receive actionable insights concurrently on more than 1,000 business operations to support real-time decision making. Operational warehouse systems are used for fraud detection during credit card processing, to deliver customer insights to call center operations (while the customer is still on the call or online), and to track and predict real-time changes in supply and demand.

All the systems include PureSystems pattern-based expertise and automation. From a configuration standpoint, the full rack system can be pretty rich: 386 x86 processor cores, 6.2 TB DRAM, 19.2 TB flash (SSD), 128 TB disk (HDD), advanced storage tiering, up to 10x compression, a high speed RDMA interconnect, and dual internal 10 GB network links. Systems, however, can range from 96 cores to 386 cores. IBM reports early customer results of 10-100x faster performance over traditional custom-built systems and 20x greater concurrency and throughput for tactical queries resulting, in part, from IBM’s patented MPP hardware acceleration.

IBM hasn’t disclosed pricing, which is highly subject to the particular configuration anyway. However, the company is quick to tout its introductory deals: Credit-qualified clients that elect IBM financing can see immediate benefits with PureData System by deferring their first payment until January 2013 or obtaining a zero percent (interest-free) loan for 12, 24 or 36 months.

PureData may be better thought of as a data appliance delivering data services fed by applications that generate the data and reside elsewhere. With its factory built-in expertise, patterns, and appliance nature, organizations can have, according to IBM, a PureData system up and running in hours, not days or weeks; run complex analytics in minutes, not hours; and handle more than 100 databases on a single system. PureData can be deployed in one step simply by specifying the cluster name, description, and applicable topology pattern. Built-in expertise handles the rest.

Now the game is to guess what the next PureSystems expert server will be. DancingDinosaur’s guess: a highly scalable implementation of VDI, maybe called PureDesktop.

Updated Software for IBM zEC12

October 11, 2012

Everyone gets excited by a new piece of hardware, but it is the software that enables the new machine to work its magic. This certainly is the case with the zEC12. On Oct. 3 IBM announced  upgrades to zEnterprise workhorse software like CICS, Omegamon, Cognos, and zSecure intended to better tap the capabilities of zEC12. Even IMS and Sterling are getting a refresh.

Also getting increased attention is Netezza, which has emerged as a key component of IBM’s data analytics approach. Netezza enables IBM to counter Oracle’s Exalytics, another in-memory data analytics appliance. In fact, IBM’s announcement of the newest PureSystems, the PureData System, earlier this week gives IBM another counter punch.

For the zEnterprise IBM adds a flexible storage capability that provides the performance of the IDAA while removing the cost of storage from the z. Netezza will work with whatever IBM storage the organization prefers. A new incremental update capability propagates data changes as they occur, making it possible to analyze activity almost immediately. This resolves the problem of data currency, in effect providing analytics as close to real time as most organizations will get or need.
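
The value of incremental propagation over a periodic full reload is easy to illustrate. Here is a minimal Python sketch with the change-capture mechanics reduced to an in-memory list; the real implementation works against database logs and is nothing this simple.

    # Minimal sketch of incremental propagation vs. periodic full reload.
    # Real change capture works against database logs; this only shows the idea.
    source = {1: "open", 2: "open", 3: "closed"}
    replica = dict(source)      # initial full load to the analytics side

    changes = []                # changes captured since the last propagation

    def update(key, value):
        """Apply a change on the source side and record it for propagation."""
        source[key] = value
        changes.append((key, value))

    def propagate():
        """Apply only the captured changes, keeping the replica nearly current."""
        while changes:
            key, value = changes.pop(0)
            replica[key] = value

    update(2, "closed")
    update(4, "open")
    propagate()                 # replica reflects the changes without a full reload
    print(replica == source)    # True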

CICS, which already had become a mainframe workhorse through SOA and web services, now adds rich cloud capabilities too. CICS v5.1 brings new web app capabilities built on the WAS Liberty Profile. New PaaS capabilities enable it to host SaaS apps based on CICS applications. It also employs a new lightweight Java web container that combines Java Servlets and JSPs with fast local access to CICS applications.  IBM reports the enhanced CICS v5.1 delivers a 25% performance gain.

Various online discussion groups are buzzing about the zEC12 software enhancements.  A sampling:

  • IBM provides DB2 10 performance enhancements for z/OS. As importantly for mixed-platform (hybrid) shops, DB2 10 LUW (Linux, UNIX, Windows) also will provide similar performance improvements.
  • There is added support for Oracle’s PL/SQL for DB2 10 for stored procedures and Oracle application interfaces for Java, Pro*C, Pro*COBOL, and Forms.
  • IBM also announced significant transactional performance improvements when running WebSphere on the zEC12.
  • IBM has started a Beta Testing Program for the new CICS Transaction Server 5.1 release that has a significant number of enhancements to support Web Applications and CICS application modernization, mainly through IBM’s Rational HATS.
  • IBM has also improved performance of the C/C++ V1.13 compiler, the Metal C feature of the IBM z/OS XL C/C++ compiler, and the PL/I V4.3 compiler for the zEC12.

Maybe less of a buzz generator but IBM Sterling gets a boost with the Sterling B2B Integrator V5.2.4 and Sterling File Gateway V2.2.4 for integration and file-based exchanges. IBM’s zSecure suite V1.13.1 brings new integration with QRadar, expanded integration points with DB2, enhanced RACF database cleanup capabilities, and support for the new enhanced CICS Transaction Server.

IBM also used the announcement to promote the relaunch of the zEnterprise Analytics System 9710 (previously called IBM Smart Analytics System 9710), a combined data warehouse and decision system for analytics. It joins high performance data warehouse management with System z availability and recoverability using the z114. When the IDAA is added, the result is a hybrid system of MPP and SMP technologies that combines mixed workload capabilities—both transaction and high speed analytical applications—on a single platform tuned for operational business analytics.

Independent Assessment, publisher of DancingDinosaur, has finally released its newest white paper, zEnterprise BladeCenter Extension (zBX): the Case for Adopting Hybrid Computing. It is the most updated look at the zBX yet, including details on the zEC12. Available for free. Click here.

EMC Introduces New Mainframe VTL

August 16, 2012

EMC introduced the high end DLm8000, the latest in its family of VTL products. This one is aimed at large enterprise mainframe environments and promises to ensure consistency of data at both production and recovery sites and provide the shortest possible RPO and RTO for critical recovery operations.

It is built around EMC VMAX enterprise storage and its SRDF replication and relies on synchronous replication to ensure immediate data consistency between the primary and target storage by writing the data simultaneously at each. Synchronous replication addresses the latency mismatch problem that occurs with the usual asynchronous replication, where a lag between writes to the primary and to the backup target storage can result in inconsistent data.
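
A minimal sketch of the difference: with synchronous replication the write is acknowledged only after both copies are updated, so the target never trails the primary; with asynchronous replication, whatever is still queued at failure time is lost. The Python below is schematic only, not a model of how SRDF itself is implemented.

    # Schematic comparison of synchronous vs. asynchronous replication.
    # Not SRDF internals; it just shows why async leaves a gap (RPO > 0).
    from collections import deque

    def write_sync(primary, target, record):
        """Acknowledge only after both sites have the record: no gap at failure."""
        primary.append(record)
        target.append(record)

    def write_async(primary, target_queue, record):
        """Acknowledge after the primary write; replication happens later."""
        primary.append(record)
        target_queue.append(record)     # drained later by a background process

    primary, queue = [], deque()
    for i in range(5):
        write_async(primary, queue, f"txn-{i}")
    # Simulated failure before the queue drains: the target is behind by len(queue) writes.
    print(f"Async: {len(queue)} transactions at risk at failure time")

    primary2, target2 = [], []
    for i in range(5):
        write_sync(primary2, target2, f"txn-{i}")
    print(f"Sync: {len(primary2) - len(target2)} transactions at risk")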

Usually this mismatch exists for a brief period. EMC suggests the issue, especially for large banks and financial firms—its key set of mainframe target customers—is much more serious. Large financial organizations with high transaction volume, EMC notes, have historically faced recovery challenges because their mainframe tape and DASD data at production and secondary sites were never fully in synch. As such, recovery procedures often stalled until the differences between the two data sets were resolved, which slowed the resulting failover. This indeed may be a real issue but for only a small number of companies, specifically those that need an RTO and RPO of just about zero.

EMC used the introduction of the DLm8000 to beat up tape backup in general. Physical tape transportation by third party records management companies, EMC notes, hinders recovery efforts by reducing what it refers to as the granularity of RPOs while dramatically increasing the RTO.  In addition, periodic lack of tape drive availability for batch processing and for archive and backup applications can impair SLAs, further increasing the risks and business impact associated with unplanned service interruptions. That has been long recognized, but, remember EMC is a company that sells disk, not tape storage, and ran a Tape Sucks campaign after its purchase of Data Domain. What would you expect them to say? 

The DLm8000 delivers throughput of up to 2.7 GB/s, which it claims is 2.5x the performance of its nearest competitor. DancingDinosaur can’t validate that claim, but EMC does have a novel approach to generating the throughput. The DLm8000 is packed with eight Bus-Tech engines (which came with its acquisition of Bus-Tech in Nov. 2010), and it assigns two FICON connections to each engine for a total of 16 FICON ports cranking up the throughput. No surprise it can aggregate that level of throughput.

EMC has not announced pricing for the DLm8000. The device, however, is the top of its VTL lineup and VMAX enterprise storage tops its storage line. With high throughput and synchronous replication, this product isn’t going to be cheap. However, if you need near zero RPO and RTO then you have only a few choices.

Foremost among those choices should be the IBM TS7700 family, particularly the 7740 and the 7720. Both of these systems provide VTL connectivity. The TS7700 avoids the latency mismatch issue by using a buffer to get optimal write performance and then periodically syncing primary and target data. “Synchronous as EMC does it for VTL is overkill,” says an IBM tape manager. The EMC approach essentially ignores the way mainframe tape has been optimized.

Among the other choices are the Oracle Virtual Storage Manager and Virtual Library Extension. Oracle uses StorageTek tape systems. The Oracle approach promises to improve tape drive operating efficiencies and lower TCO by optimizing tape drive and library resources through a disk-based virtual tape architecture. HDS also has a mainframe tape backup and VTL product that uses Luminex technology.

EMC is a disk storage company and its DLm8000 demonstrates that. When it comes to backup, however, mainframe shops are not completely averse to tape. Disk-oriented VTL has some advantages but don’t expect mainframe shops to completely abandon tape.

In breaking storage news, IBM today announced acquiring Texas Memory Systems (TMS), a long established (1978) Texas company that provides solid state memory to deliver significantly faster storage throughput and data access while consuming less power. TMS offers its memory as solid state disk (SSD) through its RamSan family of shared rackmount systems and Peripheral Component Interconnect Express (PCIe) cards. SSD may be expensive on a cost per gigabyte basis but it blows away spinning hard disk on a cost per IOPS. Expect to see IBM use TMS’s SSD across its storage products as one of its key future storage initiatives, as described by Jai Menon, CTO and VP, Technical Strategy for IBM Systems and Technology Group (STG), at last June’s Storage Edge 2012 conference. BottomlineIT, DancingDinosaur’s sister blog, covered it here back in June. BTW, Edge 2013 already is scheduled for June 10-14 in Las Vegas.
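
The $/GB versus $/IOPS point is easy to see with rough numbers. The prices and IOPS figures in this Python sketch are era-appropriate illustrative assumptions, not quoted TMS or IBM figures.

    # Illustrative $/GB vs. $/IOPS comparison for spinning disk and SSD.
    # All prices and IOPS figures are rough assumptions, not vendor quotes.
    drives = {
        #             price ($), capacity (GB), IOPS
        "15K HDD":    (400,      300,           180),
        "Flash SSD":  (2_000,    200,        50_000),
    }

    for name, (price, capacity_gb, iops) in drives.items():
        print(f"{name}: ${price / capacity_gb:,.2f}/GB, ${price / iops:,.4f}/IOPS")

    # SSD looks expensive per gigabyte but is orders of magnitude cheaper per IOPS,
    # which is the economics the TMS acquisition plays to.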

HP-UX and AIX: The Difference is POWER7

July 10, 2012

HP’s enterprise-class UNIX operating system, HP-UX, faces a stark future compared to IBM’s AIX. The difference comes down to the vitality of the underlying platforms. IBM runs AIX on the POWER platform, now at POWER7 and evolving to POWER8 and even POWER9 (although the naming may change)—a dynamic platform if ever there was one. Meanwhile, HP-UX has been effectively stranded on the withering Itanium platform. Oracle has stopped development for Itanium, and Intel, HP’s partner in Itanium, has been, at best, lackluster in its support.

It is not clear whether HP-UX is a better UNIX than AIX, but in an industry driven by ever increasing demands for speed, throughput, cost-efficiency, and energy efficiency, the underlying platform matters. HP-UX customers surely will outgrow their Itanium-based systems without a platform boost.

“There’s no question that [our] Business Critical Server business has been hurt by this,” said HP CEO Meg Whitman in the transcript of an interview with the Wall Street Journal’s All Things D column. The business, which had been growing 10% a year before Oracle spurned further support of Itanium, now is declining by 20-30% a year (Ouch!). So Whitman is counting on two things: 1) winning its lawsuit against Oracle, which is still making its way through the courts, and 2) porting HP-UX to an advanced x86 platform, namely Xeon. “Ultimately we’ve got to build UNIX on a Xeon chip, and so we will do that,” she told All Things D. All spring long there had been hints that this was imminent, but an official HP announcement never materialized.

Of course Oracle wants the HP customers running Oracle on HP-UX with Itanium to jump to its Sun platform.  IBM, however, has been wooing and winning those same customers to its System z or POWER platforms. Oracle runs on both the z and POWER platforms.  Running Oracle on Linux on System z yields substantial savings on Oracle licensing. But IBM wants to do even better by migrating the Oracle shops to DB2 as well, with incentives and tools to ease the transition.

What HP customers also get when they move to POWER or to the z is a platform in both cases with a real platform future, unlike either Itanium or Sun’s server platforms. DancingDinosaur has long extolled the zEnterprise and hybrid computing, but POWER is dynamic in its own right and when you look at the role it now plays in IBM’s new PureSystems, another IBM hybrid platform, POWER becomes all that more attractive.

From the start HP with HP-UX and Itanium was bound to have to settle for compromises given the different parties—HP, Intel, Oracle—involved. With POWER7, IBM system developers got exactly what they wanted, no compromises. “We gave the silicon designers a bunch of requirements and they gave us our wish list,” says Ian Robinson, IBM’s PowerVM virtualization and cloud product line manager. As a result POWER7, which runs AIX, Linux, and System i on the same box, got a slew of capabilities, including more memory bandwidth and better ways to divide cores.

POWER7, which amazed the IT world with its stunning Watson victory at Jeopardy, also is turning out to be an ideal virtualization and cloud machine. The rate of virtualization and cloud adoption by POWER7 shops is running something north of 90%, notes Robinson. The adoption of PowerVM, the POWER7 hypervisor built in at both the motherboard and firmware levels, is close to 100%. And now POWER7 is a key component of IBM’s PureFlex initiative, a major IBM strategic direction.

Meanwhile, Whitman is fighting a costly court battle in the hope of coercing grudging support for the Itanium platform from Oracle. The trial began in June and mud has been flying ever since. Even if HP wins the case, don’t expect the story to end soon. Using appeals and delay tactics Oracle could put off the final outcome so long that Itanium will have shriveled to nothing while POWER7 continues along IBM’s ambitious roadmap.

Server Wars: zEnterprise vs. Oracle and HP

March 2, 2012

In the server market, especially the enterprise server market, IBM’s only serious rivals are HP and Oracle, and the latest Gartner tally shows IBM with z and Power clearly pulling ahead. According to Gartner, IBM was the 2011 market leader in the worldwide server market based on revenue, ending the year with $4.7 billion in revenue in the last quarter of 2011 for a total share of 33.7%.

Among enterprise-class RISC servers the results were problematic, except for IBM. Noted Gartner: Overall, RISC and Itanium UNIX revenue decreased 3.9% in the fourth quarter of 2011, although this top-level figure does not tell the whole story. HP had weak results in this segment, but IBM is benefiting from the difficulties of other vendors and consolidating its lead. IBM grew RISC/Itanium UNIX revenue by 21.4% and ended the fourth quarter with a 48.4% share of revenue in this segment.

The bad decisions HP has made, especially announcements to kill WebOS and its tablet devices and its decision to get out of the PC business, have finally hit home. The company’s 1Q2012 financials were dismal. Revenue was down 7% while earnings per share dropped 32%.

Compared to the HP results, IBM had a good quarter, announcing fourth-quarter 2011 diluted earnings of $4.62 per share, compared with diluted earnings of $4.18 per share in the fourth quarter of 2010, an increase of 11%. Fourth-quarter net income was $5.5 billion compared with $5.3 billion in the fourth quarter of 2010, an increase of 4%. Operating (non-GAAP) net income was $5.6 billion compared with $5.4 billion in the fourth quarter of 2010, an increase of 5%.

All this despite a weak quarter for IBM’s hardware group, which reported revenues of $5.8 billion for the quarter, down 8% from the year before. The group’s pre-tax income was $790 million, a decrease of 33% due mainly to unexpectedly weak mainframe sales following a streak of record-setting mainframe quarterly gains. Gartner attributes this largely to “cyclical weakness in its System z product line.” If that’s the case, Gartner must be expecting a new rev of the zEnterprise in 18 months. Let’s hope.

Based on anecdotal evidence from discussions with data center managers, DancingDinosaur is expecting System z sales to pick up before that. Some managers have reported delaying upgrades of their existing z until the economy more clearly rebounds. Others have been sniffing around the zBX, curious to kick the tires of hybrid computing. All they need is a good business case and maybe an incentive. DancingDinosaur would like to see a discounted zBX offer with a little more punch than a couple of free blades, maybe bundled into a Solution Edition package.

In the meantime, IBM continues to compete with HP and Oracle/Sun for high end server sales. That puts the z196 against Oracle’s SPARC SuperCluster T4-4. Oracle describes it as the world’s fastest general purpose engineered system, delivering high performance, availability, scalability, and security across a wide range of enterprise applications, including database, middleware, and Oracle and custom applications. The SuperCluster T4-4, according to Oracle, provides a completely optimized package of servers, storage, and software that integrates with Oracle Exadata Storage Servers and Oracle Exalogic Elastic Cloud while utilizing the Oracle ZFS Storage Appliance, InfiniBand I/O fabric, and Oracle Solaris 11. Sounds nice, except IBM has been optimizing the System z for multiple workloads for years.

HP offers the Integrity Superdome 2. It is built around a modular, blade design, a fault-tolerant Crossbar fabric, and 64-socket scalability that handles 256 cores (more in a future release). It promises a low entry price (unspecified—so unable to compare with the $75k z114), but its processors run significantly slower than zEnterprise processors.

Actually, it looks like HP is betting its server future on a new line, the HP ProLiant Generation 8 (Gen8). These servers represent an effort to redefine data center economics by automating every aspect of the server life cycle, and they spawned a new systems architecture, the HP ProActive Insight architecture, which will span the entire HP Converged Infrastructure. The servers will include integrated lifecycle automation that HP estimates can save 30 days of admin time each year per admin; dynamic workload acceleration, which can boost performance 7x; and automated energy optimization, which HP promises will nearly double compute-per-watt capacity, thereby saving an estimated $7 million in energy costs in a typical data center over three years. This clearly is where HP expects to compete—against commodity x86-based machines.
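
HP’s $7 million figure is its own estimate, but the shape of the arithmetic is simple. The server count, power draw, and utility rate in this Python sketch are purely illustrative assumptions, not HP’s inputs.

    # Illustrative arithmetic only: energy savings from doubling compute-per-watt.
    # Server count, power draw, and utility rate are assumptions, not HP's inputs.
    servers = 10_000
    avg_draw_kw = 0.5              # assumed average draw per server, kW
    rate_per_kwh = 0.10            # assumed utility rate, $/kWh
    hours = 24 * 365 * 3           # three years

    baseline_cost = servers * avg_draw_kw * hours * rate_per_kwh
    savings = baseline_cost / 2    # same work at twice the compute-per-watt

    print(f"Baseline 3-year energy cost: ${baseline_cost / 1e6:.1f}M")
    print(f"Savings if compute-per-watt doubles: ${savings / 1e6:.1f}M")   # in the ballpark HP cites, under these assumptions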

DancingDinosaur sees this kind of competition as only good for enterprises that depend on IT.

