Posts Tagged ‘System p’

IBM Technical Computing Tackles Big Data

October 26, 2012

IBM Technical Computing, also referred to as high performance computing (HPC), has bolstered its Platform Symphony product for big data, mainly by adding enterprise-ready InfoSphere BigInsights Hadoop capabilities. The Platform Symphony product now includes Apache Hadoop, map/reduce and indexing capabilities, application accelerators, and development tools. IBM's recommended approach to simplifying and accelerating big data analytics entails the integration of Platform Symphony, the General Parallel File System (GPFS), Intelligent Cluster, and DCS3700 storage.
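
For readers new to the model, here is a minimal sketch of the map/reduce pattern that Hadoop implements and Symphony schedules: plain, single-process Python with illustrative function names, not Platform Symphony's or Hadoop's actual API. A real job distributes the map, shuffle, and reduce phases across a cluster; the point here is only the shape of the model.

```python
from collections import defaultdict

# Toy, single-process illustration of the map/reduce pattern.
# Function names are illustrative only -- this is not Hadoop's or
# Platform Symphony's API; real frameworks run these phases in parallel
# across a cluster and persist intermediate results.

def map_phase(records):
    """Emit (key, value) pairs -- here, (word, 1) for every word."""
    for record in records:
        for word in record.split():
            yield word.lower(), 1

def shuffle(pairs):
    """Group values by key, as the framework's shuffle/sort step would."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    """Aggregate each key's values -- here, a simple count per word."""
    return {key: sum(values) for key, values in grouped.items()}

if __name__ == "__main__":
    records = ["big data needs big clusters", "data beats guessing"]
    print(reduce_phase(shuffle(map_phase(records))))
    # {'big': 2, 'data': 2, 'needs': 1, 'clusters': 1, 'beats': 1, 'guessing': 1}
```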

This is not to say that IBM is leaving the traditional supercomputing and HPC market. Its Sequoia supercomputer recently topped the industry by delivering over 16 petaflops of performance. Earlier this year IBM also unveiled the new LRZ SuperMUC system, built with IBM System x iDataPlex direct-water-cooled dx360 M4 servers encompassing more than 150,000 cores to provide a peak performance of up to three petaflops. SuperMUC, run by the Leibniz Supercomputing Centre of Germany's Bavarian Academy of Sciences, will be used to explore the frontiers of medicine, astrophysics, quantum chromodynamics, and other scientific disciplines.

But IBM is intent on broadening the scope of HPC by pushing it into mainstream business. With technical computing no longer just about supercomputers, the company wants to extend it to diverse industries. IBM already has a large presence in the petroleum, life sciences, financial services, automotive, aerospace, defense, and electronics industries for compute-intensive workloads. Now it is looking for new areas where a business can exploit technical computing for competitive gain. Business analytics and big data are the first candidates that come to mind.

When it comes to big data, the Platform Symphony product already has posted some serious Hadoop benchmark results:

  • Terasort, a big data benchmark that tests the efficiency of MapReduce clusters in handling very large datasets—Platform Symphony used 10x fewer cores
  • SWIM, a benchmark developed at UC Berkeley that simulates real-world workload patterns on Hadoop clusters—Platform Symphony ran 6x faster
  • Sleep, a standard measure for comparing core scheduling efficiency of MapReduce workloads—Platform Symphony came out 60x faster.

Technical computing at IBM involves System x, Power, System i, and PureFlex—just about everything except z. And it probably could run on the z too through x or p blades in the zBX.

Earlier this month IBM announced a number of technical computing enhancements, including a high-performance, low-latency big data platform encompassing IBM's Intelligent Cluster, Platform Symphony, IBM GPFS, and System Storage DCS3700. Specific to Platform Symphony is a new low-latency Hadoop multi-cluster capability that scales to 100,000 cores per application, along with shared-memory logic for better big data application performance.

Traditionally, HPC customers coded their own software to handle the nearly mind-boggling complexity of the problems they were trying to solve. To expand technical computing to mainstream business, IBM has lined up a set of ISVs to provide packaged applications covering CAE, Life Science, EDA, and more. These include Rogue Wave, ScaleMP, Ansys, Altair, Accelrys, Cadence, Synopsys, and others.

IBM also introduced the new Flex System HPC Starter Configuration, a hybrid system that can handle both POWER7 and System x nodes. The starter configuration includes the Flex System Enterprise Chassis, an InfiniBand (IB) chassis switch, a POWER7 compute node, and an IB expansion card for Power or x86 nodes. Platform Computing software handles workload management and optimizes resources. IBM describes it as a high-density, price/performance offering but hasn't publicly provided any pricing. Still, it should speed time to HPC.

As technical computing goes mainstream it will increasingly focus on big data and Hadoop. Compute-intensive, scientific-oriented companies already do HPC. The newcomers want to use big data techniques to identify fraud, reduce customer churn, make sense of customer sentiment, and handle similar activities associated with big data. Today that calls for Hadoop, which has become the de facto standard for big data, although that may change going forward as a growing set of alternatives to Hadoop gains traction.

Workload-Driven zEnterprise Solution Edition Program

March 12, 2012

The IBM Solution Edition (SE) program for the zEnterprise is one of the few true bargains in mainframe computing. It delivers a zEnterprise mainframe as a bundle with software, middleware, and maintenance at a steep discount. It is, however, workload-specific.  It is a great deal only if you can live within the workload constraints of the SE agreement.

To qualify for the SE deal you must use it for a workload new to your mainframe environment. This deal is designed to expand the usage of the mainframe at a given shop.

IBM notes that the SE is tailored to meet specific business needs and designed to enable maximum value from the current IT infrastructure in the fastest possible time and at the lowest cost. The SE program addresses the following specific workloads: ACI, App Dev, Chordiant (CRM), Cloud Computing, Enterprise Linux, GDPS, Security, and WebSphere, as well as SAP.

Take the SE for SAP program. It makes it more affordable for companies to benefit from the strengths of System z for their SAP environment. If you already are running SAP in a distributed environment, you can bring it to the mainframe through SE for SAP, or to Linux on System z by taking advantage of the System z SE for Enterprise Linux, which is one of the most versatile SE offerings.

Hybrid computing environments also can be accommodated under the SE program. For example, when implementing SAP with z/OS and DB2 on the zEnterprise, companies may choose to run the SAP application server on the zEnterprise with Linux on z or on the zBX with POWER7 blades and AIX. This comes at a higher cost, as both the zBX and Unified Resource Manager are considered SE optional products.

The SE program offers a great deal if you can live within the constraints. But as one DancingDinosaur reader writes, “from experience we now know that the reality is somewhat different. The SE contract imposes procurement terms and usage restrictions that detract from the perceived discount relative to the business-as-usual (BAU) price to meet the same business requirement.”

For example, software price protection is only guaranteed for the SE term as long as there are no software version upgrades. Version upgrades will incur revised charges. Very few applications, it turns out, are truly fenced in to the extent that this issue can be ignored. Furthermore, since the SE is a bottom-line deal, there is no way to determine exactly what discount level would be applicable in the event of a version upgrade.

Another complaint: SE software is licensed for peak workload. It is highly unlikely, however, that in any BAU situation the monthly software charges would be for 100% of the capacity of the machine/LPAR from day one and for every month through the contract period. In a BAU situation, the highest four-hour rolling average for an online workload typically is less than 80% of the machine/LPAR capacity. It's wonderful that the z can perform reliably at near 100% utilization, but most shops don't run it that way.
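
To make the arithmetic concrete, here is a toy sketch in plain Python with invented utilization numbers (not IBM's actual sub-capacity pricing calculation) contrasting a charge pegged at 100% of machine/LPAR capacity with one based on the highest four-hour rolling average.

```python
# Hypothetical comparison: peak-capacity licensing vs. the highest
# four-hour rolling average, using made-up hourly MSU samples for one LPAR.
# This illustrates the arithmetic only; it is not IBM's pricing formula.

CAPACITY_MSU = 1000          # assumed full capacity of the machine/LPAR
hourly_msu = [420, 510, 630, 760, 790, 740, 680, 550,   # daytime online peak
              400, 320, 260, 220, 210, 230, 300, 380]   # evening and overnight

def peak_rolling_average(samples, window=4):
    """Highest average over any 'window' consecutive samples (hours here)."""
    return max(sum(samples[i:i + window]) / window
               for i in range(len(samples) - window + 1))

peak_4hr = peak_rolling_average(hourly_msu)
print(f"Licensed at full capacity : {CAPACITY_MSU} MSU")
print(f"Highest 4-hour rolling avg: {peak_4hr:.0f} MSU "
      f"({peak_4hr / CAPACITY_MSU:.0%} of capacity)")
# With these made-up numbers the rolling-average peak is roughly 74%
# of full capacity -- the gap the reader is complaining about.
```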

This manager has identified a dozen or so similar SE gotchas, many arising only when something has changed at the company that requires changes to the z. IBM insists that SE users can add and delete software from the stack, spread workloads across LPARs, and buy more capacity if needed.  Where IBM gets difficult, and rightly so, is when the requested change alters the nature of the workload. “The SE is workload-based pricing and will be the best price you can get for that workload and capacity,” says an IBM SE manager.

Another DancingDinosaur reader balked at the SE qualifying workload notion. The whole concept of a qualifying workload, he notes, is an inhibitor. Workloads evolve over time. Furthermore, if you price a z114 to run a certain workload against the price of a System p capable of running that same workload, the p likely will be more attractive. The z only wins when other factors (reliability, fault tolerance, granularity of virtualization, scalability, etc.) make the premium worth it.

The SE delivers a great deal, but you must read the fine print closely to make sure you can live within the workload constraints for the 3-5 years of the SE agreement before you take the low price.

Anyone who has considered the SE program and will talk about it (anonymously, of course) please contact me.

IBM System z wins overseas

August 31, 2010

IDC notes that 2Q continued the downward trend for non-x86 servers. Despite that, IBM has been turning up new System z customers in Russia, Namibia, Vietnam, and Korea. DancingDinosaur was able to connect by phone with Comepay, a Russian company and IBM's latest announced System z win.

Comepay operates 200 commerce kiosks in 11 regions of Russia that enable consumers to pay for a range of services, including internet, digital TV, mobile phones, and utilities. The company is pursuing an expansion strategy and expects to increase transactions threefold, from 10,000 to 30,000 per second. The new System z will support this further expansion.

Until now, the company had been running the kiosks on an IBM x86 blade server platform under Windows. That included four servers running Microsoft SQL Server and seven front-end servers handling OLTP and the customer portal. The system generally worked except for periods of instability and erratic behavior. Making changes also proved difficult.

After looking at alternatives, including IBM System p, HP Integrity NonStop, and Sun SPARC Enterprise, the company opted for the IBM System z10. Clinching the deal for IBM was the availability of z/OS Parallel Sysplex, Workload Manager (WLM) for z/OS, and the ability to run DB2 on the Sysplex.

In addition to z/OS, it also runs SUSE Linux Enterprise Server with Nginx, Apache, Mono, and WebSphere. The Nginx effort focuses on Comepay's customer portal. Over time, the company expects to migrate all its various workloads—transaction processing, OLAP, the customer portal, and more—to the System z. At that point, it will use the Wintel platform only for domain control and management. Then, possibly in mid-2011, Comepay may add a z196. Stay tuned.

Like Comepay, new System z customers have been cropping up in far corners of the world in recent months. At the end of 2009, the First National Bank of Namibia ordered two System z10 BC-class machines as part of the bank’s $15 million project to bring its core banking systems into Namibia. Previously, they  resided in South Africa, but the Namibia Central Bank apparently wanted them local.

Earlier this summer, Vietnam Joint Stock Bank for Industry and Trade (VietinBank),  one of the largest banking institutions in Vietnam, selected the  System z10 mainframe to support the expansion of its banking businesses, which, reportedly, grew by over 35 percent last year. The new z will be optimized for high transaction banking workloads.

VietinBank expects to benefit from the z10's advanced systems management features, including capacity management and security, as the country's appetite for traditional and more advanced banking services continues to grow. VietinBank also can claim to be the country's first Linux-on-the-mainframe customer, enabling the bank to take advantage of the substantial software licensing savings possible when running applications on Linux on z.

The bank's total assets account for over 20 percent of the entire Vietnamese banking industry. The bank operates nearly 850 branches and transaction offices and nearly 1,200 automatic teller machines (ATMs) throughout the 56 provinces and cities in Vietnam.

These System z wins stand out as bright spots in what has been a tough non-x86 server market. As IDC notes in its 2Q 2010 worldwide server market report, the market for non-x86 servers declined 16.0% year over year to $3.9 billion in 2Q10. IDC believes that demand for non-x86 systems will improve in the second half of the year. Let's hope the 3Q and 4Q results reflect a market embrace of the new z196 and POWER7 systems.

IBM Power7 System p vs. System z

February 28, 2010

The new IBM Power7 System p looked pretty impressive at a recent briefing. So impressive that one System z staffer came up to me and asked: “What does this do to the z?”

The announcement certainly generated a lot of press, much of it focusing on the competition among IBM, HP/Intel, and Sun/Oracle in the high-end UNIX market. This year is shaping up to be an enterprise server shootout.

Few seemed concerned about the implications for the System z. On the spec sheet, however, the new Power7 System p family looks potent: up to eight cores and 32 threads per chip, dynamic memory expansion, automated optimization, impressive SPECint benchmarks, massive software parallelization, support for AIX, Linux, and System i (OS/400), and good balance among cores, memory, bandwidth, cache, and more. What's not to like?

Throughout the briefing the System p engineers emphasized how extensively they integrated and optimized the various hardware, firmware, and software components of the new Power7 machines. Price/performance comparisons showed new optimized systems blowing away aggregations of servers from Itanium, SPARC, and x86-based rivals. IBM makes similar analyses and comes up with comparable results in comparisons between the latest z10 and various collections of HP and Sun machines.

One of the classic strengths of the mainframe is that IBM optimized, virtualized, integrated, and automated z hardware, firmware, middleware, and software from the ground up for the z. Unlike Larry Ellison vowing to optimize Sun hardware to make the most of Oracle’s software, the System z already is optimized and has been for years. Now the new System p is making a similar claim.

In a presentation last June, IBM’s Karl Freund laid out where the different IBM platforms fit in the enterprise systems scheme of things. The mainframe is designated for transaction processing and database applications based on its scalability, high transaction rates, QoS, ability to handle peak workloads, resiliency, and security. IBM’s UNIX platforms like System p are designated for business applications, analytics, and HPC.

OK, I’ll grant HPC should go to UNIX, but business apps and analytics are arguable. Why put WebSphere on z if not for business applications or Cognos for Linux on z if not for BI and analytics? The plain truth: the z and p platforms compete for some workloads.

When asked about the overlap between the z and p platforms, IBM executives see no problem: the Power7 System p is for UNIX applications. OK, System z doesn't run AIX and the Power7 System p doesn't run z/OS, but both can run Linux.

This may become a moot issue when a new System z arrives, as expected, later this year. That’s the hybrid z discussed here last week. It promises, in IBM-speak, to simplify, consolidate, and reduce the costs of managing IT infrastructure by integrating, virtualizing, and coherently managing the multiple and varied heterogeneous processing elements of a deployed business service.

Over the past few months I’ve been speaking with System p users to better understand its appeal. For many that means AIX, which you won’t find on the z. Still, the System z and p stories could become quite entangled this year, especially when you throw in the platform moves of IBM’s competitors. Watch this space as things develop.

 

