Posts Tagged ‘System x’

IBM Technical Computing Tackles Big Data

October 26, 2012

IBM Technical Computing, the part of IBM that handles high performance computing (HPC), bolstered its Platform Computing Symphony product for big data, mainly by adding enterprise-ready InfoSphere BigInsights Hadoop capabilities. The Platform Symphony product now includes Apache Hadoop, MapReduce and indexing capabilities, application accelerators, and development tools. IBM's recommended approach to simplifying and accelerating big data analytics entails the integration of Platform Symphony, the General Parallel File System (GPFS), Intelligent Cluster, and DCS3700 storage.

This is not to say that IBM is leaving the traditional supercomputing and HPC market. Its Sequoia supercomputer recently topped the industry by delivering over 16 petaflops of performance. Earlier this year it also unveiled the new LRZ SuperMUC system, built with IBM System x iDataPlex direct water-cooled dx360 M4 servers encompassing more than 150,000 cores to provide a peak performance of up to three petaflops. SuperMUC, run by the Leibniz Supercomputing Centre of Germany's Bavarian Academy of Sciences, will be used to explore the frontiers of medicine, astrophysics, quantum chromodynamics, and other scientific disciplines.

But IBM is intent on broadening the scope of HPC by pushing it into mainstream business. With technical computing no longer just about supercomputers, the company wants to extend it to diverse industries. It already has a large presence in the petroleum, life sciences, financial services, automotive, aerospace, defense, and electronics industries for compute-intensive workloads. Now it is looking for new areas where a business can exploit technical computing for competitive gain. Business analytics and big data are the first candidates that come to mind.

When it comes to big data, the Platform Symphony product has already posted some serious Hadoop benchmark results (a generic sketch of the kind of MapReduce job these benchmarks exercise follows the list):

  • Terasort, a big data benchmark that tests the efficiency of MapReduce clusters in handling very large datasets: Platform Symphony used 10x fewer cores.
  • SWIM, a benchmark developed at UC Berkeley that simulates real-world workload patterns on Hadoop clusters: Platform Symphony ran 6x faster.
  • Sleep, a standard measure for comparing the core scheduling efficiency of MapReduce workloads: Platform Symphony came out 60x faster.
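
For readers who don't live in Hadoop, it helps to see what these benchmarks actually schedule. Below is a minimal, generic word count written for Hadoop Streaming in Python, which pipes text through a mapper and a reducer over stdin/stdout. This is an illustrative sketch, not Platform Symphony code. The mapper emits a (word, 1) pair for every word it sees:

```python
#!/usr/bin/env python
# mapper.py -- emits "word<TAB>1" for every word on stdin.
# Generic Hadoop Streaming mapper; nothing here is Symphony-specific.
import sys

for line in sys.stdin:
    for word in line.split():
        print("%s\t%d" % (word, 1))
```

Hadoop sorts the mapper output by key before it reaches the reducer, so the reducer only has to sum runs of identical words:

```python
#!/usr/bin/env python
# reducer.py -- sums the counts for each word; Hadoop Streaming
# delivers mapper output sorted by key, so equal words arrive adjacent.
import sys

current_word, current_count = None, 0
for line in sys.stdin:
    word, count = line.rsplit("\t", 1)
    if word == current_word:
        current_count += int(count)
    else:
        if current_word is not None:
            print("%s\t%d" % (current_word, current_count))
        current_word, current_count = word, int(count)
if current_word is not None:
    print("%s\t%d" % (current_word, current_count))
```

A production cluster runs thousands of such tasks concurrently, and benchmarks like Sleep and Terasort largely measure how quickly the framework can place and launch them, which is exactly where Platform Symphony's scheduler claims its edge.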

Technical computing at IBM involves System x, Power, System i, and PureFlex: just about everything except z. And it probably could run on the z too, through x or p blades in the zBX.

Earlier this month IBM announced a number of technical computing enhancements, including a high-performance, low-latency big data platform encompassing IBM's Intelligent Cluster, Platform Symphony, IBM GPFS, and System Storage DCS3700. Specific to Platform Symphony are a new low-latency Hadoop multi-cluster capability that scales to 100,000 cores per application and shared-memory logic for better big data application performance.

Traditionally, HPC customers coded their own software to handle the nearly mind-boggling complexity of the problems they were trying to solve. To expand technical computing to mainstream business, IBM has lined up a set of ISVs to provide packaged applications covering CAE, life sciences, EDA, and more. These include Rogue Wave, ScaleMP, Ansys, Altair, Accelrys, Cadence, Synopsys, and others.

IBM also introduced the new Flex System HPC Starter Configuration, a hybrid system that can handle both POWER7 and System x. The starter configuration includes the Flex Enterprise Chassis, an InfiniBand (IB) chassis switch, a POWER7 compute node, and an IB expansion card for Power or x86 nodes. Platform Computing software handles workload management and optimizes resources. IBM describes it as a high-density, price/performance offering but hasn't publicly provided any pricing. Still, it should speed time to HPC.

As technical computing goes mainstream it will increasingly focus on big data and Hadoop. Compute-intensive, scientific-oriented companies already do HPC. The newcomers want to use big data techniques to identify fraud, reduce customer churn, make sense of customer sentiment, and perform similar activities associated with big data. Today that calls for Hadoop, which has become the de facto standard for big data, although that may change as a growing set of alternatives to Hadoop gains traction.

IBM Boosts System z Job Hunting

June 12, 2011

People looking for a mainframe job got a big boost from IBM this month with the introduction of a new System z job website called the System z Job Board. After registering for the site you can browse job postings or post one of your own.

A quick glance at the first few pages of job posts shows a range of mainframe jobs from entry level to advanced. Companies listing System z job openings included IBM, EMC, the state of Colorado, Blue Cross/Blue Shield of Colorado, Tata, Unum, Fidelity, Humana, GT Software, the state of Minnesota, CVS, and more. Even Apple was there, looking, it appeared, to fill an education and marketing position.

The zEnterprise, what amounts to a hybrid cross-platform, cross-OS enterprise server (z/OS, z/VM, Linux on z, Power, AIX, Linux, x, and soon, Windows), promises to mix up the demand curve for mainframe people and, hopefully, open new opportunities. This comes alongside the purported wave of job vacancies expected from a surge of retirements among aging mainframe veterans. That this massive wave of retirements has yet to materialize, seemingly in defiance of demographics, is puzzling. DancingDinosaur suspects the big hits many 401k plans took in recent years dampened any rosy dreams of retirement and led many to delay their plans.

Momentum for the System z mainframe also continues, IBM reports, mainly in emerging markets, with shops in Brazil, Mexico, Russia, China, Africa, and India having selected IBM mainframe servers in the past year, but also in North America. Payment Solution Providers (PSP), of Toronto, consolidated its entire IT infrastructure on the System z after the 11-year-old company determined that an HP and Oracle infrastructure lacked the security PSP required.

The jobs website is an extension of IBM's Academic Initiative for System z, a global IBM program that partners with colleges, universities, and businesses to develop mainframe and large-enterprise skills among college students, with an eye toward future employment at Fortune 500 companies. The program currently involves 814 colleges and universities across the globe.

The question of mainframe skills demand has been an ongoing debate among various LinkedIn mainframe groups for years. The tenor of the debate has turned generally more positive in recent months. You can follow it here. Or join LinkedIn for free here and subscribe to any or all of the various mainframe groups.

Finally, for job hunters, a number of independent mainframe people have begun to compile a worldwide registry of mainframe shops as a wiki. When DancingDinosaur last checked, nearly 500 companies were listed. You can find the list here; you will need to register if you want to submit the names of any mainframe shops not currently listed. Granted, the list is incomplete; there probably are several thousand active mainframe shops in the world. Even DancingDinosaur noticed several it had covered that weren't listed. Still, it is a terrific start; as a wiki, you can bet it will evolve, especially if you contribute.

Among unemployed mainframers looking for work, the hardest hit, judging from comments posted on LinkedIn, are the 50-somethings. Their problems probably have more to do with the current job market in general and attitudes toward middle-aged workers than with anything specific to the mainframe.

IBM Expands Red Hat Partnership

May 9, 2011

IBM further embraced open source last week at Red Hat's user conference in Boston, with announcements centered on virtualization and cloud initiatives. The relationship, however, has been growing for over a decade as Red Hat Enterprise Linux (RHEL) becomes increasingly popular on the System z. The arrival of x and Power blades for the zBX should only increase the presence of RHEL on the System z.

Last year IBM selected Red Hat Enterprise Virtualization (RHEV) as a platform option for its development and test cloud service. Dev and test has emerged as a natural fit for cloud computing, given its demands for quick setup and teardown.

Although there weren’t any major specific System z announcements almost all System z shops run a mix of platforms, including System x for Linux and Windows, the Power platform for AIX and Linux, and are making forays into private, public, and hybrid clouds. So there was plenty coming out of the conference that will interest mainframe shops even if it wasn’t System z-specific.

With that in mind, here are three new Red Hat initiatives that will be of interest to mainframe shops:

First, open virtualization based on Red Hat's open source KVM hypervisor. This enables an organization to create multiple virtual Linux and Windows environments on the same server, which helps save money through the consolidation of IT resources without the expense and limitations of proprietary technology. RHEV, an open source option, delivers datacenter virtualization by combining its centralized virtualization management system with the KVM hypervisor, which has emerged as a top hypervisor behind VMware.
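
For the curious, here is a minimal sketch using the libvirt Python bindings, the API layer beneath most KVM management tooling, to connect to a KVM host and inventory its guests. The connection URI is the stock local-host example, not anything RHEV-specific; a production RHEV shop would work through the RHEV manager rather than raw libvirt calls.

```python
# Minimal KVM guest inventory via the libvirt Python bindings.
# Assumes libvirt-python is installed and a local KVM hypervisor
# is running; the URI is the standard local-system placeholder.
import libvirt

conn = libvirt.open("qemu:///system")  # connect to the local KVM hypervisor
try:
    for dom in conn.listAllDomains():  # every defined guest, running or not
        state = "running" if dom.isActive() else "shut off"
        print("%-30s %s" % (dom.name(), state))
finally:
    conn.close()
```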

According to Red Hat, RHEV delivers 45% better consolidation capacity than its competitors, per a recent SPEC virtualization benchmark, and brings architectural support for up to 4,096 processor cores and up to 64TB of memory in the host, and 32 virtual CPUs and 1TB of RAM in the guest. This exceeds the abilities of proprietary hypervisors for Linux and Windows. Red Hat also reports that RHEV Virtualization Manager can enable savings of up to 80% relative to comparable proprietary virtualization products in the first year (initial acquisition cost) and up to 66% over the course of three years. Finally, support for security capabilities such as multi-tenancy, combined with its scalability, makes it a natural for cloud computing.

Second, Red Hat introduced a platform-as-a-service (PaaS) initiative, called OpenShift, to simplify cloud development and deployment and reduce risk. It is aimed at open source developers and provides them with a flexible platform for developing cloud applications using a choice of development frameworks for Java, Python, PHP, and Ruby, including Spring, Seam, Weld, CDI, Rails, Rack, Symfony, Zend Framework, Twisted, Django, and Java EE. It is based on a cloud interoperability standard, Deltacloud, and promises to end PaaS lock-in, allowing developers to choose not only the languages and frameworks they use but also the cloud provider upon which their application will run.
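
The portability pitch is easiest to see at the application layer. Below is a minimal, framework-free Python WSGI app of the kind a developer might push to OpenShift or any other provider. It is a generic sketch, deliberately not tied to any one platform, since the point of the Deltacloud-based approach is that the hosted application need not be provider-specific; in practice one of the frameworks listed above would supply the application object.

```python
# app.py -- a minimal WSGI application, the sort of artifact a PaaS hosts.
# Framework-free on purpose; Django, Rails, etc. would normally supply
# the `application` callable. The port below is an arbitrary placeholder.
from wsgiref.simple_server import make_server

def application(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello from a portable PaaS app\n"]

if __name__ == "__main__":
    # Local test server; on a real PaaS the platform serves `application`.
    make_server("", 8051, application).serve_forever()
```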

By building on the Deltacloud cloud interoperability standard, OpenShift allows developers to run their applications on any supported Red Hat Certified Public Cloud Provider, eliminating the lock-in associated with first-generation PaaS vendors. In addition, it brings JBoss middleware services to the PaaS experience, along with MongoDB and other RHEL services.

Third, Red Hat introduced CloudForms, a product for creating and managing infrastructure-as-a-service (IaaS) in private and hybrid clouds. It allows users to create integrated clouds from a variety of computing resources while remaining portable across physical, virtual, and cloud resources. CloudForms addresses key problems encountered in first-generation cloud products: the cost and complexity of virtual server sprawl, compliance nightmares, and security concerns.

What will make CloudForms of particular interest to heterogeneous mainframe shops is its ability to create hybrid clouds using existing computing resources: virtual servers from different vendors, such as Red Hat and VMware; different cloud vendors, such as IBM and Amazon; and conventional in-house or hosted physical servers, both racks and blades. This level of choice helps eliminate lock-in and the need to migrate from physical to virtual servers in order to obtain the benefits of cloud.

Open source is not generally a mainframe consideration, but open source looms large in the cloud. It may be time for System z shops to add some of Red Hat’s new technologies to their System z RHEL, virtualization, and cloud strategies as they move forward.

