Posts Tagged ‘open source’

LinuxONE is a Bargain

September 21, 2018

LinuxONE may be the best bargain you’ll ever find this season, and you don’t have to wait until Santa brings it down your chimney. Think instead about transformation and digital disruption. Do you want to be in business in three years? That is the basic question facing every organization today, writes Kat Lind, Chief Systems Engineer, Solitaire Interglobal Ltd., author of the white paper Scaling the Digital Mountain.

Then there is the Robert Frances Group’s Top 10 Reasons to Choose LinuxONE. DancingDinosaur won’t rehash all ten. Instead, let’s selectively pick a few, starting with the first one, Least Risk Solution, which pretty much encapsulates the LinuxONE story. It reduces business, compliance, financial, operations, and project risks. Its availability, disaster recovery, scalability, and security features minimize business and financial exposure. In addition to pervasive encryption it offers a range of security capabilities often overlooked or downplayed, including logical partition (LPAR) isolation and secure containers.

It is a z dedicated to Linux, unlike the z13 or z14 z/OS machines, which also run Linux but not as easily or efficiently. As the Robert Frances Group noted, it also handles Java, Python, and other languages, along with tools like Hadoop, Docker and other containers, Chef, Puppet, KVM, multiple Linux distributions, open source, and more. It can be used in a traditional legacy environment or as the platform of choice for cloud hosting. LinuxONE supports tools that enable DevOps similar to those on x86 servers.

And LinuxONE delivers world-class performance. As the Robert Frances Group puts it, LinuxONE can drive processor utilization to virtually 100% without latency impact, performance instability, or performance penalties. In addition, LinuxONE uses the fastest commercially available processors, running at 5.2GHz, offloads I/O to separate processors so the main processors can concentrate on application workloads, and keeps much more data in memory, up to 32TB.

In addition, you can run thousands of virtual machine instances on a single LinuxONE server. The cost benefit of this is astounding compared to managing the equivalent number of x86 servers. The added labor cost alone would break your budget.

In terms of security, LinuxONE is a no-brainer. As Lind of Solitaire adds, failure in this area erodes an organization’s reputation faster than any other factor. The impact of breaches on customer confidence and follow-on sales has been tracked, and an analysis of that data shows that after a significant incursion, the average customer fall-off exceeds 41%, accompanied by a long-running drop in revenues. Recovery involves a significant outlay of service, equipment, and personnel expenses to reestablish a trusted position, as much as 18.6x what it cost to acquire the customer initially. And Lind doesn’t even begin to mention the impact when the compliance regulators and lawyers start piling on. Anything but the most minor security breach will put you out of business faster than the three years Lind posed at the top of this piece.

But all the above is just talking in terms of conventional data center thinking. DancingDinosaur has put his children through college doing TCO studies around these issues. Lind now turns to something mainframe data centers are just beginning to think about: digital disruption. The strategy and challenges of successfully navigating the chaos of cyberspace translate into a need to have information on both business and security and how they interact.

Digital business and security go hand in hand, so any analysis has to include extensive correlation between the two. Using data from volumes of customer experience responses, IT operational details, business performance, and security, Solitaire examined the positioning of IBM LinuxONE in the digital business market. The results boil down to three areas: security, agility, and cost. These incorporate the primary objectives that organizations operating in cyberspace today regard as the most relevant. And guess which platform wins any comparative analysis, Lind concludes: LinuxONE.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com.


Attract Young Techies to the Z

September 14, 2018

A decade ago DancingDinosaur was at a major IBM mainframe event, looked around at the analysts milling about, and noticed all the gray hair and balding heads, very few women, and, worse, few who appeared to be under 40, not exactly a crowd that would excite young male computer geeks. At the IBM introduction of the Z it had become even worse: more gray or balding heads, mine included, and none of the few under-40 female Z analysts I knew were there at all.

Millions of young people eager to join the workforce (Image by © Reuters/CORBIS)

An IBM analyst relations person agreed, noting that she was under pressure from IBM to get some young techies at Z events.  Sounded like Mission Impossible to me. But my thinking has changed in the last couple of weeks. A couple of discussions with 20-something techies suggested that Zowe has the potential to be a game changer as far as young techies are concerned.

DancingDinosaur covered Zowe two weeks ago here. It represents the first open source framework for z/OS. As such it provides solutions for development and operations teams to securely manage, control, script, and develop on the mainframe like any other cloud platform.

Or, to put it another way, with Zowe IBM and partners CA Technologies and Rocket Software are enabling users to access z/OS through a new open-source framework. Zowe, more than anything before it, brings together generations of systems that were not designed to handle global networks of sensors and devices. Now, nearly two decades after IBM brought Linux to the mainframe, IBM, CA, and Rocket Software are introducing Zowe, an open-source software framework that bridges the divide between modern challenges like IoT and the mainframe.

Says Sean Grady, a young (under 30) software engineer at Rocket Software: “Zowe to me is really cool, the first time I could have a sustained mainframe conversation with my peers.” Their first reactions were really cynical, he recalls; Zowe changed that. “My peers know Linux tools really well,” he notes.

The mainframe is perceived as a separate thing, something my peers couldn’t touch, he added. But Linux is something his peers know really well, so through Zowe the mainframe gains tools they know and like. Suddenly, the mainframe is no longer a separate, alien world but a familiar place. They can do the kind of work they like to do, in a way they like to do it, using familiar tools.

And they are well paid, much better than they can get coding here-and-gone mobile apps for some startup. Grady reports his starting offers ran up to $85k, not bad for a guy just out of college. And with a few years of experience now you can bet he’s doing a lot better than that.

The point of Zowe is to enable any developer, but especially new developers who don’t know or care about the mainframe, to manage, control, script, and develop on the mainframe like any other cloud platform. Additionally, Zowe allows teams to use the same familiar, industry-standard, open-source tools they already know to access mainframe resources and services.

The mainframe is older than many of the programmers IBM hopes Zowe will attract. But Zowe opens new possibilities for next-generation applications at mainframe shops desperately needing the new mission-critical applications customers are clamoring for. Already it appears ready to radically reduce the learning curve for the next generation.

Initial open source Zowe modules will include an extensible z/OS framework that provides new APIs and z/OS REST services to transform enterprise tools and DevOps processes that can incorporate new technology, languages, and workflows. It also will include a unifying workspace providing a browser-based desktop app container that can host both traditional and modern user experiences and is extensible via the latest web toolkits. The framework will also incorporate an interactive and scriptable command-line interface that enables new ways to integrate z/OS in cloud and distributed environments.
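To make the point concrete, here is a minimal sketch of what treating z/OS “like any other cloud platform” can look like: a plain REST call from Python against the z/OSMF jobs service, assumed here to be exposed through a Zowe API Mediation Layer gateway. The host, port, path prefix, and credentials below are placeholders, not anything Zowe itself prescribes; check your own site’s configuration.

    import requests

    # Placeholder gateway address; the API Mediation Layer typically routes
    # z/OSMF REST services behind a path prefix such as the one assumed below.
    GATEWAY = "https://zowe-gateway.example.com:7554"

    resp = requests.get(
        f"{GATEWAY}/api/v1/zosmf/restjobs/jobs",      # assumed routing prefix
        params={"owner": "*", "prefix": "PAYROLL*"},  # filter jobs like any REST query
        headers={"X-CSRF-ZOSMF-HEADER": ""},          # z/OSMF REST services require this header
        auth=("ibmuser", "secret"),                   # placeholder credentials
    )
    resp.raise_for_status()

    # The jobs service returns a JSON array of job descriptors.
    for job in resp.json():
        print(job["jobname"], job["jobid"], job["status"])

No 3270 screens, no ISPF panels: just the familiar tools a Linux-raised developer already knows.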

These modules represent just the start. More will be developed over time, enabling development teams to manage and develop on the mainframe like any other cloud platform. Additionally, the modules reduce risk and cost by allowing teams to use familiar, industry-standard, open source tools that can accelerate mainframe integration into their enterprise DevOps initiatives. Just use Zowe to entice new mainframe talent.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com.

Revamped IBM Power Systems LC Takes on x86

September 9, 2016

To hear IBM tell it, its revamped and refreshed Power Systems LC lineup will undermine x86 (Intel), HPE, Dell/EMC, and any other purveyor of x86-based systems. Backed by accelerators provided by OpenPOWER community members, IBM appears ready to extend the x86 battle to on premises, in the cloud, and the hybrid cloud. It promises to deliver better performance at lower cost for all the hot workloads too: artificial intelligence, deep learning, high performance data analytics, and compute-heavy workloads.


Two POWER8 processors, 1U config, priced 30% less than an x86 server

Almost a year ago, in Oct. 2015, DancingDinosaur covered IBM’s previous Power Systems LC announcement here. The LC designation stands for Linux Community, and the company is tapping accelerators and more from the OpenPOWER community, just as it did with its recent announcement of POWER9, expected in 2017, here.

The new Power LC systems feature a set of community-delivered technologies IBM has dubbed POWERAccel, a family of I/O technologies designed to deliver composable system performance enabled by accelerators. For GPU acceleration, NVIDIA NVLink delivers nearly 5x better integration between POWER processors and NVIDIA GPUs. For FPGA acceleration, IBM tapped its own CAPI architecture to integrate accelerators that run natively as part of the application.

This week’s Power Systems LC announcement features three new machines:

  • S821LC (pictured above)—includes two POWER8 sockets in a 1U enclosure and is intended for environments requiring dense computing.
  • S822LC for big data—brings two POWER8 sockets for big data workloads and adds big data acceleration through CAPI and GPUs.
  • S822LC for high performance computing—incorporates the new POWER8 processor with NVIDIA NVLink to deliver 2.8x the bandwidth to GPU accelerators and up to four integrated NVIDIA Pascal GPUs.

POWER8 with NVLink delivers 2.8x the bandwidth of a PCIe data pipe. According to figures provided by IBM comparing the price-performance of the Power S822LC for HPC (20-core, 256 GB, 4x Pascal) with a Dell C4130 (20-core, 256 GB, 4x K80), measured in total queries per hour (qph), the Power System delivered 2.1x better price-performance. The Power Systems server cost more ($66,612) vs. the Dell ($57,615), but the Power System delivered 444 qph vs. Dell’s 185 qph.
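The arithmetic behind that 2.1x claim is easy to check from IBM’s own numbers: divide each server’s cost by its throughput to get dollars per query-per-hour.

    # IBM's figures from the comparison above.
    power_cost, power_qph = 66_612, 444   # Power S822LC for HPC
    dell_cost, dell_qph = 57_615, 185     # Dell C4130

    # Dollars per query-per-hour; lower is better.
    power_per_qph = power_cost / power_qph   # ~$150 per qph
    dell_per_qph = dell_cost / dell_qph      # ~$311 per qph

    print(round(dell_per_qph / power_per_qph, 1))  # prints 2.1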

The story plays out similarly for big data workloads running MongoDB on the IBM Power S822LC for big data (20-core, 128 GB) vs. an HP DL380 (20-core, 128 GB). Here the system cost (server, OS, MongoDB annual subscription) came to $24,870 for IBM Power and $29,915 for HP. Power provided 40% more performance at a 31% lower hardware/maintenance cost.

When it comes to the cloud, the new IBM Power Systems LC offerings get even more interesting from a buyer’s standpoint. IBM declared the cloud a strategic imperative about two years ago and needs to demonstrate adoption that can rival the current cloud leaders: AWS, Google, and Microsoft (Azure). To that end IBM has started to tack on free cloud usage.

For example, during the industry analyst launch briefing IBM declared: modernize your Power infrastructure for the cloud, get access to IBM Cloud for free, and cut your current operating costs by 50%. Whether you’re talking on-premises cloud or hybrid infrastructure, the freebies just keep coming. The free built-in cloud deployment service options include:

  • Cloud Provisioning and Automation
  • Infrastructure as a Service
  • Cloud Capacity Pools across Data Centers
  • Hybrid Cloud with BlueMix
  • Automation for DevOps
  • Database as a Service

These cover both on-premises infrastructure, where you can transform your traditional infrastructure with automation, self-service, and elastic consumption models, and hybrid infrastructure, where you can securely extend to the public cloud with rapid access to compute services and API integration. Other freebies include open source automation, installation and configuration recipes, cross-data-center inventory, performance monitoring via the IBM Cloud, optional DR as a service for Power, and free access and capacity flexibility with SoftLayer (12-month starter pack).

Will the new LC line and its various cloud freebies get the low cost x86 monkey off IBM’s back? That’s the hope in Armonk. The new LC servers can be acquired at a lower price and can deliver 80% more performance per dollar spent over x86-based systems, according to IBM. This efficiency enables businesses and cloud service providers to lower costs and combat data center sprawl.

DancingDinosaur has developed TCO and ROI analyses comparing mainframe and Power systems to x86 for a decade, maybe more.  A few managers get it, but most, or their staff, have embedded bias and will never accept non-x86 machines. To them, any x86 system always is cheaper regardless of the specs and the math. Not sure even free will change their minds.

The new Power Systems LC lineup is price-advantaged over comparably configured Intel x86-based servers, costing 30% less in some configurations. Online LC pricing begins at $5,999. Additional models with smaller configurations sport lower pricing through IBM Business Partners. All but the HPC machine are available immediately. The HPC machine will ship Sept. 26.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

Latest New Mainframe puts Apache Spark Native on the z System

April 1, 2016

IBM keeps rolling out new versions of the z System. The latest is the z/OS Platform for Apache Spark, announced earlier this week. The new machine is optimized for marketers, data analysts, and developers eager to apply advanced analytics to the z’s rich, resident data sets for real-time insights.


z/OS Platform for Apache Spark

Data is everything in the new economy; the more and better data you can grab and the faster you can analyze it, the more likely you are to win. The z, already the center of a large, expansive data environment, is well positioned to drive winning data-fueled strategies.

IBM z/OS Platform for Apache Spark enables Spark, an open-source analytics framework, to run natively on z/OS. According to IBM, the new system is available now. Its key advantage: enabling data scientists to analyze data in place on the system of origin. This eliminates the need to perform extract, transform, and load (ETL), a cumbersome, slow, and costly process. Instead, with Spark the z breaks the bind between the analytics library and the underlying file system.

Apache Spark provides an open-source cluster computing framework with in-memory processing to speed analytic applications up to 100 times faster compared to other technologies on the market today, according to IBM. Apache Spark can help reduce data interaction complexity, increase processing speed, and enhance mission-critical applications by enabling analytics that deliver deep intelligence. Considered highly versatile in many environments, Apache Spark is most regarded for its ease of use in creating algorithms that extract insight from complex data.

IBM’s goal lies not in eliminating the overhead of ETL but in fueling interest in cognitive computing. With cognitive computing, data becomes a fresh natural resource—an almost infinite and forever renewable asset—that can be used by computer systems to understand, reason and learn. To succeed in this cognitive era businesses must be able to develop and capitalize on insights before the insights are no longer relevant. That’s where the z comes in.

With this offering, according to IBM, accelerators from z Systems business partners can help organizations more easily take advantage of z Systems data and capabilities to understand market changes alongside individual client needs. With this kind of insight managers should be able to make the necessary business adjustments in real-time, which will speed time to value and advance cognitive business transformations among IBM customers.

At this point IBM has identified three business partners:

  1. Rocket Software, long a mainframe ISV, is bringing its new Rocket Launchpad solution, which allows z shops to try the platform using data on z/OS.
  2. DataFactZ is a new partner working with IBM to develop Spark analytics based on Spark SQL and MLlib for data and transactions processed on the mainframe.
  3. Zementis brings its in-transaction predictive analytics offering for z/OS with a standards-based execution engine for Apache Spark. The product promises to allow users to deploy and execute advanced predictive models that can help them anticipate end users’ needs, compute risk, or detect fraud in real time at the point of greatest impact, while processing a transaction.

This last point—detecting problems in real time at the point of greatest impact—is really the whole reason for Spark on z/OS. You have to leverage your insight before the prospect makes the buying decision or the criminal gets away with a fraudulent transaction. After that, your chances of getting a prospect to reverse the decision or of recovering stolen goods are slim to none. Having the data and logic processing online and in memory on the z gives you the best chance of getting the right answer fast while you can still do something about it.

As IBM also notes, the z/OS Platform for Apache Spark includes Spark open source capabilities consisting of the Apache Spark core, Spark SQL, Spark Streaming, Machine Learning Library (MLlib), and GraphX, combined with the industry’s only mainframe-resident Spark data abstraction solution. The new platform helps enterprises derive insights more efficiently and securely. In the process, the platform can streamline development to speed time to insight and decision and simplify data access through familiar data access formats and Apache Spark APIs.

Best of all, however, are the in-memory capabilities noted above. Apache Spark uses an in-memory approach to processing data to deliver results quickly. The platform includes data abstraction and integration services that enable z/OS analytics applications to leverage standard Spark APIs. It also allows analysts to collect unstructured data and use their preferred formats and tools to sift through it.

At the same time developers and analysts can take advantage of familiar tools and programming languages, including Scala, Python, R, and SQL, to reduce time to value for actionable insights. Of course all the familiar z/OS data formats are available too: IMS, VSAM, DB2 for z/OS, PDSE, or SMF, along with whatever you reach through the Apache Spark APIs.
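To a developer, the result is ordinary Spark code. The sketch below uses the Python API; the JDBC URL and table name are placeholders standing in for whatever virtual tables the mainframe-resident data abstraction layer exposes at a given site, not the product’s actual identifiers.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("zos-spark-sketch").getOrCreate()

    # The data abstraction layer presents z/OS sources (VSAM, IMS, DB2, SMF)
    # as relational tables; this connection string is purely illustrative.
    txns = (spark.read.format("jdbc")
            .option("url", "jdbc:placeholder://zoshost:1200")
            .option("dbtable", "VSAM_TRANSACTIONS")
            .load())

    txns.cache()  # hold the data in memory for repeated, fast analysis

    # Analyze in place: no ETL, no copying data off the mainframe.
    txns.groupBy("REGION").sum("AMOUNT").show()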

This year we have already seen the z13s and now the z/OS Platform for Apache Spark. Add to that the z System LinuxONE last year. z-Based data centers suddenly have a handful of radically different new mainframes to consider. Can Watson, a POWER-based system, be far behind? Your guess is as good as anyone’s.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

IBM Gets Serious about Linux on z Systems

February 12, 2016


It has taken the cloud, open source, and mobile for IBM, after more than a decade of Linux on z, to finally turn it into the agile development machine it should have been all along. Maybe z data centers weren’t ready back then, maybe they aren’t all that ready now, but it is starting to happen.


LinuxONE Rockhopper, Refreshed for Hybrid Cloud Innovation

In March, IBM will make its IBM Open Platform (IOP) available for the LinuxONE portfolio at no cost. IOP includes a broad set of industry-standard Apache-based capabilities for analytics and big data. The components supported include Apache Spark, Apache HBase, and more, as well as Apache Hadoop 2.7.1. Continuing its commitment to contributing back to the open source community, IBM has optimized the Open Managed Runtime project (OMR) for LinuxONE. Now IBM innovations in virtual machine technology for new dynamic scripting languages will be brought to enterprise-grade strength.

It doesn’t stop there. IBM has ported the Go programming language to LinuxONE too. Go was developed by Google and is designed for building simple, reliable, and efficient software, making it easier for developers to combine the software tools they know with the speed, security, and scale offered by LinuxONE. IBM expects to begin contributing code to the Go community this summer.

Back in December IBM brought Apple’s Swift programming to the party, first to the IBM Watson iOS SDK, which gives developers a Swift API to simplify integration with many of the Watson Developer Cloud services, including Watson Dialog, Language Translation, Natural Language Classifier, Personality Insights, Speech to Text, Text to Speech, Alchemy Language, and Alchemy Vision – all of which are available today and can now be integrated with just a few lines of code.

Following Apple’s introduction of Swift as the new language for OS X and iOS application development, IBM began partnering with Apple to bring the power of Swift open source programming to the z. This will be closely tied to Canonical’s Ubuntu port to the z, expected this summer.

Also, through new work by SUSE to collaborate on technologies in the OpenStack space, SUSE tools will be employed to manage public, private, and hybrid clouds running on LinuxONE.  Open source, OpenStack, open-just-about-everything appears to be the way IBM is pushing the z.

At a presentation last August on Open Source & ISV Ecosystem Enablement for LinuxONE and IBM z, Dale Hoffman, Program Director of IBM’s Linux SW Ecosystem & Innovation Lab, introduced the three ages of mainframe development, our current stage being the third:

  1. Traditional mainframe data center, 1964–2014: batch, general ledger, transaction systems, client databases, accounts payable/receivable, inventory, CRM, ERP
  2. Internet Age (Linux & Java), 1999–2014: server consolidation, Oracle consolidation, early private clouds, email, Java, web, and eCommerce
  3. Cloud/Mobile/Analytics (CAMSS) Age, 2015–2020: on/off-premises and hybrid cloud, big data and analytics, enterprise mobile apps, security solutions, open source LinuxONE and IBM z ecosystem enablement

Hoffman didn’t suggest what comes after 2020 but we can probably imagine: Cognitive Computing, Internet of Things, Blockchain. At least those are trends starting to ramp up now.

He does, however, draw a picture of the state of Linux on the mainframe today:

  • 27% of total installed capacity runs Linux
  • Linux core capacity increased 16% from 2Q14 to 2Q15
  • 40% of customers have Linux cores
  • 80% of the top 100 customers (in terms of installed MIPS) run Linux on the mainframe
  • 67% of new accounts run Linux

To DancingDinosaur, this last point about the high percentage of new z accounts running Linux speaks to where the future of the z is heading.

Maybe as telling are the following:

  • 64% of companies participate in open source projects
  • 78% of companies run on open source
  • 88% of companies expect to increase open source contributions in the next 2-3 years
  • 47% plan to release internal tools & projects as OSS
  • 53% expect to reduce barriers to employee participation in open source
  • 50% report that more than half of their engineers are working on open source projects
  • 66% of companies build software on open source

Remember, when open source and Linux first appeared for the z, data center managers were shocked at the very concept. It was anti-capitalist at the very least, maybe even socialist or communist. Look at the above percentages; open source has become about as mainstream as it gets.

It will be interesting to see how quickly developers move to LinuxONE for their CAMSS projects. IBM hasn’t said anything about the pricing of the refreshed Rockhopper model or about the look and feel of the tools. Until developers know, DancingDinosaur expects they will continue to work with the familiar x86 tools they use now.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

Syncsort Brings z System Integration Software to Open Source Tools

October 13, 2015

In a series of announcements last month, Syncsort integrated its DMX-h data integration software with Apache Kafka, an open distributed messaging system. This will enable mainframe shops to tap DMX-h’s easy-to-use GUI to subscribe, transform, enrich, and distribute enterprise-wide data for real-time Kafka messaging.

Spark graphic, courtesy of IBM

Syncsort also delivered an open source contribution of an IBM z Systems mainframe connector that makes mainframe data available to the Apache Spark open-source analytics platform. Not stopping there, Syncsort is integrating the Intelligent Execution capabilities of its DMX data integration product suite with Apache Spark too. Intelligent Execution allows users to visually design data transformations once and then run them anywhere – across Hadoop, MapReduce, Spark, Linux, Windows, or Unix, on premises or in the cloud.

Said Tendü Yoğurtçu, General Manager of Syncsort’s big data business, in the latest announcement: “We are seeing increased demand for real-time analytics in industries such as healthcare, financial services, retail, and telecommunications.” With these announcements, Syncsort sees itself delivering the next generation streaming ETL and Internet of Things data integration platform.

Of course, the Syncsort offer should be unnecessary for most z System users except those that are long-term Syncsort shops or are enamored of Syncsort’s GUI. IBM already offers Spark native on z/OS and Linux on z, so there is no additional cost. BTW, Syncsort itself was just acquired; what happens with its various products remains to be seen.

Still, IBM has been on a 12-year journey to expand mainframe workloads—Linux to Hadoop and Spark and beyond—and the company has been urging mainframe shops to become fully engaged in big data, open source, and more as fast as possible. The Syncsort announcements come at a propitious time: mainframe data centers can more easily participate in the hottest use cases (real-time data analytics, streaming data analytics across diverse data sources, and more) just when the need for such analytics is increasing.

Apache Spark and some of these other technologies should already be a bit familiar to z System data centers; Apache Kafka will be less familiar. DancingDinosaur noted Spark and others here, when LinuxONE was introduced.

To refresh, Apache Spark consists of a fast engine for large-scale data processing that provides over 80 high-level operators to make it easy to build parallel apps or use them interactively from the Scala, Python, and R shells. It also offers a stack of libraries including SQL and DataFrames, MLlib for machine learning, GraphX, and Spark Streaming. You can combine these libraries seamlessly in the same application.  As noted above Syncsort offers an open source version of the IBM z Systems mainframe connector that makes mainframe data available to the Apache Spark open-source analytics platform.
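As a quick illustration of combining those libraries in one application, the short Python sketch below runs a Spark SQL query and feeds the result straight into MLlib clustering; the Parquet file and column names are invented for the example.

    from pyspark.sql import SparkSession
    from pyspark.ml.feature import VectorAssembler
    from pyspark.ml.clustering import KMeans

    spark = SparkSession.builder.appName("sql-plus-mllib").getOrCreate()

    # Spark SQL: query a dataset with plain SQL.
    spark.read.parquet("transactions.parquet").createOrReplaceTempView("txns")
    recent = spark.sql("SELECT amount, hour FROM txns WHERE year >= 2015")

    # MLlib: cluster the very same DataFrame; no hand-off step required.
    assembled = VectorAssembler(
        inputCols=["amount", "hour"], outputCol="features"
    ).transform(recent)
    model = KMeans(k=5, featuresCol="features").fit(assembled)
    print(model.clusterCenters())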

Spark already has emerged as one of the most active big data open source projects, initially as a fast memory-optimized processing engine for machine learning and now as the single compute platform for all types of workloads including real-time data processing, interactive queries, social graph analysis, and others. Given Spark’s success, there is a growing need to securely access data from a diverse set of sources, including mainframes, and to transform the data into a format that is easily understandable by Spark.

Apache Kafka, essentially an enterprise service bus, is less widely known. It brings a fast, scalable, durable, and fault-tolerant publish-subscribe messaging system. Kafka is often used in place of traditional message brokers like JMS and AMQP because of its higher throughput, reliability, and replication. Syncsort has integrated its data integration software with Kafka’s distributed messaging system so users can leverage DMX-h’s GUI to subscribe, transform, enrich, and distribute enterprise-wide data for real-time Kafka messaging.
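For readers who haven’t met Kafka yet, the publish-subscribe pattern it provides looks roughly like this in Python using the kafka-python client; the broker address and topic name are placeholders.

    import json
    from kafka import KafkaProducer, KafkaConsumer

    BROKER = "broker.example.com:9092"  # placeholder broker address

    # Publisher: write enriched records to a named topic.
    producer = KafkaProducer(
        bootstrap_servers=BROKER,
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )
    producer.send("enterprise-data", {"account": "A1001", "balance": 250.75})
    producer.flush()

    # Subscribers: any number of consumers can read the same stream independently.
    consumer = KafkaConsumer(
        "enterprise-data",
        bootstrap_servers=BROKER,
        auto_offset_reset="earliest",
        value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    )
    for message in consumer:
        print(message.value)  # loops until interrupted

This is what lets DMX-h treat mainframe-derived records as just another stream that downstream applications can subscribe to in real time.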

According to Matei Zaharia, creator of Apache Spark and co-founder and CTO of Databricks: “Organizations look to Spark to enable a variety of use cases, including streaming data analytics across diverse data sources.” He continues: “Syncsort has recognized the importance of Spark in the big data ecosystem for real-time streaming applications and is focused on making it easy to bring diverse data sets into Spark.” IBM certainly recognizes this too, and the z System is the right platform for making all of this happen.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

IBM Power Systems LC Aims to Expand the Power Systems Market

October 8, 2015

IBM is rapidly trying to capitalize on its investment in POWER technology and the OpenPOWER Foundation to expand the POWER franchise. The company is offering up the Power Systems LC server family, LC for Linux Community. This addresses how processing will be used in the immediate future, specifically in hybrid cloud, hyperscale data centers, and open solutions. You could probably throw in IoT and big data/real-time analytics too, although those weren’t specifically mentioned in any of the LC announcement materials or briefings.


Courtesy of IBM:  the new Power S822LC (click to enlarge)

The LC server family comes with a new IBM go-to-market strategy. As IBM put it: buy servers the way you want to buy them, online with simple pricing and a one-click purchase (coming soon). Choose standard configurations or have your configuration customized to meet your unique needs through IBM’s global ecosystem of partners and providers. The same goes for the selection of service and support options from an array of IBM technology partners.

There appear to be three basic configurations at this point:

  1. Power Systems S812LC: designed for entry and small Hadoop workloads
  2. Power Systems S822LC for Commercial Computing: ideal for data in the cloud and flexible capacity for MSPs
  3. Power Systems S822LC for High Performance Computing: for cluster deployments across a broad range of industries

According to the latest S812LC spec sheet, the IBM 8348 Power System S812LC server with POWER8 processors is optimized for data and Linux. It is designed to deliver superior performance and throughput for high-value Linux workloads such as industry applications, open source, big data, and LAMP.  It incorporates OpenPOWER Foundation innovations for organizations that want the advantages of running their big data, Java, open source, and industry applications on a platform designed and optimized for data and Linux. Modular in design, the Power S812LC is simple to order and can scale from single racks to hundreds.

The Power S812LC server supports one processor socket, offering 8-core 3.32 GHz or 10-core 2.92 GHz POWER8 configurations in a 19-inch rack-mount, 2U drawer configuration. All the cores are activated. The server provides 32 DIMM memory slots. Memory features supported are 4 GB (#EM5A), 8 GB (#EM5E), 16 GB (#EM5C), and 32 GB (#EM5D), allowing for a maximum system memory of 1024 GB.

The LC Server family will leverage a variety of innovations that have been brought out by various members of the OpenPOWER Foundation over the last few months.  These include innovations from Wistron, redislabs, Tyan, Nvidia, Mellanox, Ubuntu, and Nallatech in the areas of big data, GPU acceleration, HPC, and cloud. And, of course, IBM’s CAPI.

No actual pricing was provided. In response to a question from DancingDinosaur about whether the arrival of products from the OpenPOWER Foundation was driving down Power Systems prices, an IBM manager responded curtly: “We haven’t seen the drag down.” Oh well, so much for an imminent price war over Power Systems.

However, IBM reported today that, based on its own internal testing, a new Power Systems LC server can complete an average of select Apache Spark workloads – including analyzing Twitter feeds, streaming web page views, and other data-intensive analytics – for less than half the cost of an Intel E5-2699 v3 processor-based server, providing clients with 2.3x better performance per dollar spent. Additionally, the efficient design of a Power Systems LC server allows for 94% more Spark social media workloads in the same rack space as a comparable Intel-based server.

These new systems are exactly what is needed to make the POWER platform viable over the long term, and it can’t be just an IBM show. With OpenPOWER Foundation members delivering innovations there is no telling what can be done in terms of computing with POWER9 and POWER10 when they come.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

Compuware Topaz Brings Distributed Style to the Mainframe

January 30, 2015

Early in January Compuware launched the first of what it promises will be a wave of mainframe tools that leverage the distributed graphical style of working with systems. The company hopes the tool, Topaz, will become a platform that hooks people experienced with distributed computing, especially Millennials, on working with the mainframe. The company is aiming not just at IT newbies but at experienced distributed IT people who find the mainframe alien.

Compuware is pitching Topaz as a solution for addressing the problem of the wave of retirements of experienced mainframe veterans. The product promises to help developers, data architects, and other IT professionals discover, visualize, and work with both mainframe and non-mainframe data in a familiar, intuitive manner.  They can work with it without actually having to directly encounter mainframe applications and databases in their native formats.


Topaz Relationship Visualizer (click to enlarge)

DancingDinosaur has received the full variety of opinions on the retiring mainframe veteran issue, ranging from a serious concern to a bogus issue. Apparently the issue differs with each mainframe shop. In this case, demographics ultimately rule, and people knowledgeable about the mainframe (including DancingDinosaur, sadly) are getting older.  Distributed IT folks, however, know how to operate data centers, manage applications, handle data, and run BI and analytics—all the things we want any competent IT shop to do. So, to speed their introduction to the mainframe it makes sense to give them familiar tools that let them work in accustomed ways.

And Topaz definitely has a familiar distributed look-and-feel. Check out a demonstration of it here. What you will see are elements of systems, applications, and data represented graphically. Click an item and the relevant relationships are exposed. Click again to drill down to detail. To move data between hosts just drag and drop the desired files between distributed hosts and the mainframe.  You also can use a single distributed-like editor to work with data on Oracle, SQL Server, IMS, DB2 and others across the enterprise. The actions are simple, intuitive, and feel like any GUI tool.

The new tool should seem familiar. Compuware built Topaz using open source Eclipse. It also made use of ISPF, the mainframe toolset. Read about Eclipse here.

With Topaz Compuware is trying to address a problem IBM has been tackling through its System z Academic Initiative: where the next generation of mainframers will come from. With its contests and university curriculum IBM is trying to captivate young people early with job possibilities and slick technologies, catching them as young as high school.

Compuware is aiming for working IT professionals in the distributed environment. They may not be much younger than their mainframe counterparts, but Compuware is giving them a tool that will allow them to immediately start doing meaningful work with both distributed and mainframe systems and to do it in a way they immediately grasp.

Topaz treats mainframe and non-mainframe assets in a common manner. As Compuware noted: In an increasingly dynamic big data world it makes less and less sense to treat any platform as an island of information. Topaz takes a huge step in the right direction.

Finally, expect to see Topaz updates and enhancements quarterly. Compuware describes Topaz as an agile development effort, drawing a pointed contrast to the rather languid pace of some mainframe ISVs in getting out updates.  If the company is able to achieve its aggressive release cycle goals that alone may help change perceptions of the mainframe as a staid, somewhat dull platform.

With Topaz Compuware is off to a good start, but you can see where and how the toolset can be expanded upon.  And Compuware even hinted at opening the Topaz platform to other ISVs. Don’t hold your breath, but at the least it may get other mainframe ISVs to speed their own efforts, making the mainframe overall a more dynamic platform. With the z13 IBM raised the innovation bar (see DancingDinosaur here and here). Now other mainframe ISVs must up their game.

DancingDinosaur is Alan Radding, a long-time IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. You also can read more of my writing at Technologywriter.com and here.

Open POWER Consortium Aims to Expand the POWER Ecosystem beyond IBM

August 7, 2013

With IBM’s August 6 announcement of new POWER partners, including Google, IBM is aiming not only to expand the variety of POWER workloads but also to establish an alternative ecosystem to the Intel x86 platform that continues to dominate general corporate computing. Through the new Open POWER Consortium, IBM will make POWER hardware and software available for open development for the first time, as well as offer open-source POWER firmware, the software that controls basic chip functions. By doing this, IBM and the consortium can enable innovative customization in creating new styles of server hardware for a variety of computing workloads.

IBM has a long history of using open consortiums to grab a foothold in different markets, as it did with Eclipse (open software development tools), Linux (open portable operating system), KVM (open hypervisor and virtualization), and OpenStack (open cloud interoperability). In each case, IBM had proprietary technologies but could use the open source consortium strategy to expand market opportunities at the expense of entrenched proprietary competitors like Microsoft or VMware. The Open POWER Consortium opens a new front against Intel, which already is scrambling to fend off ARM-based systems and other lightweight processors.

The establishment of the Open POWER Consortium also reinforces IBM’s commitment to the POWER platform in the face of several poor quarters. The commitment to POWER has never really wavered, insists an IBM manager, despite what financial analysts might hint at. Even stronger evidence of that commitment to POWER is POWER8, which is on track for 2014 if not sooner, and POWER9, which is currently in development, he confirmed.

As part of its initial collaboration within the consortium, IBM reported that it and NVIDIA will integrate NVIDIA’s CUDA GPU technology with POWER. CUDA is a parallel computing platform and programming model that enables dramatic increases in computing performance by harnessing the power of the graphics processing unit (GPU); GPUs increasingly are being used to boost overall system performance, not just graphics performance. The two companies envision powerful computing systems based on NVIDIA GPUs and IBM’s POWER CPUs, an example of the new kind of systems the open consortium can produce.

However, don’t expect immediate results. The IBM manager told DancingDinosaur that the fruits of any collaboration won’t start showing up until sometime next year. Even the Open POWER Consortium website has yet to post anything. The consortium is just forming up; IBM expects the public commitment of Google to attract other players, which IBM describes as the next generation of data-center innovators.

As for POWER users, this can only be a good thing. IBM is not reducing its commitment to the POWER roadmap, plus users will be able to enjoy whatever the new players bring to the POWER party, which could be considerable. In the meantime, the Open POWER Consortium welcomes any firm that wants to innovate on the POWER platform and participate in an open, collaborative effort.

An even more interesting question may be where else will IBM’s interest in open systems and open consortiums take it. IBM remains “very focused on open and it’s a safe bet that IBM will continue to support open technologies and groups that support that,” the IBM manager told DancingDinosaur.  IBM, however, has nothing to announce after the Open POWER Consortium. Hmm, might a z/OS open collaborative consortium someday be in the works?

SHARE will be in Boston next week. DancingDinosaur expects to be there and will report on the goings-on. Hope to see some of you there. There also are plans for a big IBM System z/Power conference, Enterprise Systems 2013, toward the end of October in Florida. Haven’t seen many details yet, but will keep you posted as they come in.

