IBM Gets Serious about Linux on z Systems

February 12, 2016

 

It has taken the cloud, open source, and mobile for IBM to finally, after more than a decade of Linux on z, turn the platform into the agile development machine it should have been all along. Maybe z data centers weren’t ready back then, and maybe they aren’t entirely ready now, but it is starting to happen.


LinuxONE Rockhopper, Refreshed for Hybrid Cloud Innovation

In March, IBM will make its IBM Open Platform (IOP) portfolio available for LinuxONE at no cost. IOP includes a broad set of industry-standard Apache-based capabilities for analytics and big data; supported components include Apache Hadoop 2.7.1, Apache Spark, Apache HBase, and more. Continuing its commitment to contributing back to the open source community, IBM has also optimized the Open Managed Runtime project (OMR) for LinuxONE, bringing IBM innovations in virtual machine technology for new dynamic scripting languages to enterprise-grade strength.

It doesn’t stop there. IBM has ported the Go programming language to LinuxONE too. Go, developed by Google, is designed for building simple, reliable, and efficient software, making it easier for developers to combine the tools they know with the speed, security, and scale offered by LinuxONE. IBM expects to begin contributing code to the Go community this summer.

Back in December IBM brought Apple’s Swift programming language to the party, first through the IBM Watson iOS SDK, which gives developers a Swift API that simplifies integration with many of the Watson Developer Cloud services, including the Watson Dialog, Language Translation, Natural Language Classifier, Personality Insights, Speech to Text, Text to Speech, Alchemy Language, and Alchemy Vision services. All of these are available today and can now be integrated with just a few lines of code.

Following Apple’s introduction of Swift as the new language for OS X and iOS application development, IBM began partnering with Apple to bring the power of Swift open source programming to the z. This work will be closely tied to Canonical’s Ubuntu port to the z, expected this summer.

Also, through new collaboration with SUSE on technologies in the OpenStack space, SUSE tools will be employed to manage public, private, and hybrid clouds running on LinuxONE. Open source, OpenStack, open-just-about-everything appears to be the way IBM is pushing the z.

At a presentation last August on Open Source & ISV Ecosystem Enablement for LinuxONE and IBM z, Dale Hoffman, Program Director, IBM’s Linux SW Ecosystem & Innovation Lab, introduced the three ages of mainframe development; our current stage being the third.

  1. Traditional mainframe data center (1964–2014): batch, general ledger, transaction systems, client databases, accounts payable/receivable, inventory, CRM, ERP, Linux & Java
  2. Internet Age (1999–2014): server consolidation, Oracle consolidation, early private clouds, email, Java, web & eCommerce
  3. Cloud/Mobile/Analytics (CAMSS) Age (2015–2020): on/off-premise and hybrid cloud, big data & analytics, enterprise mobile apps, security solutions, open source LinuxONE and IBM z ecosystem enablement

Hoffman didn’t suggest what comes after 2020 but we can probably imagine: Cognitive Computing, Internet of Things, Blockchain. At least those are trends starting to ramp up now.

He does, however, draw a picture of the state of Linux on the mainframe today:

  • 27% of total installed capacity run Linux
  • Linux core capacity increased 16% from 2Q14 to 2Q15
  • 40% of customers have Linux cores
  • 80% of the top 100 customers (in terms of installed MIPS) run Linux on the mainframe
  • 67% of new accounts run Linux

To DancingDinosaur, this last point about the high percentage of new z accounts running Linux speaks to where the future of the z is heading.

Maybe as telling are the following:

  • 64% of companies participate in open source projects
  • 78% of companies run on open source
  • 88% of companies expect to increase open source contributions in the next 2–3 years
  • 47% plan to release internal tools & projects as OSS
  • 53% expect to reduce barriers to employee participation in open source
  • 50% report that more than half of their engineers are working on open source projects
  • 66% of companies build software on open source

Remember, when open source and Linux first appeared for the z, data center managers were shocked at the very concept. It seemed anti-capitalist at the very least, maybe even socialist or communist. Look at the percentages above; open source has become about as mainstream as it gets.

It will be interesting to see how quickly developers move to LinuxONE for their CAMSS projects. IBM hasn’t said anything about the pricing of the refreshed Rockhopper model or about the look and feel of the tools. Until developers know, DancingDinosaur expects they will continue to work with the familiar x86 tools they are using now.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

Exploiting the IBM z13 for Maximum Price/Performance Advantage

February 4, 2016

The z13 is the most powerful general purpose computer IBM has ever made. The key to capturing the maximum value from the z13, however, lies in how you plan, design, configure, and optimize your systems and software for everything from COBOL and Java to process parallelization and analytics. What you do in this regard will have significant impact on not only the price/performance you experience but on your success at achieving the business outcomes you are expecting.


IBM System z13

This really becomes a software configuration challenge. By tapping approximately 600 internal processors, IBM already has optimized the hardware, input, output, memory, and networking/communications about as much as they can be. Your job is to optimize the software you are running, which will require working closely with your ISVs.

The place to start is by leveraging the z13’s new compiler technology, parallelism, zIIP and assist processors. This will enable you to save significant money while boosting workload performance. You will literally be doing more for less.

In the not-too-distant past, Moore’s Law would virtually guarantee a 15–20% price/performance gain automatically, just by taking a new machine out of the box and plugging it in. That’s no longer the case. Now you will have to partner with your ISVs to exploit advanced software to maximize the hardware payback and continue the ride along the favorable Moore’s Law price/performance slope.

Then look at the latest COBOL V5.x and its compiler on the z13. Out of the box it is better optimized than previous compilers. In general, the strategic value of COBOL V5.x comes from migrating high CPU usage programs as quickly as possible, effectively saving organizations considerable money by running optimized code.

Some organizations report an average 15% reduction in CPU time, which adds up to significant savings in monthly CPU charges. How significant? Up to $150k less on a $1 million bill, with some reporting even higher percentage reductions producing even greater savings. Just migrate to COBOL V5.2 (or at least V5.1) to achieve the savings. In general, staying on the software curve with the latest releases of the OS, languages, and compilers, with applications optimized for them, is the best way to ensure your workloads achieve top performance in the most cost-effective way.
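The arithmetic behind those savings is simple. Here is a minimal sketch, assuming a monthly software bill that scales linearly with CPU consumption (the billing model and figures are illustrative, not an IBM pricing formula):

```python
def monthly_savings(monthly_bill, cpu_reduction):
    """Estimate the monthly charge saved when CPU time drops by
    cpu_reduction (a fraction), assuming charges scale linearly
    with CPU consumption."""
    return monthly_bill * cpu_reduction

# The reported 15% average CPU reduction on a $1M monthly bill:
print(monthly_savings(1_000_000, 0.15))  # 150000.0
```

Real MLC pricing is far more nuanced (peak rolling four-hour averages, tiered slopes), so treat this as a back-of-the-envelope check only.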

For example, the new z13 processor leverages a new Vector Facility for certain COBOL statements and expands the use of the Decimal Floating Point Facility for packed decimal calculations. Well-structured, compute-intensive batch applications running on the z13 and compiled with the Enterprise COBOL V5.2 compiler have shown CPU usage reductions of up to 14% over the same applications running on the zEC12 (compiled with the GA release of Enterprise COBOL V5.1), according to IBM. The result: improved workload price/performance.

Enterprise COBOL V5.2 also includes new features to improve programmability, developer productivity, and application modernization. Supporting JSON, for instance, will provide mobile applications easy access to data and the processing they need from business critical production applications written in COBOL.
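To see why JSON support matters, consider the consuming side: a mobile client parsing a JSON payload produced by a COBOL transaction. A hypothetical sketch (the field names and values are invented for illustration; in practice the JSON would be generated by the COBOL application and delivered through middleware):

```python
import json

# Hypothetical JSON emitted by an Enterprise COBOL V5.2 program
payload = '{"account": "0012345", "balance": 1042.50, "currency": "USD"}'

record = json.loads(payload)   # parse into a native dict
print(record["balance"])       # 1042.5
```

The point is that once the COBOL side speaks JSON, any mobile or web client can consume production data with a few lines of standard-library code, no bespoke data conversion required.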

The z13 and its z sisters, the latest dedicated Linux LinuxONE models, were designed and optimized from the start for cloud, mobile, and analytics. They are intended to run these workloads alongside traditional mainframe workloads, with z/OS or Linux running on the appropriate models.

Finally, plan to take advantage of the new assist processors and expanded memory capacity to further boost performance and lower cost. With the z13, there is a mandatory migration of all zAAP-enabled applications to zIIP. Expect the usage of the zIIP assist processors to surge when all those Java applications move from the zAAP.  ISVs like Compuware should be able to help with this.  In addition, if you enable SMT on the z13, you’ll immediately get more Java capacity.  Applications that run under IBM WebSphere (WAS) on z/OS will benefit too.

The z13 and especially the LinuxONE are breaking new ground. IBM has established, in conjunction with the Linux Foundation, an Open Mainframe Project to support and advance ongoing open source Linux innovation on the mainframe. IBM also is breaking with its traditional mainframe pricing model by offering a pay-per-use option in the form of a fixed monthly payment with costs scaling up or down based on usage. It also offers per-core pricing with software licenses for designated cores. See DancingDinosaur here.

An upcoming DancingDinosaur will look at more of the enhancements being added to these machines, including some of the latest LinuxONE enhancements, like support for Google’s Go language and Cloudant’s NoSQL services. The message: the new z System can take you to the places you will want to be in this emerging cloud-mobile-analytics era.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

Spectrum Suite Returns IBM to the Storage Game

January 29, 2016

The past four quarters haven’t been kind to IBM storage, as the storage group racked up four consecutive quarters of revenue declines. The Spectrum Suite V1.0 is IBM’s latest software defined storage (SDS) initiative, one of the hottest trends in storage. The product release promises to start turning things around for IBM storage.


IBM Mobile Storage; Jamie Thomas, GM Storage (Jared Lazarus/Feature Photo Service for IBM)

Driving interest in SDS is the continuing rapid adoption of new workloads, new applications, and new ways of storing and consuming data. The best thing about the Spectrum Suite is the way IBM is now delivering it: as a broad set of storage software capabilities that touch every type of storage operation, no matter which workloads or applications are driving it or what kind of storage you need. Seventy percent of clients report deploying object storage, and 60% already are committed to SDS. Over three-quarters of software-defined infrastructure (SDI) adopters also indicated a strong preference for single-vendor storage solutions. This all bodes well for IBM’s Spectrum Suite.

Also working in IBM’s favor is the way storage has traditionally been delivered. Even within one enterprise there can be multiple point solutions from different vendors or even incompatible solutions from the same vendor. Companies need to transition among storage software offerings as business needs change, which entails adding and removing software licenses. This always is complex and may even lead to dramatic cost gyrations due to different licensing metrics and different vendor policies.  On top of that, procurement may not play along so quickly, leaving the organization with a gap in functionality.  Then there are the typical inconsistent user interfaces among offerings, which invariably reduces productivity and may increase errors.

Add to that the usual hassles of learning different products with different interfaces and different ways to run new storage processes. As a result, a switch to SDS may not be as smooth or efficient as you hoped, and it probably won’t be cheap.

IBM is counting on these storage complications, outlined above, and more to give it a distinct advantage in the SDS market. IBM should know; the company has been one of the offenders, creating similar complications as it cobbled together a wide array of storage products with different interfaces and management processes over the years.

With the new Spectrum Storage Suite IBM finally appears to have gotten it right. IBM is offering a simplified and predictable licensing model for the entire Spectrum Storage family. Pricing is pegged to the capacity being used, regardless of what that capacity is or how it is being used. Block, file, or object, it doesn’t matter; the same per-terabyte pricing applies. IBM estimates that alone can save up to 40% compared to licensing different software capabilities separately. Similarly, there are no software licensing hassles when migrating from one form of storage or data type to another. Even the cost won’t change unless you add capacity; then you pay the same per-terabyte rate for the additional capacity.
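Under that model the license charge is a function of total terabytes alone, so reallocating capacity among block, file, and object storage leaves the bill unchanged. A toy illustration (the per-terabyte rate is invented for the example, not an IBM price):

```python
RATE_PER_TB = 100  # hypothetical per-terabyte license rate

def license_cost(capacities_tb):
    """Flat per-TB pricing: only the total capacity matters,
    not how it is split among storage types."""
    return sum(capacities_tb.values()) * RATE_PER_TB

before = {"block": 200, "file": 300, "object": 100}
after  = {"block": 100, "file": 250, "object": 250}  # same 600 TB, reshuffled

print(license_cost(before) == license_cost(after))  # True: charges don't change
```

Contrast this with per-product licensing, where the same reshuffle could mean dropping one license and buying another.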

The Spectrum Suite and its licensing model work for mainframe shops running Linux on z and LinuxONE. Sorry, no z/OS yet.

The new Spectrum Storage approach has advantages when running a storage shop. There are no unexpected charges when using new capabilities and IBM isn’t charging for non-production uses like dev and test.

Finally, you will find a consistent user interface across all storage components in the Spectrum Suite. That was never the case with IBM’s underlying storage hardware products, but Spectrum SDS makes those differences irrelevant. The underlying hardware array doesn’t really matter; admins will rarely ever have to touch it.

The storage capabilities included in IBM Spectrum Storage Suite V1.0 should be very familiar to you from the traditional IBM storage products you probably are currently using. They include:

  • IBM Spectrum Accelerate, Version 11.5.3
  • IBM Spectrum Archive Enterprise Edition, Version 1.2 (Linux edition)
  • IBM Spectrum Control Advanced Edition 5.2
  • IBM Spectrum Protect Suite 7.1
  • IBM Spectrum Scale Advanced and Standard Editions (Protocols) V4.2
  • IBM Spectrum Virtualize Software for SAN Volume Controller, Version 7.6
  • IBM Spectrum Virtualize Software for SAN Volume Controller, Version 7.6 – Real-time Compression
  • IBM Spectrum Virtualize Software for SAN Volume Controller, Version 7.6 – Encryption Software

With Spectrum Storage you can, for example, run SAN storage, storage-rich servers, and a tape library. Add up the storage capacity for each and pay the per-terabyte licensing cost. Re-allocate the existing capacity among the different types of storage and your charges don’t change. Pretty nifty, huh? To DancingDinosaur, who has sat through painful discussions of complicated IBM software pricing slopes, this is how you spell relief. Maybe there really is a new IBM coming that actually gets it.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

IBM zSystem Continues Surge in 4Q15

January 22, 2016

DancingDinosaur follows technology, not financial investments, so you’d be an idiot if you considered what follows as investment advice. It is not.  Still, as one who has built a chunk of his career around the mainframe, it is good to see the z System continuing to remain in the black and beating the sexier Power lineup although I do follow both closely. See the latest IBM financials here.


The IBM z13 System

Specifically, as IBM reported on Tuesday, revenues from z Systems mainframe server products increased 16 percent compared with the year-ago period (up 21 percent adjusting for currency). Total delivery of z Systems computing power, as measured in MIPS (millions of instructions per second), increased 28 percent.

Almost as good, revenues from Power Systems were up 4 percent compared with the 2014 period (up 8 percent adjusting for currency). Power revenues have been up most of the year although they got a little blurry in the accounting.

In the storage market, which is getting battered by software defined storage (SDS) on one hand and cloud-based storage on the other, IBM reported revenues from System Storage decreased 11 percent (down 7 percent adjusting for currency). Storage revenues probably won’t bounce back fast, at least not without IBM bringing out radically new storage products. That storage rival EMC got acquired by Dell should be some kind of signal that the storage market, as the traditional enterprise players knew it, is drastically different. For now, object storage, SDS, and even flash won’t replace the kind of revenue IBM used to see from DS8000 disk systems or TS enterprise tape libraries loaded with mechanical robotics.

Getting more prominence are IBM’s strategic initiatives, which have been a company priority all year. They include cloud, mobile, analytics, security, IoT, and cognitive computing. Q4 revenues from these strategic imperatives (cloud, analytics, and engagement), as reported by IBM, increased 10 percent year-to-year (up 16 percent adjusting for currency). For the full year, revenues from strategic imperatives increased 17 percent (up 26 percent adjusting for currency and the divested System x business) to $28.9 billion and now represent 35 percent of total IBM consolidated revenue.

For the full year, total cloud revenues (public, private and hybrid) increased 43 percent (up 57 percent adjusting for currency and the divested System x business) to $10.2 billion.  Revenues for cloud delivered as a service — a subset of the total cloud revenue — increased 50 percent to $4.5 billion; and the annual as-a-service run rate increased to $5.3 billion from $3.5 billion in the fourth quarter of 2014.
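Growth figures like these also let you back out the implied prior-year base. A quick sketch of the arithmetic (my derivation, using the rounded figures above):

```python
def implied_prior_year(current, growth_pct):
    """Back out the prior-year figure from a current figure and a
    reported year-over-year growth percentage."""
    return current / (1 + growth_pct / 100)

# Total cloud revenue was $10.2B, up 43 percent, implying a 2014
# base of roughly $7.1B.
print(round(implied_prior_year(10.2, 43), 2))
```

The same one-liner applied to the as-a-service subset ($4.5B, up 50 percent) puts that 2014 base at about $3B.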

Meanwhile, revenues from business analytics increased 7 percent (up 16 percent adjusting for currency) to $17.9 billion.  Revenues from mobile more than tripled and from security increased 5 percent (up 12 percent adjusting for currency).

Commenting on IBM’s latest financials was Timothy Prickett Morgan, who frequently writes on IBM’s platforms. Citing statements to analysts by Martin Schroeter, IBM’s chief financial officer, Morgan suggested that low profit margins, which other financial analysts complained about, put pressure on the System z13 product line that launched early in the year. After a fast start, apparently, the z13 is now experiencing a slowdown in the upgrade cycle. It’s at this point that DancingDinosaur usually expects to see a new z, typically a business-class version of the latest mainframe, the z13 in this case, but that does not appear to be in the offing. About the closest IBM got to that was the Rockhopper model of the LinuxONE, a z optimized only for Linux, cloud, mobile, and analytics.

Morgan also noted that IBM added about 50 new mainframe customers for the year on an installed base of about 6,000 active customers. DancingDinosaur has been tracking that figure for years, and it has not fluctuated much. And I’m never sure how to count the handful of IT shops that run a z in the IBM cloud. But 5,000–6,000 active z shops still sounds about right.

Power Systems has also grown four quarters in a row and was up 8 percent at constant currency. This has to be a relief to the company, which has committed over $1 billion to Power. IBM attributes some of this growth to its enthusiastic embrace of Linux on POWER8, but Morgan complains of having no sense of how much of the Power Systems pie is driven by scale-out Linux machines intended to compete against Intel Xeon servers. Power also is starting to get a boost from the OpenPOWER Foundation, whose members started to ship products in the past few months. It’s probably minimal revenue now, but over time it should grow.

For those of us who are counting on z and Power to be around for a while longer, the latest financials should be encouraging.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

Mobile Financial App Security Appears Shaky

January 15, 2016

IBM has made mobile a key strategic imperative going forward, even discounting mobile software license charges on z. However, a recent study suggests that mobile apps may be less secure than app users think. For example, 83% of the app users surveyed felt their applications were adequately secure. Yet, 90% of the applications Arxan Technologies tested were vulnerable to at least two of the Open Web Application Security Project (OWASP) Mobile Top 10 Risks.


The OWASP Top Ten is an awareness document for web application security, representing a broad consensus about the most critical web application security flaws. Security experts use the list as a first step in changing the security awareness and software development culture around security in organizations around the world. You can find the Arxan report here.

In the latest study, 41% of mobile finance app users expect their finance apps to be hacked within the next six months. That’s not exactly a vote of confidence. Even worse, 42% of executive IT decision makers, those who have oversight or insight into the security of the mobile finance apps they produce, feel the same way.  Does this bother you?

It should. The researchers found that 81% of app users would change providers if apps offered by similar providers were more secure. While millennials are driving the adoption of mobile apps, their views on the importance of app security were equally as strong as the older non-millennials. Overall, survey results showed very little geographical discrepancies across the US, UK, Germany, and Japan.

This sentiment makes it sound like mobile finance applications are at a hopeless state of security where, despite Herculean efforts to thwart attackers, adversaries are expected to prevail. But the situation is not hopeless; it’s careless. Half the organizations aren’t even trying. Fully 50% of organizations have zero budget allocated for mobile app security—0, nothing, nada—according to the researchers.  By failing to step up their mobile security game organizations risk losing customers to competitors who offer alternative apps that are more secure.

How bad is the mobile security situation? When put to the test, the majority of mobile apps failed critical security tests and could easily be hacked, according to the researchers. Among 55 popular mobile finance apps tested for security vulnerabilities, 92% were shown to have at least two of the OWASP Mobile Top 10 Risks. Such vulnerabilities could allow the apps to be tampered with and reverse-engineered, which could put sensitive financial information in the wrong hands or, even worse, potentially redirect the flow of money. Ouch!

Think about all the banks and insurance companies that are scrambling to deploy new mobile apps. As it turns out, financial services organizations, the researchers report, also are among the top targets of hackers seeking high-value payment data, intellectual property (IP), and other sensitive information. Specifically, employee, customer, and soft IP data are the top three targets of cyber-attacks in the financial services market; while at the same time theft of hard IP soared 183% in 2015, according to PwC, another firm researching the segment.

With the vast majority of cyber-attacks happening at the application layer, one would think that robust application security would be a fundamental security measure being aggressively implemented and increasingly required by regulators, particularly given the financial services industry’s rapid embrace of mobile financial apps. But apparently it is not.

So where does the financial mobile app industry stand? Among the most prevalent OWASP Mobile Top 10 Risks identified in the mobile finance apps tested, the top two were:

1) Lack of binary protection (98%) – this was the most prevalent vulnerability

2) Insufficient transport layer protection (91%).

A distant third, at 58%, was unintended data leakage. All these vulnerabilities, the top two especially, make the mobile financial applications susceptible to reverse-engineering and tampering in addition to privacy violations and identity theft.
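Insufficient transport-layer protection usually comes down to disabled certificate or hostname checks. A minimal sketch of doing it right with Python’s standard library, shown here only to illustrate the principle (mobile SDKs expose equivalent knobs):

```python
import ssl

# Build a client-side TLS context with secure defaults:
# certificate validation and hostname checking both enforced.
ctx = ssl.create_default_context()

print(ctx.check_hostname)                    # True
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True

# The common vulnerability is the opposite: setting verify_mode to
# ssl.CERT_NONE to silence certificate errors, which invites
# man-in-the-middle interception of financial data.
```

Developers who ship the insecure variant to silence test-environment certificate warnings are exactly the population Arxan’s numbers describe.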

Says Arxan CTO Sam Rehman: “The impact for financial institutions and mobile finance app users can be devastating. Imagine having your mobile finance app leak your personal financial information and identity, or your app maliciously redirecting your money.” The customer outrage and bad press that would follow wouldn’t be pretty, not to mention the costly lawsuits.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

Docker on IBM z System

January 7, 2016

“If you want Docker on z, you can do it in as little as 30 seconds,” says Dale Hoffman, Program Director, Linux SW Ecosystem & Innovation Lab. At least if you’re running Linux on z, and preferably on a LinuxONE z. With all the work Hoffman’s team has done laying the groundwork for Docker on the z, you barely have to do anything yourself.


Containers are ideal for cloud computing or, more importantly, for hybrid clouds, defined as the connection of one or more clouds to other clouds. Hybrid clouds are where IBM sees the industry and the z going, and containers, particularly Docker containers, have emerged as the vehicle to get enterprises there. Click here for an FAQ on Docker with z.

z System shops can get there fast using tools Hoffman’s group has already built for the z. To get started, just click here. Or, simply go to IBM Bluemix, from which you can build and deploy Docker containers for the z and other platforms. Back in June IBM introduced enterprise class containers that make it easier for developers to deliver production applications across their hybrid environments.

IBM also offers its own IBM branded containers that allow organizations to deploy, manage, and run application components on the IBM Bluemix development platform by leveraging the open-source Docker container technology. IBM Bluemix now offers three infrastructure compute technology choices to deploy applications – Docker containers, OpenStack virtual machines, or Cloud Foundry apps. Designed for enterprise production workloads, IBM Containers can be securely deployed with integrated scalability and reliability, which enterprise customers rely upon.

In keeping with IBM’s policy of not going it alone, the company also has become a founding member of a coalition of partners and users behind the Open Container Project (OCP), which aims to ensure containers are interoperable. Features of IBM Containers include integrated tools such as log analytics, performance monitoring, a delivery pipeline, elastic scaling, zero-downtime deployments, automated image security/vulnerability scanning, and access to Bluemix’s catalog of over 100 cloud services, including Watson, Analytics, IoT, and Mobile.

Enterprise z shops want containers because they need to be as fast and agile as the born-in-the-cloud upstarts challenging them. Think survival. Containers like Docker provide ease of use, portability, and fast deployment almost anywhere to get new applications into production fast. Docker basically puts its engine/runtime on top of the OS and provides virtual containers into which software is deployed. The appeal is easy portability of the application to any Docker container anywhere, plus fast deployment.
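Deployment itself is just the standard Docker CLI workflow; on Linux on z the commands are the same as on any other platform. A hedged sketch that merely composes a typical `docker run` invocation (the image name and ports are made up; in practice you would hand the result to a shell or subprocess against a real Docker daemon):

```python
def docker_run_command(image, name, ports):
    """Compose a `docker run` CLI invocation as an argument list;
    ports maps host port -> container port."""
    cmd = ["docker", "run", "-d", "--name", name]
    for host, container in sorted(ports.items()):
        cmd += ["-p", f"{host}:{container}"]
    cmd.append(image)
    return cmd

print(" ".join(docker_run_command("myapp:latest", "myapp", {8080: 80})))
# docker run -d --name myapp -p 8080:80 myapp:latest
```

The same invocation works unchanged whether the daemon runs on x86 or on an s390x LinuxONE host, which is precisely the portability argument.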

Specifically, the Docker technology provides application portability by utilizing open-source, standardized, lightweight, and self-sufficient container capabilities. IBM’s implementation of the Docker technology with enterprise capabilities further strengthens IBM’s support for hybrid cloud environments. Of course, not every application at every stage in its lifecycle will run in the public cloud (many, if not most, never will), but IBM Containers enables developers to determine when to run containers on premises and when to deploy to the public cloud on IBM Bluemix with full Internet connectivity. Image files created within IBM Containers support portability and can be instantiated as containers on any infrastructure that runs Docker.

Through the use of containers on z you can shape your environment using system virtualization and container elements according to your landscape and your requirements, with hardly any performance constraints. In addition, Docker on z provides greater business agility to go to market quicker and solve business problems effectively through the DevOps agility of Docker containers and microservices. Then add hybrid cloud portability, by which you move the same application across multiple clouds. In short, you can define your IT structures according to your needs, not your system constraints.

Finally, there is nothing threatening about Docker containers on z. Docker is Docker is Docker, even on z, says Hoffman; it relies on the same Linux container technology, which has been available on z for many years. So get started with containers on z, and let DancingDinosaur know when you have success deploying your z containers.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

IBM Systems Sets 2016 Priorities

December 14, 2015

Despite its corporate struggles, IBM Systems, the organization that replaced the IBM Systems and Technology Group (IBM STG), had a pretty good year in 2015. It started the year by launching the z13, optimized for the cloud and mobile economy. No surprise there; IBM made no secret that cloud, mobile, and analytics were its big priorities. Over the year it also added cognitive computing and software defined storage to the list.

But that list might leave out its biggest achievement of 2015. This week IBM announced a major multi-year research grant awarded to IBM scientists to advance the building blocks for a universal quantum computer. The award was made by the U.S. Intelligence Advanced Research Projects Activity (IARPA) program. This may not come to commercial fruition in our working lives, but it has the potential to radically change computing as we have ever envisioned it. And it certainly will put a different spin on worries about Moore’s Law.

Three Types of Quantum Computing

Right now, according to IBM, the workhorse of the quantum computer is the quantum bit (qubit). Many scientists are tackling the challenge of building qubits, but quantum information is extremely fragile and requires special techniques to preserve the quantum state. This fragility of qubits played a key part in one of the preposterous but exciting plots on the TV show Scorpion. The major hurdles include creating qubits of high quality and packaging them together in a scalable form so they can perform complex calculations in a controllable way – limiting the errors that can result from heat and electromagnetic radiation.

IBM scientists made a great stride in that direction earlier this year by demonstrating critical breakthroughs in detecting quantum errors by combining superconducting qubits in lattices on computer chips – a quantum circuit design IBM says is the only physical architecture that can scale to larger dimensions.

To return to a more mundane subject, revenue: during 2015 DancingDinosaur reported the positive contributions the z System made to IBM’s revenue, one of the company’s few positive revenue performers. Turns out DancingDinosaur missed one contributor since it doesn’t track constant currency. If you look at constant currency, which smooths out fluctuations in currency valuations, IBM Power Systems have been on an upswing for the last three quarters: up 1% in Q1, up 5% in Q2, up 2% in Q3. DancingDinosaur expects both z and Power to contribute to IBM revenue in upcoming quarters.
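The constant-currency adjustment is simple arithmetic: restate the current period’s foreign-currency revenue at the prior year’s exchange rate before computing growth, so currency swings don’t mask the underlying trend. A minimal sketch (the revenue and exchange-rate figures are invented for illustration, not IBM’s actuals):

```python
# Illustrative constant-currency growth calculation.
# All figures below are hypothetical, not actual IBM results.

def growth(cur, prior):
    """Percent growth from prior to cur."""
    return (cur - prior) / prior * 100

# Prior-year quarter: 100M EUR of revenue at 1.30 USD/EUR
prior_eur, prior_rate = 100.0, 1.30
prior_usd = prior_eur * prior_rate           # 130.0 (millions USD)

# Current quarter: 105M EUR (5% real growth), but the euro fell to 1.10 USD/EUR
cur_eur, cur_rate = 105.0, 1.10
reported_usd = cur_eur * cur_rate            # 115.5 -- looks like a decline

# As reported: the currency swing swamps the underlying growth
as_reported = growth(reported_usd, prior_usd)

# Constant currency: restate current revenue at the prior year's rate
constant_currency = growth(cur_eur * prior_rate, prior_usd)

print(f"as reported: {as_reported:.1f}%, constant currency: {constant_currency:.1f}%")
```

The same business grew 5% in constant currency while showing an 11% reported decline, which is exactly the kind of gap that hid the Power Systems upswing.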

Looking ahead to 2016, IBM identified the following priorities:

  • Develop an API ecosystem that monetizes big data and cognitive workloads, built on the cloud as part of becoming a better service provider.
  • Win the architectural battle with OpenPOWER and POWER8 – designed for data and the cognitive era. (Unspoken: beat x86.)
  • Extend z Systems for new mobile, cloud and in-line analytics workloads.
  • Capture new developers, markets and buyers with open innovation on IBM LinuxONE, the most advanced and trusted enterprise Linux system.
  • Shift the IBM storage portfolio to a Flash and software defined model that disrupts the industry by enabling new workloads, very high speed, and data virtualization for improved data economics.
  • Engage clients through a digital-first go-to-market model.

These are all well and good. About the only thing missing is any mention of the IBM Open Mainframe Project, announced in August as a partnership with the Linux Foundation. DancingDinosaur covered that announcement here and is still hoping it will generate the kind of innovative products for the z that the OpenPOWER initiative has started to produce. Hope they haven’t given up already. Just have to remind myself to be patient; it took about a year to start getting tangible results from the OpenPOWER consortium.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

Expect this to be the final DancingDinosaur for 2015. Be back the week of Jan. 4.

Making the IBM Mainframe Agile and Swift

December 7, 2015

Do you remember what the mainframe was like when you started on the mainframe career path? Today IBM blurs distinctions between the mainframe and distributed environments through Linux and Java as well as cloud and mobile delivery models. Heck, you can run Windows natively on x86 cards in a zBX cabinet managed from a console on the z itself. Maybe it’s not the most efficient way to do it, and expect better ways to come from IBM, but it is doable now.

DevOps in the SDLC, Courtesy Seasoft

More than just interacting, the z and distributed environments must productively and seamlessly integrate and interoperate to produce a streamlined development, test, and deployment process. Compounding the challenge, they must do it fast. Organizations can no longer wait for six- or nine-month release cycles to introduce new capabilities. If capabilities cannot be introduced in just a few weeks, opportunities and revenue can be lost. Agile and batch teams have no choice; they must work together.

This calls for data center adoption of DevOps, a combination of development, testing, and operations. IBM already has instituted DevOps on the z System. The idea of bringing agile and batch together—it almost sounds like an oxymoron or the punchline of a bad computer systems joke—is no longer farfetched. Welcome to the world of hybrid computing, where what were once considered disparate, incompatible systems are being brought together, often on the same platform.

The latest generations of the mainframes have been fully hybrid-capable platforms, starting with the z10. They are capable of running mixed workloads concurrently, some of which previously belonged in the distributed platform world only. Today, a mainframe shop with the latest z13 can run traditional z/OS COBOL workloads right alongside Java and Linux workloads. Those with a zBX extension cabinet can even run Windows workloads too under the same unified mainframe management console.

If that sounds a little too kludgy for you, just jump into the cloud. From Bluemix in the cloud you can get to DevOps and find just about everything you need already there, including IBM’s StrongLoop acquisition for API management and microservices.

So now the idea of bringing batch and agile computing together on the mainframe platform doesn’t seem so farfetched.  And it won’t stop there. IBM has been doing its enterprise thing with Apple for about a year. Expect more coming.

That said, an agile mainframe/distributed DevOps environment will become increasingly necessary. How often do you release software? Previously, if an IT organization released new software every year or even every 18 months, customers were satisfied. Not anymore. Today an organization that waits even six months risks falling behind. LOB managers and customers won’t wait. There are too many competitors waiting for any chance to seize an advantage. Slow system refreshes and software updates just play into these competitors’ hands.

DevOps also is essential to the organization’s mobile strategy. Companies in every industry segment are deploying new mobile apps as fast as they can and then almost immediately updating them. For many of these mobile apps the mainframe is the back end, if not the middleware too. Each mobile request for information or to make a purchase or to schedule something triggers numerous back end processes that quickly make their way to the mainframe. It had gotten to the point where IBM had to discount mobile processing on the z or it would hinder mobile growth. DancingDinosaur covered it here.

Helping to drive mobile on the z, of course, is IBM’s relationship with Apple. Over the past year the two companies have been bringing out combined enterprise-mobile applications. Now Apple has announced that it is making its popular programming language, Swift, open source. It shouldn’t take much to get it onto Bluemix; back in Sept. 2014 IBM announced it already had a preliminary version working through Bluemix.

Although Swift is known mainly for mobile client development, today it is described as combining the performance and efficiency of compiled languages with the simplicity and interactivity of popular scripting languages. Apple’s Swift strategy seems to come right out of IBM’s recent playbook of embracing open source communities. You can get started at the Swift website, here.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

Syncsort’s 2015 State of the Mainframe: Little Has Changed

November 30, 2015

Syncsort’s annual survey of almost 200 mainframe shops found that 83 percent of respondents cited security and availability as key strengths of the mainframe. Are you surprised? You can view the detailed results here for yourself.

The mainframe’s role in the Big Data ecosystem, Courtesy: Syncsort

Security and availability have been hallmarks of the z for decades. Even Syncsort’s top mainframe executive, Harvey Tessler, could point to little unexpected in the latest results. “Nothing surprising. At least no big surprises. Expect the usual reliability, security,” he noted. BTW, in mid-November Clearlake Capital Group, L.P. (Clearlake) announced that it had completed the acquisition of Syncsort Incorporated. Apparently no immediate changes are planned.

The 2015 study also confirmed a few more recent trends that DancingDinosaur has long suspected. More than two-thirds (67 percent) of respondents cited integration with other standalone computing platforms such as Linux, UNIX, or Windows as a key strength of the mainframe.

Similarly, the majority (79 percent) analyze real-time transactional data from the mainframe with a tool that resides directly on the mainframe. That, in fact, may be the most surprising response. Mainframe shops (or more likely the line-of-business managers they work with) are notorious for moving data off the mainframe for analytics, usually to distributed x86 platforms. The study showed respondents are also turning to platforms such as Splunk (11.8 percent), Hadoop (8.6 percent), and Spark (1.6 percent) to supplement their real-time data analysis.

Many of the respondents no doubt will continue to do so, but it makes little sense in 2015 with a modern z System running a current configuration. In truth, it makes little sense from either a performance or a cost standpoint to move data off the z to perform analytics elsewhere. The z runs Hadoop and Spark natively. With your data and key analytics apps already on the z, why bother incurring both the high overhead and the high latency entailed in moving data back and forth to run on what is probably a slower platform anyway?
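The latency tax is easy to quantify with back-of-the-envelope arithmetic. A rough sketch (the data size and link throughput are illustrative assumptions, not measured z figures):

```python
# Back-of-the-envelope cost of shipping data off-platform for analytics.
# All numbers are illustrative assumptions, not measured z System figures.

def transfer_hours(data_gb, effective_gbit_per_s):
    """Hours to move data_gb over a link with the given effective throughput."""
    seconds = (data_gb * 8) / effective_gbit_per_s  # GB -> gigabits, then divide by rate
    return seconds / 3600

data_gb = 5_000      # assume 5 TB of transactional data to analyze
link_gbps = 4.0      # assume ~4 Gbit/s effective throughput on a loaded 10GbE link

one_way = transfer_hours(data_gb, link_gbps)
round_trip = 2 * one_way   # refreshed data and results travel back too

print(f"one-way copy: {one_way:.1f} h, round trip: {round_trip:.1f} h")
```

Every analysis run pays that tax again as the copied data goes stale; running Spark where the data lives pays it zero times.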

The only possible reason might be that the mainframe shop doesn’t run Linux on the mainframe at all. That can be easily remedied, however, especially now with the introduction of Ubuntu Linux for the z. C’mon, it’s late 2015; modernize your z for the cloud-mobile-analytics world and stop wasting time and resources jumping back and forth to distributed systems that will run natively on the z today.

More encouraging is the interest of the respondents in big data and analytics. “The survey demonstrates that many big companies are using the mainframe as the back-end transaction hub for their Big Data strategies, grappling with the same data, cost, and management challenges they used it to tackle before, but applying it to more complex use cases with more and dauntingly large and diverse amounts of data,” said Denny Yost, associate publisher and editor-in-chief for Enterprise Systems Media, which partnered with Syncsort on the survey. The results show the respondents’ interest in mainframe’s ability to be a hub for emerging big data analytics platforms also is growing.

On other issues, almost one-quarter of respondents ranked as very important the ability of the mainframe to run other computing platforms such as Linux on an LPAR or z/VM virtual machines as a key strength of the mainframe at their company. Over one-third of respondents ranked as very important the ability of the mainframe to integrate with other standalone computing platforms such as Linux, UNIX, or Windows as a key strength of the mainframe at their company.

Maybe more surprising: only 70% of the respondents ranked as very important their organization’s use of the mainframe for performing large-scale transaction processing or for hosting mission-critical applications. Given that the respondents appeared to come from large, traditional mainframe shops, you might have expected those numbers to be closer to 85-90%. Go figure.

When asked to rank their organization’s use of the mainframe to supplement or replace non-mainframe servers (i.e. RISC or x86-based servers) just 10% of the respondents considered it important. Clearly the hybrid mainframe-based data center is not a priority with these respondents.

So, what are they looking to improve in the next 12 months? The respondents’ top three initiatives are:

  1. Meeting Security and Compliance Requirements
  2. Reducing CPU usage and related costs
  3. Meeting Service Level Agreements (SLAs)

These aren’t the most ambitious goals DancingDinosaur has ever encountered but they should be quite achievable in 2016.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

Latest IBM Initiatives Drive Power Advantages over x86

November 20, 2015

This past week IBM announced a multi-year strategic collaboration with Xilinx that aims to enable higher performance and more energy-efficient data center applications through Xilinx FPGA-enabled workload acceleration on IBM POWER-based systems. The goal is to deliver open acceleration infrastructures, software, and middleware to address applications like machine learning, network functions virtualization (NFV), genomics, high performance computing (HPC), and big data analytics. In the process, IBM hopes to put x86 systems at an even greater price/performance disadvantage.


Courtesy of IBM

At the same time IBM and several fellow OpenPOWER Foundation members revealed new technologies, collaborations, and developer resources to enable clients to analyze data more deeply and at high speed. The new offerings center on the tight integration of IBM’s open and licensable POWER processors with accelerators and dedicated high performance processors optimized for computationally intensive software code. The accelerated POWER-based offerings come at a time when many companies are seeking the best platform for Internet of Things, machine learning, and other performance hungry applications.

The combination of collaborations and alliances is clearly aimed at establishing Power as the high performance leader for the new generation of workloads. IBM notes that independent software vendors already are leveraging IBM Flash Storage attached via CAPI to create very large memory spaces for in-memory analytics, enabling the same query workloads to run with a fraction of the number of servers required by commodity x86 solutions. These breakthroughs enable POWER8-based systems to continue where the promise of Moore’s Law falls short, delivering performance gains through OpenPOWER ecosystem-driven, full stack innovation. DancingDinosaur covered efforts to extend Moore’s Law on the z a few weeks back here.

The new workloads present different performance challenges. To begin, heterogeneous workloads are becoming increasingly prevalent, forcing data centers to turn to application accelerators just to keep up with the demands for throughput and latency at low power. Xilinx All Programmable FPGAs promise to deliver the power efficiency that makes accelerators practical to deploy throughout the data center. Combine IBM’s open and licensable POWER architecture with Xilinx FPGAs and you get compelling performance, performance/watt, and lower total cost of ownership for this new generation of data center workloads.

As part of the IBM and Xilinx strategic collaboration, IBM Systems Group developers will create solution stacks for POWER-based servers, storage, and middleware systems with Xilinx FPGA accelerators for data center architectures such as OpenStack, Docker, and Spark. IBM will also develop and qualify Xilinx accelerator boards for IBM Power Systems servers. Xilinx is developing and will release POWER-based versions of its leading software defined SDAccel™ Development Environment and libraries for the OpenPOWER developer community.

But there is more than this one deal. IBM is promising new products, collaborations and further investments in accelerator-based solutions on top of the POWER processor architecture.  Most recently announced were:

The coupling of NVIDIA® Tesla® K80 GPUs, the flagship offering of the NVIDIA Tesla Accelerated Computing Platform, with Watson’s POWER-based architecture accelerates Watson’s Retrieve and Rank API capabilities to 1.7x its normal speed. This speed-up can further improve the cost-performance of Watson’s cloud-based services.

On the networking front Mellanox announced the world’s first smart network switch, the Switch-IB 2, capable of delivering an estimated 10x system performance improvement. NEC also announced availability of its ExpEther Technology suited for POWER architecture-based systems, along with plans to leverage IBM’s CAPI technology to deliver additional accelerated computing value in 2016.

Finally, two OpenPOWER members, E4 Computer Engineering and Penguin Computing, revealed new systems based on the OpenPOWER design concept and incorporating IBM POWER8 and NVIDIA Tesla GPU accelerators. IBM also reported having ported a series of key IBM Internet of Things, Spark, Big Data, and Cognitive applications to take advantage of the POWER architecture with accelerators.
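A throughput multiplier like the 1.7x Retrieve and Rank speed-up above translates directly into cost-performance when a service is billed by instance-hours. A hypothetical illustration (the baseline query rate and hourly price are invented, not Watson pricing):

```python
# Illustrative effect of a 1.7x throughput gain on cost per query.
# The baseline rate and hourly price are assumptions, not Watson figures.

speedup = 1.7
baseline_qps = 100.0      # queries/sec before GPU acceleration (assumed)
cost_per_hour = 10.0      # $/instance-hour (assumed)

accelerated_qps = baseline_qps * speedup

# Cost per million queries = hourly cost / queries served per hour, scaled up
cost_per_million_before = cost_per_hour / (baseline_qps * 3600) * 1_000_000
cost_per_million_after = cost_per_hour / (accelerated_qps * 3600) * 1_000_000

print(f"${cost_per_million_before:.2f} -> ${cost_per_million_after:.2f} per million queries")
```

Whatever the absolute numbers, the per-query cost falls by the inverse of the speed-up: 1 − 1/1.7, roughly 41% cheaper per query on the same hardware budget.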

The announcements include the names of partners and products but product details were in short supply as were cost and specific performance details. DancingDinosaur will continue to chase those down.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

DancingDinosaur will not post the week of Thanksgiving. Have a delicious holiday.

