Posts Tagged ‘hadoop’

IBM Discounts z/OS Cloud Activity

August 12, 2016

The latest iteration of IBM’s z/OS workload pricing aims to lower the cost of running cloud workloads. In a recent announcement, z Systems Workload Pricing for Cloud (zWPC) for z/OS seeks to minimize the impact of new public cloud workload transaction growth on Sub-Capacity license charges. IBM did the same thing with mobile workloads when they began driving up the 4-hour workload averages on the z. As more z workloads interact with public clouds, this should start to add up, if it hasn’t already.


Bluemix Garages in the Cloud

As IBM puts it: zWPC applies to any organization that has implemented Sub-Capacity pricing via the basic AWLC or AEWLC pricing mechanisms for the usual MLC software suspects. These include z/OS, CICS, DB2, IMS, MQ, and WebSphere Application Server (WAS). An eligible transaction is one classified as Public Cloud-originated, connecting to a z/OS hosted transactional service and/or data source via a REST or SOAP web service. Public cloud workloads are defined as transactions identified as originating from a recognized Public Cloud offering, including but not limited to Amazon Web Services (AWS), Microsoft Azure, and IBM Bluemix.

IBM appears to have simplified how you identify eligible workloads. As the company notes: zWPC does not require you to isolate the public cloud work in separate partitions, but rather offers an enhanced way of reporting. The z/OS Workload Manager (WLM) allows clients to use WLM classification rules to distinguish cloud workloads, effectively easing the data collection requirements for public cloud workload transactions.

So how much will you save? It reportedly reduces eligible hourly values by 60 percent. The discount produces an adjusted Sub-Capacity value for each reporting hour. What that translates into on your monthly IBM software invoice once all the calculations and fine print are considered amounts to a guess at this point. But at least you’ll save something. The first billing eligible under this program starts Dec. 1, 2016.
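To make the discount concrete, here is a rough sketch of how a 60 percent reduction on eligible hourly values might play out. The MSU figures and the simplified reporting model are illustrative assumptions, not IBM’s actual billing algorithm:

```python
# Rough sketch of the zWPC adjustment: cloud-eligible MSUs in each
# reporting hour are reduced by 60 percent before the hour feeds the
# sub-capacity calculation. All figures here are invented for illustration.

DISCOUNT = 0.60  # zWPC reduces eligible hourly values by 60 percent

def adjusted_hourly_msu(total_msu, cloud_msu):
    """Return the hour's MSU value after discounting cloud-eligible work."""
    return total_msu - cloud_msu * DISCOUNT

# Four reporting hours: (total MSUs consumed, cloud-eligible MSUs)
hours = [(500, 200), (520, 250), (480, 150), (510, 220)]

raw_peak = max(total for total, _ in hours)
adjusted = [adjusted_hourly_msu(total, cloud) for total, cloud in hours]

print(f"Raw peak hour: {raw_peak} MSU")             # 520 MSU
print(f"Adjusted peak hour: {max(adjusted):.0f} MSU")  # 390 MSU
```

Note how the peak can shift to a different hour once cloud work is discounted, which is exactly the kind of fine print that makes the monthly invoice hard to guess in advance.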

DancingDinosaur expects IBM to eventually follow with discounted z/OS workload pricing for IoT and blockchain transactions, and maybe even cognitive activity. Right now the volume of IoT and blockchain activity is probably too low to impact anybody’s monthly license charges. Expect those technologies to ramp up in coming years, with many industry pundits projecting huge numbers—think billions and trillions—that will eventually impact the mainframe data center and associated software licensing charges.

Overall, Workload License Charges (WLC) constitute a monthly software license pricing metric applicable to IBM System z servers running z/OS or z/TPF in z/Architecture (64-bit) mode. The driving principle of WLC amounts to pay-for-what-you-use, a laudable concept. In effect it lowers the cost of incremental growth while further reducing software costs by proactively managing associated peak workload utilization.

Generally, DancingDinosaur applauds anything IBM does to lower the cost of mainframe computing. Playing with workload software pricing in this fashion, however, seems unnecessary. I’m convinced there must be simpler ways to lower software costs without the rigmarole of metering and workload distribution tricks. In fact, a small mini-industry has cropped up among companies offering tools to reduce costs, primarily through various ways to redistribute workloads to avoid peaks.

Modifications to WLC, including Variable WLC (VWLC), Advanced WLC (AWLC), and Entry WLC (EWLC), align with most of the z machines introduced over the past couple of years. The result, according to IBM, is a granular cost structure based on MSU (CPU) capacity that applies to VWLC and associated pricing mechanisms.

From there you can further tweak the cost by deploying Sub-Capacity and Soft Capping techniques. Defined Capacity (DC), according to IBM, allows the sizing of an LPAR in MSU such that the LPAR will not exceed the designated MSU amount. Group Capacity Limit (GCL) extends the Defined Capacity principle for a single LPAR to a group of LPARs, allowing MSU resources to be shared accordingly. BTW, a potential downside of GCL is that one LPAR in the group can consume all available MSUs due to a rogue transaction. Again, an entire mini-industry, or maybe not so mini, has emerged to help handle workload and capacity pricing on the z.
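The rogue-LPAR downside is easy to see in a toy model. The first-come-first-served allocation below is a deliberate simplification (real WLM capping uses weights and 4-hour rolling averages), and all the names and numbers are made up:

```python
# Toy model of a Group Capacity Limit (GCL): LPARs in the group share one
# pooled MSU cap. Nothing in this naive scheme reserves MSUs per LPAR, so
# a runaway workload in one LPAR can starve the others. Illustrative only.

GROUP_CAP = 400  # MSUs shared by the whole capacity group

def allocate(demands):
    """Grant MSUs first-come-first-served until the group cap is exhausted."""
    grants, remaining = {}, GROUP_CAP
    for lpar, demand in demands.items():
        grants[lpar] = min(demand, remaining)
        remaining -= grants[lpar]
    return grants

# Normal hour: everyone fits comfortably under the cap.
print(allocate({"PROD1": 150, "PROD2": 120, "TEST": 80}))

# A rogue transaction drives PROD1's demand through the roof, and the
# other LPARs in the group get squeezed out entirely.
print(allocate({"PROD1": 400, "PROD2": 120, "TEST": 80}))
# -> {'PROD1': 400, 'PROD2': 0, 'TEST': 0}
```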

At some point in most of the conference pricing sessions the eyes of many attendees glaze over.  By Q&A time the few remaining pop up holding a copy of a recent invoice and ask what the hell this or that means and what the f$#%@#$ they can do about it.

Have to admit that DancingDinosaur did not attend the most recent SHARE conference, where pricing workshops can get quite energetic, so cannot attest to the latest fallout. Still, the general trend with mobile and now with cloud pricing discounts should be lower costs.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM Leads in TBR Private and Hybrid Cloud Surveys

August 4, 2016

IBM has been named number one in private clouds by independent technology market research firm Technology Business Research (TBR) as well as number one in TBR’s hybrid cloud environments survey. Ironically, as hard as IBM has been trying to distance itself from its legacy platform heritage, that heritage brings an advantage when it comes to clouds for some customers. “A footprint in legacy IT solutions and management is a strong predictor of private cloud vendor success, as private cloud solutions are typically the first step toward hybrid IT environments,” wrote TBR Cloud Senior Analyst Cassandra Mooshian.


Courtesy of IBM: 1800 FLOWERS Taps IBM Cloud

Coming out on top in IBM’s 2Q16 financials, reported here, were the company’s strategic initiatives, mainly cloud, analytics, and mobile, which generated positive revenue results. The TBR reports provide welcome reinforcement for IBM strategy doubters. As reported by IBM, the annual run rate for cloud as-a-service revenue — a subset of total cloud revenue — increased to $6.7 billion from $4.5 billion in the second quarter of 2015. Revenues from analytics increased 5 percent. Revenues from mobile increased 43 percent, while security revenue increased 18 percent.

The TBR report also noted IBM leadership in overall vendor adoption for private cloud and in select private cloud segments due to its broad cloud and IT services portfolio, its variety of deployment options, and accompanying integration and optimization support. As a result, the company’s expertise and knowledge of both cloud and legacy technology make it easier for customers to opt for an IBM migration path to both private and hybrid clouds.

TBR also specifically called out IBM’s cloud-friendly capabilities, including the comprehensive portfolio of cloud and hardware assets with security; cloud professional services that can span a customer’s entire IT environment; and a vertical approach to cloud combined with Watson technology. As for hybrid clouds, Kelsey Mason, Cloud Analyst at TBR, noted in the announcement: “Hybrid integration is the next stage in cloud adoption and will be the end state for many enterprise IT environments.” Enterprise hybrid adoption, TBR observed, now matches public adoption of a year ago, which it interprets as signaling a new level of maturity in companies’ cloud strategies.

What really counts, however, are customers who vote with their checkbooks. Here IBM has been racking up cloud wins. For example, Pratt & Whitney, a United Technologies Corp. company, announced in July that it will move the engine manufacturer’s business, engineering, and manufacturing enterprise systems to a fully managed and supported environment on the IBM Cloud infrastructure.

Said Brian Galovich, vice president and chief information officer, Pratt & Whitney, in the published announcement: “Working with IBM and moving our three enterprise systems to a managed cloud service will give us the ability to scale quickly and meet the increased demands for computing services, data processing and storage based on Pratt & Whitney’s forecasted growth over the next decade.”

Also in July, Dixons Carphone Group, Europe’s largest telecommunications retail and services company as the result of a 2014 merger, announced plans to migrate to the IBM Cloud, hosted in IBM datacenters in the United Kingdom, to integrate two distinct infrastructures and enable easy scaling to better manage the peaks and valleys of seasonal shopping trends. Specifically, the company expects to migrate about 2,500 server images, with supporting database and middleware components from both infrastructures, to an IBM hybrid cloud platform comprising a private IBM Cloud with bare metal servers for production workloads and a public IBM Cloud platform for non-production workloads.

As a merged company it saw an opportunity to consolidate the infrastructures by leveraging cloud solutions for flexibility, performance and cost savings. After assessing the long-term values and scalability of multiple cloud providers, the company turned to IBM Cloud for a smooth transition to a hybrid cloud infrastructure. “We can trust IBM Cloud to seamlessly integrate the infrastructures of both companies into one hybrid cloud that will enable us to continue focusing on other parts of the business,” said David Hennessy, IT Director, Dixons Carphone, in the announcement.

As IBM’s 2Q16 report makes clear, once both of these companies might have bought new IBM hardware platforms, but that’s not the world today. At least they didn’t opt for AWS or Azure.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM’s DeepFlash 150 Completes Its Flash Lineup for Now

July 29, 2016

Two years ago DancingDinosaur wrote about new IBM Flash storage for the mainframe. That was about the DS8870, featuring 6-nines (99.9999) of availability and real-time-compression. Then this past May DancingDinosaur reported on another new IBM all-flash initiative, including the all-flash IBM DS8888 for the z, which also boasts 6-nines availability. Just this week IBM announced it is completing its flash lineup with the IBM DeepFlash 150, intended as a building block for SDS (software defined storage) infrastructures.

IBM DeepFlash 150, courtesy of IBM

As IBM reports, the DeepFlash 150 does not use conventional solid-state drives (SSD). Instead, it relies on a systems-level approach that enables organizations to manage much larger data sets without having to manage individual SSD or disk drives. DeepFlash 150 comes complete with all the hardware necessary for enterprise and hyper-scale storage, including up to 64 purpose-engineered flash cards in a 3U chassis and 12-Gbps SAS connectors for up to eight host servers. The wide range of IBM Spectrum Storage and other SDS solutions available for DeepFlash 150 provides flash-optimized scale out and management along with large capacity for block, file and object storage.

The complication for z System shops is that you access the DeepFlash 150 through IBM Spectrum Scale. Apparently you can’t just plug the DeepFlash 150 into the z the way you would plug in the all-flash DS8888. IBM Spectrum Scale works with Linux on z Systems servers or IBM LinuxONE systems running RHEL or SLES. Check out the documentation here.

As IBM explains in the Red Book titled IBM Spectrum Scale (GPFS) for Linux on z Systems: IBM Spectrum Scale provides a highly available clustering solution that augments the strengths of Linux on z by helping the z data center control costs and achieve higher levels of quality of service. Spectrum Scale, based on IBM General Parallel File System (GPFS) technology, is a high performance shared-disk file management solution that provides fast, reliable access to data from multiple nodes in a cluster environment. Spectrum Scale also allows data sharing in a mixed platform environment, which can provide benefits in cloud or analytics environments by eliminating the need of transferring data across platforms. When it comes to the DeepFlash 150 IBM is thinking about hyperscale data centers.

Hyperscale data centers can’t absorb the costs of constructing, managing, maintaining, and cooling massive hyperscale environments that use conventional mechanical storage, according to IBM. Those costs are driving the search for storage with a smaller physical footprint, lower costs, greater density, and, of course, much higher performance.

Enter DeepFlash 150, which introduces what IBM considers breakthrough economics for active data sets. The basic DeepFlash 150 hardware platform is priced under $1/GB. For big data deployments IBM recommends IBM Spectrum Scale with DeepFlash 150, providing customers with the overlying storage services and functionality critical for optimization of their big data workloads.

But even at $1/GB DeepFlash 150 isn’t going to come cheap. For starters consider how many gigabytes are in the terabytes or petabytes you will want to install. You can do the math. Even at $1/GB this is going to cost. Then you will need IBM Spectrum Scale. With DeepFlash 150 IBM did achieve extreme density of up to 170TB per rack unit, which adds up to a maximum 7PB of flash in a single rack enclosure.
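Doing the math IBM invites us to do, under the stated $1/GB hardware price point and leaving Spectrum Scale licensing, real configurations, and discounts aside:

```python
# "You can do the math": at roughly $1/GB, flash capacity adds up fast.
# List-price arithmetic only; actual pricing will differ.

PRICE_PER_GB = 1.00   # IBM's stated DeepFlash 150 hardware price point
TB = 1_000            # GB per TB (decimal units, as storage is marketed)
PB = 1_000_000        # GB per PB

for label, gb in [("100 TB", 100 * TB), ("1 PB", PB), ("7 PB (full rack)", 7 * PB)]:
    print(f"{label}: ${gb * PRICE_PER_GB:,.0f}")
# 100 TB: $100,000
# 1 PB: $1,000,000
# 7 PB (full rack): $7,000,000
```

Even before software, a petabyte-class installation is a seven-figure purchase, which is presumably why IBM pitches it at hyperscale rather than departmental buyers.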

IBM Spectrum Scale and the DeepFlash 150 are intended to support a wide range of file, object and Hadoop Distributed File System (HDFS) analytics workloads. According to IBM, as a true SDS solution IBM Spectrum Scale can utilize any appropriate hardware and is designed specifically to maximize the benefits of hyper-scale storage systems like DeepFlash 150. Using a scale-out architecture, IBM Spectrum Scale can add servers or multiple storage types and incorporate them automatically into a single managed resource to maximize performance, efficiency, and data protection.

Although DeepFlash 150 can be used with a private cloud, IBM seems to be thinking more in terms of hybrid clouds. To address today’s need for seamlessly integrating high-performance enterprise storage such as DeepFlash 150 with the nearly unlimited resources and capabilities of the cloud, IBM Spectrum Scale offers transparent cloud tiering to place data on cloud-based object storage or in a public cloud service. As IBM explains, the transparent cloud tiering feature of IBM Spectrum Scale can connect on-premises storage such as DeepFlash 150 directly to object storage or a commodity-priced cloud service. This allows enterprises to simultaneously leverage the economic, collaboration, and scale benefits of both on-premises and cloud storage while providing a single, powerful view of all data assets.

A TechTarget report on enterprise flash storage profiled 15 flash storage product lines. In general, the products claim maximum read IOPS ranging from 200,000 to 9 million, peak read throughput from 2.4 GBps to 46 GBps, and read latencies from 50 microseconds to 1 millisecond. The guide comes packed with a ton of caveats. And that’s why DancingDinosaur doesn’t think the DeepFlash 150 is the end of IBM’s flash efforts. Stay tuned.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM Gets Serious about Linux on z Systems

February 12, 2016

 

It has taken the cloud, open source, and mobile for IBM, after more than a decade of Linux on z, to finally turn the platform into the agile development machine it should have been all along. Maybe z data centers weren’t ready back then, maybe they aren’t all that ready now, but it is starting to happen.


LinuxONE Rockhopper, Refreshed for Hybrid Cloud Innovation

In March, IBM will make its IBM Open Platform (IOP) for the IBM LinuxONE portfolio available at no cost. IOP includes a broad set of industry-standard Apache-based capabilities for analytics and big data. The components supported include Apache Spark, Apache HBase, and Apache Hadoop 2.7.1, among others. Continuing its commitment to contributing back to the open source community, IBM has optimized the Open Managed Runtime project (OMR) for LinuxONE. Now IBM innovations in virtual machine technology for new dynamic scripting languages will be brought to enterprise-grade strength.

It doesn’t stop there. IBM has ported the Go programming language to LinuxONE too. Go was developed by Google and is designed for building simple, reliable, and efficient software, making it easier for developers to combine the software tools they know with the speed, security, and scale offered by LinuxONE. IBM expects to begin contributing code to the Go community this summer.

Back in December IBM brought Apple’s Swift programming to the party, first to the IBM Watson iOS SDK, which gives developers a Swift API to simplify integration with many of the Watson Developer Cloud services, including the Watson Dialog, Language Translation, Natural Language Classifier, Personality Insights, Speech To Text, Text to Speech, Alchemy Language, or Alchemy Vision services – all of which are available today, and can now be integrated with just a few lines of code.

Following Apple’s introduction of Swift as the new language for OS X and iOS application development, IBM began partnering with Apple to bring the power of Swift open source programming to the z. This will be closely tied to Canonical’s Ubuntu port to the z expected this summer.

Also, through new work by SUSE to collaborate on technologies in the OpenStack space, SUSE tools will be employed to manage public, private, and hybrid clouds running on LinuxONE.  Open source, OpenStack, open-just-about-everything appears to be the way IBM is pushing the z.

At a presentation last August on Open Source & ISV Ecosystem Enablement for LinuxONE and IBM z, Dale Hoffman, Program Director, IBM’s Linux SW Ecosystem & Innovation Lab, introduced the three ages of mainframe development, our current stage being the third.

  1. Traditional mainframe data center, 1964–2014: Batch, General Ledger, Transaction Systems, Client Databases, Accounts payable/receivable, Inventory, CRM, ERP, Linux & Java
  2. Internet Age, 1999–2014: Server Consolidation, Oracle Consolidation, Early Private Clouds, Email, Java, Web & eCommerce
  3. Cloud/Mobile/Analytics (CAMSS) Age, 2015–2020: On/Off Premise Hybrid Cloud, Big Data & Analytics, Enterprise Mobile Apps, Security solutions, Open Source LinuxONE and IBM z ecosystem enablement

Hoffman didn’t suggest what comes after 2020 but we can probably imagine: Cognitive Computing, Internet of Things, Blockchain. At least those are trends starting to ramp up now.

He does, however, draw a picture of the state of Linux on the mainframe today:

  • 27% of total installed capacity run Linux
  • Linux core capacity increased 16% from 2Q14 to 2Q15
  • 40% of customers have Linux cores
  • 80% of the top 100 customers (in terms of installed MIPS) run Linux on the mainframe
  • 67% of new accounts run Linux

To DancingDinosaur, this last point about the high percentage of new z accounts running Linux speaks to where the future of the z is heading.

Maybe as telling are the following:

  • 64% of companies participate in Open Source projects
  • 78% of companies run on open source
  • 88% of companies to increase open source contributions in the next 2-3 years
  • 47% to release internal tools & projects as OSS
  • 53% expect to reduce barriers to employee participation in open source
  • 50% report that more than half of their engineers are working on open source projects
  • 66% of companies build software on open source

Remember when open source and Linux first appeared for the z? Data center managers were shocked at the very concept. It was anti-capitalist at the very least, maybe even socialist or communist. Look at the above percentages; open source has gotten about as mainstream as it gets.

It will be interesting to see how quickly developers move to LinuxONE for their CAMSS projects. IBM hasn’t said anything about the pricing of the refreshed Rockhopper model or about the look and feel of the tools. Until the developers know, DancingDinosaur expects they will continue to work on the familiar x86 tools they are using now.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

IBM zSystem Continues Surge in 4Q15

January 22, 2016

DancingDinosaur follows technology, not financial investments, so you’d be an idiot if you considered what follows as investment advice. It is not. Still, as one who has built a chunk of his career around the mainframe, it is good to see the z System continuing to remain in the black and beating the sexier Power lineup, although I do follow both closely. See the latest IBM financials here.


The IBM z13 System

Specifically, as IBM reported on Tuesday, revenues from z Systems mainframe server products increased 16 percent compared with the year-ago period (up 21 percent adjusting for currency). Total delivery of z Systems computing power, as measured in MIPS (millions of instructions per second), increased 28 percent.

Almost as good, revenues from Power Systems were up 4 percent compared with the 2014 period (up 8 percent adjusting for currency). Power revenues have been up most of the year although they got a little blurry in the accounting.

In the storage market, which is getting battered by software defined storage (SDS) on one hand and cloud-based storage on the other, IBM reported revenues from System Storage decreased 11 percent (down 7 percent adjusting for currency). The storage revenues probably won’t bounce back fast, at least not without IBM bringing out radically new storage products. That storage rival EMC got acquired by Dell should be some kind of signal that the storage market, as the traditional enterprise players knew it, is drastically different. For now object storage, SDS, and even flash won’t replace the kind of revenue IBM used to see from DS8000 disk systems or TS enterprise tape libraries loaded with mechanical robotics.

Getting more prominence are IBM’s strategic initiatives, which have been a company priority all year: cloud, mobile, analytics, security, IoT, and cognitive computing. Q4 revenues from these strategic imperatives — cloud, analytics, and engagement — increased 10 percent year-to-year, as reported by IBM (up 16 percent adjusting for currency). For the full year, revenues from strategic imperatives increased 17 percent (up 26 percent adjusting for currency and the divested System x business) to $28.9 billion and now represent 35 percent of total IBM consolidated revenue.

For the full year, total cloud revenues (public, private and hybrid) increased 43 percent (up 57 percent adjusting for currency and the divested System x business) to $10.2 billion.  Revenues for cloud delivered as a service — a subset of the total cloud revenue — increased 50 percent to $4.5 billion; and the annual as-a-service run rate increased to $5.3 billion from $3.5 billion in the fourth quarter of 2014.

Meanwhile, revenues from business analytics increased 7 percent (up 16 percent adjusting for currency) to $17.9 billion.  Revenues from mobile more than tripled and from security increased 5 percent (up 12 percent adjusting for currency).

Commenting on IBM’s latest financials was Timothy Prickett Morgan, who frequently writes on IBM’s platforms. Citing statements to analysts by Martin Schroeter, IBM’s chief financial officer, Morgan suggested that low profit margins, which other financial analysts complained about, put pressure on the System z13 product line that launched early in the year. After a fast start, apparently, the z13 is now experiencing a slowdown in the upgrade cycle. It’s at this point that DancingDinosaur usually expects to see a new z, typically a business class version of the latest mainframe, the z13 in this case, but that does not appear to be in the offing. About the closest IBM got to that was the Rockhopper model of the LinuxONE, a z optimized only for Linux, cloud, mobile, and analytics.

Morgan also noted that IBM added about 50 new mainframe customers for the year on an installed base of about 6,000 active customers. DancingDinosaur has been tracking that figure for years and it has not fluctuated much in recent years. And I’m never sure how to count the handful of IT shops that run a z in the IBM cloud. But 5,000-6,000 active z shops still sounds about right.

Power Systems has also grown four quarters in a row and was up 8 percent at constant currency. This has to be a relief to the company, which has committed over $1 billion to Power. IBM attributes some of this growth to its enthusiastic embrace of Linux on Power8, but Morgan complains of having no sense of how much of the Power Systems pie is driven by scale-out Linux machines intended to compete against Intel Xeon servers. Power also is starting to get some boost from the OpenPOWER Foundation, whose members started to ship products in the past few months. It’s probably minimal revenue now, but over time it should grow.

For those of us who are counting on z and Power to be around for a while longer, the latest financials should be encouraging.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

Making the IBM Mainframe Agile and Swift

December 7, 2015

Do you remember what the mainframe was like when you started on the mainframe career path? Today IBM blurs distinctions between the mainframe and distributed environments through Linux and Java as well as cloud and mobile delivery models. Heck, you can run Windows natively on x86 cards in a zBX cabinet managed from a console on the z itself. Maybe it’s not the most efficient way to do it, and you can expect better ways coming from IBM, but it is doable now.

DevOps in the SDLC, Courtesy Seasoft

More than just interact, the z and distributed environment must productively and seamlessly integrate and interoperate to produce a streamlined development, test, and deployment process. Compounding the challenge: they must do it fast. Organizations can no longer wait for six-month or nine-month release cycles to introduce new capabilities. If capabilities cannot be introduced in just a few weeks max, opportunities and revenue can be lost.  Agile and batch teams have no choice; they must work together.

This calls for data center adoption of DevOps, a combination of development, testing, and operations. Already IBM has instituted DevOps on the z System. The idea of bringing agile and batch together—it almost sounds like an oxymoron or the punchline from a bad computer systems joke—no longer is farfetched. Welcome to the world of hybrid computing where what was once considered disparate and incompatible systems are being brought together, often on the same platform.

The latest generations of the mainframes have been fully hybrid-capable platforms, starting with the z10. They are capable of running mixed workloads concurrently, some of which previously belonged in the distributed platform world only. Today, a mainframe shop with the latest z13 can run traditional z/OS COBOL workloads right alongside Java and Linux workloads. Those with a zBX extension cabinet can even run Windows workloads too under the same unified mainframe management console.

If that sounds a little too kludgy for you, just jump into the cloud. From Bluemix in the cloud you can get to DevOps and find just about everything you need already there, including IBM’s StrongLoop acquisition for API management and microservices.

So now the idea of bringing batch and agile computing together on the mainframe platform doesn’t seem so farfetched.  And it won’t stop there. IBM has been doing its enterprise thing with Apple for about a year. Expect more coming.

That said, an agile mainframe/distributed DevOps environment will become increasingly necessary. How often do you release software? Previously, if an IT organization released new software every year or even every 18 months, customers were satisfied. Not anymore. Today you can’t wait six months before the organization risks falling behind. LOB managers and customers won’t wait. There are too many competitors waiting for any chance to seize an advantage. Slow system refreshes and software updates just play into these competitors’ hands.

DevOps also is essential to the organization’s mobile strategy. Companies in every industry segment are deploying new mobile apps as fast as they can and then almost immediately updating them. For many of these mobile apps the mainframe is the back end, if not the middleware too. Each mobile request for information or to make a purchase or to schedule something triggers numerous back end processes that quickly make their way to the mainframe. It had gotten to the point where IBM had to discount mobile processing on the z or it would hinder mobile growth. DancingDinosaur covered it here.

Helping to drive mobile on the z, of course, is IBM’s relationship with Apple. Over the past year the two companies have been bringing out combined enterprise-mobile applications. Now Apple just announced that it is making its popular programming language, Swift, open source. It shouldn’t take much to get it onto Bluemix. Back in Sept. 2014  IBM announced it already had a preliminary version working through Bluemix.

Although Swift is known mainly for mobile client development, today it is described as combining the performance and efficiency of compiled languages with the simplicity and interactivity of popular scripting languages.  Apple’s Swift strategy seems coming right out of IBM’s recent playbook of embracing open source communities. You can get started at the Swift website, here.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

Syncsort’s 2015 State of the Mainframe: Little Has Changed

November 30, 2015

Syncsort’s annual survey of almost 200 mainframe shops found that 83 percent of respondents cited security and availability as key strengths of the mainframe. Are you surprised? You can view the detailed results here for yourself.


Courtesy: Syncsort

Security and availability have been hallmarks of the z for decades. Even Syncsort’s top mainframe executive, Harvey Tessler, could point to little unexpected in the latest results: “Nothing surprising. At least no big surprises. Expect the usual reliability, security,” he noted. BTW, in mid-November Clearlake Capital Group, L.P. (Clearlake) announced that it had completed the acquisition of Syncsort Incorporated. Apparently no immediate changes are being planned.

The 2015 study also confirmed a few more recent trends that DancingDinosaur has long suspected. More than two-thirds (67 percent) of respondents cited integration with other standalone computing platforms such as Linux, UNIX, or Windows as a key strength of the mainframe.

Similarly, the majority (79 percent) analyze real-time transactional data from the mainframe with a tool that resides directly on the mainframe. That, in fact, may be the most surprising response. Mainframe shops (or more likely the line-of-business managers they work with) are notorious for moving data off the mainframe for analytics, usually to distributed x86 platforms. The study showed respondents are also turning to platforms such as Splunk (11.8 percent), Hadoop (8.6 percent), and Spark (1.6 percent) to supplement their real-time data analysis.

Many of the respondents no doubt will continue to do so, but it makes little sense in 2015 with a modern z System running a current configuration. In truth, it makes little sense from either a performance or a cost standpoint to move data off the z to perform analytics elsewhere. The z runs Hadoop and Spark natively. With your data and key analytics apps already on the z, why incur both the high overhead and the high latency of moving data back and forth to run on what is probably a slower platform anyway?
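The overhead argument is easy to sanity-check with back-of-envelope arithmetic. The sketch below does the math in Python; the dataset size and link bandwidth are purely illustrative assumptions, not measurements from any z shop:

```python
# Back-of-envelope cost of moving mainframe data off-platform for analytics.
# The dataset size and link speed below are illustrative assumptions.

def transfer_seconds(dataset_gb: float, link_gbps: float) -> float:
    """Seconds to move dataset_gb gigabytes over a link_gbps gigabit/s link."""
    return (dataset_gb * 8) / link_gbps  # gigabytes -> gigabits, then divide by rate

daily_extract_gb = 500   # hypothetical daily transaction extract
link_gbps = 10           # hypothetical 10 Gb/s network link

one_way = transfer_seconds(daily_extract_gb, link_gbps)
print(f"{one_way / 60:.1f} minutes per one-way copy")  # double it for the round trip
```

Even at an optimistic 10 Gb/s, each copy adds minutes of wall-clock latency before any analysis starts, which is exactly the overhead argued against above.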

The only possible reason might be that the mainframe shop doesn’t run Linux on the mainframe at all. That can be easily remedied, however, especially now with the introduction of Ubuntu Linux for the z. C’mon, it’s late 2015; modernize your z for the cloud-mobile-analytics world and stop wasting time and resources jumping back and forth to distributed systems that will run natively on the z today.

More encouraging is the interest of the respondents in big data and analytics. “The survey demonstrates that many big companies are using the mainframe as the back-end transaction hub for their Big Data strategies, grappling with the same data, cost, and management challenges they used it to tackle before, but applying it to more complex use cases with more and dauntingly large and diverse amounts of data,” said Denny Yost, associate publisher and editor-in-chief for Enterprise Systems Media, which partnered with Syncsort on the survey. The results show the respondents’ interest in mainframe’s ability to be a hub for emerging big data analytics platforms also is growing.

On other issues, almost one-quarter of respondents ranked as very important the mainframe’s ability to run other computing platforms, such as Linux in an LPAR or on z/VM virtual machines. Over one-third ranked as very important its ability to integrate with standalone computing platforms such as Linux, UNIX, or Windows.

Maybe more surprising: only 70% of the respondents ranked as very important their organization’s use of the mainframe for large-scale transaction processing or for hosting mission-critical applications. Given that the respondents appeared to come from large, traditional mainframe shops, you might have expected those numbers to be closer to 85-90%. Go figure.

When asked to rank their organization’s use of the mainframe to supplement or replace non-mainframe servers (i.e. RISC or x86-based servers) just 10% of the respondents considered it important. Clearly the hybrid mainframe-based data center is not a priority with these respondents.

So, what are they looking to improve in the next 12 months? The respondents’ top three initiatives are:

  1. Meeting Security and Compliance Requirements
  2. Reducing CPU usage and related costs
  3. Meeting Service Level Agreements (SLAs)

These aren’t the most ambitious goals DancingDinosaur has ever encountered but they should be quite achievable in 2016.


IBM LinuxONE Can Uberize x86-Based IT

November 13, 2015

Uberization—industry disruption caused by an unlikely competitor—emerged as a dominant concern of C-suite executives in a recently announced IBM-Institute of Business Value study. According to the study, the percentage of C-suite leaders who expect to contend with competition from outside their industry increased from 43% in 2013 to 54% today.

IBM C-suite Study: competition data

These competitors, future Ubers, aren’t just new permutations of old industries; they also are digital invaders with totally different business models. Consider IBM LinuxONE, a powerful open source Linux z13 mainframe supported by two open communities, the Open Mainframe Project and the Linux Foundation. For the typical mass-market Linux shop, usually an x86-based data center, LinuxONE can deliver a standard Linux distribution, with both KVM and Ubuntu, under a new pricing model that offers a pay-per-use option in the form of a fixed monthly payment with costs scaling up or down based on usage. It also offers per-core pricing with software licenses for designated cores.

Talk about disruptive; it also brings the scalability, reliability, high performance, and rock-solid security of the latest mainframe. LinuxONE can handle 8,000 virtual servers in a single system and tens of thousands of containers. Try doing that with an x86 machine, or even a dozen of them.

Customers of traditional taxi companies or guests at conventional hotels have had to rethink their transportation or accommodation options in the face of Uberization and the arrival of other disruptive alternatives like Airbnb. So too, x86 platform shops will have to rethink their technology platform options. On either a per-workload basis or a total cost of ownership (TCO) basis, the mainframe has been cost competitive for years. Now with the Uberization of the Linux platform by LinuxONE and IBM’s latest pricing options for it, the time to rethink an x86 platform strategy clearly has arrived. Many long-held misconceptions about the mainframe will have to be dropped or, at least, updated.

The biggest risk to businesses used to come from a new rival with a better or cheaper offering, making it relatively simple to alter strategies. Today, entrenched players are being threatened by new entrants with completely different business models, as well as smaller, more agile players unencumbered by legacy infrastructure. Except for the part of being smaller, IBM’s LinuxONE definitely meets the criteria as a threatening disruptive entrant in the Linux platform space.

IBM even is bringing new business models to the effort, including hybrid cloud and a services-driven approach as well as its new pricing. How about renting a LinuxONE mainframe short term? You can with one of IBM’s new pricing options: just rent a LinuxONE machine monthly with no upfront payment. At the end of the 36-month rental (you can return the hardware after one year) you choose to return, buy, or replace it. Try that with enterprise-class x86 machines.

The introduction of support for both KVM and Ubuntu on the z platform opens even more possibilities. With the LinuxONE announcement, Ubuntu has been elevated to a fully z-supported Linux distribution. Together IBM and Canonical are bringing a distribution of Linux incorporating Ubuntu’s scale-out and cloud expertise to the IBM z Systems platform, further expanding the reach of both. Ubuntu combined with KVM should make LinuxONE very attractive for OpenStack-based hybrid cloud computing that may involve thousands of VMs and containers. And don’t forget an expanded set of open source and industry tools and software, including Apache Spark, Node.js, MongoDB, MariaDB, PostgreSQL, Chef, and Docker.

Deon Newman, VP of Marketing for IBM z Systems, can recite the LinuxONE scalability stats off the top of his head: the entry-level, single-frame LinuxONE server, named Rockhopper, starts at 80 virtual Linux machines and hundreds and hundreds of containers, while the high-end double-frame server, Emperor, features six IFLs that support up to 350 virtual machines and can scale all the way to 8,000 virtual machines. On the Emperor server, you can literally have hundreds of thousands of containers on a single platform. Newman deliberately emphasizes that LinuxONE machines are servers. x86 server users take note: LinuxONE definitely is not your father’s mainframe.

In the latest C-suite study all C-suite executives—regardless of role—identified for the first time technology as the most important external force impacting their enterprise. These executives believe cloud computing, mobile solutions, the Internet of Things, and cognitive computing are the technologies most likely to revolutionize or Uberize their business.



Syncsort Brings z System Integration Software to Open Source Tools

October 13, 2015

In a series of announcements last month, Syncsort integrated its DMX-h data integration software with Apache Kafka, an open distributed messaging system. This will enable mainframe shops to tap DMX-h’s easy-to-use GUI to subscribe, transform, enrich, and distribute enterprise-wide data for real-time Kafka messaging.

Spark graphic

Courtesy of IBM

Syncsort also delivered an open source contribution of an IBM z Systems mainframe connector that makes mainframe data available to the Apache Spark open-source analytics platform. Not stopping there, Syncsort is integrating the Intelligent Execution capabilities of its DMX data integration product suite with Apache Spark too. Intelligent Execution allows users to visually design data transformations once and then run them anywhere – across Hadoop, MapReduce, Spark, Linux, Windows, or Unix, on premise or in the cloud.

Said Tendü Yoğurtçu, General Manager of Syncsort’s big data business, in the latest announcement: “We are seeing increased demand for real-time analytics in industries such as healthcare, financial services, retail, and telecommunications.” With these announcements, Syncsort sees itself delivering the next-generation streaming ETL and Internet of Things data integration platform.

Of course, the Syncsort offering should be unnecessary for most z System users except those that are long-term Syncsort shops or are enamored of Syncsort’s GUI. IBM already offers Spark natively on z/OS and Linux on z, so there is no additional cost. BTW, Syncsort itself was just acquired; what happens with its various products remains to be seen.

Still, IBM has been on a 12-year journey to expand mainframe workloads, from Linux to Hadoop and Spark and beyond, and the company has been urging mainframe shops to become fully engaged in big data, open source, and more as fast as possible. The Syncsort announcements come at a propitious time: mainframe data centers can more easily participate in the hottest use cases, such as real-time data analytics and streaming analytics across diverse data sources, just when the need for such analytics is increasing.

Apache Spark and some of these other technologies should already be a bit familiar to z System data centers; Apache Kafka will be less familiar. DancingDinosaur noted Spark and others here, when LinuxONE was introduced.

To refresh, Apache Spark consists of a fast engine for large-scale data processing that provides over 80 high-level operators to make it easy to build parallel apps or use them interactively from the Scala, Python, and R shells. It also offers a stack of libraries including SQL and DataFrames, MLlib for machine learning, GraphX, and Spark Streaming. You can combine these libraries seamlessly in the same application.  As noted above Syncsort offers an open source version of the IBM z Systems mainframe connector that makes mainframe data available to the Apache Spark open-source analytics platform.
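Spark’s operator model chains lazy transformations and then triggers them with an action. Since a live PySpark session can’t be assumed here, the sketch below mimics that filter-map-reduce flow with plain Python built-ins over invented transaction records:

```python
from functools import reduce

# Hypothetical transaction records, standing in for a Spark RDD or DataFrame.
transactions = [
    {"account": "A1", "amount": 120.0, "channel": "mobile"},
    {"account": "A2", "amount": 35.5,  "channel": "branch"},
    {"account": "A1", "amount": 60.0,  "channel": "mobile"},
]

# Spark-style pipeline: filter -> map -> reduce (the "action" that forces work).
mobile_total = reduce(
    lambda acc, amt: acc + amt,
    map(lambda t: t["amount"],
        filter(lambda t: t["channel"] == "mobile", transactions)),
    0.0,
)
print(mobile_total)  # 180.0
```

In actual PySpark the same shape would be a `filter`/`select`/`agg` chain on a DataFrame, with nothing computed until the action runs.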

Spark already has emerged as one of the most active big data open source projects, initially as a fast memory-optimized processing engine for machine learning and now as the single compute platform for all types of workloads including real-time data processing, interactive queries, social graph analysis, and others. Given Spark’s success, there is a growing need to securely access data from a diverse set of sources, including mainframes, and to transform the data into a format that is easily understandable by Spark.
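Part of that transformation is character-set conversion: mainframe data is typically EBCDIC-encoded, while Spark works with Unicode text. Python’s standard `cp037` codec (EBCDIC US/Canada) illustrates the decode step; the sample record below is invented:

```python
# Decoding a hypothetical EBCDIC record from z/OS into text Spark can consume.
ebcdic_record = "ACCT001 PAYMENT".encode("cp037")  # cp037 = EBCDIC US/Canada

decoded = ebcdic_record.decode("cp037")
print(decoded)            # ACCT001 PAYMENT
print(ebcdic_record[:1])  # b'\xc1' -- EBCDIC 'A', not ASCII 0x41
```

Real connectors also have to handle packed-decimal fields and copybook layouts, but the character-set round trip is the simplest piece of the format gap.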

Apache Kafka, essentially an enterprise service bus, is less widely known. Kafka is a fast, scalable, durable, and fault-tolerant publish-subscribe messaging system, often used in place of traditional JMS- or AMQP-based message brokers because of its higher throughput, reliability, and replication. Syncsort has integrated its data integration software with Kafka’s distributed messaging system so users can leverage DMX-h’s GUI to subscribe, transform, enrich, and distribute enterprise-wide data for real-time Kafka messaging.
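Kafka’s publish-subscribe model decouples producers from consumers through named topics. The toy in-process bus below illustrates only the pattern; a real deployment would use a Kafka client library and a running broker, and the topic name is hypothetical:

```python
from collections import defaultdict

class MiniBus:
    """Toy publish-subscribe bus illustrating Kafka's topic model (not Kafka itself)."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        """Register a handler to be called for every message on a topic."""
        self.subscribers[topic].append(handler)

    def publish(self, topic, message):
        """Deliver a message to every handler subscribed to the topic."""
        for handler in self.subscribers[topic]:
            handler(message)

bus = MiniBus()
received = []
bus.subscribe("mainframe.transactions", received.append)  # hypothetical topic
bus.publish("mainframe.transactions", {"txn": 1, "amount": 99.0})
print(received)  # [{'txn': 1, 'amount': 99.0}]
```

Real Kafka adds what this sketch omits: durable, partitioned logs per topic, consumer offsets, and replication across brokers, which is what makes it fault tolerant.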

According to Matei Zaharia, creator of Apache Spark and co-founder & CTO of Databricks: “Organizations look to Spark to enable a variety of use cases, including streaming data analytics across diverse data sources”.  He continues: “Syncsort has recognized the importance of Spark in the big data ecosystem for real-time streaming applications and is focused on making it easy to bring diverse data sets into Spark.” IBM certainly recognizes this too, and the z System is the right platform for making all of this happen.


IBM z System After Moore’s Law

October 2, 2015

The last z System that conformed to the expectations of Moore’s Law was the zEC12. IBM could boast that it had the fastest commercial processor available. The subsequent z13 didn’t match it in processor speed. The z13 chip runs a 22 nm core at 5 GHz, half a GHz slower than the zEC12, which ran its 32 nm core at 5.5 GHz. Did you even notice?

third dimension chip

In 2007 an IBM scientist holds a 3-D integrated stacked chip

In 2015, the z13 delivers about a 10 percent performance bump per core thanks to the latest tweaks in the core design, such as better branch prediction and better pipelining. Even half a GHz slower, though, the z13 was the first system to process 2.5 billion transactions a day. Even more importantly for enterprise data centers, z13 transactions are persistent, protected, and auditable from end to end, adding assurance as mobile transactions grow to an estimated 40 trillion per day by 2025.

IBM clearly isn’t bemoaning the decline of Moore’s Law. In fact, it has been looking beyond silicon for the processing of the future.  This week it announced a major engineering breakthrough that could accelerate carbon nanotubes for the replacement of silicon transistors to power future computing. The breakthrough allows a new way to shrink transistor contacts without reducing the performance of carbon nanotube devices, essentially opening a path to dramatically faster, smaller, and more powerful computer chips beyond the capabilities of traditional semiconductors. Guess we can stop worrying about Moore’s Law.

Without Moore’s Law, IBM optimized just about everything on the z13 that could be optimized. It provides 320 separate channels dedicated to driving I/O throughput as well as such performance goodies as simultaneous multithreading (SMT), symmetric multiprocessing (SMP), and single instruction, multiple data (SIMD). Overall, about 600 processors (in addition to your configurable cores) speed and streamline processes throughout the machine. Moore’s Law, in effect, has been bypassed. As much as the industry enjoyed the annual doubling of capacity and the corresponding lower price/performance, it doesn’t need Moore’s Law to meet today’s insatiable demand for processing power.

The company will be doing similar things with the POWER processor. Today we have the POWER8. Coming is the POWER9 followed by the POWER10. The POWER9 reportedly will arrive in 2017 at 14nm, feature a new micro-architecture, and be optimized with CAPI and NVLINK. POWER10, reportedly, arrives around 2020 optimized for extreme analytics.

As IBM explains its latest breakthrough, carbon nanotubes represent a new class of semiconductor materials that consist of single atomic sheets of carbon rolled up into a tube. The carbon nanotubes form the core of a transistor device whose superior electrical properties promise several generations of technology scaling beyond the physical limits of silicon.

The new processor technology, IBM reports, overcomes a major hurdle that silicon and any other semiconductor transistor technologies face when scaling down. In the transistor, two things scale: the channel and its two contacts. As devices become smaller, the increased contact resistance of carbon nanotubes hindered performance gains. The latest development could overcome contact resistance all the way to the 1.8 nanometer node – four technology generations away.

Carbon nanotube chips could greatly improve the capabilities of high performance computers, enabling, for example, big data to be analyzed faster, increasing the power and battery life of mobile devices, and allowing cloud data centers to deliver services more efficiently and economically. Even cognitive computing and Internet of Things can benefit.

Until now, vendors have been able to shrink silicon transistors, but they are approaching a point of physical limitation, which is why Moore’s Law is running out of steam. Previously, IBM demonstrated that carbon nanotube transistors can operate as effective switches at channel dimensions of less than ten nanometers. IBM’s new contact approach overcomes the contact resistance of incorporating carbon nanotubes into semiconductor devices, which could result in smaller chips with greater performance and lower power consumption.

As transistors shrink in size, electrical resistance within the contacts increases, which limits performance. To overcome this resistance, IBM researchers gave up traditional contact schemes and created a metallurgical process akin to microscopic welding that chemically binds the metal atoms to the carbon atoms at the ends of nanotubes. This end-bonded contact scheme allows the contacts to be shrunken below 10 nanometers without impacting performance. This brings the industry a step closer to the goal of a carbon nanotube technology within the decade, says IBM.

Let’s hope this works as expected. If not, IBM has other possibilities already in its research labs.

