
IBM Jumps into the Next Gen Server Party with POWER9

February 15, 2018

IBM expanded its POWER9 lineup of servers this week, starting with 2-socket and 4-socket systems, with more variations coming in the months ahead as IBM, along with the rest of the IT vendor community, grapples with how to address changing data center needs. The first POWER9 machine, the AC922, arrived last fall; DancingDinosaur covered it here. More models, the S922/S914/S924 and H922/H924/L922, are promised later this quarter.

The workloads organizations are running these days are changing, often dramatically and quickly. One processor, no matter how capable, flexible, or efficient, is unlikely to do the job going forward. It will take an entire family of chips. That’s as true for Intel, AMD, and the other chip players as it is for IBM.

In some ways, IBM’s challenge is even quirkier. Its chips will need to support not only Linux but also IBMi and AIX. IBM simply cannot abandon its IBMi and AIX customer bases, so chips supporting IBMi and AIX are being built into the POWER9 family.

For IBMi the company is promising POWER9 exploitation for:

  • Expanded security for IBMi via TLS, secure APIs, and logs for SIEM solutions
  • Expanded install options, including an installation process using USB 3.0 media
  • Encryption and compression for cloud storage
  • Increased productivity for developers and administrators

This may sound trivial to those who focus on the Linux world and work with x86 systems, but it is not for organizations still running productive yet aging IBMi systems.

IBM also is promising POWER9 goodies for AIX, its legacy Unix OS, including:

  • AIX Security: PowerSC and PowerSC MFA updates for malware intrusion prevention and strong authentication
  • New workload acceleration with shared memory communications over RDMA (SMC-R)
  • Improved availability: AIX Live Update enhancements; GDR 1.2; PowerHA 7.2
  • Improved cloud management: IBM Cloud PowerVC Manager for SDI; Import/Export
  • AIX 7.2 native support for POWER9, e.g., enabling NVMe

Again, if you have been running Linux on z or LinuxONE this may sound antiquated, but AIX has not been considered state-of-the-art for years. NVMe alone gives it a big boost.

But despite all the nice things IBM is doing for IBMi and AIX, DancingDinosaur believes the company clearly is betting POWER9 will cut into Intel x86 sales. But that is not a given. Intel is rolling out its own family of advanced x86 Xeon machines under the Skylake code name. Different versions will be packaged and tuned to different workloads. They are rumored, at the fully configured high end, to be quite expensive. Just don’t expect POWER9 systems to be cheap either.

And the chip market is getting more crowded. As Timothy Prickett Morgan, analyst at The Next Platform, noted, various ARM chips, especially ThunderX2 from Cavium and Centriq 2400 from Qualcomm, can boost non-x86 numbers and divert sales from IBM’s POWER9 family. Also, AMD’s Epyc x86 processors have a good chance of stealing some market share from Intel’s Skylake. So POWER9 will have to fight for every sale IBM wants.

Morgan went on: IBM differentiated the hardware and the pricing with its NVLink versions, depending on the workload and the competition, with its most aggressive pricing and a leaner and cheaper microcode and hypervisor stack reserved for the Linux workloads the company is chasing. IBM very much wants to sell its Power-Linux combo against Intel’s Xeon-Linux and also keep AMD’s Epyc-Linux at bay. Where the POWER8 chip had the advantage over Intel’s Haswell and Broadwell Xeon E5 processors in memory capacity and memory bandwidth per socket, and could meet or beat the Xeons on some workloads, that advantage is not yet apparent with the POWER9.

With the POWER9, however, IBM will likely charge a little less for companies buying its Linux-only variants, observes Morgan, effectively enabling IBM to win Linux deals, particularly where data analytics and open source databases drive the customer’s use case. Similarly, some traditional simulation and modeling workloads in the HPC and machine learning areas are ripe for POWER9.

POWER9 is more than just a CPU. Packed into the chip are next-generation NVIDIA NVLink and OpenCAPI to provide significantly faster performance for attached GPUs. The PCI-Express 4.0 interconnect will be twice the speed of PCI-Express 3.0. The open POWER9 architecture also allows companies to mix a wide range of accelerators to meet various needs. Meanwhile, OpenCAPI can unlock coherent FPGAs to support varied accelerated storage, compute, and networking workloads. IBM also is counting on the 300+ members of the OpenPOWER Foundation and OpenCAPI Consortium to launch innovations for POWER9. Much is happening: stay tuned to DancingDinosaur.
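On the GPU side, a quick way to see whether attached accelerators are wired by NVLink or plain PCIe is the topology report from the NVIDIA driver tools. Here is a minimal sketch, assuming a Linux node with NVIDIA GPUs and nvidia-smi installed; the report format is the driver’s, not IBM’s:

    # Sketch: inspect the GPU interconnect topology on a POWER9/GPU node.
    # Assumes the NVIDIA driver is installed; "nvidia-smi topo -m" prints a
    # matrix showing whether GPU pairs are linked by NVLink (NVx) or PCIe.
    import subprocess

    def show_gpu_topology():
        result = subprocess.run(
            ["nvidia-smi", "topo", "-m"],  # topology matrix: NVLink vs. PCIe paths
            capture_output=True, text=True, check=True,
        )
        print(result.stdout)

    if __name__ == "__main__":
        show_gpu_topology()

On a machine with NVLink-attached GPUs the matrix entries show NV links between devices; on a plain x86 PCIe box the same cells read PIX/PHB or similar.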

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.

IBM Boosts DevOps with ADDI on Z

February 9, 2018

IBM’s Application Discovery and Delivery Intelligence (ADDI) is an analytical platform for application modernization. It uses cognitive technologies to analyze mainframe applications so you can quickly discover and understand the interdependencies and impacts of change. You can use this intelligence to transform and renew these applications faster than ever, capitalize on time-tested mainframe code to engage the API economy, and accelerate application transformation of your IBM Z hybrid cloud environment, and more.

Formerly, ADDI was known as EZSource. Back then, EZSource was designed to expedite digital transformations by unlocking core business logic and apps. Specifically, it enabled the IT team to pinpoint specific mainframe code in preparation for leveraging IT through a hybrid cloud strategy. In effect, it enabled the understanding of business-critical assets in preparation for deployment of a z-centered hybrid cloud. This also enabled enterprise DevOps, which was necessary to keep up with the pace of change overtaking existing business processes.

This wasn’t easy when EZSource initially arrived, and it still isn’t, although the intelligence built into ADDI makes it easier now. Originally it was intended to help the mainframe data center team to:

  • Identify API candidates to play in the API economy
  • Embrace microservices to deliver versatile apps fast
  • Identify code quality concerns, including dead code, to improve reliability and maintainability
  • Mitigate risk of change through understanding code, data, and schedule interdependencies
  • Aid in sizing the change effort
  • Automate documentation to improve understanding
  • Reduce the learning curve as new people come onboard
  • Add application understanding to DevOps lifecycle information to identify opportunities for work optimization

Today, IBM describes Application Discovery and Delivery Intelligence (ADDI), its follow-up to EZSource, as an analytical platform for application modernization. It uses cognitive technologies to analyze mainframe applications so your team can quickly discover and understand interdependencies and impacts of any change. In theory you should be able to use this intelligence to transform and renew these applications more efficiently and productively. In short, it should allow you to leverage time-tested mainframe code to engage with the API economy and accelerate the application transformation on your IBM Z and hybrid cloud environment.

More specifically, it promises to enable your team to analyze a broad range of IBM and non-IBM programming languages, databases, workload schedulers, and environments. Enterprise application portfolios were built over decades using an ever-evolving set of technologies, so you need a tool with broad support, such as ADDI, to truly understand the relationships between application components and accurately determine the impacts of potential changes.

In practice, it integrates with mainframe environments and tools via a z/OS agent to automatically synchronize application changes. Without keeping your application analysis synchronized with the latest changes your developers have made, according to IBM, your analysis can get out of date and you risk missing critical changes.

In addition, it provides visual analysis integrated with leading IDEs. Data center managers are petrified of changing applications that still work, fearing they will inadvertently break them or slow performance. When modifying complex applications, you need to be able to quickly navigate the dependencies between application components and drill down to see relevant details. Once you understand the code, you can modify it at much lower risk. The integration between ADDI and IBM Developer for z (IDz) combines the leading mainframe IDE with the application understanding and analytics capabilities you need to safely and efficiently modify the code.

It also, IBM continues, cognitively optimizes your test suites. When you have a large code base to maintain and many tests to run, you want to run the most valuable tests first. ADDI correlates code coverage data and code changes with test execution records to enable you to identify which regression tests are the most critical, allowing you to optimize time and resources while reducing risk. It exposes poorly tested or complex code and empowers the test teams with cognitive insights that turn awareness of trends into mitigation of future risks.
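ADDI’s own interfaces aren’t shown in this post, but the underlying idea, test-impact analysis, is easy to illustrate. A minimal sketch in Python, assuming hypothetical coverage data that maps each test to the code artifacts it exercises; none of these names come from ADDI:

    # Sketch of coverage-based test selection (the general technique ADDI
    # applies, not ADDI's actual API). All coverage data here is hypothetical.
    changed_modules = {"PAYCALC", "TAXTABLE"}  # artifacts touched by the change

    # test name -> set of modules the test is known to exercise
    coverage = {
        "TEST_PAYROLL_RUN": {"PAYCALC", "PRINTRPT"},
        "TEST_TAX_UPDATE":  {"TAXTABLE"},
        "TEST_HR_ONBOARD":  {"HRMAST"},
    }

    # Select only the regression tests whose coverage intersects the change set.
    impacted = [t for t, mods in coverage.items() if mods & changed_modules]
    print("Run these tests first:", impacted)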

Finally, ADDI intelligently identifies performance degradations before they hit production. It correlates runtime performance data with application discovery data and test data to quickly pinpoint performance degradation and narrow down the code artifacts to those that are relevant to the cause of bad performance. This enables early detection of performance issues and speeds resolution.

What’s the biggest benefit of ADDI on the Z? It enables your data center to play a central role in digital transformation, a phrase spoken by every C-level executive today as a holy mantra. But more importantly, it will keep your mainframe relevant.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

IBM Halts Losing Quarterly Slide

January 25, 2018

With all due respect to Casey at the Bat author Ernest Thayer, joy may have returned to Mudville. IBM finally broke its losing streak of 22 consecutive quarters and posted positive results in 4Q17: fourth-quarter revenue of $22.5 billion, up 4 percent, and that was just the start.

Watson and Weather Co. track flu

IBM is counting on its strategic imperatives to come through big, and they did in 2017. Full-year strategic imperatives revenue was $36.5 billion, up 11 percent, representing 46 percent of IBM revenue. Similarly, IBM is making some gains in the highly competitive cloud business, where IBM is fighting to position itself among the top ranks of formidable cloud players: Google, Amazon, and Microsoft. IBM did quite respectably in the cloud, posting $17 billion in cloud revenue, up 24 percent year over year.

DancingDinosaur readers will be interested to know that some of IBM’s various business segments, which have been a steady drain on IBM revenue, turned things around in the fourth quarter. For example, Systems (systems hardware and operating systems software) saw revenues of $3.3 billion, up 32 percent, driven by growth in IBM Z, Power Systems, and storage. That’s important to readers charged with planning their organization’s future with the Z or Power machines. They now can be confident that IBM won’t sell off the business tomorrow as it did with its x86 systems.

So where might IBM go in the future? “Our strategic imperatives revenue again grew at a double-digit rate and now represents 46 percent of our total revenue, and we are pleased with our overall revenue growth in the quarter,” said Ginni Rometty, IBM chairman, president, and CEO. She then continued: “During 2017, we established IBM as the blockchain leader for business. Looking ahead, we are uniquely positioned to help clients use data and AI to build smarter businesses.”

Added James Kavanaugh, IBM CFO: “Over the past several years we have invested aggressively in technology and our people to reposition IBM.  2018 will be all about reinforcing IBM’s leadership position,” he continued, “in key high-value segments of the IT industry, including cloud, AI, security and blockchain.”

IBM has done well in some business and technology segments. Specifically, the company reported gains in revenues from analytics, up 9 percent, mobile, up 23 percent, and security, up a whopping 132 percent.

Other segments have not done as well. Technology Services & Cloud Platforms (includes infrastructure services, technical support services, and integration software) continue to lose money. A number of investment analysts are happy with IBM’s financials but are not optimistic about what they portend for IBM’s future.

For instance, Bert Hochfeld, long/short equity, growth, event-driven, research analyst, writes in Seeking Alpha, “the real reason why strategic imperatives and cloud showed relatively robust growth last quarter has nothing to do with IBM’s pivots and everything to do with the success of IBM’s mainframe cycle. IBM’s Z system achieved 71% growth last quarter compared to 62% in the prior quarter. New Z Systems are being delivered with pervasive encryption, they are being used to support hybrid cloud architectures, and they are being used to support Blockchain solutions… Right now, the mainframe performance is above the prior cycle (z13) and consistent with the z12 cycle a few years ago. And IBM has enjoyed some reasonable success with its all-flash arrays in the storage business. Further, the company’s superscalar offering, Power9, is having success and, as many of its workloads are used for AI, its revenues get counted as part of strategic initiatives. But should investors count on a mainframe cycle and a high-performance computer cycle in making a long-term investment decision regarding IBM shares?”

He continued: “IBM management has suggested that some of the innovations in the current product range, including blockchain, cryptography, security and reliability, will make this cycle different, and perhaps longer, than other cycles. The length of the mainframe cycle is a crucial component in management’s earnings estimate. It needs to continue at elevated levels at least for another couple of quarters. While that is probably more likely, is it really prudent to base an investment judgment on the length of a mainframe cycle?”

Of course, many DancingDinosaur readers are basing their career and employment decisions on the mainframe or Power Systems. Let’s hope this quarter’s success encourages them; it sure beats 22 consecutive quarters of revenue declines.

Do you remember how Thayer’s poem ends? With the hopes and dreams of Mudville riding on him, it is the bottom of the 9th; Casey takes a mighty swing and… strikes out! Let’s hope this isn’t IBM.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

Syncsort Survey Unveils 5 Ways Z Users Are Saving Money

January 9, 2018

Syncsort Inc. recently completed its year-end 2017 State-of-the-Mainframe annual survey of IT professionals. In the past year, the organizations surveyed increased their spending on mainframe capacity, new mainframe applications, and mainframe data analytics. The IBM z/OS mainframe remains an important focus in organizations, with the majority of respondents reporting that the mainframe serves as the hub for business-critical applications by providing high-volume transaction and database processing.

More interestingly, Syncsort notes, a high number of respondents indicated they’ll use the mainframe to run revenue-generating services over the next 12 months, another clear indication that the mainframe remains integral to the business.

However, the survey also reflects concerns over the high cost of the mainframe. In effect, mainframe optimization, cost reduction, and spending remain at the forefront, with many organizations looking to leverage zIIP engines to offload general processor cycles, which maximizes resources, delays or avoids hardware upgrades, and lowers monthly software charges.

At the same time some organizations are looking at mainframe optimization to fund strategic projects, such as enhanced mainframe data analytics to support better business decisions for meeting SLAs as well as security and compliance initiatives. All of this may relieve pressure to jump to a lower cost platform (x86) in the hope of reducing spending.

But apparently that is not enough in a number of cases. Despite the focus on optimization, the survey notes, nearly 20% of respondents plan to move off the mainframe completely in 2018. DancingDinosaur, however, has spent decades covering mainframe-is-dead efforts, and moving off the mainframe invariably takes longer and costs more, often much more, than expected, and sometimes is never fully achieved. Replicating a no-fail, scalable, and secure business platform has proven extremely difficult.

However costly the mainframe is, you can get it up and running dependably for less than you will end up paying to cobble together bare-metal x86 boxes. But if you try, please let me know, and I will check back with you next year to publicize your success. One exception might be if you opt for a 100% cloud solution; again, let me know if it works and how much you save; I’ll make you a hero.

In the meantime, here are five ways respondents expect to save money by streamlining operations through mainframe-based optimization:

  1. This year organizations aim to redirect budget dollars to strategic projects such as mainframe data analytics. Optimization will primarily focus on general processor usage by leveraging zIIP engines and using MSU optimization tools. Some organizations will take it a step further and target candidate workloads to move off the mainframe (possibly to a hybrid cloud) to ensure sufficient capacity remains for business-critical applications.
  2. Big data analytics for operational intelligence, security, and compliance will continue to grow and emerge as a critical effort in ensuring that IT services are delivered effectively to meet SLAs. Mainframe data sources will be critical in helping to address these challenges.
  3. Integration of mainframe data with modern analytics tools will become pervasive and critically important as organizations look to exploit this abundance of information for enhanced visibility. Integrating mainframe machine data will not only provide enhanced visualization but will enable correlation with data sources from other platforms. Additionally, new analytics technologies, like Splunk, will make mainframe application data more readily available to business analysts who typically aren’t mainframe experts, while addressing the diminishing pool of mainframe talent by putting rich, easy tools into the hands of newer staff.
  4. SMF and z/OS log data will play an increased role in addressing security exposures, fulfilling audit requirements, and meeting compliance mandates, a key initiative for IT executives and IT organizations. Here, think pervasive encryption on Z. Overall, organizations are looking to leverage analytics platforms for security and compliance. Along with SMF and other z/OS log data, they will look to Splunk, Elastic, and Hadoop; see the sketch after this list.
  5. Data movement across the variety of platforms in distributed enterprises presents important challenges; it must be secured, monitored, and performed efficiently. With over half of mainframe organizations still lacking full visibility, this must become a priority.
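Getting mainframe log data into one of those analytics platforms usually means pushing records over HTTP. Below is a minimal sketch of that pattern against Splunk’s HTTP Event Collector; the endpoint host, token, and the SMF-derived fields are hypothetical placeholders, and extracting the records from SMF in the first place is assumed to have happened upstream:

    # Sketch: forward an SMF-derived record to Splunk's HTTP Event Collector
    # (HEC). The URL, token, sourcetype, and record fields are hypothetical.
    import json
    import requests

    HEC_URL = "https://splunk.example.com:8088/services/collector/event"
    HEC_TOKEN = "00000000-0000-0000-0000-000000000000"  # placeholder token

    record = {
        "smf_type": 30,          # SMF type 30: address space / job activity
        "jobname": "PAYROLL1",
        "cpu_seconds": 12.4,
    }

    resp = requests.post(
        HEC_URL,
        headers={"Authorization": f"Splunk {HEC_TOKEN}"},
        data=json.dumps({"sourcetype": "mainframe:smf", "event": record}),
        timeout=10,
    )
    resp.raise_for_status()

The same pattern works for Elastic by swapping the HEC endpoint for an index API call; the hard part remains the upstream SMF extraction, not the HTTP delivery.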

Over the years, DancingDinosaur writes up every opportunity to lower mainframe costs or optimize operations. Find some of these here, here, and here.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

IBM Q Network Promises to Commercialize Quantum

December 14, 2017

The dash to quantum computing is well underway and IBM is preparing to be one of the leaders. When IBM gets there it will find plenty of company. HPE, Dell/EMC, Microsoft and more are staking out quantum claims. In response IBM is speeding the build-out of its quantum ecosystem, the IBM Q Network, which it announced today.

IBM’s 50 qubit system prototype

IBM already introduced its third generation of quantum computers in November, a prototype 50-qubit system. IBM promises online access to the IBM Q systems by the end of 2017, with a series of planned upgrades during 2018. IBM is focused on making advanced, scalable universal quantum computing systems available to clients to explore practical applications.

Further speeding the process, IBM is building a quantum computing ecosystem of big companies and research institutions. The result, dubbed the IBM Q Network, will consist of a worldwide network of individuals and organizations, including scientists, engineers, business leaders, forward-thinking companies, academic institutions, and national research labs, enabled by IBM Q. Its mission: advancing quantum computing and launching the first commercial applications.

Two particular goals stand out. The first: engage industry leaders to combine quantum computing expertise with industry-oriented, problem-specific expertise to accelerate development of early commercial uses. The second: expand and train the ecosystem of users, developers, and application specialists that will be essential to the adoption and scaling of quantum computing.

The key to getting this rolling is the groundwork IBM laid with the IBM Q Experience, which IBM initially introduced in May 2016 as a 5-qubit system. A free 16-qubit upgrade to the Q Experience followed in May 2017. The IBM effort to make a commercial universal quantum computer available for business and science applications has advanced with each successive rev, culminating today in a prototype 50-qubit system delivered via the IBM Cloud platform.

IBM opened public access to its quantum processors over a year ago to serve as an enablement tool for scientific research, a resource for university classrooms, and a catalyst for enthusiasm. Since then, participants have run more than 1.7M quantum experiments on the IBM Cloud.

To date, IBM has been pretty easygoing about access to its quantum computers, but now that it has a 20-qubit system in place and a 50-qubit system coming, the company has become a little more restrictive about who can use them. Participation in the IBM Q Network is the only way to access these advanced systems. It involves a commitment of money, intellectual property, and an agreement to share and cooperate, although IBM implied at an early briefing that it could be flexible about what is shared and what remains an organization’s proprietary IP.

Another reason to participate in the Quantum Experience is QISKit, an open source quantum computing SDK anyone can access. Most DancingDinosaur readers, if they want to participate in IBM’s Q Network, will do so as either partners or members. Another option, a Hub, is really targeted at bigger, more ambitious early adopters. Hubs, as IBM puts it, provide access to IBM Q systems, technical support, educational and training resources, community workshops and events, and opportunities for joint work.
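To give a feel for QISKit, here is a minimal sketch that builds the classic two-qubit Bell state and inspects its outcome probabilities locally. It uses the present-day open source Qiskit API, which has evolved since the SDK first shipped, so treat the exact calls as illustrative:

    # Sketch: build a Bell state with Qiskit and inspect outcome probabilities.
    # Uses the current open source Qiskit API, which has changed since 2017.
    from qiskit import QuantumCircuit
    from qiskit.quantum_info import Statevector

    qc = QuantumCircuit(2)
    qc.h(0)      # put qubit 0 into superposition
    qc.cx(0, 1)  # entangle qubit 1 with qubit 0

    # Simulate the ideal statevector locally; expect ~50/50 between |00> and |11>.
    probs = Statevector.from_instruction(qc).probabilities_dict()
    print(probs)  # e.g. {'00': 0.5, '11': 0.5}

Running the same circuit on real IBM Q hardware swaps the local statevector call for a submission to a cloud backend.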

The Q Network has already attracted some significant interest for organizations at every level and across a variety of industry segments. These include automotive, financial, electronics, chemical, and materials players from across the globe. Initial participants include JPMorgan Chase, Daimler AG, Samsung, JSR Corporation, Barclays, Hitachi Metals, Honda, Nagase, Keio University, Oak Ridge National Lab, Oxford University, and University of Melbourne.

As noted at the top, other major players are staking out their quantum claims, but none seem as far along or as comprehensive as IBM:

  • Dell/EMC is aiming to solve complex, life-impacting analytic problems like autonomous vehicles, smart cities, and precision medicine.
  • HPE appears to be focusing its initial quantum efforts on encryption.
  • Microsoft, not surprisingly, expects to release a new programming language and computing simulator designed for quantum computing.

As you would expect, IBM also is rolling out IBM Q Consulting to help organizations envision new business value through the application of quantum computing technology and provide customized roadmaps to help enterprises become quantum-ready.

Will quantum computing actually happen? Your guess is as good as anyone’s. I first heard about quantum physics in high school 40-odd years ago. It was baffling but intriguing then. Today it appears more real but still nothing is assured. If you’re willing to burn some time and resources to try it, go right ahead. Please tell DancingDinosaur what you find.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

IBM’s POWER9 Races to AI

December 7, 2017

IBM is betting the future of its Power Systems on artificial intelligence (AI). The company introduced its newly designed POWER9 processor publicly this past Tuesday. The new machine, according to IBM, is capable of shortening the training of deep learning frameworks by nearly 4x, allowing enterprises to build more accurate AI applications, faster.

IBM engineer tests the POWER9

Designed for the post-CPU era, the core POWER9 building block is the IBM Power Systems AC922. The AC922, notes IBM, is the first to embed PCI-Express 4.0, next-generation NVIDIA NVLink, and OpenCAPI—3 interface accelerators—which together can accelerate data movement 9.5x faster than PCIe 3.0 based x86 systems. The AC922 is designed to drive demonstrable performance improvements across popular AI frameworks such as Chainer, TensorFlow and Caffe, as well as accelerated databases such as Kinetica.
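Frameworks such as TensorFlow discover the AC922’s GPUs through their normal device-enumeration path; nothing POWER9-specific is needed in user code. A minimal sketch using the current TensorFlow 2.x API (the TF 1.x calls of the AC922 era differed) to confirm the accelerators are visible and pin a computation to one:

    # Sketch: confirm GPUs are visible to TensorFlow and run a matmul on one.
    # Uses the TensorFlow 2.x API; the 2017-era TF 1.x calls were different.
    import tensorflow as tf

    gpus = tf.config.list_physical_devices("GPU")
    print(f"GPUs visible: {len(gpus)}")

    if gpus:
        with tf.device("/GPU:0"):  # pin the op to the first GPU
            a = tf.random.normal((1024, 1024))
            b = tf.random.normal((1024, 1024))
            c = tf.matmul(a, b)
        print("matmul ran on:", c.device)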

More than a CPU under the AC922 cover

Depending on your sense of market timing, POWER9 may be coming at the best or worst time for IBM.  Notes industry observer Timothy Prickett Morgan, The Next Platform: “The server market is booming as 2017 comes to a close, and IBM is looking to try to catch the tailwind and lift its Power Systems business.”

As Morgan puts it, citing IDC 3Q17 server revenue figures, HPE and Dell are jockeying for the lead in the server space. For the moment, HPE (including its H3C partnership in China) has the revenue lead with $3.32 billion, compared to Dell’s $3.07 billion, while Dell was the shipment leader, with 503,000 machines sold in Q3 2017 versus HPE’s 501,400. IBM does not rank among the top five shippers, but thanks in part to the Z and big Power8 boxes, IBM still holds the number three server revenue spot, with $1.09 billion in sales for the third quarter, according to IDC. The Z accounted for $673 million of that, up 63.8 percent year-on-year due mainly to the new Z. If you do the math, Morgan continued, the Power Systems line accounted for $420.7 million in the period, down 7.2 percent from Q3 2016. This is not surprising given that customers held back knowing Power9 systems were coming.

To get Power Systems back to where it used to be, Morgan continued, IBM must increase revenues by a factor of three or so. The good news is that, thanks to the popularity of hybrid CPU-GPU systems, which cost around $65,000 per node from IBM, this isn’t impossible. Therefore, it should take fewer machines to rack up the revenue, even if it comes from a relatively modest number of footprints and not a huge number of Power9 processors. More than 90 percent of the compute in these systems is comprised of GPU accelerators, but due to bookkeeping magic, it all accrues to Power Systems when these machines are sold. Plus IBM reportedly will be installing over 10,000 such nodes for the US Department of Energy’s Summit and Sierra supercomputers in the coming two quarters, which should provide a nice bump. And once IBM gets the commercial Power9 systems into the field, sales should pick up again, Morgan expects.

IBM clearly is hoping POWER9 will cut into Intel x86 sales. But that may not happen as anticipated. Intel is bringing out its own advanced x86 Xeon machine, Skylake, rumored to be quite expensive. Don’t expect POWER9 systems to be cheap either. And the field is getting more crowded. Morgan noted that various ARM chips, especially ThunderX2 from Cavium and Centriq 2400 from Qualcomm, can boost non-x86 numbers and divert sales from IBM’s Power9 systems. Also, AMD’s Epyc x86 processors have a good chance of stealing some market share from Intel’s Skylake. So the Power9 will have to fight for every sale IBM wants and take nothing for granted.

No doubt POWER9 presents a good case and has a strong backer in Google, but even that might not be enough. Still, POWER9 sits at the heart of what are expected to be the most powerful data-intensive supercomputers in the world, Summit and Sierra, which should knock off the world’s current fastest supercomputers from China.

Said Bart Sano, VP of Google Platforms: “Google is excited about IBM’s progress in the development of the latest POWER technology,” adding, “the POWER9 OpenCAPI bus and large memory capabilities allow further opportunities for innovation in Google data centers.”

This really is about deep learning, one of the hottest buzzwords today. Deep learning has emerged as a fast-growing machine learning method that extracts information by crunching through millions of processes and data points to detect and rank the most important aspects of the data. IBM designed the POWER9 chip to manage free-flowing data, streaming sensors, and algorithms for data-intensive AI and deep learning workloads on Linux. Are your people ready to take advantage of POWER9?

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

BMC’s 12th Annual Mainframe Survey Shows Z Staying Power

November 17, 2017

ARM processors are invading HPC and supercomputer segments. The Power9 is getting closer and closer to general commercial availability. IBM unveiled not one but two new quantum computers. Meanwhile, the Z continues to roll right along without skipping a beat, according to BMC’s 12th mainframe survey.

There is no doubt that the computing landscape is changing dramatically and will continue to change. Yet mainframe shops appear to be taking it all in stride. As Mark Wilson reported on the recently completed SHARE Europe conference in the UK, citing the keynote delivered by Compuware’s CEO Chris O’Malley: “By design, the post-modern mainframe is the most future ready platform in the world: the most reliable, securable, scalable, and cost efficient. Unsurprisingly, the mainframe remains the dominant, growing, and vital backbone for the worldwide economy. However, outdated processes and tools ensnared in an apathetic culture doggedly resistant to change, prevent far too many enterprises from unleashing its unique technical virtues and business value.”  If you doubt we are entering the post-modern mainframe era just look at the LinuxONE Emperor II or the z14.

Earlier this month BMC released its 12th annual mainframe survey. Titled 5 Myths Busted, the report can be found here. The five myths appear right below:

  • Myth 1: Organizations have fully optimized mainframe availability
  • Myth 2: The mainframe is in maintenance mode; no one is modernizing
  • Myth 3: Executives are planning to replace their mainframes
  • Myth 4: Younger IT professionals are pessimistic about mainframe careers
  • Myth 5: People working on the mainframe today are all older

Everyone from prestigious executives like O’Malley to a small army of IBMers to lowly bloggers and analysts like DancingDinosaur has been pounding away at these myths for years. And this isn’t the first survey to thoroughly discredit mainframe skeptics.

The mainframe is growing: 48% of respondents saw MIPS growth in the last 12 months, over 50% of respondents forecast MIPS growth in the next 12 months, and 71% of large shops (10,000 MIPS or more) experienced MIPS growth in the last year. Better yet, these same shops forecast more growth in the next 12 months.

OK, the top four priorities of respondents remained the same this year. The idea, however, that mainframe shops are fully optimized and just cruising is dead wrong. Survey respondents still have a to-do list of priorities:

  1. Cost reduction/optimization
  2. Data privacy/compliance
  3. Availability
  4. Application modernization

Maybe my favorite myth is that younger people have given up on the mainframe. BMC found that 53% of respondents are under age 50, and this group (age 30-49, with under 10 years of experience) overwhelmingly reports a very positive view of the mainframe’s future. The majority went so far as to say they see the workload of their mainframe growing and also view the mainframe as having a strong position of growth in the industry overall. This is reinforced by the growth of IBM’s Master the Mainframe competition, which has attracted young people in droves, over 85,000 to date, to work with the so-called obsolete mainframe.

And the mainframe, both the Z and the LinuxONE, is packed with technology that will continue to attract young people: Linux, Docker, Kubernetes, Java, Spark, and support for a wide range of both relational databases like DB2 and NoSQL databases like MongoDB. They use this technology to do mobile, IoT, blockchain, and more. Granted most mainframe shops are not ready yet to run these kinds of workloads. IBM, however, even introduced new container pricing for the new Z to encourage such workloads.

John McKenny, BMC’s VP of Strategy, has noticed growing interest in new workloads. “Yes, they continue to be mainly transactional applications but they are aimed to support new digital workloads too, such as doing business with mobile devices,” he noted.  Mobility and analytics, he added, are used increasingly to improve operations, and just about every mainframe shop has some form of cloud computing, often multiple clouds.

The adoption of Linux on the mainframe a decade ago immediately put an end to the threat posed by x86. Since then, IBM has become a poster child for open source and a slew of new technologies, from Java to Hadoop to Spark to whatever comes next. Although traditional mainframe data centers have been slow to adopt these new technologies, some are starting, and that, along with innovative machines like the z14 and LinuxONE Emperor II, is what ultimately will keep the mainframe young and competitive.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

IBM 3Q17 Results Break Consecutive Quarters Losing Streak

November 2, 2017

DancingDinosaur generally does not follow the daily gyrations of IBM’s stock, assuming that readers like you are not really active investors in the company’s stock. That is not to say, however, that you don’t have an important, even critical, interest in the company’s fortunes. As users of Z or Power systems, you want to know that IBM has the means to continue to invest in and advance your preferred platform. And a 20+ consecutive-quarter losing streak doesn’t exactly inspire confidence.

What is interesting about IBM’s latest 3Q17 financials, which ended the string of consecutive revenue losses, is the performance of the Z and storage, two things most of us are concerned with.

Blockchain simplifies near real-time clearing and settlement

Here is what Martin Schroeter, IBM Senior Vice President and Chief Financial Officer said to the investment analysts he briefs: In Systems, we had strong growth driven by the third consecutive quarter of growth in storage, and a solid launch of our new z14 mainframe, now just called Z, which was available for the last two weeks of the quarter.

DancingDinosaur has followed the mainframe for several decades at least, and the introduction of a new mainframe always boosts revenue for the next quarter or two. The advantages were apparent on Day 1 when the machine was introduced. As DancingDinosaur wrote: You get this encryption automatically, virtually for free. IBM insists it will deliver the z14 at the same price/performance of the z13 or less. The encryption is built into the cost of silicon out of the box.

A few months later IBM introduced a new LinuxOne mainframe, the Emperor II. The new LinuxOne doesn’t yet offer pervasive encryption but provides Secure Service Containers. As it was described here at that time: Through the Secure Service Container data can be protected against internal threats at the system level even from users with elevated credentials or hackers who obtain a user’s credentials, as well as external threats.

Software developers will benefit by not having to create proprietary dependencies in their code to take advantage of these security capabilities. An application only needs to be put into a Docker container for Secure Service Container deployment. The application can be managed using the Docker and Kubernetes tools that are included to make Secure Service Container environments easy to deploy and use. Again, it will likely take a few quarters for LinuxONE shops and other Linux shops to seek out the Emperor II and Secure Service Containers.

Similarly, in recent weeks IBM has been bolstering its storage offerings. As Schroeter noted, storage, including Spectrum storage and flash, has had a few positive quarters, and new products should help continue that momentum. For example, IBM Spectrum Protect Plus promises to make data protection available in as little as one hour.

Or take the IBM FlashSystem 900, introduced at the end of October, which promises efficient, ultra-dense flash with CAPEX and OPEX savings from 3x more capacity in a 2U enclosure. It also maximizes efficiency using inline data compression with no application performance impact while achieving consistent 95-microsecond response times.

But probably the best 3Q news came from the continuing traction IBM’s strategic imperatives are gaining. Here these imperatives—cloud, security, cognitive computing—continue to make a serious contribution to IBM revenue. Third-quarter cloud revenues increased 20 percent to $4.1 billion.  Cloud revenue over the last 12 months was $15.8 billion, including $8.8 billion delivered as-a-service and $7.0 billion for hardware, software and services to enable IBM clients to implement comprehensive cloud solutions.  The annual exit run rate for as-a-service revenue increased to $9.4 billion from $7.5 billion in the third quarter of 2016.  In the quarter, revenues from analytics increased 5 percent.  Revenues from mobile increased 7 percent and revenues from security increased 51 percent. Added Schroeter: Revenue from our strategic imperatives over the last 12 months was also up 10% to $34.9 billion, and now represents 45% of IBM.

OK, so IBM is no longer a $100+ billion company and hasn’t been for some time. Maybe in a few years, if blockchain and the strategic imperatives continue to grow and quantum catches fire, it may be back over the $100 billion mark, though it’s not clear how much that matters.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

Compuware Brings the Mainframe to AWS

October 6, 2017

IBM talks about the power of the cloud for the mainframe and has turned Bluemix into a cloud development and deployment platform for open systems. Where’s the Z?

Compuware, which for the past several years has been making quarterly advances in its mainframe tooling, has now made those tools available through AWS. Not only have those advances made mainframe management and operations more intuitive and graphical through a string of Topaz releases, but with AWS the tools are now accessible from anywhere. DancingDinosaur has been reporting on Compuware’s string of Topaz advances for two years, here, here, and here.

By tapping the power of both the cloud and the mainframe, enterprises can deploy Topaz to their global development workforce in minutes, accelerating the modernization of their mainframe environments. As Compuware noted, mainframe shops now have the choice of deploying Topaz on premise or on AWS. By leveraging the cloud, they can deploy Topaz more quickly and securely, and they can scale without capital costs while benefiting from new Topaz features as soon as the company delivers them.

To make Topaz work on AWS, Compuware turned to Amazon AppStream 2.0, which provides global development, test, and ops teams with immediate and secure cloud access to Compuware’s entire innovative mainframe Agile/DevOps solution stack, mainly Topaz. Amazon AppStream 2.0 is a fully managed, secure application streaming service that allows users to stream desktop applications from AWS to any device running a web browser.
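For a sense of the AWS-side plumbing, here is a minimal sketch of how an AppStream 2.0 fleet and stack get wired together with boto3 and how a streaming URL is minted. The names, image, and instance type are hypothetical, and Compuware’s actual packaged configuration is not shown:

    # Sketch: provision an AppStream 2.0 fleet/stack pair with boto3 and mint
    # a streaming URL. All names, the image, and the instance type below are
    # hypothetical; an image with the desktop app installed must already exist.
    import boto3

    appstream = boto3.client("appstream", region_name="us-east-1")

    appstream.create_fleet(
        Name="topaz-fleet",                       # hypothetical fleet name
        ImageName="example-topaz-image",          # prebuilt image with the app
        InstanceType="stream.standard.medium",
        ComputeCapacity={"DesiredInstances": 1},
    )
    appstream.start_fleet(Name="topaz-fleet")     # fleet must run before streaming

    appstream.create_stack(Name="topaz-stack")
    appstream.associate_fleet(FleetName="topaz-fleet", StackName="topaz-stack")

    # Time-limited URL a developer can open in any web browser.
    url = appstream.create_streaming_url(
        StackName="topaz-stack", FleetName="topaz-fleet", UserId="dev-001",
    )["StreamingURL"]
    print(url)

In Compuware’s offering this provisioning is presumably handled for you; the point is simply that the streamed desktop app reaches the browser with no client install.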

Cloud-based deployment of Topaz, Compuware notes, allows for significantly faster implementation, simple administration, a virtual integrated development environment (IDE), adaptive capacity, and immediate developer access to software updates. The last of these is important, since Compuware has been maintaining a quarterly upgrade release schedule, in effect delivering new capabilities every 90 days.

Compuware is in the process of patenting technology to offer an intuitive, streamlined configuration menu that leverages AWS best practices to make it easy for mainframe admins to quickly configure secure connectivity between Topaz on AWS and their mainframe environment. It also enables the same connectivity to their existing cross-platform enterprise DevOps toolchains running on-premise, in the cloud, or both. The upshot: organizations can deploy Topaz across their global development workforce in minutes, accelerating the modernization of their mainframe environments.

Using Topaz on AWS, notes Compuware, mainframe shops can benefit in a variety of ways, specifically:

  • Modify, test and debug COBOL, PL/I, Assembler and other mainframe code via an Eclipse-based virtual IDE
  • Visualize complex and/or undocumented application logic and data relationships
  • Manage source code and promote artifacts through the DevOps lifecycle
  • Perform common tasks such as job submission, review, print and purge
  • Leverage a single data editor to discover, visualize, edit, compare, and protect mainframe files and data

The move to the Eclipse-based IDE represents a giant step for traditional mainframe shops trying to modernize. Eclipse is a leading open source IDE with IBM as a founding member. In addition to Eclipse, Compuware also integrates with other modern tools, including Jenkins, SonarSource, and Atlassian. Jenkins is an open source automation server written in Java that helps automate the non-human parts of the software development process with continuous integration while facilitating the technical aspects of continuous delivery. SonarSource enables visibility into mainframe application quality. Atlassian develops products for software developers, project managers, and content management and is best known for Jira, its issue tracking application.

Unlike many mainframe ISVs, Compuware has been actively partnering with various innovative vendors to extend the mainframe’s tool footprint and bring the kind of tools to the mainframe that young developers, especially Millennials, want. Yes, it is possible to access the sexy REST-based Web and mobile tools through IBM’s Bluemix, but for mainframe shops it appears kludgy. By giving its mainframe customers access through AWS to advanced tools, Compuware improves on this. And AWS beats Bluemix in terms of cloud penetration and low cost.

All mainframe ISVs should make their mainframe products accessible through the cloud if they want to keep their mainframe products relevant. IBM has its cloud; of course there is AWS, Microsoft has Azure, and Google rounds out the top four. These and others will keep cloud economics competitive for the foreseeable future. Hope to see you in the cloud.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

