Posts Tagged ‘mainframe’

IBM Spotlights Blockchain and Hyperledger Fabric at IBM InterConnect

March 23, 2017

Earlier this week IBM announced its blockchain service based on Hyperledger Fabric v1.0 beta, offering security for regulated industries, governance tools, and throughput of more than 1,000 transactions per second. This represents the first enterprise-ready blockchain service based on the Linux Foundation’s open source Hyperledger Fabric v1.0. The service enables developers to quickly build and host security-rich production blockchain networks on the IBM Cloud, underpinned by IBM LinuxONE.

Maersk and IBM transform global trade with blockchain

LinuxONE, a dedicated z-based Linux system with as much security as any commercial platform is likely to have, should play a central role in blockchain networks. The machine also delivers all the -itys the z is renowned for: scalability, availability, flexibility, manageability, and more.

The Linux Foundation’s open source Hyperledger Fabric v1.0 is being developed by members of the Hyperledger consortium alongside other open source blockchain technologies. The Hyperledger consortium’s Technical Steering Committee recently promoted Fabric from incubator to active state, and it is expected to be available in the coming weeks. It is designed to provide a framework for enterprise-grade blockchain networks that can transact at over 1,000 transactions per second.

Safety and security are everything with blockchain, which means blockchain networks are only as safe as the infrastructures on which they reside, hence the underpinning on LinuxONE. In addition, IBM’s High Security Business Network brings an extremely secure Linux infrastructure that, according to IBM, integrates security from the hardware up through the software stack. It is designed specifically for enterprise blockchains by providing:

  • Protection from insider attacks – helps safeguard entry points on the network and fight insider threats from anyone with system administrator credentials
  • The industry’s highest certified level of isolation for a commercial system – Evaluation Assurance Level certification of EAL5+ is critical in highly regulated industries such as government, financial services, and healthcare, to prevent the leakage of information from one party’s environment to another
  • Secure Service Containers – help protect code throughout the blockchain application, effectively encapsulating the blockchain into a virtual appliance and denying access even to privileged users
  • Tamper-responsive hardware security modules – protect encrypted data and the storage of cryptographic keys. These modules are certified to FIPS 140-2 Level 4, the highest level of security certification available for cryptographic modules
  • A highly auditable operating environment – comprehensive, immutable log data supports forensics, audit, and compliance

IBM also announced today the first commercially available blockchain governance tools, and new open-source developer tools that automate the steps it takes to build with the Hyperledger Fabric, reportedly speeding the process from weeks to days.

The new blockchain governance tools also make it easy to set up a blockchain network and assign roles and levels of visibility from a single dashboard. They help network members set rules, manage membership, and enforce network compliance once the network is up and running.

This seems straightforward enough. Once setup is initiated, members can determine the rules of the blockchain and share consent when new members request to join the network. In addition, the deployment tool assigns each network a Network Trust Rating of 1 to 100. New network members can view this before joining and determine whether or not they can trust the network enough to participate. Organizations can also take steps to improve their Trust Ratings before moving into production.

To make it easier for developers to translate business needs from concept to actual code, IBM Blockchain includes a new open-source developer tool for the Hyperledger Fabric called Fabric Composer. Fabric Composer promises to help users model business networks, create APIs that integrate with the blockchain network and existing systems of record, and quickly build a user interface. Fabric Composer also automates tasks that traditionally could take weeks, allowing developers to complete them in minutes instead.

IBM Blockchain for Hyperledger Fabric v1.0 is now available through a beta program on IBM Bluemix. Hyperledger Fabric also is available on Docker Hub as an IBM-certified image that can be downloaded at no cost.

At this point, IBM has over 25 publicly named blockchain projects underway. They address everything from carbon asset management to consumer digital ID, post-trade derivatives processing, last-mile shipping, supply chain food safety, provenance, and securities lending, with more seemingly being added nearly weekly.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

IBM Introduces First Universal Commercial Quantum Computers

March 9, 2017

A few years ago DancingDinosaur first encountered the possibility of quantum computing. It was presented as a real but distant possibility. This is not something I need to consider, I thought at the time; by the time it is available commercially I will be long retired and probably six feet under. Well, I was wrong.

This week IBM unveiled its IBM Q quantum systems. IBM Q will lead Watson and blockchain in delivering the most advanced set of services on the IBM Cloud platform. There are organizations using it now, and DancingDinosaur is still alive and working.

IBM Quantum Computing scientists Hanhee Paik (left) and Sarah Sheldon (right) examine the hardware inside an open dilution fridge at the IBM Q Lab

As IBM explains: While technologies that currently run on classical (or conventional) computers, such as Watson, can help find patterns and insights buried in vast amounts of existing data, quantum computers will deliver solutions to multi-faceted problems where patterns cannot be seen because the data doesn’t exist and the possibilities that you need to explore are too enormous to ever be processed by conventional computers.

Just don’t retire your z or Power system in favor of an IBM Q yet. As IBM explained at a recent briefing on quantum computing, the IBM Q universal quantum computers will be able to handle any type of problem that conventional computers do today. However, many of today’s workloads, like online transaction processing, data storage, and web serving, will continue to run more efficiently on conventional systems. The most powerful quantum systems of the next decade will be a hybrid of quantum computers with conventional computers to control logic and operations on large amounts of data.

The most immediate use cases will involve molecular dynamics, drug design, and materials. The new quantum machine, for example, will allow the healthcare industry to design more effective drugs faster and at less cost and the chemical industry to develop new and improved materials.

Another familiar use case revolves around optimization in finance and manufacturing. The problem here comes down to computers struggling with optimization involving an exponential number of possibilities. Quantum systems, noted IBM, hold the promise of more accurately finding the most profitable investment portfolio in the financial industry, the most efficient use of resources in manufacturing, and optimal routes for logistics in the transportation and retail industries.

To refresh the basics of quantum computing: the challenges invariably entail exponential scale. You start with two basic ideas: 1) the uncertainty principle, which states that attempting to observe a state in general disturbs it while yielding only partial information about that state; and 2) entanglement, where two systems can exist in an entangled state, causing them to behave in ways that cannot be explained by supposing that each has some state of its own. No more just zero or one.

The basic unit of quantum computing is the qubit. Today IBM is making available a 5-qubit system, which is pretty small in the overall scheme of things, yet large enough to experiment and test some hypotheses; things start getting interesting at 20 qubits. An inflection point, IBM researchers noted, occurs around 50 qubits. At 50-100 qubits people can begin to do some serious work.
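A quick aside to make the exponential-scale point concrete, using standard textbook notation (nothing IBM-specific): a qubit holds a weighted combination of 0 and 1, and an n-qubit register holds amplitudes for all 2^n combinations at once.

```latex
% A single qubit is a superposition of the basis states, not just 0 or 1:
\[
|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle , \qquad |\alpha|^2 + |\beta|^2 = 1
\]
% An n-qubit register carries an amplitude for every one of the 2^n basis states:
\[
|\Psi\rangle = \sum_{x \in \{0,1\}^n} c_x\,|x\rangle , \qquad \sum_{x} |c_x|^2 = 1
\]
```

That 2^n term is why the 50-qubit mark matters: 2^50 amplitudes is roughly 10^15 values, already more than a conventional machine can comfortably track.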

This past week IBM announced three quantum computing advances: the release of a new API for the IBM Quantum Experience that enables developers and programmers to begin building interfaces between IBM’s existing 5 qubit cloud-based quantum computer and conventional computers, without needing a deep background in quantum physics. You can try the 5 qubit quantum system via IBM’s Quantum Experience on Bluemix here.
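If you want a feel for what running something on the 5-qubit machine involves, here is a minimal sketch. The OpenQASM circuit text is standard (a Bell state on two qubits); the submission step is only a placeholder, since the exact Quantum Experience API client calls and endpoints are not spelled out here and would have to come from IBM’s own documentation.

```python
# Minimal sketch: build a two-qubit Bell-state circuit as OpenQASM 2.0 text.
# Submitting it to the IBM Quantum Experience is shown only as a placeholder;
# the real API client, endpoint, and token handling are assumptions here.

BELL_QASM = """
OPENQASM 2.0;
include "qelib1.inc";
qreg q[5];          // the public device exposes 5 qubits
creg c[5];
h q[0];             // put qubit 0 into superposition
cx q[0], q[1];      // entangle qubit 0 with qubit 1
measure q[0] -> c[0];
measure q[1] -> c[1];
"""

def submit(qasm: str, shots: int = 1024):
    """Placeholder for a call to the Quantum Experience API (hypothetical)."""
    print(f"Would submit a {shots}-shot job:\n{qasm}")

if __name__ == "__main__":
    submit(BELL_QASM)
```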

IBM also released an upgraded simulator on the IBM Quantum Experience that can model circuits with up to 20 qubits. In the first half of 2017, IBM plans to release a full SDK on the IBM Quantum Experience for users to build simple quantum applications and software programs. For now only the publicly available 5-qubit quantum system with a web-based graphical user interface is offered; it is soon to be upgraded to more qubits.

The IBM Research Frontiers Institute allows participants to explore applications for quantum computing in a consortium dedicated to making IBM’s most ambitious research available to its members.

Finally, the IBM Q Early Access Systems program allows the purchase of access to a dedicated quantum system hosted and managed by IBM. The initial system is 15+ qubits, with a fast roadmap promised to 50+ qubits.

“IBM has invested over decades to growing the field of quantum computing and we are committed to expanding access to quantum systems and their powerful capabilities for the science and business communities,” said Arvind Krishna, senior vice president of Hybrid Cloud and director for IBM Research. “We believe that quantum computing promises to be the next major technology that has the potential to drive a new era of innovation across industries.”

Are you ready for quantum computing? Try it today on IBM’s Quantum Experience through Bluemix. Let me know how it works for you.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

IBM and Northern Trust Collaborate on Blockchain for Private Equity Markets

March 3, 2017

At a briefing for IT analysts, IBM laid out how it sees blockchain working in practice. Surprisingly, the platform for the Hyperledger effort was not x86 but LinuxONE, due to its inherent security. As the initiative grows, the z-based LinuxONE can also deliver the performance, scalability, and reliability the effort eventually will need.

IBM describes its collaboration with Northern Trust and other key stakeholders as the first commercial deployment of blockchain technology for the private equity market. As that market stands now, the infrastructure supporting private equity has seen little innovation in recent years, even as investors seek greater transparency, security, and efficiency. Enter the open LinuxONE platform, the Hyperledger fabric, and Unigestion, a Geneva, Switzerland-based asset manager with $20 billion in assets under management.

IBM Chairman and CEO Ginni Rometty discusses how cognitive technology and innovations such as Watson and blockchain have the potential to radically transform the financial services industry at Sibos 2016 in Geneva, Switzerland, September 28, 2016.

The new initiative, as IBM explains it, promises a new and comprehensive way to access and visualize data. Blockchain captures and stores information about every transaction and investment as metadata. It also captures details about relevant documents and commitments. Hyperledger itself is a logging tool that creates an immutable record.

The Northern Trust effort connects business logic, legacy technology, and blockchain technology using a combination of Java/JavaScript and IBM’s blockchain product. It runs on IBM Bluemix (cloud) using IBM’s Blockchain High Security Business Network. It also relies on key management to ensure record/data isolation and enforce geographic jurisdiction. In the end it facilitates managing the fund lifecycle more efficiently than the previous, primarily paper-based process.

More interesting to DancingDinosaur is the selection of the z through LinuxONE and blockchain’s use of storage. To begin with, blockchain is not really a database. It is more like a log file, but even that is not quite accurate because “it is a database you play as a team sport,” explained Arijit Das, Senior Vice President, FinTech Solutions, at the analyst briefing. That means you don’t perform any of the usual database functions; no deletes or updates, just appends.
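To make the append-only point concrete, here is a toy Python sketch of a hash-chained log. It is emphatically not Hyperledger Fabric code, just an illustration of why appends are the only operation that makes sense: each record commits to the previous one, so editing or deleting anything earlier breaks verification.

```python
import hashlib
import json
import time

class AppendOnlyLedger:
    """Toy hash-chained log: records can be appended and verified, never edited."""

    def __init__(self):
        self.entries = []

    def append(self, payload: dict) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {"index": len(self.entries), "ts": time.time(),
                "payload": payload, "prev_hash": prev_hash}
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append(body)
        return body

    def verify(self) -> bool:
        """Recompute every hash; tampering with any earlier entry shows up here."""
        for i, entry in enumerate(self.entries):
            expected_prev = self.entries[i - 1]["hash"] if i else "0" * 64
            if entry["prev_hash"] != expected_prev:
                return False
            body = {k: v for k, v in entry.items() if k != "hash"}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if digest != entry["hash"]:
                return False
        return True

# Hypothetical private equity events, purely for illustration.
ledger = AppendOnlyLedger()
ledger.append({"fund": "PE-Fund-1", "event": "capital call", "amount": 1_000_000})
ledger.append({"fund": "PE-Fund-1", "event": "distribution", "amount": 250_000})
assert ledger.verify()
```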

Since blockchain is an open technology, you actually could run it on any x86 Linux machine, but DancingDinosaur readers probably wouldn’t want to do that. Blockchain essentially ends up being a distributed group activity, and LinuxONE is unusually well optimized for the necessary security. It also brings scalability, reliability, and high performance along with the rock-solid security of the latest mainframe. In general LinuxONE can handle 8,000 virtual servers in a single system and tens of thousands of containers. Try doing that with an x86 machine, or even dozens of them. You can read more of what DancingDinosaur wrote about LinuxONE when it was introduced here and here.

But you won’t need near that scalability with the private equity application, at least at first. Blockchain gets more interesting when you think about storage. Blockchain has the potential to generate massive numbers of files fast, but that will only happen when it is part of, say, a supply chain with hundreds, or more likely, thousands of participating nodes on the chain and those nodes are very active. More likely for private equity trading, certainly at the start, blockchain will handle gigabytes of data and maybe only megabytes at first. This is not going to generate much revenue for IBM storage. A little bit of flash could probably do the trick.

Today, the legal and administrative processes that support private equity are time consuming and expensive, according to Peter Cherecwich, president of Corporate & Institutional Services at Northern Trust. They lack transparency, while inefficient market practices lead to lengthy, duplicative, and fragmented investment and administration processes. Northern Trust’s solution based on blockchain and Hyperledger, however, promises to deliver a significantly enhanced and efficient approach to private equity administration.

Just don’t expect to see overnight results. In fact, you can expect more inefficiency since the new blockchain/Hyperledger-based system is running in parallel with the disjointed manual processes. Previous legacy systems remain; they are not yet being replaced. Still, IBM insists that blockchain is an ideal technology to bring innovation to the private equity market, allowing Northern Trust to improve traditional business processes at each stage to deliver greater transparency and efficiency. Guess we’ll just have to wait and watch.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM Launches New IoT Collaborative Initiative

February 23, 2017

Collaboration partners can pull hundreds of millions of dollars in new revenue from IoT, according to IBM’s recent IoT announcement. Having reached what it describes as a tipping point with IoT innovation, the company now boasts over 6,000 clients and partners around the world, many of whom want to join its new global Watson IoT center to co-innovate. Already Avnet, BNP Paribas, Capgemini, and Tech Mahindra will collocate development teams at the IBM Munich center to work on IoT collaborations.


IBM Opens New Global Center for Watson IoT

The IBM center also will act as an innovation space for the European IoT standards organization EEBus.  The plan, according to Harriet Green, General Manager, IBM Watson IoT, Cognitive Engagement and Education (pictured above left), calls for building a new global IoT innovation ecosystem that will explore how cognitive and IoT technologies will transform industries and our daily lives.

IoT and, more recently, cognitive are naturals for the z System, and POWER Systems have been the platform for natural language processing and cognitive since Watson won Jeopardy in 2011. With the latest enhancements IBM has brought to the z in the form of on-premises cognitive and machine learning, the z should assume an important role as it gathers, stores, collects, and processes IoT data for cognitive analysis. DancingDinosaur first reported on this late in 2014 and again just last week. As IoT and cognitive workloads ramp up on z, don’t be surprised to see monthly workload charges rise.

Late last year IBM announced that car maker BMW will collocate part of its research and development operations at IBM’s new Watson IoT center to help reimagine the driving experience. Now, IBM is announcing four more companies that have signed up to join its special industry “collaboratories” where clients and partners work together with 1,000 Munich-based IBM IoT experts to tap into the latest design thinking and push the boundaries of the possible with IoT.

Let’s look at the four newest participants, starting with Avnet. According to IBM, Avnet, an IT distributor and global IBM partner, will open a new joint IoT Lab within IBM’s Watson IoT HQ to develop, build, demonstrate, and sell IoT solutions powered by IBM Watson. Working closely with IBM’s leading technologists and IoT experts, Avnet also plans to enhance its IoT technical expertise through hands-on training and on-the-job learning. Avnet’s team of IoT and analytics experts will also partner with IBM on joint business development opportunities across multiple industries including smart buildings, smart homes, industry, transportation, medical, and consumer.

As reported by BNP Paribas, Consorsbank, its retail digital bank in Germany, will partner with IBM’s new Watson IoT Center. The company will collocate a team of solution architects, developers, and business development personnel at the Watson facility. Together with IBM’s experts, they will explore how IoT and cognitive technologies can drive transformation in the banking industry and help innovate new financial products and services, such as investment advice.

Similarly, global IT consulting and technology services provider Capgemini will collocate a team of cognitive IoT experts at the Watson center. Together they will help customers maximize the potential of Industry 4.0 and develop and take to market sector-specific cognitive IoT solutions. Capgemini plans a close link between its Munich Applied Innovation Exchange and IBM’s new Customer Experience zones to collaborate with clients in an interactive environment.

Finally, Tech Mahindra, the Indian multinational provider of enterprise and communications IT and networking technology, is one of IBM’s Global System Integrators, with over 3,000 specialists focused on IBM technology around the world. The company will locate a team of six developers and engineers within the Watson IoT HQ to help deliver on Tech Mahindra’s vision of generating substantial new revenue based on IBM’s Watson IoT platform. Tech Mahindra will use the center to co-create and showcase new solutions based on IBM’s Watson IoT platform for Industry 4.0 and manufacturing, precision farming, healthcare, insurance and banking, and automotive.

To facilitate connecting the z to IoT, IBM offers a simple recipe. It requires four basic ingredients and four steps: Texas Instruments’ SensorTag, a Bluemix account, IBM z/OS Connect Enterprise Edition, and a back-end service like CICS. Start by exposing an existing z Systems application as a RESTful API; this is where z/OS Connect Enterprise Edition comes in. Then connect your SensorTag device to Watson IoT Quick Start. From there connect the cloud to your on-premises hybrid cloud. Finally, enable the published IoT data to trigger a RESTful API. Sounds pretty straightforward, but, full disclosure, DancingDinosaur has not tried it for lack of the necessary pieces. If you try it, please tell DancingDinosaur how it works (info@radding.net). Good luck.
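For the curious, here is a rough Python sketch of the last two steps of that recipe: publish a SensorTag-style reading, then let a threshold trigger the RESTful API that z/OS Connect EE exposes in front of CICS. The URLs, field names, and threshold are hypothetical placeholders, not actual IBM endpoints, since (as noted) DancingDinosaur hasn’t run this.

```python
import json
import urllib.request

# Hypothetical endpoints -- substitute whatever your own Watson IoT org and
# z/OS Connect EE server actually expose.
IOT_EVENT_URL = "https://example-iot-org.example.com/device/events"
ZOS_CONNECT_API = "https://zosconnect.example.com/temperatureAlert"

def post_json(url: str, payload: dict) -> int:
    """POST a JSON payload and return the HTTP status code."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:   # network call; fails offline
        return resp.status

reading = {"deviceId": "sensortag-01", "temperature": 42.7, "humidity": 31.0}

# Step 1: publish the sensor reading to the IoT service (placeholder URL).
post_json(IOT_EVENT_URL, reading)

# Step 2: if the reading crosses a threshold, trigger the RESTful API that
# z/OS Connect EE exposes in front of the back-end CICS transaction.
if reading["temperature"] > 40.0:
    post_json(ZOS_CONNECT_API, {"deviceId": reading["deviceId"],
                                "temperature": reading["temperature"]})
```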

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM On-Premises Cognitive Means z Systems Only

February 16, 2017

Just in case you missed the incessant drumbeat coming out of IBM, the company is committed to cognitive computing. That works for z data centers, since IBM’s cognitive system is available on-premises only for the z. Another z first: IBM just introduced Machine Learning (key for cognitive) for the private cloud, starting with the z.


There are three ways to get IBM cognitive computing solutions: the IBM Cloud, Watson, or the z System, notes Donna Dillenberger, IBM Fellow, IBM Enterprise Solutions. The z, however, is the only platform that IBM supports for cognitive computing on premises (sorry, no Power). As such, the z represents the apex of programmatic computing, at least as IBM sees it. It also is the only IBM platform that supports cognitive natively, mainly in the form of Hadoop and Spark, both of which are programmatic tools.

What if your z told you that a given strategy had a 92% chance of success? It couldn’t do that until now, with IBM’s recently released cognitive system for z.

Your z system today represents the peak of programmatic computing. That’s what everyone working in computers grew up with, going all the way back to Assembler, COBOL, and FORTRAN. Newer languages and operating systems have arrived since; today your mainframe can respond to Java or Linux and now Python and Anaconda. Still, all are based on the programmatic computing model.

IBM believes the future lies in cognitive computing. Cognitive has become the company’s latest strategic imperative, apparently trumping its previous strategic imperatives: cloud, analytics, big data, and mobile. Maybe only security, which quietly slipped in as a strategic imperative sometime in 2016, can rival cognitive, at least for now.

Similarly, IBM describes itself as a cognitive solutions and cloud platform company. IBM’s infatuation with cognitive starts with data. Only cognitive computing will enable organizations to understand the flood of myriad data pouring in: structured, local data, but going beyond it to unlock the world of global unstructured data; and then to move from decision tree-driven, deterministic applications to, eventually, probabilistic systems that co-evolve with their users by learning along with them.

You need cognitive computing. It is the only way, as IBM puts it, to move beyond the constraints of programmatic computing. In the process, cognitive can take you past keyword-based search, which provides a list of locations where an answer might be located, to an intuitive, conversational means of discovering a set of confidence-ranked possibilities.

Dillenberger suggests it won’t be difficult to get to the IBM cognitive system on z. You don’t even program a cognitive system. At most, you train it, and even then the cognitive system will do the heavy lifting by finding the most appropriate training models. If you don’t have preexisting training models, “just use what the cognitive system thinks is best,” she adds. Then the cognitive system will see what happens and learn from it, tweaking the models as necessary based on the results and new data it encounters. This also is where machine learning comes in.

IBM has yet to document payback and ROI data. Dillenberger, however, has spoken with early adopters. The big promised payback, of course, will come from the new insights uncovered, and that payback will be as astronomical or as meager as your execution on those insights.

But there also is the promise of a quick technical payback for z data center managers. When the data resides on z, a huge advantage for the z, you just run analytics where the data is. In such cases you can realize up to 3x the performance, Dillenberger noted. Even if you have to pull data from some other location too, you still run faster, maybe 2x faster. Other z advantages include large amounts of memory, multiple levels of cache, and multiple I/O processors that get at data without impacting CPU performance.

When the data and IBM’s cognitive system reside on the z you can save significant money. “ETL consumed huge amounts of MIPS. But when the client did it all on the z, it completely avoided the costly ETL process,” Dillenberger noted. As a result, that client reported savings of $7-8 million a year by completely bypassing the x86 layer and ETL and running Spark natively on the z.
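A hedged sketch of what “run the analytics where the data is” can look like in practice: a PySpark job that reads DB2 for z/OS tables over JDBC and writes scores back, with no ETL hop to an x86 cluster. The connection URL, credentials, table names, and the trivial scoring rule are all placeholders, not details from Dillenberger’s client example.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Placeholder connection details -- not a real system.
JDBC_URL = "jdbc:db2://zhost.example.com:50000/PRODDB"
PROPS = {"user": "analyst", "password": "secret",
         "driver": "com.ibm.db2.jcc.DB2Driver"}

spark = (SparkSession.builder
         .appName("score-where-the-data-lives")
         .getOrCreate())

# Read the transaction table directly from DB2 for z/OS -- no ETL hop.
txns = spark.read.jdbc(JDBC_URL, "TXN.HISTORY", properties=PROPS)

# A trivial stand-in "model": flag accounts whose average transaction is high.
scores = (txns.groupBy("ACCOUNT_ID")
              .agg(F.avg("AMOUNT").alias("avg_amount"))
              .withColumn("flag", F.col("avg_amount") > 10_000))

# Write the scores back next to the source data.
scores.write.jdbc(JDBC_URL, "ANALYTICS.SCORES", mode="overwrite", properties=PROPS)
spark.stop()
```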

As Dillenberger describes it, cognitive computing on the z is here now, able to deliver a payback fast, and an even bigger payback going forward as you execute on the insights it reveals. And you already have a z, the only on-premises way to IBM’s Cognitive System.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM Unveils Enhanced Repackaged Spectrum Storage

February 9, 2017

IBM appears to be gaining traction with its growing Spectrum SDS family of storage products, re-introduced this week. According to the company, 87 of the Fortune Global 100 use IBM Spectrum Storage. That breaks down to all 10 of the top 10 telecommunications companies and all 20 of the top 20 banks. In addition, 18 of the top 20 energy companies, 9 of the top 10 global healthcare companies, and 8 of the top 10 automobile manufacturers have adopted Spectrum storage. IBM also notes that 80 organizations pick IBM Spectrum storage every week.

Of course, that hasn’t been enough to turn incessant red ink into black. According to IBM’s 2016 year-end financials, systems (systems hardware, including storage, and operating systems software) posted revenues of $2.5 billion, down 12.5 percent. You can see DancingDinosaur’s report on the latest IBM financials here. Although IBM called out the z for gross profit margin improvements driven by z Systems performance, there was nary a word about storage. DancingDinosaur will follow upcoming quarterly reports to see if this increased traction translates into actual positive revenue. Stay tuned.


IBM introduces Spectrum Computing, 6/16

The announcements this week included:

  • IBM Spectrum Storage Suite
  • IBM Spectrum Virtualize
  • IBM Spectrum Control
  • IBM Spectrum Accelerate
  • IBM Cloud Object Storage

IBM Spectrum Storage isn’t completely new. DancingDinosaur first covered the Spectrum storage introduction in mid-February, 2015. Actually IBM began offering SDS products in 2014 and gained some kudos for it from IDC. The latest announcement really amounts to a repackaging of the products as the IBM Spectrum Storage Suite along with a variety of enhancements, some of which are quite interesting.

For example, IBM Cloud Object Storage software allows new use cases and enables a standalone object store managed by IBM Spectrum Control. It also adds a new storage tier behind IBM Spectrum Scale and a primary pool target behind IBM Spectrum Protect in the form of a cloud container. IBM also continues its innovative licensing arrangement by which you pay for your storage capacity and then can allocate and re-allocate that capacity freely.

IBM Cloud Object Storage also introduces unified NFS/object access. This allows companies to store data in a file system structure on object storage using NFS access and to reach data stored as files via either a file or an object interface. It has been optimized for scalability and file-to-object migration and can scale to millions of users and buckets/containers. Finally, it now supports IPv6 management of devices and of all nodes in a configuration.
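In practice, unified access means the same data can be reached through an S3-style object call or through an ordinary file path on an NFS mount. A minimal sketch, with the endpoint, bucket, credentials, and mount point all placeholders rather than real IBM Cloud Object Storage values:

```python
import boto3

# Hypothetical endpoint and credentials -- IBM Cloud Object Storage exposes an
# S3-compatible API, but the URL, bucket, and keys here are placeholders.
s3 = boto3.client(
    "s3",
    endpoint_url="https://cos.example.com",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Object view: read the report through the object interface.
obj = s3.get_object(Bucket="reports", Key="2017/q1-summary.csv")
object_bytes = obj["Body"].read()

# File view: the same data is visible as an ordinary file on the NFS export
# the storage presents (mount point is a placeholder).
with open("/mnt/cos-export/reports/2017/q1-summary.csv", "rb") as f:
    file_bytes = f.read()

assert object_bytes == file_bytes  # same content, two access paths
```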

IBM Spectrum Virtualize Software also is interesting. For example, it now supports Supermicro SuperServer 2028U-TRTP+ in addition to existing support for Lenovo System x3650 M5. IBM envisions service providers and enterprises deploying Supermicro servers to build new services based on IBM Spectrum Virtualize software to deliver virtualized storage services at a lower price point. Take note: both of these are 2u x86 boxes. They can also offer disaster recovery as a service for clients with SVC.

Finally, IBM has enhanced Spectrum Control in V5.2.13. Among the new capabilities: improved storage insights through new cloud-based storage analytics for Dell EMC VNX, VNXe, and VMAX. This should enable users to improve application performance and reduce storage costs. It also extends capacity planning views to include external storage for IBM Spectrum Scale’s transparent cloud tiering. For on-premises software, the latest Spectrum Control offers new support for Dell EMC VNXe file storage.

Overall, the new Spectrum Control should simplify the life of storage managers. “IBM Spectrum Control gives me one pane of glass to manage spinning disk, file system clusters, and object storage,” said Bob Oesterlin, Sr. Principal Storage Engineer, Nuance, as reported by IBM. The ability to span IBM storage as well as that of other vendors should prove a winner.

Other capabilities, such as Spectrum Accelerate V11.5.4’s data-at-rest encryption, the ability to flexibly encrypt existing hot data in minutes without disruption, and support for standard key management tools (IBM Security Key Lifecycle Manager and SafeNet KeySecure), will add to the appeal of the enhanced IBM Spectrum Storage Suite. Will it be enough to turn IBM Systems’ red ink to black? We’ll all just have to watch the next few quarterly reports to know.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

Arcati 2017 Mainframe Survey—Cognitive a No-Show

February 2, 2017

DancingDinosaur checks into Arcati’s annual mainframe survey every few years. You can access a copy of the 2017 report here.  Some of the data doesn’t change much, a few percentage points here or there. For example, 75% of the respondents consider the mainframe too expensive. OK, people have been saying that for years.

On the other hand, 65% of the respondents’ mainframes are involved with web services. Half also run Java-based mainframe apps, up from 30% last year, while 17% more are planning to run Java with their mainframe this year. Similarly, 35% of respondents report running Linux on the mainframe, up from 22% last year, and 13% of the respondents expect to add Linux this year. Driving this are the cost and management benefits that result from consolidating distributed Linux workloads on the z. Yes, things are changing.


The biggest surprise for DancingDinosaur, however, revolved around IBM’s latest strategic initiatives, especially cognitive computing and blockchain. Other strategic initiatives may include, depending on who is briefing you at the moment: security, data analytics, cloud, hybrid cloud, and mobile. These strategic imperatives, especially cognitive computing, are expected to drive IBM’s revenue. In the latest statement, reported last week in DancingDinosaur, strategic imperatives amounted to 41% of revenue. Cloud revenue and cloud-as-a-service also rose considerably, 35% and 61% respectively.

When DancingDinosaur searched the accompanying Arcati vendor report (over 120 vendors with brief descriptions) for cognitive, only GT Software came up. IBM didn’t even mention cognitive in its vendor listing, which admittedly was skimpy. The case was the same with blockchain; only one vendor, Atos, mentioned it, and there was nothing about blockchain in the IBM listing. More vendors, however, noted supporting one or some of the other supposed strategic initiatives.

Overall, the Arcati survey is quite positive about the mainframe. The survey found that 50 percent of sites viewed their mainframe as a legacy system (down from last year’s 62 percent). However, 22 percent (up from 16 percent last year) viewed mainframe as strategic, with 28 percent (up from 22 percent) viewing mainframes as both strategic and legacy.

Reinforcing the value of the mainframe, the survey found 78 percent of sites experienced some kind of increase in capacity. With increased demand for mainframe resources (data and processing), it should not be surprising that respondents report an 81 percent increase in technology costs. Yet, 38 percent of sites report their people costs have decreased or stayed the same.

Unfortunately, the survey also found that 70 percent of respondents thought there was a cultural barrier between mainframe and other IT professionals. That did not discourage respondents from pointing out the mainframe advantages: 100 percent highlighted the benefit of the mainframe’s availability, 83 percent highlighted security, 75 percent identified scalability, and 71 percent picked manageability as a mainframe benefit.

Social media also figures in mainframe work. Respondents found social media (Facebook, Twitter, YouTube) useful for their work on the mainframe. Twenty-seven percent report using social media (up slightly from 25 percent last year), with the rest not using it at all despite IBM offering Facebook pages dedicated to IMS, CICS, and DB2. DancingDinosaur, only an occasional FB visitor, will check it out and report.

In terms of how mainframes are being used, the Arcati survey found that 25 percent of sites are planning to use Big Data; five percent of sites have adopted it for DevOps while 48 percent are planning to use mainframe DevOps going forward. Similarly, 14 percent of respondents already are reusing APIs while another 41 percent are planning to.

Arcati points out another interesting thought: the survey showed a 55:45 percent split in favor of distributed systems, so you might expect the spend on the two types of platform to be similar. Yet the survey found that 87 percent of an organization’s IT spend was going to distributed systems! Apparently mainframes aren’t as expensive as people think. Or, to put it another way, the cost of owning and operating distributed systems with mainframe-caliber QoS amounts to a lot more than people are admitting.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM Cheers Beating Estimates But Losing Streak Continues

January 26, 2017

It has been 19 quarters since IBM reported revenue growth in its quarterly reports, but the noises coming out of IBM with the latest 4Q16 and full year 2016 financials are upbeat: the company beat analyst consensus revenue estimates, and its strategic initiatives are starting to generate serious revenue. Although systems revenues were down again (12%), the accountants at least had something positive to say about the z: “gross profit margins improved driven by z Systems performance.”


EZSource: Dashboard visualizes changes to mainframe code

IBM doesn’t detail which z models were contributing, but you can guess they would be the LinuxONE models (Emperor and Rockhopper) and the z13. DancingDinosaur expects z performance to improve significantly in 2017 when a new z, heavily hinted at in the 3Q2016 results reported here, is expected to ship.

With its latest financials IBM is outright crowing about its strategic initiatives: fourth-quarter cloud revenues increased 33 percent. The annual exit run rate for cloud as-a-service revenue increased to $8.6 billion from $5.3 billion at year-end 2015. Revenues from analytics increased 9 percent. Revenues from mobile increased 16 percent and revenues from security increased 7 percent.

For the full year, revenues from strategic imperatives increased 13 percent.  Cloud revenues increased 35 percent to $13.7 billion.  The annual exit run rate for cloud as-a-service revenue increased 61 percent year to year.  Revenues from analytics increased 9 percent.  Revenues from mobile increased 34 percent and from security increased 13 percent.

Of course, cognitive computing is IBM’s strategic imperative darling for the moment, followed by blockchain. Cognitive, for which IBM appears to use an expansive definition, is primarily a cloud play as far as IBM is concerned.  There is, however, a specific role for the z, which DancingDinosaur will get into in a later post. Blockchain, on the other hand, should be a natural z play.  It is, essentially, extremely secure OLTP on steroids.  As blockchain scales up it is a natural to drive z workloads.

As far as IBM’s financials go the strategic imperatives indeed are doing well. Other business units, however, continue to struggle.  For instance:

  • Global Business Services (includes consulting, global process services and application management) — revenues of $4.1 billion, down 4.1 percent.
  • Systems (includes systems hardware and operating systems software), remember, this is where z and Power platforms reside — revenues of $2.5 billion, down 12.5 percent. But as noted above, gross profit margins improved, driven by z Systems performance.
  • Global Financing (includes financing and used equipment sales) — revenues of $447 million, down 1.5 percent.

A couple of decades ago, when this blogger first started covering IBM and the mainframe as a freelancer writing for any technology publication that would pay real money, IBM was struggling (if $100 billion behemoths can be thought to be struggling). The buzz among the financial analysts who followed the company was that IBM should be broken up into its parts and sold off. IBM didn’t take that advice, at least not exactly, but it did begin a rebound that included laying off tons of people and the sale of some assets. Since then it invested heavily in things like Linux on z and open systems.

In December IBM SVP Tom Rosamilia talked about new investments in z/OS and z software like DB2, CICS, and IMS, and as best your blogger can tell he is still there. (Rumors suggest Rosamilia is angling for Rometty’s job in two years.) If the new z does actually arrive in 2017 and key z software is refreshed, then z shops can rest easy, at least for another few quarters. But whatever happens, you can follow it here.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

Compuware Continues Mainframe Software Renaissance

January 19, 2017

While IBM focuses on its strategic imperatives, especially cognitive computing (which are doing quite well according to the latest statement that came out today; more on that next week), Compuware is fueling a mainframe software renaissance on its own. Its latest two announcements bring Java-like unit testing to COBOL code via its Topaz product set and automated, intelligently optimized processing of batch jobs through its acquisition of MVS Solutions. Both modernize and simplify the processes around legacy mainframe coding, hence the reference to a mainframe software renaissance.


Let’s start with Compuware’s Topaz set of graphical tools. Since they are GUI-based, even novice developers can immediately validate and troubleshoot whatever changes, intended or inadvertent, they make to existing COBOL applications. Compuware’s aim for Topaz for Total Test is to eliminate any notion that such applications are legacy code that cannot be updated as frequently and with the same confidence as other types of applications. Basically, mainframe DevOps.

By bringing fast, developer-friendly unit testing to COBOL applications, the new test tool also enables enterprises to deliver better customer experiences, since to create those experiences IT needs its Agile/DevOps processes to encompass all platforms, from the mainframe to the cloud. As a result z shops can gain increased digital agility along with higher quality, lower costs, and dramatically reduced dependency on the specialized knowledge of mainframe veterans aging out of the active IT workforce. In fact, the design of the Topaz tools enables z data centers to rapidly introduce the z to novice mainframe staff, who become productive virtually from the start, another cost saver.

Does management in 2017 still need to be reminded of the importance of the mainframe? Probably, even though many organizations—among them the world’s largest banks, insurance companies, retailers, and airlines—continue to run their business on mainframe applications, and recent surveys clearly indicate that situation is unlikely to change anytime soon. However, as Compuware points out, the ability of enterprises to quickly update those applications in response to ever-changing business imperatives is daily being hampered by manual, antiquated development and testing processes; the ongoing loss of specialized COBOL programming knowledge; and the risk and associated fear of introducing even the slightest defect into core mainframe systems of record. The entire Topaz design approach, from the very first tool, was to make mainframe code accessible to novices. That has continued every quarter for the past two years.

This is not just a DancingDinosaur rant. IT analyst Rich Ptak from Ptak Associates also noted: “By eliminating a long-standing constraint to COBOL Compuware provides enterprise IT the ability to deliver more digital capabilities to the business at greater speed and with less risk.”

Gartner, in its latest Predicts 2017, chimes in with its DevOps equivalent of your mother’s reminder to brush your teeth after each meal: “Application leaders in IT organizations should adopt a continuous quality culture that includes practices to manage technical debt and automate tests focused on unit and API testing. It should also automate test lab operations to provide access to production-like environments, and enable testing of deployment through the use of DevOps pipeline tools.” OK mom; everybody got the message.

The acquisition of MVS Solutions, Compuware’s fourth in the last year, adds to the company’s collection of mainframe software tools that promise agile, DevOps, and millennial-friendly management of the IBM z platform—a continuation of its efforts to make the mainframe accessible to novices. DancingDinosaur covered these acquisitions in early December here.

Batch processing accounts for the majority of peak mainframe workloads at large enterprises, providing essential back-end digital capabilities for customer-, employee- and partner-facing mobile, cloud, and web applications. As demands on these back-end mainframe batch processes intensify in terms of scale and performance, enterprises are under increasing pressure to ensure compliance with SLAs and control costs.

These challenges are exacerbated by the fact that responsibility for batch management is rapidly being shifted from platform veterans with decades of experience in mainframe operations to millennial ops staff who are unfamiliar with batch management. They also find native IBM z Systems management tools arcane and impractical, which increases the risk of critical batch operations being delayed or even failing. Run incorrectly, the batch workloads risk generating excessive peak utilization costs.

The solution, notes Compuware, lies in its new ThruPut Manager, which promises automatic, intelligently optimized batch processing. In the process it:

  • Provides immediate, intuitive insight into batch processing that even inexperienced operators can readily understand
  • Makes it easy to prioritize batch processing based on business policies and goals
  • Ensures proper batch execution by verifying that jobs have all the resources they need and proactively managing resource contention between jobs
  • Reduces the organization’s IBM Monthly License Charge (MLC) costs by minimizing rolling four-hour average (R4HA) processing peaks while avoiding counter-productive soft capping (a quick sketch of the R4HA calculation follows this list)
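Since the R4HA is what actually drives the MLC bill, a small illustrative sketch of the calculation helps show why smoothing peaks, rather than just trimming total MSUs, saves money. The billed figure is the highest rolling four-hour average of MSU consumption in the month; the sample numbers below are made up.

```python
from collections import deque

def rolling_4hr_peak(msu_samples, samples_per_hour=12):
    """Return the peak rolling four-hour average from a series of MSU samples
    (e.g., 5-minute SMF intervals -> 12 samples per hour, 48 per window)."""
    window = samples_per_hour * 4
    buf, total, peak = deque(), 0.0, 0.0
    for msu in msu_samples:
        buf.append(msu)
        total += msu
        if len(buf) > window:
            total -= buf.popleft()
        if len(buf) == window:
            peak = max(peak, total / window)
    return peak

# Made-up day of 5-minute MSU readings: a flat 400 MSU baseline with a
# two-hour batch spike to 900 MSU.
day = [400.0] * 288
day[24:48] = [900.0] * 24
print(round(rolling_4hr_peak(day), 1))  # spike averaged over 4 hours: 650.0
```

Flatten that two-hour spike across the night and the billed peak falls back toward the 400 MSU baseline even though the total work done is unchanged, which is exactly the effect ThruPut Manager is after.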

Run in conjunction with Strobe, Compuware’s mainframe application performance management tool, ThruPut Manager also makes it easier to optimize batch workload and application performance as part of everyday mainframe DevOps tasks. ThruPut promises greater efficiency and throughput, resulting in a shorter batch window and reduced use of peak processing capacity. These benefits also support better cross-platform DevOps, since distributed and cloud applications often depend on back-end mainframe batch processing.

Now, go out and hire some millennials and bring fresh blood into the mainframe. (Watch for DancingDinosaur’s upcoming post on why the mainframe is cool again.)

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM Introduces New DS8880 All-Flash Arrays

January 13, 2017

Yesterday IBM introduced three new members of the DS8000 line, each an all-flash product.  The new, all-flash storage products are designed for midrange and large enterprises, where high availability, continuous up-time, and performance are critical.


IBM envisions these boxes for more than the z’s core OLTP workloads. According to the company, they are built to provide the speed and reliability needed for workloads ranging from enterprise resource planning (ERP) and financial transactions to cognitive applications like machine learning and natural language processing. The solutions are designed to support cognitive workloads, which can be used to uncover trends and patterns that help improve decision-making, customer service, and ROI. ERP and financial transactions certainly constitute conventional OLTP but the cognitive workloads are more analytical and predictive.

The three products:

  • IBM DS8884 F
  • IBM DS8886 F
  • IBM DS8888 F

The F signifies all-flash.  Each was designed with High-Performance Flash Enclosures Gen2. IBM did not just slap flash into existing hard drive enclosures.  Rather, it reports undertaking a complete redesign of the flash-to-z interaction. As IBM puts it: through deep integration between the flash and the z, IBM has embedded software that facilitates data protection, remote replication, and optimization for midrange and large enterprises. The resulting new microcode is ideal for cognitive workloads on z and Power Systems requiring the highest availability and system reliability possible. IBM promises that the boxes will deliver superior performance and uncompromised availability for business-critical workloads. In short, fast enough to catch bad guys before they leave the cash register or teller window. Specifically:

  • The IBM DS8884 F, labelled as the business-class offering, boasts the lowest entry cost for midrange enterprises (prices starting at $90,000 USD). It runs on an IBM Power Systems S822 with a 6-core POWER8 processor, 256 GB cache (DRAM), 32 Fibre Channel/FICON ports, and 6.4–154 TB of flash capacity.
  • The IBM DS8886 F, the enterprise-class offering for large organizations seeking high performance, sports an IBM Power Systems S824 with a 24-core POWER8 processor. It offers 2 TB cache (DRAM), 128 Fibre Channel/FICON ports, and 6.4–614.4 TB of flash capacity. That’s over one-half petabyte of high-performance flash storage.
  • The IBM DS8888 F, labelled an analytics-class offering, promises the highest performance for faster insights. It runs on the IBM Power Systems E850 with a 48-core POWER8 processor. It also comes with 2 TB cache (DRAM), 128 Fibre Channel/FICON ports, and 6.4 TB–1.22 PB of flash capacity. Guess crossing the petabyte level, along with the bigger processor complex, qualifies it as an analytics and cognitive device.

As IBM emphasized in the initial briefing, it engineered these storage devices to surpass the typical big flash storage box. For starters, IBM bypassed the device adapter to connect the z directly to the high performance storage controller. IBM’s goal was to reduce latency and optimize all-flash storage, not just navigate a simple replacement by swapping new flash for ordinary flash or, banish the thought, HDD.

“We optimized the data path,” explained Jeff Barber, IBM Systems VP for HE Storage BLE (DS8, DP&R, and SAN). To that end, IBM switched from a 1U to a 4U enclosure, runs on shared-nothing clusters, and boosted throughput performance. The resulting storage, he added, “does database better than anyone; we can run real-time analytics.” The typical analytics system, a shared system running Hadoop, won’t even come close to these systems, he added. With the DS8888, you can deploy a real-time cognitive cluster with minimal latency flash.

DancingDinosaur always appreciates hearing from actual users. Working through a network of offices, supported by a team of over 850 people, the Health Insurance Institute of Slovenia (Zavod za zdravstveno zavarovanje Slovenije) provides health insurance to approximately two million customers. In order to successfully manage its new customer-facing applications (such as electronic order processing and electronic receipts), its storage system required additional capacity and performance. After completing research on solutions capable of managing these applications, which included both Hitachi and EMC, the organization deployed the IBM DS8886 along with DB2 for z/OS data server software to provide an integrated data backup and restore system. (Full disclosure: DancingDinosaur has not verified this customer story.)

“As long-time users of IBM storage infrastructure and mainframes, our upgrade to the IBM DS8000 with IBM business partner Comparex was an easy choice. Since then, its high performance and reliability have led us to continually deploy newer DS8000 models as new features and functions have provided us new opportunities,” said Bojan Fele, CIO of Health Insurance Institute of Slovenia. “Our DS8000 implementation has improved our reporting capabilities by reducing time to actionable insights. Furthermore, it has increased employee productivity, ensuring we can better serve our clients.”

For full details and specs on these products, click here.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

