Posts Tagged ‘cognitive computing’

IBM Gets Serious About Open Data Science (ODS) with Anaconda

April 21, 2017

As IBM rapidly ramps up cognitive systems in various forms, its two remaining platforms, z System and POWER, get more and more interesting. This week IBM announced it was bringing the Anaconda Open Data Science (ODS) platform to its Cognitive Systems and PowerAI.


Specifically, Anaconda will integrate with the PowerAI software distribution for machine learning (ML) and deep learning (DL). The goal: make it simple and fast to take advantage of Power performance and GPU optimization for data-intensive cognitive workloads.

“Anaconda on IBM Cognitive Systems empowers developers and data scientists to build and deploy deep learning applications that are ready to scale,” said Bob Picciano, senior vice president of IBM Cognitive Systems. Added Travis Oliphant, co-founder and chief data scientist, Continuum Analytics, which introduced the Anaconda platform: “By optimizing Anaconda on Power, developers will also gain access to the libraries in the PowerAI Platform for exploration and deployment in Anaconda Enterprise.”

With more than 16 million downloads to date, Anaconda has emerged as the Open Data Science platform leader. It is empowering leading businesses across industries worldwide with tools to identify patterns in data, uncover key insights, and transform basic data into the intelligence required to solve the world’s most challenging problems.

As one of the fastest growing fields of AI, DL makes it possible to process enormous datasets with millions or even billions of elements and extract useful predictive models. DL is transforming the businesses of leading consumer Web and mobile application companies, and it is catching on with more traditional businesses.

IBM developed PowerAI to accelerate enterprise adoption of open-source ML and DL frameworks used to build cognitive applications. PowerAI promises to reduce the complexity and risk of deploying these open source frameworks for enterprises on the Power architecture and is tuned for high performance, according to IBM. With PowerAI, organizations also can realize the benefit of enterprise support on IBM Cognitive Systems HPC platforms used in the most demanding commercial, academic, and hyperscale environments.

For POWER shops, getting into Anaconda, which is based on Python, is straightforward. You need a POWER8 server with NVIDIA GPU acceleration, in effect a Minsky machine. It’s essentially a developer’s tool, although ODS proponents see it more broadly, bridging the gap between traditional IT and lines of business, shifting traditional roles, and creating new roles. In short, they envision scientists, mathematicians, engineers, business people, and more getting involved in ODS.
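Once Anaconda lands on the box, a quick sanity check confirms the GPU-accelerated frameworks actually see the hardware. Here is a minimal sketch, assuming a PowerAI-style TensorFlow build is installed in the active conda environment (the check itself is generic; nothing here is IBM’s official procedure):

```python
# Sanity check for a GPU-enabled deep learning stack (illustrative only).
import tensorflow as tf
from tensorflow.python.client import device_lib

# List every device TensorFlow can see; a Minsky-class box with NVIDIA
# GPUs should report one or more entries with device_type == 'GPU'.
devices = device_lib.list_local_devices()
gpus = [d for d in devices if d.device_type == 'GPU']
print("GPUs visible to TensorFlow:", len(gpus))

# Run a trivial op to confirm the runtime executes where expected.
with tf.device('/gpu:0' if gpus else '/cpu:0'):
    a = tf.constant([1.0, 2.0, 3.0])
    b = tf.constant([4.0, 5.0, 6.0])
    dot = tf.reduce_sum(a * b)

with tf.Session() as sess:
    print("dot product:", sess.run(dot))  # expect 32.0
```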

The technology is designed to run on the user’s desktop but is packaged and priced as a cloud subscription with a base package of 20 users. User licenses range from $500 per year to $30,000 per year depending on which bells and whistles you include. The number of options is pretty extensive.

According to IBM, this all started with PowerAI, built to accelerate enterprise adoption of the open-source ML/DL frameworks used to build cognitive applications. Overall, the open Anaconda platform brings capabilities for large-scale data processing, predictive analytics, and scientific computing while simplifying package management and deployment. Developers using open source ML/DL components can use Power as the deployment platform and take advantage of Power optimization and NVIDIA GPU differentiation.

Not to be left out, IBM noted growing support for the OpenPOWER Foundation, which recently announced the OpenPOWER Machine Learning Work Group (OPMLWG). The new OPMLWG includes members like Google, NVIDIA, and Mellanox and provides a forum for collaboration that will help define frameworks for the productive development and deployment of ML solutions using OpenPOWER ecosystem technology. The foundation also has surpassed 300 members, with new participants such as Kinetica, Red Hat, and Toshiba. For traditional enterprise data centers, the future increasingly points toward cognitive in one form or another.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

Compuware-Syncsort-Splunk to Boost Mainframe Security

April 6, 2017

The mainframe has proven to be remarkably secure over the years, racking up the highest security certifications available. But there is still room for improvement. Earlier this week Compuware announced Application Audit, a software tool that aims to transform mainframe cybersecurity and compliance through real-time capture of user behavior.

Capturing user behavior, especially in real time, is seemingly impossible if you have to rely on the data you collect from the various logs and SMF data. Compuware’s solution, Application Audit, in conjunction with Syncsort and Splunk, fully captures and analyzes start-to-finish mainframe application user behavior.

As Compuware explains: most enterprises still rely on disparate logs and SMF data from security products such as RACF, CA-ACF2, and CA-Top Secret to piece together user behavior. This is too slow if you want to catch bad behavior while it’s going on. Some organizations try to apply analytics to these logs, but that also is too slow. By the time you have collected enough logs to deduce who did what and when, the damage may have been done. Throw in the escalating demands of cross-platform enterprise cybersecurity and increasingly burdensome global compliance mandates, and you haven’t a chance without an automated tool optimized for the task.

Fortunately, the mainframe provides rich, comprehensive session data you can analyze with Application Audit, in conjunction with the organization’s security information and event management (SIEM) systems, to more quickly and effectively see what really is happening. Specifically, it can:

  • Detect, investigate, and respond to inappropriate behavior by internal users with access
  • Detect, investigate, and respond to hacked or illegally accessed user accounts
  • Support criminal/legal investigations with complete and credible forensics
  • Fulfill compliance mandates regarding protection of sensitive data

IBM, by the way, is not ignoring the advantages of analytics for z security. Back in February you read about IBM bringing its cognitive system to the z on DancingDinosaur. IBM continues to flog cognitive on z for real-time analytics and security, promising faster customer insights, business insights, and systems insights, with decisions based on real-time analysis of both current and historical data delivered on an analytics platform designed for availability, optimized for flexibility, and engineered with the highest levels of security. Check out IBM’s full cognitive for z pitch.

The data Compuware and Syncsort collect with Application Audit is particularly valuable for maintaining control of privileged mainframe user accounts. Both private- and public-sector organizations are increasingly concerned about insider threats to both mainframe and non-mainframe systems. Privileged user accounts can be misused by their rightful owners, motivated by everything from financial gain to personal grievances, as well as by malicious outsiders who have illegally acquired the credentials for those accounts. You can imagine what havoc they could wreak.

In addition, with Application Audit Compuware is orchestrating a number of players to deliver the full security picture. Specifically, through collaboration with CorreLog, Syncsort and Splunk, Compuware is enabling enterprise customers to integrate Application Audit’s mainframe intelligence with popular SIEM solutions such as Splunk, IBM QRadar, and HPE Security ArcSight ESM. Additionally, Application Audit provides an out-of-the-box Splunk-based dashboard that delivers value from the start. As Compuware explains, these integrations are particularly useful for discovering and addressing security issues associated with today’s increasingly common composite applications, which have components running on both mainframe and non-mainframe platforms. SIEM integration also ensures that security, compliance and other risk management staff can easily access mainframe-related data in the same manner as they access data from other platforms.
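To make that SIEM hand-off concrete, here is a minimal sketch of pushing a mainframe audit event into Splunk over its standard HTTP Event Collector (HEC) interface. The host, token, sourcetype, and event fields are hypothetical, and this illustrates the general style of integration rather than Compuware’s actual mechanism:

```python
# Minimal sketch: forward a mainframe audit event to Splunk's HTTP
# Event Collector. Host, token, and event fields are hypothetical.
import json
import requests

SPLUNK_HEC_URL = "https://splunk.example.com:8088/services/collector/event"
SPLUNK_TOKEN = "00000000-0000-0000-0000-000000000000"  # placeholder

def send_audit_event(user, action, resource, result):
    """Wrap one audit record in the HEC envelope and POST it."""
    payload = {
        "sourcetype": "mainframe:application_audit",  # hypothetical
        "event": {"user": user, "action": action,
                  "resource": resource, "result": result},
    }
    resp = requests.post(
        SPLUNK_HEC_URL,
        headers={"Authorization": "Splunk " + SPLUNK_TOKEN},
        data=json.dumps(payload),
        timeout=10,
    )
    resp.raise_for_status()

# Example: flag a privileged user reading a sensitive dataset.
send_audit_event("SYSPROG1", "READ", "PROD.PAYROLL.MASTER", "SUCCESS")
```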

“Effective IT management requires effective monitoring of what is happening for security, cost reduction, capacity planning, service level agreements, compliance, and other purposes,” noted Stu Henderson, Founder and President of the Henderson Group in the Compuware announcement. “This is a major need in an environment where security, technology, budget, and regulatory pressures continue to escalate.”

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

 

IBM Changes the Economics of Cloud Storage

March 31, 2017

Storage tiering used to be simple: active data went to your best high-performance storage, inactive data went to low-cost archival storage, and cloud storage filled in wherever else it was needed. Unfortunately, today’s emphasis on continuous data analytics, near real-time predictive analytics, and now cognitive has complicated this picture and the corresponding economics of storage.

In response, last week IBM unveiled new additions to the IBM Cloud Object Storage family. The company is offering clients new choices for archival data and a new pricing model to more easily apply intelligence to unpredictable data patterns using analytics and cognitive tools.

Analytics drive new IBM cloud storage pricing

By now, line of business (LOB) managers, having been exhorted to leverage big data and analytics for years, are listening. More recently, the analytics drumbeat has expanded to include not just big data but sexy IoT, predictive analytics, machine learning, and finally cognitive science. The idea of keeping data around for a few months, then parking it in a long-term archive never to be looked at again until it is finally deleted, just isn’t happening as it was supposed to (if it ever did). The failure to permanently remove expired data can become costly from a storage standpoint as well as risky from an e-discovery standpoint.

IBM puts it this way: Businesses typically have to manage across three types of data workloads: “hot” for data that’s frequently accessed and used; “cool” for data that’s infrequently accessed and used; and “cold” for archival data. Cold storage is often defined as cheaper but slower. For example, if a business uses cold storage, it typically has to wait to retrieve and access that data, limiting the ability to rapidly derive analytical or cognitive insights. As a result, there is a tendency to store data in more expensive hot storage.

IBM’s new cloud storage offering, IBM Cloud Object Storage Flex (Flex), uses a “pay as you use” model of storage tiers, potentially lowering the price by 53 percent compared to AWS S3 IA [1] and 75 percent compared to Azure GRS Cool Tier [2]. (See the footnotes at the bottom of the IBM press release linked above; note that IBM is not publishing the actual Flex storage prices.) Flex promises simplified pricing for clients whose data usage patterns are difficult to predict: organizations get the cost savings of cold storage for rarely accessed data while maintaining high accessibility to all data.
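Since IBM isn’t publishing Flex rates, any comparison has to lean on stand-in numbers. A back-of-envelope sketch, with entirely hypothetical per-GB rates, of why unpredictable access patterns make the hot-versus-cold decision so awkward:

```python
# Back-of-envelope storage cost model. All rates are hypothetical
# stand-ins; IBM has not published actual Flex prices.
HOT_RATE = 0.03      # $/GB-month, hot tier (hypothetical)
COLD_RATE = 0.007    # $/GB-month, cold tier (hypothetical)
RETRIEVAL = 0.02     # $/GB retrieved from cold (hypothetical)

def monthly_cost_hot(total_gb):
    return total_gb * HOT_RATE

def monthly_cost_cold(total_gb, gb_retrieved):
    # Cold capacity is cheap, but every retrieval adds a per-GB charge.
    return total_gb * COLD_RATE + gb_retrieved * RETRIEVAL

total = 500000  # a 500 TB archive
for retrieved in (0, 50000, 300000):
    print("retrieve %7d GB: hot $%9.0f vs cold $%9.0f"
          % (retrieved, monthly_cost_hot(total),
             monthly_cost_cold(total, retrieved)))
```

With light access the cold tier wins easily; with heavy, unpredictable access the retrieval charges erode the gap, which is exactly the situation a pay-as-you-use tier like Flex is pitched at.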

Of course, you could just lower the cost of storage by permanently removing unneeded data. Simply insist that the data owners specify an expiration date when you set up the storage initially. When the date arrives in 5, 10, or 15 years, automatically delete the data. At least that’s how I was taught eons ago. Of course, storage now costs orders of magnitude less, even as storage volumes have grown orders of magnitude greater, and near real-time analytics weren’t in the picture back then.
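That old-school policy is simple to automate. A minimal sketch, assuming a catalog that records an owner-assigned expiration date per object (the catalog layout and delete call are placeholders for whatever your storage API actually provides):

```python
# Minimal retention sweep: delete objects whose owner-assigned
# expiration date has passed. Catalog and delete() are placeholders.
from datetime import date

catalog = [
    {"name": "claims-2002.tar", "expires": date(2017, 1, 1)},
    {"name": "sensor-feed-raw", "expires": date(2032, 6, 30)},
]

def delete(name):
    print("deleting", name)  # stand-in for the real storage API call

def sweep(catalog, today=None):
    today = today or date.today()
    for obj in catalog:
        if obj["expires"] <= today:
            delete(obj["name"])

sweep(catalog)
```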

Without the actual rates for the different storage tiers you cannot determine how much Storage Flex may save you.  What it will do, however, is make it more convenient to perform analytics on archived data you might otherwise not bother with.  Expect this issue to come up increasingly as IoT ramps up and you are handling more data that doesn’t need hot storage beyond the first few minutes of its arrival.

Finally, the IBM Cloud Object Storage Cold Vault (Cold Vault) service gives clients access to cold storage data on the IBM Cloud and is intended to lead the category in cold data recovery times among its major competitors. Cold Vault joins the existing Standard and Vault tiers to complete a range of IBM cloud storage tiers, available with expanded expertise and methods via Bluemix and through the IBM Bluemix Garages.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

Open POWER-Open Compute-POWER9 at Open Compute Summit

March 16, 2017

Bryan Talik, President, OpenPOWER Foundation, provides a detailed rundown of the action at the Open Compute Summit held last week in Santa Clara. After weeks of writing about cognitive, machine learning, blockchain, and even quantum computing, it is a nice shift to conventional computing platforms that should still be viewed as strategic.

The OpenPOWER, Open Compute gospel was filling the air in Santa Clara. As reported, Andy Walsh, Xilinx Director of Strategic Market Development and OpenPOWER Foundation Board member, explained: “We very much support open standards and the broad innovation they foster. Open Compute and OpenPOWER are catalysts in enabling new data center capabilities in computing, storage, and networking.”

Added Adam Smith, CEO of Alpha Data:  “Open standards and communities lead to rapid innovation…We are proud to support the latest advances of OpenPOWER accelerator technology featuring Xilinx FPGAs.”

John Zannos, Canonical, OpenPOWER Board Chair, chimed in: for 2017, the OpenPOWER Board approved four areas of focus that include machine learning/AI, database and analytics, cloud applications, and containers. The strategy for 2017 also includes plans to extend OpenPOWER’s reach worldwide and promote technical innovations at various academic labs and in industry. Finally, the group plans to open additional application-oriented workgroups to further technical solutions that benefit specific application areas.

Not surprisingly, some members even see collaboration as the key to satisfying the performance demands the computing market craves. “The computing industry is at an inflection point between conventional processing and specialized processing,” according to Aaron Sullivan, distinguished engineer at Rackspace.

To address this shift, Rackspace and Google last year announced an OCP-OpenPOWER server platform, codenamed Zaius and Barreleye G2, based on POWER9. At the OCP Summit, both companies put the two products on public display.

This server platform promises to meet the performance, bandwidth, and power consumption demands of emerging applications that leverage machine learning, cognitive systems, real-time analytics, and big data platforms. The OCP players plan to continue their work alongside Google, OpenPOWER, OpenCAPI, and other Zaius project members.


The Zaius and Barreleye G2 server platforms promise to meet the performance, bandwidth, and power consumption demands of emerging applications that leverage the latest advanced technologies. Those technologies are none other than the strategic imperatives (cognitive, machine learning, real-time analytics) that IBM has been repeating like a mantra for months.

Open Compute projects also were displayed at the Summit. Specifically, as reported, Google and Rackspace published the Zaius specification to Open Compute in October 2016 and had engineers on hand to explain the specification process and give attendees a starting point for their own server designs.

Other Open Compute members, reportedly, also were there. Inventec showed a POWER9 OpenPOWER server based on the Zaius server specification. Mellanox showcased ConnectX-5, its next generation networking adapter that features 100Gb/s InfiniBand and Ethernet. This adapter supports PCIe Gen4 and CAPI2.0, providing higher performance and a coherent connection to the POWER9 processor compared with PCIe Gen3.

Others, reported by Talik, included Wistron and E4 Computing, which showcased their newly announced OCP-form factor POWER8 server. Featuring two POWER8 processors, four NVIDIA Tesla P100 GPUs with the NVLink interconnect, and liquid cooling, the new platform represents an ideal OCP-compliant HPC system.

Talik also reported that IBM, Xilinx, and Alpha Data showed their lineups of several FPGA adapters designed for both POWER8 and POWER9. Featuring PCIe Gen3 and CAPI1.0 for POWER8, and PCIe Gen4, CAPI2.0, and 25Gb/s CAPI3.0 for POWER9, these new FPGAs bring acceleration to a whole new level. OpenPOWER member engineers were on hand to provide information on the CAPI SNAP developer and programming framework as well as OpenCAPI.

Not to be left out, Talik reported that IBM showcased products it previously tested and demonstrated: POWER8-based OCP and OpenPOWER Barreleye servers running IBM’s Spectrum Scale software, a full-featured global parallel file system with roots in HPC and now widely adopted in commercial enterprises across all industries for data management at petabyte scale. Guess “compute platform” isn’t quite the dirty phrase IBM has been implying for months.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM Launches New IoT Collaborative Initiative

February 23, 2017

Collaboration partners can pull hundreds of millions of dollars in new revenue from IoT, according to IBM’s recent IoT announcement. Having reached what it describes as a tipping point in IoT innovation, the company now boasts over 6,000 clients and partners around the world, many of whom want to join its new global Watson IoT center to co-innovate. Already Avnet, BNP Paribas, Capgemini, and Tech Mahindra will collocate development teams at the IBM Munich center to work on IoT collaborations.


IBM Opens New Global Center for Watson IoT

The IBM center also will act as an innovation space for the European IoT standards organization EEBus. The plan, according to Harriet Green, General Manager, IBM Watson IoT, Cognitive Engagement and Education, calls for building a new global IoT innovation ecosystem that will explore how cognitive and IoT technologies will transform industries and our daily lives.

IoT and, more recently, cognitive are naturals for the z System, and POWER Systems have been the platform for natural language processing and cognitive since Watson won Jeopardy six years ago. With the latest enhancements IBM has brought to the z in the form of on-premises cognitive and machine learning, the z should assume an important role as it gathers, stores, and processes IoT data for cognitive analysis. DancingDinosaur first reported on this late in 2014 and again just last week. As IoT and cognitive workloads ramp up on z, don’t be surprised to see monthly workload charges rise.

Late last year IBM announced that car maker BMW will collocate part of its research and development operations at IBM’s new Watson IoT center to help reimagine the driving experience. Now, IBM is announcing four more companies that have signed up to join its special industry “collaboratories” where clients and partners work together with 1,000 Munich-based IBM IoT experts to tap into the latest design thinking and push the boundaries of the possible with IoT.

Let’s look at the four newest participants, starting with Avnet. According to IBM, Avnet, an IT distributor and global IBM partner, will open a new joint IoT Lab within IBM’s Watson IoT HQ to develop, build, demonstrate, and sell IoT solutions powered by IBM Watson. Working closely with IBM’s leading technologists and IoT experts, Avnet also plans to enhance its IoT technical expertise through hands-on training and on-the-job learning. Avnet’s team of IoT and analytics experts will also partner with IBM on joint business development opportunities across multiple industries including smart buildings, smart homes, industrial, transportation, medical, and consumer.

As reported by BNP Paribas, Consorsbank, its retail digital bank in Germany, will partner with IBM’s new Watson IoT center. The company will collocate a team of solution architects, developers, and business development personnel at the Watson facility. Together with IBM’s experts, they will explore how IoT and cognitive technologies can drive transformation in the banking industry and help innovate new financial products and services, such as investment advice.

Similarly, global IT consulting and technology services provider Capgemini will collocate a team of cognitive IoT experts at the Watson center. Together they will help customers maximize the potential of Industry 4.0 and develop and take to market sector-specific cognitive IoT solutions. Capgemini plans a close link between its Munich Applied Innovation Exchange and IBM’s new Customer Experience zones to collaborate with clients in an interactive environment.

Finally, Tech Mahindra, the Indian multinational provider of enterprise and communications IT and networking technology, is one of IBM’s global systems integrators, with over 3,000 specialists focused on IBM technology around the world. The company will locate a team of six developers and engineers within the Watson IoT HQ to help deliver on Tech Mahindra’s vision of generating substantial new revenue based on IBM’s Watson IoT platform. Tech Mahindra will use the center to co-create and showcase new solutions based on IBM’s Watson IoT platform for Industry 4.0 and manufacturing, precision farming, healthcare, insurance and banking, and automotive.

To facilitate connecting the z to IoT, IBM offers a simple recipe. It requires four basic ingredients and four steps: Texas Instruments’ SensorTag, a Bluemix account, IBM z/OS Connect Enterprise Edition, and a back-end service like CICS. Start by exposing an existing z Systems application as a RESTful API; this is where z/OS Connect Enterprise Edition comes in. Then connect your SensorTag device to Watson IoT Quickstart. From there, connect the cloud to your on-premises hybrid cloud. Finally, enable the published IoT data to trigger a RESTful API, as sketched below. Sounds pretty straightforward, but (full disclosure) DancingDinosaur has not tried it, lacking the necessary pieces. If you try it, please tell DancingDinosaur how it works (info@radding.net). Good luck.
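Here is a minimal sketch of that last step: a sensor reading arriving from the cloud triggers the REST API that z/OS Connect EE exposes in front of a CICS program. The endpoint URL, JSON field names, and threshold are hypothetical placeholders, not IBM’s published sample:

```python
# Minimal sketch: a SensorTag-style reading triggers a REST API exposed
# through z/OS Connect EE. URL, fields, and threshold are hypothetical.
import requests

ZOS_CONNECT_API = "https://zosconnect.example.com:9443/temperature/alerts"
THRESHOLD_C = 40.0  # arbitrary alert threshold

def on_sensor_reading(device_id, temp_c):
    """Called for each published IoT reading; invokes the z API on alert."""
    if temp_c < THRESHOLD_C:
        return
    resp = requests.post(
        ZOS_CONNECT_API,
        json={"deviceId": device_id, "temperatureC": temp_c},
        timeout=10,
    )
    resp.raise_for_status()
    print("z/OS Connect accepted alert:", resp.status_code)

# Example reading, as it might arrive via Watson IoT Quickstart.
on_sensor_reading("sensortag-01", 42.5)
```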

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM On-Premises Cognitive Means z Systems Only

February 16, 2017

Just in case you missed the incessant drumbeat coming out of IBM, the company is committed to cognitive computing. That works nicely for z data centers, since IBM’s cognitive system is available on-premises only for the z. Another z first: IBM just introduced Machine Learning (key for cognitive) for the private cloud, starting with the z.


There are three ways to get IBM cognitive computing solutions: the IBM Cloud, Watson, or the z System, notes Donna Dillenberger, IBM Fellow, IBM Enterprise Solutions. The z, however, is the only platform IBM supports for cognitive computing on premises (sorry, no Power). As such, the z represents the apex of programmatic computing, at least as IBM sees it. It also is the only IBM platform that supports cognitive natively, mainly in the form of Hadoop and Spark, both of which are programmatic tools.

What if your z told you that a given strategy had a 92% chance of success? It couldn’t do that until now, with IBM’s recently released cognitive system for z.

Your z system today represents the peak of programmatic computing. That’s what everyone working in computers grew up with, going all the way back to Assembler, COBOL, and FORTRAN. Newer languages and operating systems have arrived since; today your mainframe can respond to Java or Linux and now Python and Anaconda. Still, all are based on the programmatic computing model.

IBM believes the future lies in cognitive computing. Cognitive has become the company’s latest strategic imperative, apparently trumping its previous strategic imperatives: cloud, analytics, big data, and mobile. Maybe only security, which quietly slipped in as a strategic imperative sometime in 2016, can rival cognitive, at least for now.

Similarly, IBM describes itself as a cognitive solutions and cloud platform company. IBM’s infatuation with cognitive starts with data. Only cognitive computing, the company argues, will enable organizations to understand the flood of myriad data pouring in: structured, local data at first, then the world of global unstructured data, and from there a progression from decision tree-driven, deterministic applications to, eventually, probabilistic systems that co-evolve with their users by learning along with them.

You need cognitive computing. It is the only way, as IBM puts it, to move beyond the constraints of programmatic computing. In the process, cognitive can take you past keyword-based search, which merely lists locations where an answer might be found, to an intuitive, conversational means of discovering a set of confidence-ranked possibilities.

Dillenberger suggests it won’t be difficult to get to the IBM cognitive system on z. You don’t even program a cognitive system. At most, you train it, and even then the cognitive system does the heavy lifting by finding the most appropriate training models. If you don’t have preexisting training models, “just use what the cognitive system thinks is best,” she adds. Then the cognitive system will see what happens and learn from it, tweaking the models as necessary based on the results and new data it encounters. This also is where machine learning comes in, as the sketch below suggests.
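That train-and-keep-learning loop maps onto familiar open source tooling. A minimal sketch using scikit-learn’s incremental training, offered purely as a generic illustration of the pattern, not as IBM’s cognitive system:

```python
# Minimal sketch of the train-and-keep-learning loop with scikit-learn.
# Generic illustration only; this is not IBM's cognitive system.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(loss="log")  # logistic model, trained incrementally

# Initial training on whatever history exists (toy data here).
X0 = np.array([[0.2, 1.1], [0.9, 0.3], [0.4, 0.8], [1.2, 0.1]])
y0 = np.array([0, 1, 0, 1])
model.partial_fit(X0, y0, classes=np.array([0, 1]))

# As new data arrives, the model is tweaked rather than reprogrammed.
X_new = np.array([[0.3, 0.9], [1.0, 0.2]])
y_new = np.array([0, 1])
model.partial_fit(X_new, y_new)

# Score a candidate strategy: a confidence-ranked probability, not a rule.
candidate = np.array([[1.1, 0.2]])
print("chance of success: %.0f%%"
      % (100 * model.predict_proba(candidate)[0, 1]))
```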

IBM has yet to document payback and ROI data. Dillenberger, however, has spoken with early adopters. The big promised payback, of course, will come from the new insights uncovered, and it will be as astronomical or meager as your execution on those insights.

But there also is the promise of a quick technical payback for z data center managers. When the data resides on the z (a huge advantage for the z) you just run analytics where the data is. In such cases you can realize up to 3x the performance, Dillenberger noted. Even if you have to pull data from some other location too, you still run faster, maybe 2x faster. Other z advantages include large amounts of memory, multiple levels of cache, and multiple I/O processors that get at data without impacting CPU performance.

When the data and IBM’s cognitive system both reside on the z, you can save significant money. “ETL consumed huge amounts of MIPS. But when the client did it all on the z, it completely avoided the costly ETL process,” Dillenberger noted. As a result, that client reported savings of $7-8 million a year by completely bypassing the x86 layer and ETL and running Spark natively on the z.
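A minimal sketch of what running Spark natively against z data looks like in practice: reading a DB2 for z/OS table into Spark over JDBC and aggregating in place, with no ETL hop to an x86 cluster. The connection details, credentials, and table names are hypothetical:

```python
# Minimal sketch: analyze DB2 for z/OS data in place with Spark over
# JDBC, skipping the ETL hop. Connection details are hypothetical.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("analytics-where-the-data-is")
         .getOrCreate())

transactions = (spark.read.format("jdbc")
    .option("url", "jdbc:db2://zhost.example.com:5021/PRODDB")  # placeholder
    .option("dbtable", "PAYMENTS.TRANSACTIONS")                 # placeholder
    .option("user", "sparkid")
    .option("password", "secret")
    .load())

# Aggregate directly against the source data; no extract/transform/load.
daily = (transactions
         .groupBy("TXN_DATE")
         .sum("AMOUNT")
         .orderBy("TXN_DATE"))
daily.show(10)
```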

As Dillenberger describes it, cognitive computing on the z is here now, able to deliver a payback fast, and an even bigger payback going forward as you execute on the insights it reveals. And you already have a z, the only on-premises way to IBM’s Cognitive System.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

Arcati 2017 Mainframe Survey—Cognitive a No-Show

February 2, 2017

DancingDinosaur checks into Arcati’s annual mainframe survey every few years. You can access a copy of the 2017 report here.  Some of the data doesn’t change much, a few percentage points here or there. For example, 75% of the respondents consider the mainframe too expensive. OK, people have been saying that for years.

On the other hand, 65% of the respondents’ mainframes are involved with web services. Half also run Java-based mainframe apps, up from 30% last year, while 17% more are planning to run Java on their mainframes this year. Similarly, 35% of respondents report running Linux on the mainframe, up from 22% last year, and 13% more expect to add Linux this year. Driving this are the cost and management benefits that result from consolidating distributed Linux workloads on the z. Yes, things are changing.


The biggest surprise for DancingDinosaur, however, revolved around IBM’s latest strategic initiatives, especially cognitive computing and blockchain. Other strategic initiatives may include, depending on who is briefing you at the moment: security, data analytics, cloud, hybrid cloud, and mobile. These strategic imperatives, especially cognitive computing, are expected to drive IBM’s revenue. In the latest statement, reported last week in DancingDinosaur, strategic imperatives amounted to 41% of revenue. Cloud revenue and cloud-as-a-service revenue also rose considerably, 35% and 61% respectively.

When DancingDinosaur searched the accompanying Arcati vendor report (over 120 vendors with brief descriptions) for cognitive, only GT Software came up. IBM didn’t even mention cognitive in its vendor listing, which admittedly was skimpy. The case was the same with blockchain: only one vendor, Atos, mentioned it, and there was nothing about blockchain in the IBM listing. More vendors, however, noted supporting one or another of the other supposed strategic initiatives.

Overall, the Arcati survey is quite positive about the mainframe. The survey found that 50 percent of sites viewed their mainframe as a legacy system (down from last year’s 62 percent). However, 22 percent (up from 16 percent last year) viewed mainframe as strategic, with 28 percent (up from 22 percent) viewing mainframes as both strategic and legacy.

Reinforcing the value of the mainframe, the survey found 78 percent of sites experienced some kind of increase in capacity. With increased demand for mainframe resources (data and processing), it should not be surprising that 81 percent of sites report an increase in technology costs. Yet 38 percent of sites report their people costs have decreased or stayed the same.

Unfortunately, the survey also found that 70 percent of respondents thought there was a cultural barrier between mainframe and other IT professionals. That did not discourage respondents from pointing out the mainframe’s advantages: 100 percent highlighted the benefit of the mainframe’s availability, 83 percent highlighted security, 75 percent identified scalability, and 71 percent picked manageability as a mainframe benefit.

Social media also touches the mainframe. Respondents found social media (Facebook, Twitter, YouTube) useful for their mainframe work. Twenty-seven percent report using social media (up slightly from 25 percent last year), with the rest not using it at all, despite IBM offering Facebook pages dedicated to IMS, CICS, and DB2. DancingDinosaur, only an occasional FB visitor, will check them out and report.

In terms of how mainframes are being used, the Arcati survey found that 25 percent of sites are planning to use big data. Five percent of sites have adopted DevOps, while 48 percent are planning to use mainframe DevOps going forward. Similarly, 14 percent of respondents already reuse APIs, while another 41 percent are planning to.

Arcati points out another interesting thought: the survey showed a 55:45 split in favor of distributed systems, so you might expect the spend on the two types of platform to be similar. Yet the survey found that 87 percent of an organization’s IT spend was going to distributed systems! Apparently mainframes aren’t as expensive as people think. Or, to put it another way, the cost of owning and operating distributed systems with mainframe-caliber QoS amounts to a lot more than people are admitting.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM Cheers Beating Estimates But Losing Streak Continues

January 26, 2017

It has been 19 quarters since IBM reported year-over-year revenue growth, but the noises coming out of IBM with the latest 4Q16 and full-year 2016 financials are upbeat: the company beat analyst consensus revenue estimates, and its strategic imperatives are starting to generate serious revenue. Although systems revenues were down again (12%), the accountants at least had something positive to say about the z: “gross profit margins improved driven by z Systems performance.”


EZSource: Dashboard visualizes changes to mainframe code

IBM doesn’t detail which z models were contributing, but you can guess they would be the LinuxONE models (Emperor and Rockhopper) and the z13. DancingDinosaur expects z performance to improve significantly in 2017 when a new z, heavily hinted at in the 3Q2016 results reported here, is expected to ship.

With its latest financials IBM is outright crowing about its strategic imperatives: fourth-quarter cloud revenues increased 33 percent. The annual exit run rate for cloud as-a-service revenue increased to $8.6 billion from $5.3 billion at year-end 2015. Revenues from analytics increased 9 percent. Revenues from mobile increased 16 percent, and revenues from security increased 7 percent.

For the full year, revenues from strategic imperatives increased 13 percent.  Cloud revenues increased 35 percent to $13.7 billion.  The annual exit run rate for cloud as-a-service revenue increased 61 percent year to year.  Revenues from analytics increased 9 percent.  Revenues from mobile increased 34 percent and from security increased 13 percent.

Of course, cognitive computing is IBM’s strategic imperative darling of the moment, followed by blockchain. Cognitive, for which IBM appears to use an expansive definition, is primarily a cloud play as far as IBM is concerned. There is, however, a specific role for the z, which DancingDinosaur will get into in a later post. Blockchain, on the other hand, should be a natural z play. It is, essentially, extremely secure OLTP on steroids. As blockchain scales up, it is a natural driver of z workloads.

As far as IBM’s financials go, the strategic imperatives indeed are doing well. Other business units, however, continue to struggle. For instance:

  • Global Business Services (includes consulting, global process services and application management) — revenues of $4.1 billion, down 4.1 percent.
  • Systems (includes systems hardware and operating systems software), remember, this is where z and Power platforms reside — revenues of $2.5 billion, down 12.5 percent. But as noted above, gross profit margins improved, driven by z Systems performance.
  • Global Financing (includes financing and used equipment sales) — revenues of $447 million, down 1.5 percent.

A couple of decades ago, when this blogger first started covering IBM and the mainframe as a freelancer writing for any technology publication that would pay real money, IBM was struggling (if $100 billion behemoths can be thought of as struggling). The buzz among the financial analysts who followed the company was that IBM should be broken up into its parts and sold off. IBM didn’t take that advice, at least not exactly, but it did begin a rebound that included laying off tons of people and selling some assets. Since then it has invested heavily in things like Linux on z and open systems.

In December IBM SVP Tom Rosamilia talked about new investments in z/OS and z software like DB2, CICS, and IMS, and as best your blogger can tell, he is still there. (Rumors suggest Rosamilia is angling for Rometty’s job in two years.) If the new z does actually arrive in 2017 and key z software is refreshed, then z shops can rest easy, at least for another few quarters. But whatever happens, you can follow it here.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM Introduces New DS8880 All-Flash Arrays

January 13, 2017

Yesterday IBM introduced three new members of the DS8000 line, each an all-flash product. The new all-flash storage products are designed for midrange and large enterprises, where high availability, continuous uptime, and performance are critical.


IBM envisions these boxes for more than the z’s core OLTP workloads. According to the company, they are built to provide the speed and reliability needed for workloads ranging from enterprise resource planning (ERP) and financial transactions to cognitive applications like machine learning and natural language processing. The solutions are designed to support cognitive workloads, which can be used to uncover trends and patterns that help improve decision-making, customer service, and ROI. ERP and financial transactions certainly constitute conventional OLTP but the cognitive workloads are more analytical and predictive.

The three products:

  • IBM DS8884 F
  • IBM DS8886 F
  • IBM DS8888 F

The F signifies all-flash.  Each was designed with High-Performance Flash Enclosures Gen2. IBM did not just slap flash into existing hard drive enclosures.  Rather, it reports undertaking a complete redesign of the flash-to-z interaction. As IBM puts it: through deep integration between the flash and the z, IBM has embedded software that facilitates data protection, remote replication, and optimization for midrange and large enterprises. The resulting new microcode is ideal for cognitive workloads on z and Power Systems requiring the highest availability and system reliability possible. IBM promises that the boxes will deliver superior performance and uncompromised availability for business-critical workloads. In short, fast enough to catch bad guys before they leave the cash register or teller window. Specifically:

  • The IBM DS8884 F, labelled the business-class offering, boasts the lowest entry cost for midrange enterprises (prices starting at $90,000 USD). It runs on an IBM Power Systems S822 with a 6-core POWER8 processor, 256 GB cache (DRAM), 32 Fibre Channel/FICON ports, and 6.4-154 TB of flash capacity.
  • The IBM DS8886 F, the enterprise-class offering for large organizations seeking high performance, runs on an IBM Power Systems S824 with a 24-core POWER8 processor. It offers 2 TB cache (DRAM), 128 Fibre Channel/FICON ports, and 6.4-614.4 TB of flash capacity. That’s over half a petabyte of high-performance flash storage.
  • The IBM DS8888 F, labelled the analytics-class offering, promises the highest performance for faster insights. It runs on an IBM Power Systems E850 with a 48-core POWER8 processor and comes with 2 TB cache (DRAM), 128 Fibre Channel/FICON ports, and 6.4 TB-1.22 PB of flash capacity. Guess crossing the petabyte level, along with the bigger processor complex, qualifies it as an analytics and cognitive device.

As IBM emphasized in the initial briefing, it engineered these storage devices to surpass the typical big flash storage box. For starters, IBM bypassed the device adapter to connect the z directly to the high performance storage controller. IBM’s goal was to reduce latency and optimize all-flash storage, not just navigate a simple replacement by swapping new flash for ordinary flash or, banish the thought, HDD.

“We optimized the data path,” explained Jeff Barber, IBM Systems VP for HE Storage BLE (DS8, DP&R, and SAN). To that end, IBM switched from a 1U to a 4U enclosure, runs on shared-nothing clusters, and boosted throughput performance. The resulting storage, he added, “does database better than anyone; we can run real-time analytics.” The typical analytics system, a shared system running Hadoop, won’t even come close to these systems, he added. With the DS8888, you can deploy a real-time cognitive cluster with minimal-latency flash.

DancingDinosaur always appreciates hearing from actual users. Working through a network of offices, supported by a team of over 850 people, the Health Insurance Institute of Slovenia (Zavod za zdravstveno zavarovanje Slovenije) provides health insurance to approximately two million customers. To successfully manage its new customer-facing applications (such as electronic order processing and electronic receipts), its storage system required additional capacity and performance. After researching solutions capable of managing these applications, including offerings from Hitachi and EMC, the organization deployed the IBM DS8886 along with DB2 for z/OS data server software to provide an integrated data backup and restore system. (Full disclosure: DancingDinosaur has not verified this customer story.)

“As long-time users of IBM storage infrastructure and mainframes, our upgrade to the IBM DS8000 with IBM business partner Comparex was an easy choice. Since then, its high performance and reliability have led us to continually deploy newer DS8000 models as new features and functions have provided us new opportunities,” said Bojan Fele, CIO of Health Insurance Institute of Slovenia. “Our DS8000 implementation has improved our reporting capabilities by reducing time to actionable insights. Furthermore, it has increased employee productivity, ensuring we can better serve our clients.”

For full details and specs on these products, click here.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

