Posts Tagged ‘Big Data’

GAO Blames Z for Government Inefficiency

October 19, 2018

Check out the GAO report from May 2016 here.  The Feds spent more than 75 percent of the total amount budgeted for information technology (IT) in fiscal year 2015 on operations and maintenance (O&M). In a related report, the IRS said it used assembly language code and COBOL, both developed in the 1950s, for the Individual Master File (IMF) and the Integrated Data Retrieval System (IDRS). Unfortunately, the GAO uses the word “mainframe” indiscriminately, lumping outdated UNISYS mainframes together with the modern, supported, and actively developed IBM Z, notes Ross Mauri, IBM general manager, Z systems.

Mainframes-mobile in the cloud courtesy of Compuware

The GAO repeatedly used “mainframe” to refer to outdated UNISYS mainframes alongside the latest advanced IBM Z mainframes. COBOL, too, remains actively taught at many institutions and continues to receive investment across many industries. In addition to COBOL, the IBM z14 also runs Java, Swift, Go, Python, and other open languages to enable modern application enhancement and development. Does the GAO know that?

Once again, the GAO lumps outdated UNISYS mainframes together with modern, supported, and actively developed IBM Z machines. In a recent report, the GAO recommends moving to supported modern hardware. IBM agrees. The Z, however, does not expose mainframe investments to rising procurement and operating costs, nor to skilled staffing issues, Mauri continued.

Three investments the GAO reviewed in operations and maintenance clearly appear to be legacy investments facing significant risks due to their reliance on obsolete programming languages, outdated hardware, and a shortage of staff with critical skills. For example, the IRS reported that it used assembly language code and COBOL (both developed in the 1950s) for IMF and IDRS. What are these bureaucrats smoking?

The GAO also seems confused about the Z and the cloud. IBM Cloud Private is designed to run on Linux-based Z systems to take full advantage of the cloud through open containers while retaining the inherent benefits of Z hardware: security, availability, scalability, reliability; all the ities enterprises have long relied on the Z for. The GAO seems unaware that the Z’s automatic pervasive encryption immediately encrypts everything at rest or in transit. Furthermore, the GAO routinely treats COBOL as a deficiency, while ISVs and other signatories of the Open Letter consider it a modern, optimized, and actively supported programming language.

The GAO apparently isn’t even aware of IBM Cloud Private, which is compatible with leading IT systems and has been optimized for IBM Z. All you need to get started with the cloud is the starter kit available for IBM OpenPOWER LC (Linux) servers, enterprise Power Systems, and Hyperconverged Systems powered by Nutanix. You don’t even need a Z; just buy a low-cost OpenPOWER LC (Linux) server online and configure it as desired.

Here is part of the letter that Compuware sent to the GAO, Federal CIOs, and members of Congress. It’s endorsed by several dozen members of the IT industry. The full letter is here:

In light of a June 2018 GAO report to the Internal Revenue Service suggesting the agency’s mainframe- and COBOL-based systems present significant risks to tax processing, we the mainframe IT community—developers, scholars, influencers and inventors—urge the IRS and other federal agencies to:

  • Reinvest in and modernize the mainframe platform and the mission-critical applications which many have long relied upon.
  • Prudently consider the financial risks and opportunity costs associated with rewriting and replacing proven, highly dependable mainframe applications, for which no “off-the-shelf” replacement exists.
  • Understand the security and performance requirements of these mainframe applications and data and the risk of migrating to platforms that were never designed to meet such requirements.

The Compuware letter goes on to state: In 2018, the mainframe is still the world’s most reliable, performant, and securable platform, providing the lowest-cost high-transaction system of record. Regarding COBOL, it notes that since 2017 the IBM z14 has supported COBOL V6.2, which is optimized bi-monthly.

Finally, about attracting new COBOL workers: COBOL is as easy to work with as any other language. In fact, open source Zowe has demonstrated appeal to young techies, providing solutions for development and operations teams to securely manage, control, script, and develop on the mainframe like any other cloud platform. What don’t they get?

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at technologywriter.com.

Can IBM find a place for Watson?

September 7, 2018

After beating two human Jeopardy champions three times in a row in 2011, IBM’s Watson has been hard pressed to come up with a comparable winning streak. Initially, IBM appeared to expect its largest customers to buy richly configured Power servers to run Watson on prem. When it didn’t get enough takers, the company moved Watson to the cloud, where companies could lease it for major knowledge-driven projects. When that didn’t catch on, IBM started to lease Watson’s capabilities by the drink, promising to solve problems in onesies and twosies.

Jeopardy champs lose to Watson

Today Watson is promising to speed AI success through IBM’s Watson Knowledge Catalog. As IBM puts it: IBM Watson Knowledge Catalog powers intelligent, self-service discovery of data, models, and more, activating them for artificial intelligence, machine learning, and deep learning. Access, curate, categorize, and share data, knowledge assets, and their relationships, wherever they reside.
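IBM doesn’t publish the catalog’s internals, but mechanically a data catalog is a registry of metadata about assets, searchable by attributes like tags. A minimal Python sketch of the idea (all class names, fields, and sample entries are hypothetical, not IBM’s API):

```python
from dataclasses import dataclass, field

@dataclass
class Asset:
    name: str
    location: str            # where the data resides
    origin: str              # where it came from
    tags: set = field(default_factory=set)

class Catalog:
    """A registry of metadata about data assets, searchable by tag."""
    def __init__(self):
        self.assets = {}

    def register(self, asset):
        self.assets[asset.name] = asset

    def discover(self, tag):
        # self-service discovery: list every asset carrying the given tag
        return [a.name for a in self.assets.values() if tag in a.tags]

catalog = Catalog()
catalog.register(Asset("sales_2018", "db2://prod/sales", "ERP extract", {"finance", "pii"}))
catalog.register(Asset("churn_model", "s3://models/churn", "data science team", {"model"}))
print(catalog.discover("finance"))  # ['sales_2018']
```

The point of the real product is presumably that the registration and tagging happen automatically, with machine learning, rather than by hand as here.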

DancingDinosaur has no doubt that Watson is stunning technology and has been rooting for its success since that first Jeopardy round seven years ago. Over that time, Watson and IBM have become a case study in how not to price, package, and market powerful yet expensive technology. The Watson Knowledge Catalog is yet another pricing and packaging experiment.

Based on the latest information online, Watson Knowledge Catalog is priced according to the number of provisioned catalogs and discovery connections. There are two plans: Lite and Professional. The Lite plan allows one catalog and five free discovery connections, while the Professional plan provides unlimited amounts of both. Huh? This statement begs for clarification, and there is probably a lot of fine print required to answer the questions it raises, but life is too short for DancingDinosaur to rummage around the Watson Knowledge Catalog site looking for answers. Doesn’t this seem like something Watson itself should be able to clarify with a single click?

But no, that is too easy. Instead IBM takes the high road, which DancingDinosaur calls the education track. Notes Jay Limburn, Senior Technical Staff Member and IBM Offering Manager: there are two main challenges that might impede you from realizing the true value of your data and slow your journey to adopting artificial intelligence (AI). They are 1) inefficient data management and 2) finding the right tools for all data users.

Actually, the issues start even earlier. In attempting AI most established organizations start at a disadvantage, notes IBM. For example:

  • Most enterprises do not know what and where their data is
  • Data science and compliance teams are handicapped by the lack of data accessibility
  • Enterprises with legacy data are even more handicapped than digitally savvy startups
  • AI projects will expose problems with limited data and poor quality; many will fail for that reason alone
  • The need to differentiate through monetization increases in importance with AI

These are not new problems. People have been whining about them since the most rudimentary data mining attempts decades ago. If there is a surprise, it is that they have not been resolved by now.

Or maybe they finally have been, with the IBM Watson Knowledge Catalog. As IBM puts it, the company will deliver what promises to be the ultimate data catalog, one that actually illuminates data:

  • Knows what data your enterprise has
  • Knows where it resides
  • Knows where it came from
  • Knows what it means
  • Provides quick access to it
  • Ensures protected use
  • Exploits machine learning for intelligence and automation
  • Enables data scientists, data engineers, stewards, and business analysts
  • Embeds everywhere for free, with premium features available in paid editions

OK, after seven years Watson may be poised to deliver, and it has little to do with Jeopardy and everything to do with a rapidly growing data catalog market. According to a Research and Markets report, the data catalog market is expected to grow from $210 million in 2017 to $620 million by 2022. How many sales of the Professional version would it take to give IBM a leading share?

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.

Travelport and IBM launch industry AI travel platform

August 24, 2018

Uh oh, if you have been a little sloppy with travel expenses, it’s time to clean up your travel act before AI starts monitoring your reimbursed travel. IBM and Travelport are teaming up to offer the industry’s first AI-based travel platform to intelligently manage corporate travel spend while leveraging IBM Watson capabilities to unlock previously unavailable data insights.

As IBM explains it, the new travel platform will be delivered via the IBM Cloud and exploits IBM Watson capabilities to intelligently track, manage, predict, and analyze travel costs, fundamentally changing how companies manage and optimize their travel programs. Typically, each work group submits its own travel expenses, and reconciliation and reimbursement can be handled by different groups.

With annual global business travel spend projected by the Global Business Travel Association to reach a record $1.2 trillion this year, corporate travel managers need new ways to reduce costs. That requires consolidating and normalizing all the information. Currently, to get a full picture of travel patterns, a travel manager might have to sift through data silos from travel agencies, cards, expense systems, and suppliers for end-to-end visibility of spend and compliance across all travel subcategories. This, however, is usually done from a historical view rather than in real time, which is one reason reimbursement can take so long. As an independent contractor, DancingDinosaur generally has to submit travel expenses at the end of the project and wait forever for payment.
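The consolidation problem is mundane but real: every silo names and formats the same expense differently. A toy Python sketch of the normalization step, with invented field names and records standing in for the agency, card, and expense-system silos:

```python
# Hypothetical records from three silos, each with its own field names and units
agency = [{"traveler": "lee", "usd": 420.0, "category": "air"}]
card = [{"name": "LEE", "amount_cents": 18550, "type": "hotel"}]
expense_sys = [{"employee": "Lee", "total": 62.10, "kind": "meals"}]

def normalize(record, name_key, amount_key, cat_key, cents=False):
    # Map each silo's fields onto one common schema, in one currency unit
    amount = record[amount_key] / 100 if cents else record[amount_key]
    return {"traveler": record[name_key].lower(),
            "usd": round(amount, 2),
            "category": record[cat_key]}

unified = ([normalize(r, "traveler", "usd", "category") for r in agency] +
           [normalize(r, "name", "amount_cents", "type", cents=True) for r in card] +
           [normalize(r, "employee", "total", "kind") for r in expense_sys])

total = sum(r["usd"] for r in unified)
print(round(total, 2))  # 667.6
```

Only once records share a schema like this can spend be totaled, benchmarked, or analyzed across all travel subcategories, which is presumably the platform’s first job.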

IBM continues: The new platform, dubbed Travel Manager, features advanced artificial intelligence and provides cognitive computing and predictive data analytics using what-if scenarios, integrated with travel and expense data to help travel management teams, procurement category managers, business units, finance, and human resource departments optimize their travel programs, control spend, and enhance the end-traveler experience. Maybe they will even squeeze independent contractors into the workflow.

The special sauce in all of this results from how IBM combines data with Travelport, a travel commerce platform on its own, to produce IBM Travel Manager as an AI platform that oversees corporate travel expenses. In the process, IBM Travel Manager gives users complete, unified access to previously siloed information, which, when combined with travel data from the Travelport global distribution system (GDS), can then be used to create real-time predictive analytics recommending how, say, adjustments in travel booking behavior patterns can positively impact a company’s travel budget.

Travelport, itself, is a heavyweight in the travel industry. It relies on technology to make the experience of buying and managing travel better. Through its travel commerce platform it provides distribution, technology, payment and other capabilities for the $7 trillion global travel and tourism industry. The platform facilitates travel commerce by connecting the world’s leading travel providers with online and offline travel buyers in a proprietary (B2B) travel marketplace.

The company helps with all aspects of the travel supply chain from airline merchandising, hotel content and distribution, mobile commerce to B2B payments. Last year its platform processed over $83 billion of travel spend, helping its customers maximize the value of every trip.

IBM Travel Manager combines and normalizes data from diverse sources, allowing for more robust insights and benchmarking than other reporting solutions. It also taps AI to unlock previously unavailable insights from multiple internal and external data sources. The product is expected to be commercially available to customers through both IBM and Travelport.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.

IBM Introduces a Reference Architecture for On-Premise AI

June 22, 2018

This week IBM announced an AI infrastructure Reference Architecture for on-premises AI deployments. The architecture promises to address the challenges organizations face experimenting with AI PoCs, growing into multi-tenant production systems, and then expanding to enterprise scale while integrating into an organization’s existing IT infrastructure.

The reference architecture includes, according to IBM, a set of integrated software tools built on optimized, accelerated hardware to enable organizations to jump start AI and deep learning projects, speed time to model accuracy, and provide enterprise-grade security, interoperability, and support. IBM’s graphic above should give you the general picture.

Specifically, IBM’s AI reference architecture should support iterative, multi-stage, data-driven processes or workflows that entail specialized knowledge, skills, and, usually, a new compute and storage infrastructure. Still, these projects have many attributes that are familiar to traditional CIOs and IT departments.

The first of these is that the results are only as good as the data going in; model development depends on having a lot of data, in the format the deep learning framework expects. Surprised? You have been hearing this for decades as GIGO (garbage in, garbage out). The AI process also is iterative: repeatedly looping through data sets and tunings to develop more accurate models, then comparing results against the original business or technical requirements to refine the approach. In this sense, the AI reference model is no different from IT 101, an intro course for wannabe IT folks.
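That iterative loop, fit, measure, check against the requirement, and repeat, can be shrunk to a toy example. A minimal sketch, purely illustrative: a one-parameter model fit by gradient descent until its error meets a stated requirement.

```python
# Toy iterative loop: fit y = w*x by gradient descent,
# re-checking the error each pass until it meets the requirement.
data = [(1, 2.1), (2, 3.9), (3, 6.2)]   # noisy observations of y ≈ 2x

def mse(w):
    # mean squared error of the current model against the data
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

w, lr = 0.0, 0.02
for step in range(200):
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad
    if mse(w) < 0.05:          # the "business requirement" for accuracy
        break

print(round(w, 2))  # close to 2.0
```

Real deep learning swaps the one parameter for millions and the requirement for a validation metric, but the loop structure is the same.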

But AI doesn’t stay simplistic for long. As the reference architecture puts it, AI is a sophisticated, complex process that requires specialized software and infrastructure. That’s where IBM’s PowerAI Platform comes in. Most organizations start with small pilot projects bound to a few systems and data sets but grow from there.

As projects grow beyond the first test systems, however, it is time to bulk up an appropriate storage and networking infrastructure. This will allow it to sustain growth and eventually support a larger organization.

The trickiest part of AI and the part that takes inspired genius to conceive, test, and train is the model. The accuracy and quality of a trained AI model are directly affected by the quality and quantity of data used for training. The data scientist needs to understand the problem they are trying to solve and then find the data needed to build a model that solves the problem.

Data for AI is separated into a few broad sets; the data used to train and test the models and data that is analyzed by the models and the archived data that may be reused. This data can come from many different sources such as traditional organizational data from ERP systems, databases, data lakes, sensors, collaborators and partners, public data, mobile apps, social media, and legacy data. It may be structured or unstructured in many formats such as file, block, object, Hadoop Distributed File Systems (HDFS), or something else.

Many AI projects begin as a big data problem. Regardless of how it starts, a large volume of data is needed, and it inevitably needs preparation, transformation, and manipulation. But it doesn’t stop there.

AI models require the training data to be in a specific format; each model has its own, usually different, format. Invariably the initial data is nowhere near those formats. Preparing the data is often one of the largest organizational challenges, not only in complexity but in the amount of time it takes to transform the data into a format that can be analyzed. Many data scientists, notes IBM, claim that over 80% of their time is spent in this phase and only 20% on the actual process of data science. Data transformation and preparation is typically a highly manual, serial set of steps: identifying and connecting to data sources, extracting to a staging server, tagging the data, and using tools and scripts to manipulate it. Hadoop is often a significant source of this raw data, and Spark typically provides the analytics and transformation engines used, along with advanced AI data matching and traditional SQL scripts.
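The manual steps described above (extract, clean, tag, and reshape raw records into the format a model expects) look like this in miniature. A hypothetical Python sketch, not any IBM tooling; the raw lines and field layout are invented:

```python
# Raw records as they might land on a staging server: inconsistent, some unusable
raw = ["2018-06-01,ACME, 1200", "2018-06-02,BETA,n/a", "2018-06-03,acme,980"]

def prepare(lines):
    rows = []
    for line in lines:
        date, customer, amount = [f.strip() for f in line.split(",")]
        if not amount.replace(".", "").isdigit():
            continue                           # drop rows the model cannot use
        rows.append({"date": date,
                     "customer": customer.lower(),   # normalize casing
                     "amount": float(amount)})       # cast to the expected type
    return rows

clean = prepare(raw)
print(len(clean), clean[0]["customer"])  # 2 acme
```

At production scale each of these steps becomes a Spark job or SQL script over terabytes, which is exactly where the 80% figure comes from.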

There are two other considerations in this phase: 1) data storage and access, and 2) the speed of execution. For this (don’t be shocked) IBM recommends Spectrum Scale, which provides multi-protocol support with a native HDFS connector and can centralize and analyze data in place rather than wasting time copying and moving it. But you may have your preferred platform.

IBM’s reference architecture provides a place to start. A skilled IT group will eventually tweak IBM’s reference architecture, making it their own.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.

Is Your Enterprise Ready for AI?

May 11, 2018

According to IBM’s gospel of AI, “we are in the midst of a global transformation and it is touching every aspect of our world, our lives, and our businesses.”  IBM has been preaching this gospel for the past year or longer, but most of its clients haven’t jumped fully aboard. “For most of our clients, AI will be a journey. This is demonstrated by the fact that most organizations are still in the early phases of AI adoption.”

AC922 with NVIDIA Tesla V100 and Enhanced NVLink GPUs

The company’s latest announcements earlier this week focus POWER9 squarely on AI. Said Tim Burke, Engineering Vice President, Cloud and Operating System Infrastructure, at Red Hat. “POWER9-based servers, running Red Hat’s leading open technologies offer a more stable and performance optimized foundation for machine learning and AI frameworks, which is required for production deployments… including PowerAI, IBM’s software platform for deep learning with IBM Power Systems that includes popular frameworks like Tensorflow and Caffe, as the first commercially supported AI software offering for [the Red Hat] platform.”

IBM insists this is not just about POWER9, and they may have a point; GPUs and other assist processors are taking on more importance as companies try to emulate the hyperscalers in their efforts to drive server efficiency while boosting power in the wake of declines in Moore’s Law. “GPUs are at the foundation of major advances in AI and deep learning around the world,” said Paresh Kharya, group product marketing manager of Accelerated Computing at NVIDIA. [Through] “the tight integration of IBM POWER9 processors and NVIDIA V100 GPUs made possible by NVIDIA NVLink, enterprises can experience incredible increases in performance for compute-intensive workloads.”

To create an AI-optimized infrastructure, IBM announced the latest additions to its POWER9 lineup, the IBM Power Systems LC922 and LC921. Characterized by IBM as balanced servers, they offer both compute capabilities and up to 120 terabytes of data storage, with NVMe for rapid access to vast amounts of data. IBM included HDD in the announcement, but any serious AI workload will choke without ample SSD.

Alongside these new servers, IBM also brought an updated version of the AC922 server, which now features the recently announced 32GB NVIDIA V100 GPUs and larger system memory, enabling bigger deep learning models and more accurate AI workloads.

IBM has characterized the new models, the LC922 and LC921 servers with POWER9 processors, as data-intensive and AI-intensive systems. The AC922 arrived last fall. It was designed for what IBM calls the post-CPU era. The AC922 was the first to embed PCI-Express 4.0, next-generation NVIDIA NVLink, and OpenCAPI (three interface accelerators), which together can accelerate data movement 9.5x faster than PCIe 3.0-based x86 systems. The AC922 was designed to drive demonstrable performance improvements across popular AI frameworks such as TensorFlow and Caffe.

In the post-CPU era, where Moore’s Law no longer rules, you need to pay as much attention to the GPU and other assist processors as to the CPU itself, maybe even more so. For example, the coherence and high speed of NVLink enable hash tables, critical for fast analytics, on GPUs. As IBM noted at the introduction of the new machines this week: hash tables are a fundamental data structure for analytics over large datasets, and they need large memory; small GPU memory limits hash table size and analytic performance. CPU-GPU NVLink2 solves two key problems. Its high speed enables storing the full hash table in CPU memory and transferring pieces to the GPU for fast operations, and its coherence ensures that new inserts in CPU memory get updated in GPU memory. Otherwise, modifications to data in CPU memory do not get reflected in GPU memory.
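The hash-table pattern IBM describes is the classic build-and-probe hash join. A CPU-side Python sketch of the idea (the tables are invented; the NVLink2 point is that the dimension table below can stay in large CPU memory while the GPU streams probes against it):

```python
# Build phase: hash the dimension table. On a GPU-only design this table
# would have to fit in small GPU memory; with coherent NVLink2 it can live
# in large CPU memory instead.
customers = {1: "ACME", 2: "BETA"}          # key -> payload

# Probe phase: stream the large fact table through the hash table
orders = [(1, 500), (2, 120), (1, 75), (3, 60)]   # (customer_key, amount)
joined = [(customers[k], amt) for k, amt in orders if k in customers]

print(joined)  # [('ACME', 500), ('BETA', 120), ('ACME', 75)]
```

Note how the probe with key 3 is simply dropped; the join is only as complete as the hash table, which is why table capacity, and hence memory size, governs analytic performance.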

IBM has started referring to the LC922 and LC921 as big data crushers. The LC921 brings two POWER9 sockets in a 1U form factor; for I/O it comes with both PCIe 4.0 and CAPI 2.0; and it offers up to 40 cores (160 threads) and 2TB of RAM, which is ideal for environments requiring dense computing.

The LC922 is considerably bigger. It offers balanced compute capabilities delivered with the POWER9 processor and up to 120TB of storage capacity, again with advanced I/O through PCIe 4.0/CAPI 2.0, and up to 44 cores (176 threads) and 2TB of RAM. The list price, notes IBM, is ~30% less.

If your organization is not thinking about AI, it is probably in the minority, according to IDC:

  • 31 percent of organizations are in [AI] discovery/evaluation
  • 22 percent of organizations plan to implement AI in next 1-2 years
  • 22 percent of organizations are running AI trials
  • 4 percent of organizations have already deployed AI

Underpinning both servers is the IBM POWER9 CPU. The POWER9 enjoys nearly 5.6x higher CPU-to-GPU bandwidth than x86, which can improve deep learning training times by nearly 4x. Even today companies are struggling to cobble together the different pieces and make them work. IBM learned that lesson and now offers a unified AI infrastructure in PowerAI and POWER9 that you can use today.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

IBM Introduces Skinny Z Systems

April 13, 2018

Early this week IBM unveiled two miniaturized mainframe models, dubbed skinny mainframes, it said are easier to deploy in a public or private cloud facility than their more traditional, much bulkier predecessors. Relying on all their design tricks, IBM engineers managed to pack each machine into a standard 19-inch rack with space to spare, which can be used for additional components.

Z14 LinuxONE Rockhopper II, 19-inch rack

The first new mainframe introduced this week is the Z14 model ZR1; you can expect subsequent models to increment the model numbering. The second new machine is the LinuxONE Rockhopper II. Both come in 19-inch racks.

In the past, about a year after IBM introduced a new mainframe, say the z10, it introduced what it called a Business Class (BC) version. The BC machines were less richly configured and less expandable but delivered comparable performance with lower capacity and a distinctly lower price.

In a Q&A analyst session IBM insisted the new machines would be priced noticeably lower, as the BC-class machines of the past were. These, however, are not comparable to the old BC machines. Instead, they are intended to attract a new group of users who face new challenges. As such, they come cloud-ready. The 19-inch industry standard, single-frame design is intended for easy placement into existing cloud data centers, alongside other components, and into private cloud environments.

The company, said Ross Mauri, General Manager IBM Z, is targeting the new machines toward clients seeking robust security with pervasive encryption, cloud capabilities and powerful analytics through machine learning. Not only, he continued, does this increase security and capability in on-premises and hybrid cloud environments for clients, IBM will also deploy the new systems in IBM public cloud data centers as the company focuses on enhancing security and performance for increasingly intensive data loads.

In terms of security, the new machines will be hard to beat. IBM reports they are capable of processing over 850 million fully encrypted transactions a day on a single system. At the same time, the new mainframes do not require special space, cooling, or energy. They do, however, still provide IBM’s pervasive encryption and Secure Service Container technology, which secures data serving at massive scale.

Mauri continued: The new IBM Z and IBM LinuxONE offerings also bring significant increases in capacity, performance, memory, and cache across nearly all aspects of the system. A complete system redesign delivers this capacity growth in 40 percent less space, standardized to be deployed in any data center. The z14 ZR1 can be the foundation for an IBM Cloud Private solution, creating a data-center-in-a-box by co-locating storage, networking, and other elements in the same physical frame as the mainframe server. This is where that extra space in the 19-inch rack comes in.

The LinuxONE Rockhopper II can also accommodate a Docker-certified infrastructure for Docker EE with integrated management, scale-tested up to 330,000 Docker containers, allowing developers to build high-performance applications and embrace a microservices architecture.

The 19-inch rack, however, comes with tradeoffs, notes Timothy Green writing in The Motley Fool. Yes, it takes up 40% less floor space than the full-size Z14, but it accommodates only 30 processor cores, far below the 170 cores supported by a full-size Z14, which fills a 24-inch rack. Both new systems can handle around 850 million fully encrypted transactions per day, a fraction of the Z14’s full capacity. But not every company needs the full performance and capacity of the traditional mainframe. For companies that don’t need the full power of a Z14, notes Green, or that have previously balked at the high price or massive footprint of full mainframe systems, these smaller mainframes may be just what it takes to bring them to the Z. Now IBM needs to come through with the advantageous pricing it insisted it would offer.

The new skinny mainframes are just the latest in IBM’s continuing efforts to keep the mainframe relevant. It began over a decade ago with porting Linux to the mainframe and continued with Hadoop, blockchain, and containers. Machine learning and deep learning are coming right along. The only question for DancingDinosaur is when IBM engineers will figure out how to put quantum computing on the Z and squeeze it into customers’ public or private cloud environments.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.

IBM Shouldn’t Forget Its Server Platforms

April 5, 2018

The word coming out of IBM brings a steady patter about cognitive, Watson, and quantum computing, for which IBM has predicted quantum will go mainstream within five years. Most DancingDinosaur readers aren’t worrying about what’s coming in 2023, although maybe they should. They have data centers to run now and are wondering where they will get the system horsepower they need to deliver IoT, blockchain, or any number of business initiatives clamoring for system resources today or tomorrow, and all they’ve got are the z14 and the latest LinuxONE. As powerful as they were when first announced, do you think that will be enough tomorrow?

IBM’s latest server, the Z

Timothy Prickett Morgan, analyst at The Next Platform, apparently isn’t so sure. He writes in a recent piece that Google and the other hyperscalers need to add serious power to today’s server options. The solution involves “putting systems based on IBM’s Power9 processor into production.” This shouldn’t take anybody by surprise; almost as soon as IBM set up the OpenPOWER consortium, Rackspace, Google, and a handful of others started making noises about using OpenPOWER for a new type of data center server. The most recent announcements around POWER9, covered here back in February, promise some new options with even more coming.

Writes Morgan: “Google now has seven applications that have more than 1 billion users – adding Android, Maps, Chrome, and Play to the mix – and as the company told us years ago, it is looking for any compute, storage, and networking edge that will allow it to beat Moore’s Law.” Notice that this isn’t about using POWER9 to drive down Intel’s server prices; Google faces a more important nemesis, the constraints of Moore’s Law.

Google has not been secretive about this, at least not recently. To its credit, Google is making its frustrations known at appropriate industry events: “With a technology trend slowdown and growing demand and changing demand, we have a pretty challenging situation, what we call a supply-demand gap, which means the supply on the technology side is not keeping up with this phenomenal demand growth,” explained Maire Mahony, systems hardware engineer at Google and its key representative at the OpenPower Foundation that is steering the Power ecosystem. “That makes it hard for us to balance that curve we call performance per TCO dollar. This problem is not unique to Google. This is an industry-wide problem.” True, but the majority of data centers, even the biggest ones, don’t face looming multi-billion user performance and scalability demands.

Morgan continued: “Google has absolutely no choice but to look for every edge. The benefits of homogeneity, which have been paramount for the first decade of hyperscaling, no longer outweigh the need to have hardware that better supports the software companies like Google use in production.”

This isn’t Intel’s problem alone, although it introduced a new generation of systems, dubbed Skylake, to address some of these concerns. As Morgan noted recently, “various ARM chips, especially ThunderX2 from Cavium and Centriq 2400 from Qualcomm, can boost non-x86 numbers.” So can AMD’s Epyc x86 processors. Similarly, the OpenPOWER consortium offers an alternative in POWER9.

Morgan went on: IBM differentiated the hardware with its NVLink versions and, depending on the workload and the competition, with its most aggressive pricing and a leaner and cheaper microcode and hypervisor stack reserved for the Linux workloads that the company is chasing. IBM very much wants to sell its Power-Linux combo against Intel’s Xeon-Linux and also keep AMD’s Epyc-Linux at bay. Still, it is not apparent to Morgan how POWER9 will compete.

Success may come down to a battle of vendor ecosystems. As Morgan points out, aside from the POWER9 system that Google co-engineered with Rackspace Hosting, the most important contributions Google has made to the OpenPOWER effort are its work with IBM to create the OPAL firmware, the OpenKVM hypervisor, and the OpenBMC baseboard management controller, all crafted to support little endian Linux, as is common on x86.

Guess this is the time to wade into the endian morass. Endian refers to the byte ordering used to store multi-byte values in memory; IBM chips and a few others traditionally do it in reverse of the x86 and Arm architectures. The POWER8 chip and its POWER9 follow-on support either mode, big or little endian. By making all of these changes, IBM has made the Power platform more palatable to the hyperscalers, which is why Google, Tencent, Alibaba, Uber, and PayPal all talk about how they make use of Power machinery, particularly to accelerate machine learning and generic back-end workloads. But as quickly as IBM jumped on the problem recently after letting it linger for years, it remains one more complication that must be considered. Keep that in mind when a hyperscaler like Google talks about performance per TCO dollar.
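Endianness is easy to see in code. This small sketch uses Python's standard struct module to pack the same 32-bit integer both ways:

```python
import struct

value = 0x01020304

little = struct.pack('<I', value)  # least-significant byte first (x86/Arm convention)
big    = struct.pack('>I', value)  # most-significant byte first (traditional Power/big endian)

print(little.hex())  # 04030201
print(big.hex())     # 01020304

# The same four bytes decode to a different number if the reader
# assumes the wrong order -- the root of the porting headaches above.
print(struct.unpack('<I', big)[0] == value)  # False
```

This is why POWER8/POWER9 shipping with a little endian mode matters: software ported from x86 can keep its byte-order assumptions instead of auditing every binary data structure.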

Where is all this going? Your guess is as good as any. The hyperscalers and the consortia eventually should resolve this and DancingDinosaur will keep watching. Stay tuned.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.

IBM Boosts AI at Think

March 23, 2018

Enterprise system vendors are racing to AI along with all the others. Writes Jeffrey Burt, an analyst at The Next Platform, "There continues to be an ongoing push among tech vendors to bring artificial intelligence (AI) and its various components – including deep learning and machine learning – to the enterprise. The technologies are being rapidly adopted by hyperscalers and in the HPC space, and enterprises stand to reap significant benefits by also embracing them." Exactly what those benefits are still needs to be specifically articulated and, if possible, quantified.

IBM Think Conference this week

For enterprise data centers running the Z or Power Systems, the most obvious quick payoff will be fast, deeper, more insightful data analytics along with more targeted guidance on actions to take in response. After that there still remains the possibility of more automation of operations but the Z already is pretty thoroughly automated and optimized. Just give it your operational and performance parameters and it will handle the rest.  In addition, vendors like Compuware and Syncsort have been making the mainframe more graphical and intuitive. The days of needing deep mainframe experience or expertise have passed. Even x86 admins can quickly pick up a modern mainframe today.

A late 2016 study by Accenture modeled the impact of AI on 12 developed economies. The research compared the size of each country's economy in 2035 under two scenarios: a baseline scenario, which shows expected economic growth under current assumptions, and an AI scenario reflecting expected growth once the impact of AI has been absorbed into the economy. AI was found to yield the highest economic benefits for the United States, increasing its annual growth rate from 2.6 percent to 4.6 percent by 2035, translating to an additional $8.3 trillion in gross value added (GVA). In the United Kingdom, AI could add $814 billion to the economy by 2035, increasing the annual growth rate of GVA from 2.5 to 3.9 percent. Japan has the potential to more than triple its annual rate of GVA growth by 2035, and Finland, Sweden, the Netherlands, Germany, and Austria could see their growth rates double. You can still find the study here.
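The striking dollar figures fall out of compounding: a two-point difference in annual growth rate adds up over two decades. A back-of-the-envelope sketch (the starting GVA figure here is a made-up placeholder, not a number from the Accenture study):

```python
def gva_after(base, rate, years):
    """Compound a starting gross-value-added figure at a fixed annual growth rate."""
    return base * (1 + rate) ** years

base_gva = 18.0   # hypothetical starting GVA in trillions USD (placeholder, not from the study)
years = 19        # roughly 2016 -> 2035

baseline = gva_after(base_gva, 0.026, years)  # 2.6% baseline scenario
with_ai  = gva_after(base_gva, 0.046, years)  # 4.6% AI scenario

# The gap between the two compounded paths is the kind of "additional
# trillions of GVA" headline number the study reports.
print(round(with_ai - baseline, 1))
```

Even with a deliberately rough placeholder base, the sketch shows how a modest-looking rate change produces a multi-trillion-dollar gap by 2035.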

Also coming out of Think this week was the announcement of an expanded Apple-IBM partnership around AI and machine learning (ML). The resulting AI service is intended for corporate developers to build apps themselves. The new service, Watson Services for Core ML, links Apple’s Core ML tools for developers that it unveiled last year with IBM’s Watson data crunching service. Core ML helps coders build machine learning-powered apps that more efficiently perform calculations on smartphones instead of processing those calculations in external data centers. It’s similar to other smartphone-based machine learning tools like Google’s TensorFlow Lite.

The goal is to help enterprises reimagine the way they work through a combination of Core ML and Watson Services to stimulate the next generation of intelligent mobile enterprise apps. Take the example of field technicians who inspect power lines or machinery. The new AI field app could feed images of electrical equipment to Watson to train it to recognize the machinery. The result would enable field technicians to scan the electrical equipment they are inspecting on their iPhones or iPads and automatically detect any anomalies. The app would eliminate the need to send that data to IBM’s cloud computing data centers for processing, thus reducing the amount of time it takes to detect equipment issues to near real-time.

Apple's Core ML toolkit already could connect with competing cloud-based machine learning services from Google, Amazon, and Microsoft; the new developer tools simply make it easier to link Core ML with Watson. For example, Coca-Cola already is testing Watson Services for Core ML to see if it helps its field technicians better inspect vending machines. If you want to try it in your shop, the service is free for developers to use now. Eventually, developers will have to pay.

Such new roll-your-own AI services represent a shift for IBM. Previously you had to work with IBM consulting teams. Now the new Watson developer services are intended to be bought in an "accessible and bite size" way, according to IBM, and sold in a "pay as you go" model without consultants. In a related announcement at Think, IBM said it is contributing the core of Watson Studio's Deep Learning Service as an open source project called Fabric for Deep Learning. This should enable developers and data scientists to work together on furthering the democratization of deep learning.

Ultimately, the democratization of AI is the only way to go. When intelligent systems speak together and share insights everyone’s work will be faster, smarter. Yes, there will need to be ways to compensate distinctively valuable contributions but with over two decades of open source experience, the industry should be able to pretty easily figure that out.


The Rush to Quantum Computing

March 9, 2018

Are you excited about quantum computing? Are you taking steps to get ready for it? Do you have an idea of what you would like to do with quantum computing or a plan for how to do it? Except for the most science-driven organizations or those with incomprehensibly complex challenges to solve, DancingDinosaur can't imagine this is the most pressing IT issue you are facing today.

Yet leading IT vendors are making astounding gains, moving quantum computing forward further and faster than the industry was projecting even a few months ago. This past Nov. IBM announced a 50-qubit system. Earlier this month Google announced Bristlecone, which at 72 qubits claims to top that, at least for now. However, qubit count may not be the most important metric to focus on.

Never heard of quantum supremacy? You are going to hear a lot about it in the coming weeks, months, and even years as the vendors battle for the quantum supremacy title. Here is how Wikipedia defines it: quantum supremacy is the potential ability of quantum computing devices to solve problems that classical computers cannot. In computational complexity-theoretic terms, this generally means providing a super-polynomial speedup over the best known or possible classical algorithm. If this doesn't send you racing to dig out your old college math book, you were a better student than DancingDinosaur. In short, supremacy means beating the best current conventional algorithms. But you can't just beat them; you have to do it using less energy, or faster, or in some other way that demonstrates your approach's advantage.

The issue revolves around the instability of qubits; the hardware that runs them must be extraordinarily robust. Industry sources note that quantum computers need to keep their processors extremely cold (within a few degrees of absolute zero) and protect them from external shocks. Even accidental sounds can cause the computer to make mistakes. To operate in even remotely real-world settings, quantum processors also need an error rate of less than 0.5 percent for every two qubits. Google's best came in at 0.6 percent using its much smaller 9-qubit hardware. Its latest blog post didn't state Bristlecone's error rate, but Google promised to improve on its previous results. To drop the error rate for any qubit processor, engineers must figure out how software, control electronics, and the processor itself can work alongside one another without causing errors.
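Why does a fraction of a percent matter so much? Errors compound across every gate in a circuit. A rough sketch, assuming an idealized model in which each two-qubit gate fails independently (real device characterization is considerably more involved):

```python
def circuit_success_prob(error_rate, num_gates):
    """Probability an entire circuit runs error-free, assuming each
    two-qubit gate fails independently with the given error rate."""
    return (1 - error_rate) ** num_gates

# At the 0.6% per-gate rate cited above, a modest 100-gate circuit
# already loses roughly half its runs to errors; the 0.5% threshold
# only nudges that up, which is why engineers fight for every decimal.
print(round(circuit_success_prob(0.006, 100), 3))
print(round(circuit_success_prob(0.005, 100), 3))
```

The exponential decay is the whole story: doubling the circuit depth squares the failure exposure, so small per-gate improvements pay off dramatically on larger problems.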

Fifty qubits currently is considered the minimum number for serious business work. IBM's November announcement, however, was quick to point out that this "does not mean quantum computing is ready for common use." The system IBM developed remains extremely finicky and challenging to use, as are those being built by others. In its 50-qubit system, the quantum state is preserved for 90 microseconds, a record length for the industry but still an extremely short period of time.

Nonetheless, 50 qubits have emerged as the minimum number for a (relatively) stable system to perform practical quantum computing. According to IBM, a 50-qubit machine can do things that are extremely difficult to simulate without quantum technology.

The problem touches on one of the attributes of quantum systems.  As IBM explains, where normal computers store information as either a 1 or a 0, quantum computers exploit two phenomena—entanglement and superposition—to process information differently.  Conventional computers store numbers as sequences of 0 and 1 in memory and process the numbers using only the simplest mathematical operations, add and subtract.

Quantum computers can digest 0 and 1 too but have a broader array of tricks. That's where entanglement and superposition come in. For example, contradictory things can exist concurrently. Quantum geeks often cite the riddle of Schrödinger's cat, in which the cat can be alive and dead at the same time because quantum systems can hold multiple, contradictory states at once. That can be very helpful if you are trying to solve huge data- and compute-intensive problems like a Monte Carlo simulation. After decades of work on quantum computing, the new 50-qubit system finally gives IBM something to offer businesses facing complex challenges that can benefit from quantum superposition.
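Superposition and entanglement can be sketched numerically: a quantum state is a set of amplitudes, and the probability of observing each outcome is the squared amplitude. This is a textbook illustration, not a model of IBM's actual hardware or software stack:

```python
import math

# One qubit in equal superposition of |0> and |1>: amplitude 1/sqrt(2) each
# (the state a Hadamard gate produces from |0>).
amp = 1 / math.sqrt(2)
p0, p1 = amp**2, amp**2
print(round(p0, 2), round(p1, 2))  # 0.5 0.5 -- equally likely to read 0 or 1

# A two-qubit entangled (Bell) state: only |00> and |11> carry amplitude,
# so measuring one qubit instantly fixes what the other will read.
bell = {'00': amp, '01': 0.0, '10': 0.0, '11': amp}
probs = {outcome: a**2 for outcome, a in bell.items()}
print(probs['01'], probs['10'])  # 0.0 0.0 -- the two qubits never disagree
```

The Schrödinger's-cat flavor lives in that first line: until measured, the qubit genuinely carries both possibilities, which is what lets n qubits explore 2^n amplitudes at once.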

Still, don’t bet on using quantum computing to solve serious business challenges very soon.  An entire ecosystem of programmers, vendors, programming models, methodologies, useful tools, and a host of other things have to fall into place first. IBM and Google and others are making stunningly rapid progress. Maybe DancingDinosaur will actually be alive to see quantum computing as just another tool in a business’s problem-solving toolkit.


Dinosaurs Strike Back in IBM Business Value Survey

March 2, 2018

IBM's Institute for Business Value (IBV) recently completed a massive study based on 12,000 interviews with c-suite executives at legacy companies. Not just the CEO and CIO but the COO, CFO, CMO, and more, including the CHO, the Chief Happiness Officer. Not sure what a CHO actually does, but if one had been around when DancingDinosaur was looking for a corporate job he might have stayed on the corporate track instead of pursuing the independent analyst/writer dream.

(unattributed IBM graphic)

IBV actually referred to the study as “Incumbents strike back.” The incumbents being the legacy businesses the c-suite members represent. In a previous c-suite IBV study two years ago, the respondents expressed concern about being overwhelmed and overrun by new upstart companies, the born-on-the-web newcomers. In many ways the execs at that time felt they were under attack.

Spurred by that fear, the execs in many cases turned to a new strategy built on what has always been their source of strength, even if they often lacked the ways and means to exploit it: the huge amounts of data they have gathered and stored, for decades in some cases. With new cognitive systems now able to extract and analyze this legacy data and combine it with new data, they could actually beat some of the upstarts. Finally, they could respond like nimble, agile operations, not the lumbering dinosaurs they were so often portrayed as.

“Incumbents have become smarter about leveraging valuable data, honing their employees’ skills, and in some cases, acquired possible disruptors to compete in today’s digital age,” the study finds, according to CIO Magazine, which published excerpts from the study here. The report reveals 72 percent of surveyed CxOs claimed the next wave of disruptive innovation will be led by the incumbents who pose a significant competitive threat to new entrants and digital players. By comparison, the survey found only 22 percent of respondents believe smaller companies and start-ups are leading disruptive change. This presents a dramatic reversal from a similar but smaller IBV survey two years ago.

Making possible this reversal is not only growing awareness among c-level execs of the value of their organizations’ data and the need to use it to counter the upstarts, but new technologies, approaches like DevOps, easier-to-use dev tools, the increasing adoption of Linux, and mainframes like the z13, z14, and LinuxONE, which have been optimized for hybrid and cloud computing.  Also driving this is the emergence of platform options as a business strategy.

The platform option may be the most interesting decision right now. To paraphrase Hamlet: to be (a platform for your industry) or not to be. That indeed is a question many legacy businesses will need to confront. When you look at platform business models, what is right for your organization? Will you create a platform for your industry or piggyback on another company's platform? To decide, you need to first understand the dynamics of building and operating a platform.

The IBV survey team explored that question and found the respondents pretty evenly divided: 54 percent report they won't build a platform, while the rest expect to build and operate one. This is not a question you can ruminate over endlessly like Hamlet. The advantage goes to those who get there first in their industry segment; as IBV noted, only a few will survive in any one segment. It may come down to how finely you can segment the market for your platform and still maintain a distinct advantage. As CIO reported, the IBV survey found 57 percent of disruptive organizations are adopting a platform business model.

Also rising in importance is the people-talent-skills issue. C-level execs have always given lip service to the importance of people, as in the cliché that people are our greatest asset. Based on the latest survey, it turns out skills are necessary but not sufficient. Skills must be accompanied by the right culture. As the survey found, companies that have the right culture in place are more successful; in that case, the skills are just an added adrenaline shot. Still, the execs put people skills in the top three. The IBV analysts conclude: people and talent are coming back. Guess we're not all going to be replaced soon by AI or cognitive computing, at least not yet.

