Posts Tagged ‘cognitive computing’

IBM AI Toolset Focuses on 9 Industries

October 4, 2018

Recently, IBM introduced new AI solutions and services pre-trained for nine industries and professions including agriculture, customer service, human resources, supply chain, manufacturing, building management, automotive, marketing, and advertising. In each area, the volume, velocity, and complexity of the data make it increasingly difficult for managers to keep up. The solutions generally utilize IBM’s Watson Data Platform.

For example, supply chain companies should now incorporate weather data, traffic reports, and even regulatory reports to provide a fuller picture of global supply issues. Similarly, industrial organizations are seeking to significantly reduce product inspection resource requirements through the use of visual and acoustic inspection capabilities, notes IBM.

Recent IBM research from its Institute for Business Value revealed that 82% of businesses are now considering AI deployments. Why? David Kenny, Senior Vice President, IBM Cognitive Solutions, explains: “As data flows continue to increase, people are overwhelmed by the amount of information [and the need] to act on it every day, but luckily the information explosion coincides with another key technological advance: artificial intelligence (AI).” In the nine industries targeted by IBM, the company provides the industry-specific algorithms and system training required to make AI effective in each segment.

Let’s look at a selection of these industry segments, starting with customer service, where 77% of top-performing organizations see customer satisfaction as a key value driver for AI, which gives customer service agents an increased ability to respond quickly to questions and complex inquiries. The customer service solution was first piloted at Deluxe Corporation, which saw improved response times and increased client satisfaction.

Human resources also could benefit from a ready-made AI solution. The average hiring manager flips through hundreds of applications daily, notes IBM, spending approximately six seconds on each resume. This isn’t nearly enough time to make well-considered decisions. The new AI tool for HR analyzes the backgrounds of current top-performing employees from diverse backgrounds and uses that data to help flag promising applicants.
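
For illustration only, here is one rough way such screening could work in principle: score each applicant against an averaged profile of top performers. The feature set and the cosine-similarity scoring rule below are hypothetical sketches, not IBM’s actual method.

    # Illustrative profile-based applicant scoring; features are hypothetical.
    import numpy as np

    # Feature vectors: [years_experience, skills_match, certifications, tenure]
    top_performers = np.array([
        [5.0, 0.90, 2.0, 4.0],
        [7.0, 0.80, 3.0, 6.0],
        [4.0, 0.95, 1.0, 3.0],
    ])
    profile = top_performers.mean(axis=0)  # averaged top-performer profile

    def score(applicant):
        """Cosine similarity between an applicant and the target profile."""
        return float(applicant @ profile /
                     (np.linalg.norm(applicant) * np.linalg.norm(profile)))

    applicant = np.array([6.0, 0.85, 2.0, 5.0])
    print(f"match score: {score(applicant):.3f}")  # closer to 1.0 = stronger match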

In the area of industrial equipment, AI can significantly reduce product inspection resource requirements by using AI-driven visual and acoustic inspection capabilities. At a time of intense global competition, manufacturers face a variety of issues that impact productivity, including workforce attrition, skills gaps, and rising raw material costs, all exacerbated by downstream defects and equipment downtime. By combining the Internet of Things (IoT) and AI, IBM contends, manufacturers can stabilize production costs by pinpointing and predicting areas of loss, such as energy waste, equipment failures, and product quality issues. A simple sketch of the idea follows.
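
Purely as an illustration of the principle, not IBM’s method: flagging an outlier in a stream of IoT sensor readings can be as simple as a z-score test.

    # Flag anomalous sensor readings with a z-score test (illustrative only).
    import statistics

    readings = [3.1, 3.0, 3.2, 3.1, 9.8, 3.0, 3.1]  # e.g., motor vibration (mm/s)
    mean = statistics.mean(readings)
    stdev = statistics.stdev(readings)

    for i, r in enumerate(readings):
        z = (r - mean) / stdev
        if abs(z) > 2:  # more than two standard deviations from the mean
            print(f"reading {i}: {r} looks anomalous (z = {z:.1f})")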

In agriculture, farmers can use AI to gather data from multiple sources—weather, IoT-enabled tractors and irrigators, satellite imagery, and more—and see a single, overarching, predictive view of the data as it relates to a farm. For the individual grower, IBM notes, this means support for making more informed decisions that help improve yield. Consider water, an increasingly scarce resource in large swaths of the world, including parts of the U.S. that have experienced persistent droughts. Just remember the recent wildfires.

Subway hopes AI can increase in-restaurant visits by leveraging the connection between weather and quick service restaurant (QSR) foot traffic to drive awareness of its $4.99 Footlong promotion via The Weather Channel mobile app. In building awareness and driving in-store visits, Subway reported a 31% lift in store traffic and a 53% reduction in campaign waste due to AI.

DancingDinosaur had no opportunity to verify any of the results reported above, so remain skeptical of such results until you can verify them yourself.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at technologywriter.com.

Can IBM find a place for Watson?

September 7, 2018

After beating two human Jeopardy champions three times in a row in 2011, IBM’s Watson has been hard pressed to come up with a comparable winning streak. Initially, IBM appeared to expect its largest customers to buy richly configured Power servers to run Watson on premises. When it didn’t get enough takers, the company moved Watson to the cloud, where companies could lease it for major knowledge-driven projects. When that didn’t catch on, IBM started to lease Watson’s capabilities by the drink, promising to solve problems in onesies and twosies.

Jeopardy champs lose to Watson

Today Watson promises to speed AI success through IBM’s Watson Knowledge Catalog. As IBM puts it: IBM Watson Knowledge Catalog powers intelligent, self-service discovery of data, models, and more, activating them for artificial intelligence, machine learning, and deep learning. Users can access, curate, categorize, and share data, knowledge assets, and their relationships, wherever they reside.

DancingDinosaur has no doubt that Watson is stunning technology and has been rooting for its success since that first Jeopardy round seven years ago. Over that time, Watson and IBM have become a case study in how not to price, package, and market powerful yet expensive technology. The Watson Knowledge Catalog is yet another pricing and packaging experiment.

Based on the latest information online, Watson Knowledge Catalog is priced according to the number of provisioned catalogs and discovery connections. There are two plans available: Lite and Professional. The Lite plan allows 1 catalog and 5 free discovery connections, while the Professional plan provides unlimited numbers of both. Huh? This statement begs for clarification, and there probably is a lot of information and fine print required to answer the numerous questions the description raises, but life is too short for DancingDinosaur to rummage around on the Watson Knowledge Catalog site looking for answers. Doesn’t this seem like something Watson itself should be able to clarify with a single click?

But no, that is too easy. Instead IBM takes the high road, which DancingDinosaur calls the education track. Notes Jay Limburn, Senior Technical Staff Member and IBM Offering Manager: there are two main challenges that might impede you from realizing the true value of your data and slow your journey to adopting artificial intelligence (AI). They are 1) inefficient data management and 2) finding the right tools for all data users.

Actually, the issues start even earlier. In attempting AI, most established organizations start at a disadvantage, notes IBM. For example:

  • Most enterprises do not know what and where their data is
  • Data science and compliance teams are handicapped by the lack of data accessibility
  • Enterprises with legacy data are even more handicapped than digitally savvy startups
  • AI projects will expose problems with limited data and poor data quality; many will fail for that reason alone
  • The need to differentiate through monetization increases in importance with AI

These are not new. People have been whining about these issues since the most rudimentary data mining attempts were made decades ago. If there is a surprise, it is that they have not been resolved by now.

Or maybe they finally have been, with the IBM Watson Knowledge Catalog. As IBM puts it, the company will deliver what promises to be the ultimate data catalog, one that actually illuminates data:

  • Knows what data your enterprise has
  • Knows where it resides and where it came from
  • Knows what it means
  • Provides quick access to it
  • Ensures protection of use
  • Exploits machine learning for intelligence and automation
  • Enables data scientists, data engineers, stewards, and business analysts
  • Is embeddable everywhere for free, with premium features available in paid editions

OK, after seven years Watson may be poised to deliver, and it has little to do with Jeopardy and everything to do with a rapidly growing data catalog market. According to a Research and Markets report, the data catalog market is expected to grow from $210 million in 2017 to $620 million by 2022. How many sales of the Professional version would it take to give IBM a leading share?
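
For context, that forecast implies roughly 24% compound annual growth, as a quick back-of-the-envelope check (mine, not from the report) shows:

    # Implied CAGR of the data catalog market forecast:
    # $210M (2017) growing to $620M (2022), i.e., five years of growth.
    start, end, years = 210e6, 620e6, 5
    cagr = (end / start) ** (1 / years) - 1
    print(f"implied CAGR: {cagr:.1%}")  # -> about 24.2% per year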

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.

Can Zowe Bring Young Developers to the Z?

August 31, 2018

Are you ever frustrated by the Z? As powerful as it is, the mainframe remains a difficult nut to crack, particularly for newcomers who have grown up with easier technologies. Even Linux on Z is not as simple or straightforward as on other platforms. This poses a problem for Z-based shops that are scrambling to replace retiring mainframers.

Shopping via smartphone (IBM – Jon Simon/Feature Photo Service)

Certainly other organizations, mainly mainframe ISVs like Compuware and Syncsort, have succeeded in extending the GUI deeper into the Z, but that alone is not enough. It remains too difficult for newcomers to take their newly acquired computing talents and readily apply them to the mainframe. Maybe Zowe can change this.

And here’s how: recent surveys show that flexibility, agility, and speed are key. Single platforms are out; multi-platforms and multi-clouds are in. IBM’s reply is to bring things together with the announcement of Zowe, pronounced like Joey starting with a Z. Zowe is the first open source framework for z/OS. As such it provides solutions for development and operations teams to securely manage, control, script, and develop on the mainframe like any other cloud platform. Launched with partners CA Technologies and Rocket Software along with the support of the Open Mainframe Project, the goal is to drive innovation for the community of next-generation mainframe developers and to enable interoperability and scalability between products. Zowe promotes a faster team on-ramp to mainframe productivity, collaboration, knowledge sharing, and communication.

In short, IBM and partners are enabling users to access z/OS using a new open-source framework. Zowe, more than anything before it, brings together generations of systems that were not designed to handle global networks of sensors and devices. Now, nearly two decades after IBM brought Linux to the mainframe, IBM, CA, and Rocket Software are introducing Zowe, a new open-source software framework that bridges the divide between modern challenges like IoT and the mainframe.

Zowe has four components:

  1. Zowe APIs: z/OS has a set of Representational State Transfer (REST) operating system APIs, made available by the z/OS Management Facility (z/OSMF). Zowe uses these REST APIs to submit jobs, work with the Job Entry Subsystem (JES) queue, and manipulate data sets (see the sketch after this list). Zowe Explorers are visual representations of these APIs, wrapped in the Zowe web UI application. Zowe Explorers create an extensible z/OS framework that provides new z/OS REST services to enterprise tools and DevOps processes.
  2. Zowe API Mediation Layer: This layer has several key components, including an API Gateway, built using Netflix Zuul and Spring Boot technology, which forwards API requests to the appropriate service through micro-service endpoints, and a REST API Catalog, which publishes APIs and their associated documentation. There is also a Discovery Service, built on Eureka and Spring Boot technology, that acts as the central registry for the API Gateway; it accepts announcements of REST services and provides a repository of active services.
  3. Zowe Web UI: Named zLUX, the web UI modernizes and simplifies working on the mainframe and allows the user to create modern applications. This is what will enable non-mainframers to work productively on the mainframe. The UI works with the underlying REST APIs for data, jobs, and subsystems, and presents the information in a full-screen mode compared to the command-line interface.
  4. Zowe Command Line Interface (CLI): Allows users to interact with z/OS from a variety of other platforms, such as cloud or distributed systems, submit jobs, issue Time Sharing Option (TSO) and z/OS console commands, integrate z/OS actions into scripts, and produce responses as JSON documents. With this extensible and scriptable interface, you can tie in mainframes to the latest distributed DevOps pipelines and build in automation.
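
To make the REST layer concrete, here is a minimal Python sketch of the kind of z/OSMF jobs call that Zowe builds on. The host and credentials are placeholders, and the exact endpoint and response fields should be verified against your z/OSMF documentation.

    # List jobs on the JES queue via the z/OSMF REST jobs API (a sketch).
    import requests

    ZOSMF = "https://mainframe.example.com"  # hypothetical z/OSMF host
    AUTH = ("IBMUSER", "secret")             # use real credentials or certificates

    resp = requests.get(
        f"{ZOSMF}/zosmf/restjobs/jobs",
        params={"owner": "IBMUSER", "prefix": "*"},
        headers={"X-CSRF-ZOSMF-HEADER": "true"},  # z/OSMF CSRF protection header
        auth=AUTH,
    )
    resp.raise_for_status()

    for job in resp.json():  # each entry describes one job on the queue
        print(job["jobname"], job["jobid"], job["status"])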

The point of all this is to enable any developer to manage, control, script, and develop on the mainframe like any other cloud platform. Additionally, Zowe allows teams to use the same familiar, industry-standard, open-source tools they already know to access mainframe resources and services too.

The mainframe may be older than many of the programmers IBM hopes Zowe will attract. But Zowe opens new possibilities for next-generation applications and for mainframe shops that desperately need the new mission-critical applications customers are clamoring for. It should radically reduce the learning curve for the next generation while making experienced professionals more efficient. Start your free Zowe trial here. BTW, Zowe’s code will be made available under the open-source Eclipse Public License 2.0.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.

 

Travelport and IBM launch industry AI travel platform

August 24, 2018

Uh oh, if you have been a little sloppy with travel expenses, it’s time to clean up your travel act before AI starts monitoring your reimbursed travel. IBM and Travelport are teaming up to offer the industry’s first AI-based travel platform to intelligently manage corporate travel spend while leveraging IBM Watson capabilities to unlock previously unavailable data insights.

As IBM explains it, the new travel platform will be delivered via the IBM Cloud and exploits IBM Watson capabilities to intelligently track, manage, predict, and analyze travel costs, fundamentally changing how companies manage and optimize their travel programs. Typically, each work group submits its own travel expenses, and reconciliation and reimbursement can be handled by different groups.

With annual global business travel spend estimated to reach a record $1.2 trillion this year, as projected by the Global Business Travel Association, corporate travel managers need new ways to reduce costs. That requires consolidating and normalizing all the information. Currently, for businesses to get a full picture of travel patterns, a travel manager might have to sift through data silos from travel agencies, cards, expense systems, and suppliers for end-to-end visibility of spend and compliance across all travel subcategories. This, however, is usually undertaken from a historical view rather than in real time, which is one reason reimbursement can take so long. As an independent contractor, DancingDinosaur generally has to submit travel expenses at the end of a project and wait forever for payment.

IBM continues: The new platform, dubbed Travel Manager, features advanced artificial intelligence and provides cognitive computing and predictive data analytics using what-if scenarios. Integrated with travel and expense data, it helps travel management teams, procurement category managers, business units, finance, and human resource departments optimize their travel programs, control spend, and enhance the end-traveler experience. Maybe they will even squeeze independent contractors into the workflow.

The special sauce in all of this results from how IBM combines its data with that of Travelport, which runs its own travel commerce platform, to produce IBM Travel Manager, an AI platform that oversees corporate travel expenses. In the process, IBM Travel Manager gives users complete, unified access to previously siloed information, which, when combined with travel data from the Travelport global distribution system (GDS), can be used to create real-time predictive analytics recommending how, say, adjustments in travel booking behavior patterns can positively impact a company’s travel budget.

Travelport, itself, is a heavyweight in the travel industry. It relies on technology to make the experience of buying and managing travel better. Through its travel commerce platform it provides distribution, technology, payment and other capabilities for the $7 trillion global travel and tourism industry. The platform facilitates travel commerce by connecting the world’s leading travel providers with online and offline travel buyers in a proprietary (B2B) travel marketplace.

The company helps with all aspects of the travel supply chain from airline merchandising, hotel content and distribution, mobile commerce to B2B payments. Last year its platform processed over $83 billion of travel spend, helping its customers maximize the value of every trip.

IBM Travel Manager combines and normalizes data from diverse sources, allowing for more robust insights and benchmarking than other reporting solutions. It also taps AI to unlock previously unavailable insights from multiple internal and external data sources. The product is expected to be commercially available to customers through both IBM and Travelport.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.

FlashSystem 9100 Includes NVMe and Spectrum Software

July 20, 2018

The new IBM FlashSystem 9100 comes with all the bells and whistles included, especially NVMe and Spectrum software. On the software side, IBM includes its full suite of software-defined capabilities for your data, on premises and across public and private clouds. It also aims to modernize your infrastructure with new capabilities for private and hybrid clouds and to optimize operations.

FlashSystem 9100 with new capabilities built-in end-to-end

It also includes AI-assisted, next-generation technology for multi-cloud environments. This should allow you to optimize business-critical workloads and your overall technology infrastructure as you prepare for the era of multi-cloud digitized business now emerging.

The IT infrastructure market is changing so quickly and so radically that technology still under consideration just recently can no longer make the short list. DancingDinosaur, for example, won’t even attempt an ROI analysis of hard disk for primary storage. Short of outright falsification, the numbers couldn’t work.

The driver behind this, besides advances in technology price/performance and what seems like a return to Moore’s Law levels of gains, is the success of the big hyperscalers, who are able to sustain amazing price and performance levels. DancingDinosaur readers are no hyperscalers, but they are capitalizing on hyperscaler gains in the cloud, and they can emulate hyperscaler strategies in their data centers wherever possible.

IBM puts it a little more conventionally: as more and more organizations move to a multi-cloud strategy, they have more data-driven needs, such as artificial intelligence (AI), machine learning (ML), and containers, the company writes. All of these new needs require a storage solution powerful enough to address them while being built on proven technology and supporting both existing and evolving data centers. IBM’s response is the expansion of its FlashSystem line to include the new 9100 NVMe end-to-end solution while piling on the software.

Aside from being an all-NVMe storage solution, the 9100 leverages several IBM technologies, such as IBM Spectrum Virtualize and IBM FlashCore, as well as software from IBM’s Spectrum family. This combination of software and technology helps the 9100 store up to 2PB of data in a 2U space (32PB in a larger rack). FlashCore also enables consistent microsecond latency, with IBM quoting performance of 2.5 million IOPS, 34GB/s, and 100μs latency for a single 2U array. For storage, the FlashSystem 9100 uses FlashCore modules with an NVMe interface. These 2.5” drives come in 4.8TB, 9.6TB, and 19.2TB capacities with up to 5:1 compression, leverage 64-layer 3D TLC NAND, and can be configured with as few as four drives per system. You might not be a hyperscaler, but this is the kind of stuff you need if you hope to emulate one.
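
As a quick sanity check on the 2PB-in-2U claim, assume a 24-slot 2U enclosure (the slot count is an assumption, not an IBM figure):

    # Back-of-the-envelope check on "up to 2PB of data in a 2U space".
    slots = 24                # assumed number of 2.5" NVMe bays in the enclosure
    module_tb = 19.2          # largest FlashCore module capacity
    compression = 5.0         # quoted "up to 5:1" data reduction

    raw_tb = slots * module_tb
    effective_pb = raw_tb * compression / 1000.0
    print(f"raw: {raw_tb:.1f} TB, effective: {effective_pb:.2f} PB")
    # -> raw: 460.8 TB, effective: 2.30 PB, in line with IBM's "up to 2PB"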

To do this, IBM packs in the goodies. For starters, the system is NVMe-accelerated and multi-cloud enabled, going beyond the usual flash array: it is an NVMe-accelerated enterprise flash array, 100% NVMe end-to-end, including NVMe IBM FlashCore modules and NVMe industry-standard SSDs. It also supports physical, virtual, and Docker environments.

In addition, the system includes IBM Storage Insights for AI-empowered predictive analytics, storage resource management, and support delivered over the cloud. Also, it offers Spectrum Storage Software for array management, data reuse, modern data protection, disaster recovery, and containerization (how it handles Docker). Plus, IBM adds:

  • IBM Spectrum Virtualize
  • IBM Spectrum Copy Data Management
  • IBM Spectrum Protect Plus
  • IBM Spectrum Virtualize for Public Cloud
  • IBM Spectrum Connect
  • FlashSystem 9100 Multi-Cloud Solutions

And just in case you think you are getting ahead of yourself, IBM is adding what it calls blueprints. As IBM explains them: the blueprints take the form of three pre-validated, cloud-focused solution plans.

  1. Data Reuse, Protection and Efficiency solution leverages the capabilities of IBM Spectrum Protect Plus and IBM Spectrum Copy Data Management (CDM) to provide enhanced data protection features for virtual applications with powerful data copy management and reuse functionality both on premises and in the cloud.
  2. Business Continuity and Data Reuse solution leverages IBM Spectrum Virtualize for Public Cloud to extend data protection and disaster recovery capabilities into the IBM Cloud, as well as all the copy management and data reuse features of IBM Spectrum CDM.
  3. Private Cloud Flexibility and Data Protection solution enables simplified deployment of private clouds, including the technology needed to implement container environments, and all of the capabilities of IBM Spectrum CDM to manage copy sprawl and provide data protection for containerized applications.

The blueprints may be little more than an IBM shopping list that leaves you as confused as before and a little poorer. Still, the FlashSystem 9100, along with all of IBM’s storage solutions, comes with Storage Insights, the company’s enterprise, AI-based predictive analytics, storage resource management, and support platform delivered over the cloud. If you try any blueprint, let me know how it works, anonymously of course.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.

 

Is Your Enterprise Ready for AI?

May 11, 2018

According to IBM’s gospel of AI, “we are in the midst of a global transformation and it is touching every aspect of our world, our lives, and our businesses.” IBM has been preaching its gospel of AI for the past year or longer, but most of its clients haven’t jumped fully aboard. “For most of our clients, AI will be a journey. This is demonstrated by the fact that most organizations are still in the early phases of AI adoption.”

AC922 with NVIDIA Tesla V100 and Enhanced NVLink GPUs

The company’s latest announcements earlier this week focus POWER9 squarely on AI. Said Tim Burke, Engineering Vice President, Cloud and Operating System Infrastructure at Red Hat: “POWER9-based servers, running Red Hat’s leading open technologies, offer a more stable and performance-optimized foundation for machine learning and AI frameworks, which is required for production deployments… including PowerAI, IBM’s software platform for deep learning with IBM Power Systems that includes popular frameworks like TensorFlow and Caffe, as the first commercially supported AI software offering for [the Red Hat] platform.”

IBM insists this is not just about POWER9, and it may have a point; GPUs and other assist processors are taking on more importance as companies try to emulate the hyperscalers in their efforts to drive server efficiency while boosting power in the wake of the decline of Moore’s Law. “GPUs are at the foundation of major advances in AI and deep learning around the world,” said Paresh Kharya, group product marketing manager of Accelerated Computing at NVIDIA. [Through] “the tight integration of IBM POWER9 processors and NVIDIA V100 GPUs made possible by NVIDIA NVLink, enterprises can experience incredible increases in performance for compute-intensive workloads.”

To create an AI-optimized infrastructure, IBM announced the latest additions to its POWER9 lineup, the IBM Power Systems LC922 and LC921, characterized by IBM as balanced servers offering both compute capability and up to 120 terabytes of data storage, with NVMe for rapid access to vast amounts of data. IBM included HDD in the announcement, but any serious AI workload will choke without ample SSD.

The announcement also brings an updated version of the AC922 server, which now features the recently announced 32GB NVIDIA V100 GPUs and larger system memory, enabling bigger deep learning models that improve the accuracy of AI workloads.

IBM has characterized the new models, the LC922 and LC921 servers with POWER9 processors, as data-intensive, AI-intensive systems. The AC922 arrived last fall, designed for what IBM calls the post-CPU era. It was the first to embed PCI-Express 4.0, next-generation NVIDIA NVLink, and OpenCAPI, three interface accelerators that together can move data 9.5x faster than PCIe 3.0 based x86 systems. The AC922 was designed to drive demonstrable performance improvements across popular AI frameworks such as TensorFlow and Caffe.

In the post-CPU era, where Moore’s Law no longer rules, you need to pay as much attention to the GPU and other assist processors as to the CPU itself, maybe even more so. For example, the coherence and high speed of NVLink enable hash tables, critical for fast analytics, on GPUs. As IBM noted at the introduction of the new machines this week: hash tables are a fundamental data structure for analytics over large datasets, and they need large memory; small GPU memory limits hash table size and analytic performance. The CPU-GPU NVLink2 connection solves two key problems. Its high speed enables storing the full hash table in CPU memory and transferring pieces to the GPU for fast operations, and its coherence ensures that new inserts in CPU memory get updated in GPU memory. Otherwise, modifications to data in CPU memory do not get updated in GPU memory.
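
To see why the link matters, consider a rough transfer-time comparison. The sustained per-direction bandwidth figures below are illustrative assumptions, not vendor specifications:

    # Rough time to stream a large hash table from CPU memory to a GPU.
    table_gb = 64.0  # hash table held in roomy CPU memory

    for link, gbs in [("PCIe 3.0 x16", 12.0), ("POWER9 NVLink 2.0", 70.0)]:
        print(f"{link}: {table_gb / gbs:.1f} s to stream the table once")
    # NVLink2 coherence also makes CPU-side inserts visible to the GPU
    # without an explicit re-copy, which a PCIe-attached GPU cannot assume.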

IBM has started referring to the LC922 and LC921 as big data crushers. The LC921 brings two POWER9 sockets in a 1U form factor; for I/O it comes with both PCIe 4.0 and CAPI 2.0; and it offers up to 40 cores (160 threads) and 2TB RAM, which is ideal for environments requiring dense computing.

The LC922 is considerably bigger. It offers balanced compute capability delivered by the POWER9 processor, up to 120TB of storage capacity, again advanced I/O through PCIe 4.0/CAPI 2.0, and up to 44 cores (176 threads) with 2TB RAM. The list price, notes IBM, is roughly 30% less.

If your organization is not thinking about AI, it is probably in the minority, according to IDC:

  • 31 percent of organizations are in [AI] discovery/evaluation
  • 22 percent of organizations plan to implement AI in next 1-2 years
  • 22 percent of organizations are running AI trials
  • 4 percent of organizations have already deployed AI

Underpinning both servers is the IBM POWER9 CPU, which enjoys nearly 5.6x the CPU-to-GPU bandwidth of x86 systems and can improve deep learning training times by nearly 4x. Even today, companies are struggling to cobble together the different pieces and make them work. IBM learned that lesson and now offers a unified AI infrastructure in PowerAI and POWER9 that you can use today.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

IBM Grows Quantum Ecosystem

April 27, 2018

It is good that you aren’t dying to deploy quantum computing soon, because IBM readily admits that it is not ready for enterprise production now, nor will it be in several weeks or even several months. IBM, however, continues to assemble the building blocks you will eventually need when you finally feel the urge to deploy a quantum application that addresses a real problem you need to resolve.

cryostat with prototype of quantum processor

IBM is surprisingly frank about the state of quantum today. There is nothing you can do at this point that you can’t simulate on a conventional, or classical, computer system. This situation is unlikely to change anytime soon either. For years to come, we can expect hybrid quantum and conventional compute environments that will somehow work together to solve very demanding problems, although most aren’t sure exactly what those problems will be when the time comes. Still, at Think earlier this year, IBM predicted quantum computing would be mainstream within five years.

Of course, IBM has some ideas of where the likely problems to solve will be found:

  • Chemistry—material design, oil and gas, drug discovery
  • Artificial Intelligence—classification, machine learning, linear algebra
  • Financial Services—portfolio optimization, scenario analysis, pricing

It has been some time since the computer systems industry had to build a radically different kind of compute discipline from scratch. Following the model of the current IT discipline, IBM began by launching the IBM Q Network, a collaboration with leading Fortune 500 companies and research institutions with a shared mission. This will form the foundation of a quantum ecosystem. The Q Network comprises hubs, which are regional centers of quantum computing R&D and ecosystem building; partners, who are pioneers of quantum computing in a specific industry or academic field; and, most recently, startups, which are expected to rapidly advance early applications.

The most important of these for driving the growth of quantum are the startups. To date, IBM reports eight startups and is on the hunt for more. Early startups include QC Ware; Q-Ctrl; Cambridge Quantum Computing (UK), which is working on a compiler for quantum computing; 1QBit, based in Canada; Zapata Computing, located at Harvard; Strangeworks, an Austin-based tool developer; QxBranch, which is trying to apply classical computing techniques to quantum; and Quantum Benchmark.

Startups get membership in the Q Network and can run experiments and algorithms on IBM quantum computers via cloud-based access; they get deeper access to APIs and advanced quantum software tools, libraries, and applications; and they have the opportunity to collaborate with IBM researchers and technical SMEs on potential applications, as well as with other IBM Q Network organizations. If it hasn’t become obvious yet, the payoff will come from developing applications that solve recognizable problems. Also check out QISKit, a software development kit for quantum applications, available through GitHub; a minimal example follows.
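
Here is a minimal QISKit sketch: prepare and measure a Bell state on the local simulator. Exact imports vary by QISKit release, so treat the details as an approximation to check against the current documentation.

    # Prepare and measure a Bell state (entangled qubit pair) in QISKit.
    from qiskit import QuantumCircuit, BasicAer, execute

    qc = QuantumCircuit(2, 2)   # two qubits, two classical bits
    qc.h(0)                     # put qubit 0 into superposition
    qc.cx(0, 1)                 # entangle qubit 1 with qubit 0
    qc.measure([0, 1], [0, 1])  # read both qubits out

    backend = BasicAer.get_backend("qasm_simulator")
    counts = execute(qc, backend, shots=1024).result().get_counts()
    print(counts)  # expect roughly half '00' and half '11'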

The last problem to solve is acquiring quantum talent. How many quantum scientists, engineers, or programmers do you have? Do you even know where to find them? The young people excited about computing today are primarily interested in technologies for building sexy apps using Node.js, Python, Jupyter, and the like.

To find the people you need to build quantum computing systems, you will need to scour the proverbial halls of MIT, Caltech, and other top schools that produce physicists and quantum scientists. A scan of salaries for these people reveals a range of $135,000 to $160,000, if they are available at all.

The best guidance from IBM is to start small. The industry is still at the building-block stage, not ready to throw specific applications at real problems. In the meantime, sign up for IBM’s Q Network and get some of your people engaged in the opportunities to get educated in quantum.

When DancingDinosaur first heard about quantum physics he was in a high school science class decades ago. It was intriguing but he never expected to even be alive to see quantum physics becoming real, but now it is. And he’s still here. Not quite ready to sign up for QISKit and take a small qubit machine for a spin in the cloud, but who knows…

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.

IT Security Enters the Cooperative Era

April 20, 2018

Ever hear of the Cybersecurity Tech Accord? It was announced on Tuesday, with Microsoft, Facebook, and 32 other companies signing aboard. Absent from the signing were Apple, Alphabet, and Amazon. Also missing was IBM. Actually, IBM was already at the RSA Conference making its own security announcement: an effort to help cybersecurity teams collaborate, just as the attackers they defend against do via the dark web, by sharing information among themselves.

IBM security control center

Tuesday’s Cybersecurity Tech Accord amounted to a promise to work together on cybersecurity issues. Specifically, the companies promise to work against state sponsored cyberattacks. The companies also agreed to collaborate on stronger defense systems and protect against the tampering of their products, according to published reports.

Giving importance to the accord is the financial impact of cybersecurity attacks on businesses and organizations, which is projected to reach $8 trillion by 2022. Other technology leaders, including Cisco, HP, Nokia, and Oracle, also joined the accord.

A few highly visible and costly attacks were enough to galvanize the IT leaders. In May, WannaCry ransomware targeted more than 300,000 computers in 150 countries, including 48 UK medical facilities. In a bid to help, Microsoft issued patches for old Windows systems, even though it no longer supports them, because so many firms run old software that was vulnerable to the attack, according to published reports. The White House attributed the attack to North Korea.

In June, NotPetya ransomware, which initially targeted computers in Ukraine before spreading, infected computers, locked down their hard drives, and demanded a $300 ransom to be paid in bitcoin. Even victims that paid weren’t able to recover their files, according to reports. The British government said Russia was behind the global cyberattack.

The Cybersecurity Tech Accord is modeled after a digital Geneva Convention, with a long-term goal of updating international law to protect people in times of peace from malicious cyberattacks, according to Microsoft president Brad Smith.

GitHub’s chief strategy officer Julio Avalos wrote in a separate blog post that “protecting the Internet is becoming more urgent every day as more fundamental vulnerabilities in infrastructure are discovered—and in some cases used by government organizations for cyberattacks that threaten to make the Internet a theater of war.” He continued: “Reaching industry-wide agreement on security principles and collaborating with global technology companies is a crucial step toward securing our future.”

Added Sridhar Muppidi, Co-CTO of IBM Security, in a recently published interview about the company’s efforts to help cybersecurity teams collaborate like the attackers they’re working against: The good guys have to collaborate with each other so that we can provide better, more secure, and more robust systems. So we talk about how we share the good intelligence. We also talk about sharing good practices, so that we can then build more robust systems, which are a lot more secure.

It’s the same concept as the open source model, where you contribute some intellectual capital to bring a bigger community together so that we can take the problem and solve it better and faster, and learn from each other’s mistakes and each other’s advances so that it helps each of our offerings individually. So, at the end of the day, for a topic like AI, the algorithm is going to be an algorithm. It’s the data, it’s the models, it’s the set of things that go around it which make it very robust and reliable, Muppidi continued.

IBM appears to be practicing what it preaches by facilitating the collaboration of people and machines in defense of cyberspace. Last year at RSA, IBM introduced Watson to the cybersecurity industry to augment the skills of analysts in their security investigations. This year’s investments in artificial intelligence (AI), according to IBM, were made with a larger vision in mind: a move toward “automation of response” in cybersecurity.

At RSA, IBM also announced the next-generation IBM Resilient Incident Response Platform (IRP) with Intelligent Orchestration. The new platform promises to accelerate and sharpen incident response by seamlessly combining incident case management, orchestration, automation, AI, and deep two-way partner integrations into a single platform.

Maybe DancingDinosaur, which has spent decades acting as an IT-organization-of-one, can finally turn over some of the security chores to an intelligent system, which hopefully will do them better and faster.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.

IBM Introduces Skinny Z Systems

April 13, 2018

Earlier this week IBM unveiled two miniaturized mainframe models, dubbed skinny mainframes, which it says are easier to deploy in a public or private cloud facility than their more traditional, much bulkier predecessors. Relying on all their design tricks, IBM engineers managed to pack each machine into a standard 19-inch rack with space to spare, which can be used for additional components.

Z14 LinuxONE Rockhopper II, 19-inch rack

The first new mainframe introduced this week is the z14 model ZR1; you can expect subsequent models to continue the model numbering. The second new machine is the LinuxONE Rockhopper II. Both fit in a 19-inch rack.

In the past, about a year after IBM introduced a new mainframe, say the z10, it introduced what it called a Business Class (BC) version. The BC machines were less richly configured and less expandable, but delivered comparable performance with lower capacity and a distinctly lower price.

In a Q&A analyst session, IBM insisted the new machines would be priced noticeably lower, as were the BC-class machines of the past. Still, these are not comparable to the old BC machines; they are intended to attract a new group of users who face new challenges. As such, they come cloud-ready. The 19-inch industry standard, single-frame design is intended for easy placement into existing cloud data centers, alongside other components and private cloud environments.

The company, said Ross Mauri, General Manager IBM Z, is targeting the new machines toward clients seeking robust security with pervasive encryption, cloud capabilities and powerful analytics through machine learning. Not only, he continued, does this increase security and capability in on-premises and hybrid cloud environments for clients, IBM will also deploy the new systems in IBM public cloud data centers as the company focuses on enhancing security and performance for increasingly intensive data loads.

In terms of security, the new machines will be hard to beat. IBM reports the new machines are capable of processing over 850 million fully encrypted transactions a day on a single system. At the same time, the new mainframes do not require special space, cooling, or energy, yet still provide IBM’s pervasive encryption and Secure Service Container technology, which secures data serving at massive scale.
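
For a sense of scale, that daily figure averages out to nearly 10,000 encrypted transactions every second (a straight average; real workloads are bursty, so peaks run higher):

    # Scale check on "850 million fully encrypted transactions a day".
    per_day = 850_000_000
    per_sec = per_day / (24 * 60 * 60)
    print(f"{per_sec:,.0f} encrypted transactions/second sustained")
    # -> roughly 9,838 transactions/second, around the clock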

Mauri continued: The new IBM Z and IBM LinuxONE offerings also bring significant increases in capacity, performance, memory, and cache across nearly all aspects of the system. A complete system redesign delivers this capacity growth in 40 percent less space and is standardized for deployment in any data center. The z14 ZR1 can be the foundation for an IBM Cloud Private solution, creating a data-center-in-a-box by co-locating storage, networking, and other elements in the same physical frame as the mainframe server. This is where you can use the extra space included in the 19-inch rack.

The LinuxONE Rockhopper II can also accommodate a Docker-certified infrastructure for Docker EE, with integrated management and scale tested up to 330,000 Docker containers, allowing developers to build high-performance applications and embrace a micro-services architecture.

The 19-inch rack, however, comes with tradeoffs, notes Timothy Green writing in The Motley Fool. Yes, it takes up 40% less floor space than the full-size z14, but it accommodates only 30 processor cores, far below the 170 cores supported by a full-size z14, which fills a 24-inch rack. Both new systems can handle around 850 million fully encrypted transactions per day, a fraction of the z14’s full capacity. But not every company needs the full performance and capacity of the traditional mainframe. For companies that don’t, notes Green, or that have previously balked at the high price or massive footprint of full mainframe systems, these smaller mainframes may be just what it takes to bring them to the Z. Now IBM needs to come through with the advantageous pricing it insisted it would offer.

The new skinny mainframes are just the latest in IBM’s continuing efforts to keep the mainframe relevant. Those efforts began nearly two decades ago with the porting of Linux to the mainframe and continued with Hadoop, blockchain, and containers. Machine learning and deep learning are coming right along. The only question for DancingDinosaur is when IBM engineers will figure out how to put quantum computing on the Z and squeeze it into customers’ public or private cloud environments.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.

IBM Shouldn’t Forget Its Server Platforms

April 5, 2018

The word coming out of IBM brings a steady patter about cognitive, Watson, and quantum computing, the last of which IBM predicts will go mainstream within five years. Most DancingDinosaur readers aren’t worrying about what’s coming in 2023, although maybe they should. They have data centers to run now and are wondering where they will get the system horsepower they need to deliver IoT or blockchain or any number of business initiatives clamoring for system resources today or tomorrow, and all they’ve got are the z14 and the latest LinuxONE. As powerful as those machines were when first announced, do you think they will be enough tomorrow?

IBM’s latest server, the Z

Timothy Prickett Morgan, analyst at The Next Platform, apparently isn’t so sure. He writes in a recent piece that Google and the other hyperscalers need to add serious power to today’s server options, and that the solution involves “putting systems based on IBM’s Power9 processor into production.” This shouldn’t take anybody by surprise; almost as soon as IBM set up the OpenPOWER consortium, Rackspace, Google, and a handful of others started making noises about using OpenPOWER for a new type of data center server. The most recent announcements around POWER9, covered here back in February, promise some new options with even more coming.

Writes Morgan: “Google now has seven applications that have more than 1 billion users – adding Android, Maps, Chrome, and Play to the mix – and as the company told us years ago, it is looking for any compute, storage, and networking edge that will allow it to beat Moore’s Law.” Notice that this isn’t about using POWER9 to drive down Intel’s server prices; Google faces a more important nemesis, the constraints of Moore’s Law.

Google has not been secretive about this, at least not recently. To its credit, Google is making its frustrations known at appropriate industry events: “With a technology trend slowdown and growing demand and changing demand, we have a pretty challenging situation, what we call a supply-demand gap, which means the supply on the technology side is not keeping up with this phenomenal demand growth,” explained Maire Mahony, systems hardware engineer at Google and its key representative at the OpenPOWER Foundation, which is steering the Power ecosystem. “That makes it hard for us to balance that curve we call performance per TCO dollar. This problem is not unique to Google. This is an industry-wide problem.” True, but the majority of data centers, even the biggest ones, don’t face looming multi-billion-user performance and scalability demands.

Morgan continued: “Google has absolutely no choice but to look for every edge. The benefits of homogeneity, which have been paramount for the first decade of hyperscaling, no longer outweigh the need to have hardware that better supports the software companies like Google use in production.”

This isn’t Intel’s problem alone, although Intel introduced a new generation of systems, dubbed Skylake, to address some of these concerns. As Morgan noted recently, “various ARM chips –especially ThunderX2 from Cavium and Centriq 2400 from Qualcomm –can boost non-X86 numbers.” So can AMD’s Epyc x86 processors. Similarly, the OpenPOWER consortium offers an alternative in POWER9.

Morgan went on: IBM differentiated the hardware with its NVLink versions and, depending on the workload and the competition, with its most aggressive pricing and a leaner and cheaper microcode and hypervisor stack reserved for the Linux workloads that the company is chasing. IBM very much wants to sell its Power-Linux combo against Intel’s Xeon-Linux and also keep AMD’s Epyc-Linux at bay. Still, it is not apparent to Morgan how POWER9 will compete.

Success may come down to a battle of vendor ecosystems. As Morgan points out: aside from the POWER9 system that Google co-engineered with Rackspace Hosting, the most important contributions Google has made to the OpenPOWER effort are its work with IBM to create the OPAL firmware, the OpenKVM hypervisor, and the OpenBMC baseboard management controller, all crafted to support little endian Linux, as is common on x86.

Guess this is the time to wade into the endian morass. Endianness refers to the byte ordering a processor uses; IBM chips and a few others order bytes in reverse of the x86 and ARM architectures. The POWER8 chip and its POWER9 follow-on support either mode, big or little endian (a quick illustration follows below). By making all of these changes, IBM has made the Power platform more palatable to the hyperscalers, which is why Google, Tencent, Alibaba, Uber, and PayPal all talk about how they make use of Power machinery, particularly to accelerate machine learning and generic back-end workloads. But as quickly as IBM jumped on the problem recently, after letting it linger for years, it remains one more complication that must be considered. Keep that in mind when a hyperscaler like Google talks about performance per TCO dollar.
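
The quick illustration: the same 32-bit integer laid out big endian (traditional POWER order) versus little endian (the x86/ARM order the hyperscalers expect).

    # Endianness in one glance: identical value, reversed byte order.
    import struct

    value = 0x01020304
    print(struct.pack(">I", value).hex())  # big endian:    01020304
    print(struct.pack("<I", value).hex())  # little endian: 04030201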

Where is all this going? Your guess is as good as any. The hyperscalers and the consortia eventually should resolve this and DancingDinosaur will keep watching. Stay tuned.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.

