Secure Containers for the Z

October 11, 2018

What’s all this talk about secure containers? Mainframe data center managers have long used secure containers, only they call them logical partitions (LPARs). Secure service containers must be some x86 thing.

Courtesy: Mainframe Watch Belgium

Writing in the first week of October, Ross Mauri, General Manager of IBM Z, observes: today’s executives in a digitally empowered world want IT to innovate and deliver outstanding user experiences. But, as you know, this same landscape increases exposure and scrutiny around the protection of valuable and sensitive data. IBM’s answer: new capabilities for the IBM z14 and LinuxONE platforms that handle digital transformation while responding to immediate market needs and delivering effective solutions.

IBM’s Secure Service Container hosts container-based applications for hybrid and private cloud workloads on IBM LinuxONE and Z servers as an IBM Cloud Private software solution. This secure computing environment lets microservices-based applications exploit the platform’s inherent security capabilities without requiring code changes. In the process, it provides:

  • Tamper protection during installation time
  • Restricted administrator access to help prevent the misuse of privileged user credentials
  • Automatic encryption of data both in flight and at rest

This differs from an LPAR. According to IBM, LPARs, or logical partitions, are, in practice, equivalent to separate mainframes. This is not trivial power. Each LPAR runs its own operating system, which can be any mainframe operating system; there is no need to run z/OS, for example, in each LPAR. Installation planners also may elect to share I/O devices across several LPARs, but this is a local decision.

The system administrator can assign one or more system processors for the exclusive use of an LPAR. Alternatively, the administrator can allow all processors to be used on some or all LPARs. Here, the system control functions (often known as microcode or firmware) provide a dispatcher to share the processors among the selected LPARs. The administrator can specify a maximum number of concurrent processors executing in each LPAR. The administrator can also provide weightings for different LPARs; for example, specifying that LPAR1 should receive twice as much processor time as LPAR2. If the code in one LPAR crashes, it has no effect on the other LPARs. Whether the same isolation holds for the new microservices containers is not yet clear.
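The weighting arithmetic is simple enough to sketch. The following is purely illustrative, not PR/SM’s actual dispatcher (which is far more sophisticated); the LPAR names, weights, and processor counts are invented:

```python
def lpar_entitlements(weights, shared_processors, logical_caps):
    """Approximate each LPAR's share of a shared processor pool from its weight.

    weights: LPAR name -> relative weight (LPAR1 at twice LPAR2's weight
             gets twice the processor time)
    shared_processors: number of physical processors in the shared pool
    logical_caps: LPAR name -> maximum concurrent processors for that LPAR
    """
    total_weight = sum(weights.values())
    entitlements = {}
    for name, weight in weights.items():
        share = shared_processors * weight / total_weight
        # An LPAR can never use more processors than it has defined
        entitlements[name] = min(share, logical_caps[name])
    return entitlements
```

With weights of 200 and 100 over six shared processors and a cap of four each, LPAR1 is entitled to four processors and LPAR2 to two, matching the twice-as-much example above.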

Mauri tries to make the case for the new containers. With Secure Service Containers, applications and data inherit a layer of security drawn from the embedded capabilities at the core of IBM Z and LinuxONE, which help hyper protect your data, guard against internal and external threats, and simplify your data compliance initiatives. DancingDinosaur does not know what “hyper protect” means in this context. Sounds like marketing-speak.

Also Mauri explains that IBM Secure Service Containers help protect the privacy of sensitive company data and customer data from administrators with elevated credentials. At the same time they allow development teams to use cutting-edge container-based technologies to deploy new or existing containerized applications.

In fact, IBM continues the explanation by saying it selected this unique and class-leading data privacy assurance technology to allow applications and data to inherit yet another layer of security through Secure Service Containers. “We’ve embedded capabilities at the core of IBM Z and LinuxONE that help hyper protect your data, guard against internal and external threats, and simplify your data compliance initiatives.” IBM does like the hyper protect phrase; wish DancingDinosaur knew what it meant. A Google search turns up Hyper Protect Crypto Services, which IBM concedes is still in an experimental phase, so, in fact, it doesn’t mean anything yet. Maybe in the future.

IBM Secure Service Containers help protect the privacy of sensitive company and customer data from administrators with elevated credentials—a serious risk—while, at the same time, allowing development teams to use cutting-edge container-based technologies to deploy new or existing containerized applications. OK, DancingDinosaur can accept this but it seems only marginally different from what you can do with good ole LPARs. Maybe the difference only becomes apparent when you attempt to build the latest generation microservices-based apps.

If your choice comes down to secure service containers or LPARs, guess you need to look at what kind of apps you want to deploy. All DancingDinosaur can add is LPARs are powerful, known, and proven technology.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at technologywriter.com.

IBM AI Toolset Focuses on 9 Industries

October 4, 2018

Recently, IBM introduced new AI solutions and services pre-trained for nine industries and professions: agriculture, customer service, human resources, supply chain, manufacturing, building management, automotive, marketing, and advertising. In each area, the volume, velocity, and complexity of the data make it increasingly difficult for managers to keep up. The solutions generally utilize IBM’s Watson Data Platform.

For example, supply chain companies now should incorporate weather data, traffic reports, and even regulatory reports to provide a fuller picture of global supply issues. Similarly, industrial organizations are seeking to reduce product inspection resource requirements significantly through the use of visual and acoustic inspection capabilities, notes IBM.

Recent IBM research from its Institute for Business Value revealed that 82% of businesses are now considering AI deployments. Why? David Kenny, Senior Vice President, IBM Cognitive Solutions, explains: “As data flows continue to increase, people are overwhelmed by the amount of information [forcing them] to act on it every day, but luckily the information explosion coincides with another key technological advance: artificial intelligence (AI).” In the nine industries targeted by IBM, the company provides the industry-specific algorithms and system training required for making AI effective in each segment.

Let’s look at a selection of these industry segments, starting with customer service, where 77% of top-performing organizations report seeing customer satisfaction as a key value driver for AI, which gives customer service agents increased ability to respond quickly to questions and complex inquiries. The customer service solution was first piloted at Deluxe Corporation, which saw improved response times and increased client satisfaction.

Human resources also could benefit from a ready-made AI solution. The average hiring manager flips through hundreds of applicants daily, notes IBM, spending approximately 6 seconds on each resume. This isn’t nearly enough time to make well-considered decisions. The new AI tool for HR analyzes the background of current top performing employees from diverse backgrounds and uses that data to help flag promising applicants.
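To make the idea concrete, here is a toy sketch of flagging applicants against a profile built from top performers. This is nothing like IBM’s actual model; the skill sets, similarity measure, and threshold are all invented for illustration:

```python
def flag_applicants(top_performer_skills, applicants, threshold=0.3):
    """Flag applicants whose skills overlap with those of top performers.

    Uses simple Jaccard similarity between skill sets; a real system
    would use far richer features than a flat list of skills.
    """
    flagged = []
    for name, skills in applicants.items():
        overlap = len(skills & top_performer_skills) / len(skills | top_performer_skills)
        if overlap >= threshold:
            flagged.append((name, round(overlap, 2)))
    # Highest-scoring applicants first
    return sorted(flagged, key=lambda pair: -pair[1])
```

The point is only that a machine can apply a consistent screen to every resume instead of the six seconds a human manages.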

In the area of industrial equipment, AI can be used to reduce product inspection resource requirements significantly by using AI-driven visual and acoustic inspection capabilities. At a time of intense global competition, manufacturers face a variety of issues that impact productivity, including workforce attrition, skills gaps, and rising raw material costs, all exacerbated by downstream defects and equipment downtime. By combining the Internet of Things (IoT) and AI, IBM contends, manufacturers can stabilize production costs by pinpointing and predicting areas of loss, such as energy waste, equipment failures, and product quality issues.

In agriculture, farmers can use AI to gather data from multiple sources—weather, IoT-enabled tractors and irrigators, satellite imagery, and more—and see a single, overarching, predictive view of data as it relates to a farm. For the individual grower, IBM notes, this means support for making more informed decisions that help improve yield. That matters because water is an increasingly scarce resource in large swaths of the world, including parts of the U.S. that have experienced persistent droughts. Just remember the recent wildfires.

Subway hopes AI can increase restaurant visits by leveraging the connection between weather and quick service restaurant (QSR) foot traffic to drive awareness of its $4.99 Footlong promotion via The Weather Channel mobile app. In building awareness to drive in-store visits, Subway reported a 31% lift in store traffic and a 53% reduction in campaign waste due to AI.

DancingDinosaur had no opportunity to verify any of the results reported above, so always be skeptical of such results until you can verify them yourself.


Z Acceptance Grows in BMC 2018 Survey

September 27, 2018

Did Zowe, introduced publicly just a few weeks ago, arrive in the nick of time, like the cavalry rescuing the mainframe from an aging workforce? In the latest BMC annual mainframe survey, released in mid-September, 95% of millennials are positive about the mainframe’s long-term prospects for supporting new and legacy applications. And 63% of respondents were under the age of 50, up ten points from the previous year.

The mainframe veterans, those with 30 or even 40 years of experience, are finally moving out. DancingDinosaur itself has been writing about the mainframe for about 35 years. With two recently married daughters even a hint of a grandchild on the way will be the signal for me to stop. In the meantime, read on.

Quite interesting in the BMC survey were the very high measures of executive belief in the long-term viability of the mainframe. More interesting to DancingDinosaur, however, was the interest in and willingness to use new mainframe technology like Linux and Java, which are not exactly new arrivals to the mainframe world; as we know, change takes time.

For example, 28% of respondents cited as a strength the availability of new technology on the mainframe and their high level of confidence in that new technology. And this was before word about Zowe and what it could do to expand mainframe development got out. A little over a quarter of the respondents also cited using legacy apps to create new apps. Organizations are finally waking up to leveraging mainframe assets.

Also interesting was that both executives and technical staff cite application modernization among the top priorities. No complaints there. Similarly, BMC notes executive perception of the mainframe as a long-term solution is the highest in three years, a six point increase over 2016! While cost still remains a concern, BMC continues, the relative merits of the Z outweigh the costs and this perception continues to shift positively year after year.

The mainframe regularly has been slammed over the years as too costly. Yet IBM has steadily lowered the cost of the mainframe in terms of price/performance. Now IBM is talking about applying AI to boost the efficiency, management, and operation of the mainframe data center.

This past May, Gartner published a report confirming the value gains of the latest z14 and LinuxONE machines: the z14 ZR1 delivers an approximately 13% total capacity improvement over the z13’s maximum capacity for traditional z/OS environments. This is due to an estimated 10% boost in processor performance, as well as system design enhancements that improve the multiprocessor ratio. In the same report Gartner recommends including IBM’s LinuxONE Rockhopper II in RFPs for highly scalable, highly secure, Linux-based server solutions.

Several broad trends are coming together to feed the growing positive feelings the mainframe has experienced in recent years as revealed in the latest survey responses. “Absolute security and 24×7 availability have never been more important than now,” observes BMC’s John McKenny, VP of Strategy for ZSolutions Optimization. Here the Z itself plays a big part with pervasive encryption and secure containers.

Other trends, particularly digitization and mobility are “placing incredible pressure on both IT and mainframes to manage a greater volume, variety, and velocity of transactions and data, with workloads becoming more volatile and unpredictable,” said Bill Miller, president of ZSolutions at BMC. The latest BMC mainframe survey confirms executive and IT concerns in that area and the mainframe as an increasingly preferred response.

Bottom line: expect the mainframe to hang around for another decade or two at least. Long before then, DancingDinosaur will be a dithering grandfather playing with grandchildren and unable to get myself off the floor.


LinuxONE is a Bargain

September 21, 2018

LinuxONE may be the best bargain you’ll ever find this season, and you don’t have to wait until Santa brings it down your chimney. Think instead about transformation and digital disruption.  Do you want to be in business in 3 years? That is the basic question that faces every organization that exists today, writes Kat Lind, Chief Systems Engineer, Solitaire Interglobal Ltd, author of the white paper Scaling the Digital Mountain.

Then there is the Robert Frances Group’s Top 10 Reasons to Choose LinuxONE. DancingDinosaur won’t rehash all ten. Instead, let’s selectively pick a few, starting with the first one, Least Risk Solution, which pretty much encapsulates the LinuxONE story. It reduces business, compliance, financial, operations, and project risks. Its availability, disaster recovery, scalability, and security features minimize business and financial exposure. In addition to pervasive encryption it offers a range of security capabilities often overlooked or downplayed, including logical partition (LPAR) isolation and secure containers.

Since it is a Z dedicated to Linux, unlike the z13 or z14 z/OS machines that also run Linux but not as easily or efficiently, LinuxONE, as the Robert Frances Group notes, also handles Java, Python, and other languages, along with tools like Hadoop, Docker and other containers, Chef, Puppet, KVM, multiple Linux distributions, open source, and more. It also can be used in a traditional legacy environment or as the platform of choice for cloud hosting. LinuxONE supports tools that enable DevOps similar to those on x86 servers.

And LinuxONE delivers world class performance. As the Robert Frances Group puts it: LinuxONE is capable of driving processor utilization to virtually 100% without a latency impact, performance instabilities, or performance penalties. In addition, LinuxONE uses the fastest commercially available processors, running at 5.2GHz, offloads I/O to separate processors enabling the main processors to concentrate on application workloads, and enables much more data in memory, up to 32TB.

In addition, you can run thousands of virtual machine instances on a single LinuxONE server. The cost benefit of this is astounding compared to managing the equivalent number of x86 servers. The added labor cost alone would break your budget.

In terms of security, LinuxONE is a no-brainer. Adds Lind from Solitaire: failure in this area erodes an organization’s reputation faster than any other factor. The impact of breaches on customer confidence and follow-on sales has been tracked, and an analysis of that data shows that after a significant incursion, the average customer fall-off exceeds 41%, accompanied by a long-running drop in revenues. Recovery involves a significant outlay of service, equipment, and personnel expenses to reestablish a trusted position, as much as 18.6x what it cost to acquire the customer initially. And Lind doesn’t even begin to mention the impact when the compliance regulators and lawyers start piling on. Anything but the most minor security breach will put you out of business faster than the three years Lind asked about at the top of this piece.
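Lind’s numbers translate into simple arithmetic. A quick sketch: only the 41% fall-off and the 18.6x re-acquisition multiplier come from the paper; the customer counts and dollar figures below are hypothetical.

```python
def breach_impact(customers, avg_annual_revenue, acquisition_cost,
                  fall_off=0.41, reacquire_multiplier=18.6):
    """Estimate revenue loss and re-acquisition cost after a major breach."""
    lost_customers = int(customers * fall_off)
    lost_revenue = lost_customers * avg_annual_revenue
    # Winning a customer back costs roughly 18.6x the original acquisition cost
    reacquisition_cost = lost_customers * acquisition_cost * reacquire_multiplier
    return {"lost_customers": lost_customers,
            "lost_revenue": lost_revenue,
            "reacquisition_cost": reacquisition_cost}
```

For a hypothetical firm with 10,000 customers, $500 average annual revenue each, and a $50 acquisition cost, a significant incursion costs 4,100 customers, about $2.05 million in annual revenue, and about $3.8 million to win those customers back.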

But all the above is just talking in terms of conventional data center thinking. DancingDinosaur has put his children through college doing TCO studies around these issues. Lind now turns to something mainframe data centers are just beginning to think about: digital disruption. The strategy and challenges of successfully navigating the chaos of cyberspace translate into a need for information on both business and security and how they interact.

Digital business and security go hand in hand, so any analysis has to include extensive correlation between the two. Using data from volumes of customer experience responses, IT operational details, business performance, and security, Solitaire examined the positioning of IBM LinuxONE in the digital business market. The results of that examination boil down to three areas: security, agility, and cost. These incorporate the primary objectives that organizations operating in cyberspace today regard as the most relevant. And guess who wins any comparative platform analysis, Lind concludes: LinuxONE.



Attract Young Techies to the Z

September 14, 2018

A decade ago DancingDinosaur was at a major IBM mainframe event. Looking around at the analysts milling about, I noticed all the gray hair and balding heads, very few women, and, worse, few who appeared to be under 40, not exactly a crowd that would excite young computer geeks. At the IBM introduction of the Z it had become even worse: more gray or balding heads, mine included, and none of the few professional female Z analysts under 40 that I knew were there at all.

millions of young eager to join the workforce (Image by © Reuters/CORBIS)

An IBM analyst relations person agreed, noting that she was under pressure from IBM to get some young techies at Z events.  Sounded like Mission Impossible to me. But my thinking has changed in the last couple of weeks. A couple of discussions with 20-something techies suggested that Zowe has the potential to be a game changer as far as young techies are concerned.

DancingDinosaur covered Zowe two weeks ago here. It represents the first open source framework for z/OS. As such it provides solutions for development and operations teams to securely manage, control, script, and develop on the mainframe like any other cloud platform.

Or, to put it another way, with Zowe IBM and partners CA Technologies and Rocket Software are enabling users to access z/OS using a new open-source framework. Zowe, more than anything before, brings together generations of systems that were not designed to handle global networks of sensors and devices. Now, decades after IBM brought Linux to the mainframe, IBM, CA, and Rocket Software are introducing Zowe, a new open-source software framework that bridges the divide between modern challenges like IoT and the mainframe.

Says Sean Grady, a young (under 30) software engineer at Rocket Software: “Zowe to me is really cool, the first time I could have a sustained mainframe conversation with my peers.” Their first reactions were really cynical, he recalls. Zowe changed that. “My peers know Linux tools really well,” he notes.

The mainframe is perceived as a separate thing, something my peers couldn’t touch, he added. But Linux is something his peers know really well, so through Zowe the mainframe offers tools they know and like. Suddenly, the mainframe is no longer a separate, alien world but a familiar place. They can do the kind of work they like to do, in a way they like to do it, using familiar tools.

And they are well paid, much better than they can get coding here-and-gone mobile apps for some startup. Grady reports his starting offers ran up to $85k, not bad for a guy just out of college. And with a few years of experience now you can bet he’s doing a lot better than that.

The point of Zowe is to enable any developer, but especially new developers who don’t know or care about the mainframe, to manage, control, script, and develop on the mainframe like any other cloud platform. Additionally, Zowe allows teams to use the same familiar, industry-standard, open-source tools they already know to access mainframe resources and services.

The mainframe is older than many of the programmers IBM hopes Zowe will attract. But it opens new possibilities for next generation applications for mainframe shops desperately needing new mission-critical applications for which customers are clamoring. Already it appears ready to radically reduce the learning curve for the next generation.

Initial open source Zowe modules will include an extensible z/OS framework that provides new APIs and z/OS REST services to transform enterprise tools and DevOps processes that can incorporate new technology, languages, and workflows. It also will include a unifying workspace providing a browser-based desktop app container that can host both traditional and modern user experiences and is extensible via the latest web toolkits. The framework will also incorporate an interactive and scriptable command-line interface that enables new ways to integrate z/OS in cloud and distributed environments.

These modules represent just the start. More will be developed over time, enabling development teams to manage and develop on the mainframe like any other cloud platform. Additionally, the modules reduce risk and cost by allowing teams to use familiar, industry-standard, open source tools that can accelerate mainframe integration into their enterprise DevOps initiatives. Just use Zowe to entice new mainframe talent.


Can IBM Find a Place for Watson?

September 7, 2018

After beating two human Jeopardy champions three times in a row in 2011, IBM’s Watson has been hard pressed to come up with a comparable winning streak. Initially IBM appeared to expect its largest customers to buy richly configured Power servers to run Watson on prem. When it didn’t get enough takers, the company moved Watson to the cloud, where companies could lease it for major knowledge-driven projects. When that didn’t catch on, IBM started to lease Watson’s capabilities by the drink, promising to solve problems in onesies and twosies.

Jeopardy champs lose to Watson

Today Watson is promising to speed AI success through IBM’s Watson Knowledge Catalog. As IBM puts it: IBM Watson Knowledge Catalog powers intelligent, self-service discovery of data, models, and more, activating them for artificial intelligence, machine learning, and deep learning. Access, curate, categorize, and share data, knowledge assets, and their relationships, wherever they reside.

DancingDinosaur has no doubt that Watson is stunning technology and has been rooting for its success since that first Jeopardy round seven years ago. Over that time, Watson and IBM have become a case study in how not to price, package, and market powerful yet expensive technology. The Watson Knowledge Catalog is yet another pricing and packaging experiment.

Based on the latest information online, Watson Knowledge Catalog is priced according to the number of provisioned catalogs and discovery connections. There are two plans available: Lite and Professional. The Lite plan allows 1 catalog and 5 free discovery connections, while the Professional plan provides unlimited numbers of both. Huh? This statement begs for clarification, and there probably is a lot of information and fine print required to answer the numerous questions the above description raises, but life is too short for DancingDinosaur to rummage around on the Watson Knowledge Catalog site to look for answers. Doesn’t this seem like something Watson itself should be able to clarify with a single click?

But no, that is too easy. Instead IBM takes the high road, which DancingDinosaur calls the education track. Notes Jay Limburn, Senior Technical Staff Member and IBM Offering Manager: there are two main challenges that might impede you from realizing the true value of your data and slow your journey to adopting artificial intelligence (AI). They are 1) inefficient data management and 2) finding the right tools for all data users.

Actually, the issues start even earlier. In attempting AI most established organizations start at a disadvantage, notes IBM. For example:

  • Most enterprises do not know what and where their data is
  • Data science and compliance teams are handicapped by the lack of data accessibility
  • Enterprises with legacy data are even more handicapped than digitally savvy startups
  • AI projects will expose problems with limited data and poor quality; many will simply fail for that reason alone
  • The need to differentiate through monetization increases in importance with AI

These are not new. People have been whining about this since the most rudimentary data mining attempts were made decades ago. If there is a surprise it is that they have not been resolved by now.

Or maybe they finally have with the IBM Watson Knowledge Catalog. As IBM puts it, the company will deliver what promises to be the ultimate data catalog, one that actually illuminates data:

  • Knows what data your enterprise has
  • Knows where it resides
  • Knows where it came from
  • Knows what it means
  • Provides quick access to it
  • Ensures protection of use
  • Exploits machine learning for intelligence and automation
  • Enables data scientists, data engineers, stewards, and business analysts
  • Is embeddable everywhere for free, with premium features available in paid editions
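A minimal sketch of what such a catalog entry and lookup might look like. This is a generic illustration of the idea, not the Watson Knowledge Catalog API; the field names and sample data are invented:

```python
from dataclasses import dataclass, field

@dataclass
class CatalogEntry:
    name: str
    location: str            # where it resides
    lineage: str             # where it came from
    description: str         # what it means
    tags: list = field(default_factory=list)

def search(catalog, term):
    """Return entries whose name, description, or tags mention the term."""
    term = term.lower()
    return [entry for entry in catalog
            if term in entry.name.lower()
            or term in entry.description.lower()
            or any(term in tag.lower() for tag in entry.tags)]
```

A real catalog layers access control, lineage tracking, and machine-learning-driven classification on top of this kind of searchable metadata store.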

OK, after seven years Watson may be poised to deliver, and it has little to do with Jeopardy but with a rapidly growing data catalog market. According to a Research and Markets report, the data catalog market is expected to grow from $210 million in 2017 to $620 million by 2022. How many sales of the Professional version would it take to give IBM a leading share?


Can Zowe Bring Young Developers to the Z

August 31, 2018

Are you ever frustrated by the Z? As powerful as it is, the mainframe remains a difficult nut to crack, particularly for newcomers who have grown up with easier technologies. Even Linux on Z is not as simple or straightforward as on other platforms. This poses a problem for Z-based shops that are scrambling to replace retiring mainframers.

IBM – Jon Simon/Feature Photo Service

Shopping via smartphone

Certainly other organizations, mainly mainframe ISVs like Compuware and Syncsort, have succeeded in extending the GUI deeper into the Z but that alone is not enough. It remains too difficult for newcomers to take their newly acquired computer talents and readily apply them to the mainframe. Maybe Zowe can change this.

And here’s how: recent surveys show that flexibility, agility, and speed are key. Single platforms are out; multi-platform and multi-cloud are in. IBM’s reply: let’s bring things together with the announcement of Zowe, pronounced like joey starting with a z. Zowe represents the first open source framework for z/OS. As such it provides solutions for development and operations teams to securely manage, control, script, and develop on the mainframe like any other cloud platform. Launched with partners CA Technologies and Rocket Software along with the support of the Open Mainframe Project, the goal is to drive innovation for the community of next-generation mainframe developers and enable interoperability and scalability between products. Zowe promotes a faster team on-ramp to mainframe productivity, collaboration, knowledge sharing, and communication.

In short, IBM and partners are enabling users to access z/OS using a new open-source framework. Zowe, more than anything before, brings together generations of systems that were not designed to handle global networks of sensors and devices. Now, decades after IBM brought Linux to the mainframe, IBM, CA, and Rocket Software are introducing Zowe, a new open-source software framework that bridges the divide between modern challenges like IoT and the mainframe.

Zowe has four components:

  1. Zowe APIs: z/OS has a set of Representational State Transfer (REST) operating system APIs. These are made available by the z/OS Management Facility (z/OSMF). Zowe uses these REST APIs to submit jobs, work with the Job Entry Subsystem (JES) queue, and manipulate data sets. Zowe Explorers are visual representations of these APIs that are wrapped in the Zowe web UI application. Zowe Explorers create an extensible z/OS framework that provides new z/OS REST services to enterprise tools and DevOps processes.
  2. Zowe API Mediation Layer: This layer has several key components, including an API Gateway built using Netflix Zuul and Spring Boot technology to forward API requests to the appropriate corresponding service through the micro-service endpoint UI, and a REST API Catalog, which publishes APIs and their associated documentation in a service catalog. There also is a Discovery Service built on Eureka and Spring Boot technology, acting as the central point of the mediation layer; it accepts announcements of REST services and provides a repository of active services.
  3. Zowe Web UI: Named zLUX, the web UI modernizes and simplifies working on the mainframe and allows the user to create modern applications. This is what will enable non-mainframers to work productively on the mainframe. The UI works with the underlying REST APIs for data, jobs, and subsystems, and presents the information in a full-screen mode compared to the command-line interface.
  4. Zowe Command Line Interface (CLI): Allows users to interact with z/OS from a variety of other platforms, such as cloud or distributed systems, submit jobs, issue Time Sharing Option (TSO) and z/OS console commands, integrate z/OS actions into scripts, and produce responses as JSON documents. With this extensible and scriptable interface, you can tie in mainframes to the latest distributed DevOps pipelines and build in automation.
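For a concrete taste of the z/OSMF REST APIs that Zowe builds on, here is a sketch that constructs, but deliberately does not send, a request to list jobs on the JES queue. The host name and credentials are placeholders, and a real call would also need TLS certificate configuration:

```python
import base64
import urllib.request

def list_jobs_request(host, user, password, owner="*", prefix="*"):
    """Build a z/OSMF REST request to list jobs on the JES queue."""
    url = f"https://{host}/zosmf/restjobs/jobs?owner={owner}&prefix={prefix}"
    request = urllib.request.Request(url)
    credentials = base64.b64encode(f"{user}:{password}".encode()).decode()
    request.add_header("Authorization", f"Basic {credentials}")
    # z/OSMF rejects requests that omit this CSRF-protection header
    request.add_header("X-CSRF-ZOSMF-HEADER", "")
    return request

# Placeholder host and credentials for illustration only
req = list_jobs_request("mainframe.example.com", "ibmuser", "secret")
# Sending req with urllib.request.urlopen would return JSON job descriptors
```

The Zowe CLI and Explorers wrap exactly this kind of call so developers never have to assemble it by hand.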

The point of all this is to enable any developer to manage, control, script, and develop on the mainframe like any other cloud platform. Additionally, Zowe allows teams to use the same familiar, industry-standard, open-source tools they already know to access mainframe resources and services too.

The mainframe may be older than many of the programmers IBM hopes Zowe will attract. But it opens new possibilities for next generation applications and for mainframe shops desperately needing new mission-critical applications for which customers are clamoring. This should radically reduce the learning curve for the next generation while making experienced professionals more efficient. Start your free Zowe trial here. BTW, Zowe’s code will be made available under the open-source Eclipse Public License 2.0.


 

Travelport and IBM launch industry AI travel platform

August 24, 2018

Uh oh, if you have been a little sloppy with travel expenses, it’s time to clean up your travel act before AI starts monitoring your reimbursed travel. IBM and Travelport are teaming up to offer the industry’s first AI-based travel platform to intelligently manage corporate travel spend while leveraging IBM Watson capabilities to unlock previously unavailable data insights.

As IBM explains it, the new travel platform will be delivered via the IBM Cloud and exploits IBM Watson capabilities to intelligently track, manage, predict, and analyze travel costs to fundamentally change how companies manage and optimize their travel programs. Typically, each work group submits its own travel expenses, and reconciliation and reimbursement may be handled by different groups.

With annual global business travel spend estimated to reach a record $1.2 trillion this year, as projected by the Global Business Travel Association, corporate travel managers need new ways to reduce costs. That requires consolidating and normalizing all the information. Currently, for businesses to get a full picture of travel patterns, a travel manager might have to sift through data silos from travel agencies, cards, expense systems, and suppliers for end-to-end visibility of spend and compliance across all travel subcategories. This, however, is usually undertaken from a historical view rather than in real time, which is one reason reimbursement can take so long. As an independent contractor, DancingDinosaur generally has to submit travel expenses at the end of the project and wait forever for payment.
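Consolidation boils down to mapping each silo’s records onto one common schema before any analysis can run. A minimal Python sketch of that normalization step follows; the feeds, field names, and amounts are invented purely for illustration:

```python
from collections import defaultdict

# Records as they might arrive from three silos, each with its own schema.
agency_feed = [{"traveler": "A. Radding", "category": "air", "usd": 412.00}]
card_feed = [{"cardholder": "A. Radding", "mcc": "hotel", "amount": 389.50}]
expense_feed = [{"employee": "A. Radding", "type": "meals", "total": 77.25}]

def normalize(record, name_key, category_key, amount_key):
    """Map one silo-specific record onto a common schema."""
    return {
        "traveler": record[name_key],
        "category": record[category_key],
        "usd": float(record[amount_key]),
    }

unified = (
    [normalize(r, "traveler", "category", "usd") for r in agency_feed]
    + [normalize(r, "cardholder", "mcc", "amount") for r in card_feed]
    + [normalize(r, "employee", "type", "total") for r in expense_feed]
)

# Once unified, simple rollups (and, at scale, predictive models) become possible.
spend_by_category = defaultdict(float)
for rec in unified:
    spend_by_category[rec["category"]] += rec["usd"]
```

Only after this kind of merge can a platform move from historical reporting to the real-time, what-if analytics IBM describes.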

IBM continues: The new platform, dubbed Travel Manager, features advanced artificial intelligence and provides cognitive computing and predictive data analytics using what-if scenarios, integrated with travel and expense data to help travel management teams, procurement category managers, business units, finance, and human resource departments optimize their travel programs, control spend, and enhance the end-traveler experience. Maybe they will even squeeze independent contractors into the workflow.

The special sauce in all of this comes from how IBM combines its data with Travelport, which operates a travel commerce platform of its own, to produce IBM Travel Manager as an AI platform that oversees corporate travel expenses. In the process, IBM Travel Manager gives users complete, unified access to previously siloed information, which, when combined with travel data from the Travelport global distribution system (GDS), can then be used to create real-time predictive analytics recommending how, say, adjustments in travel booking behavior patterns can positively impact a company’s travel budget.

Travelport, itself, is a heavyweight in the travel industry. It relies on technology to make the experience of buying and managing travel better. Through its travel commerce platform it provides distribution, technology, payment and other capabilities for the $7 trillion global travel and tourism industry. The platform facilitates travel commerce by connecting the world’s leading travel providers with online and offline travel buyers in a proprietary (B2B) travel marketplace.

The company helps with all aspects of the travel supply chain from airline merchandising, hotel content and distribution, mobile commerce to B2B payments. Last year its platform processed over $83 billion of travel spend, helping its customers maximize the value of every trip.

IBM Travel Manager combines and normalizes data from diverse sources, allowing for more robust insights and benchmarking than other reporting solutions. It also taps AI to unlock previously unavailable insights from multiple internal and external data sources. The product is expected to be commercially available to customers through both IBM and Travelport.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.

IBM Continues Cranking Up Blockchain

August 16, 2018


Somehow between quantum computing, AI, and hybrid clouds IBM is managing to squeeze in blockchain as an active, growing business. For instance, a previously unnamed collaborative effort between the world’s largest shipping company, Maersk, and IBM has now grown to 92 participants and been dubbed TradeLens.

IBM has 92 participants in the TradeLens blockchain network

DancingDinosaur has long considered blockchain a natural fit for the Z due to its zero-downtime reliability and high certified security levels (EAL4+). The most recent models include IBM’s automated pervasive encryption. No more wasting time deciding what to encrypt; the Z just encrypts it all with minimal overhead penalty. Your applications and workloads won’t even notice, and compliance audits become a breeze.

TradeLens is emerging from its beta to accept early-adopter applications and announced a new custom contract service for executing complex shipping orders with fewer middlemen. “We have seen a lot of skeptics talk about the validity of blockchain solutions,” said Marie Wieck, IBM general manager and head of blockchain. “And I think with over 90 organizations and more than 150 million events captured on the system, we really are seeing the proof.”

The initiative now includes Germany-based Hamburg Sud, which Maersk bought last year for $4 billion, and U.S.-based Pacific International Lines, along with numerous customs authorities, cargo owners and freight forwarders. Collectively, the shipping companies account for more than 20% of the global supply chain market share, with 20 port and terminal operators in Singapore, the U.S., Holland, and more serving 235 marine gateways around the world.

TradeLens, in practice, gives users access to their own blockchain node, similar to the nodes on the bitcoin blockchain that let users send money without the need for banks. In the case of TradeLens, a shipper can cut out as many as five middlemen, even for simple queries such as identifying the location of a shipping container.
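The underlying idea, an append-only hash-chained log that every participant can read and verify for themselves, can be sketched in a few lines of Python. This is a toy illustration of the shared-ledger concept, not TradeLens code; the container IDs and ports are invented:

```python
import hashlib
import json

def _hash(payload: dict) -> str:
    """Deterministic SHA-256 digest of a block's contents."""
    return hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()
    ).hexdigest()

class Ledger:
    """Append-only, hash-chained event log (toy illustration only)."""

    def __init__(self):
        self.blocks = []

    def append(self, event: dict) -> None:
        # Each block commits to the previous block's hash, chaining them.
        prev = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        block = {"prev": prev, "event": event}
        block["hash"] = _hash({"prev": prev, "event": event})
        self.blocks.append(block)

    def verify(self) -> bool:
        # Any tampered event or broken link makes verification fail.
        prev = "0" * 64
        for b in self.blocks:
            if b["prev"] != prev:
                return False
            if b["hash"] != _hash({"prev": b["prev"], "event": b["event"]}):
                return False
            prev = b["hash"]
        return True

    def locate(self, container_id: str):
        # Latest recorded location, read straight off the shared log --
        # no intermediary required.
        for b in reversed(self.blocks):
            if b["event"].get("container") == container_id:
                return b["event"].get("location")
        return None
```

Because every participant holds the same verifiable chain, a shipper can answer “where is my container?” directly from the log instead of asking a chain of intermediaries.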

At stake is what Transparency Market Research expects will be a $32.9 billion global supply-chain software business by 2026. As far back as 2015, the World Trade Organization estimated that simplifying the global supply chain could reduce costs among users by as much as 17.5%, with developing nations expected to see as much as a 35% increase in exports as they leapfrog over legacy technology platforms.

The cooperative effort between Maersk and IBM still needs to make money. To do so, the two companies have shifted the business model from a stand-alone joint venture to one in which the intellectual property that comprises TradeLens is co-owned and jointly developed.

But the new cooperative structure could unnerve some potential customers. To offset concerns, the CEO of Maersk’s New Jersey-based TradeLens operation, Mike White, says a number of barriers have been put in place, including contractual restrictions on sharing data and technical barriers in the form of the independently managed blockchain nodes.

If successful, TradeLens might embody the common refrain among blockchain users that “all ships will rise” when they use a shared, distributed ledger. Facing decreasing global freight rates, Maersk last quarter became just the latest container shipper to cut its profit forecast.

Among competitors aiming to cut those costs and increase profits is the former head of blockchain at accounting firm Deloitte, who earlier this year announced he was raising $100 million to launch a supply chain platform using the ethereum blockchain. Similarly, blockchain startup Fr8 is preparing to raise $60 million via an initial coin offering to build its own blockchain logistics platform.

“The value proposition is for all ecosystem participants,” said White. “The ability to get better access to more real-time data, to have better visibility end-to-end, and to be able to connect one-to-many in a more efficient and effective way, makes the cost of getting that information lower, makes the ability to manage your own business better, and makes the ability to service your customers that much stronger.”

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.

 

IBM AI Reference Architecture Promises a Fast Start

August 10, 2018

Maybe somebody in your organization has already fooled around with a PoC for an AI project. Maybe you already want to build it out and even put it into production. Great! According to IBM, by 2020 organizations across a wide array of industries that don’t deploy AI will be in trouble. So those folks already fooling around with an AI PoC will probably be just in time.

To help organizations pull the complicated pieces of AI together, IBM, with the help of IDC, put together its AI Infrastructure Reference Architecture. This AI reference architecture, as IBM explains, is intended to be used by data scientists and IT professionals who are defining, deploying, and integrating AI solutions into an organization. It describes an architecture that will support a promising proof of concept (PoC) or experimental application and sustain growth into production as a multitenant system that can continue to scale to serve a larger organization, while integrating into the organization’s existing IT infrastructure. If this sounds like you, check it out. The document runs less than 30 pages and is free.

In truth, AI, for all the wonderful things you’d like to do with it, is more a system vendor’s dream than yours. AI applications, especially deep learning systems, which parse exponentially greater amounts of data, are extremely demanding and require powerful parallel processing capabilities. Standard CPUs, like those populating the racks of servers in your data center, cannot execute AI tasks efficiently. At some point, AI users will have to overhaul their infrastructure to deliver the required performance if they want to achieve their AI dreams and expectations.
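A back-of-envelope calculation shows why. Every figure below is a rough, order-of-magnitude assumption chosen for illustration, not a benchmark:

```python
# Why deep learning training outgrows general-purpose CPUs: rough FLOP math.
# All figures are order-of-magnitude assumptions, not measured numbers.

params = 25_000_000            # a mid-sized image model, ~25M parameters
flops_per_image = 2 * params   # ~2 FLOPs per parameter per forward pass
images = 1_000_000             # one pass over a million training images
epochs = 90                    # a typical-length training run

total_flops = flops_per_image * images * epochs  # forward passes only;
                                                 # backpropagation roughly
                                                 # triples this

cpu_flops_per_sec = 1e11   # ~100 GFLOP/s sustained, generous for a CPU socket
gpu_flops_per_sec = 1e13   # ~10 TFLOP/s for a training-class GPU accelerator

cpu_hours = total_flops / cpu_flops_per_sec / 3600
gpu_hours = total_flops / gpu_flops_per_sec / 3600
```

Under these assumptions the same run takes roughly a hundred times longer on the CPU, and that multiplies across every retraining cycle, which is the wall IDC is talking about.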

Therefore, IDC recommends that businesses developing AI capabilities, or scaling existing ones, plan to hit this wall deliberately and in a controlled fashion: do it knowingly and in full possession of the details needed to make the next infrastructure move. IDC also recommends doing it in close collaboration with a server vendor (guess who wants to be that vendor) who can guide them from early stage to advanced production to full exploitation of AI capabilities throughout the business.

IBM assumes everything is moving to AI as quickly as it can, but that may not be the case for you. AI workloads include applications based on machine learning and deep learning that use unstructured data and information as fuel to drive results. Some businesses are well on their way to deploying AI workloads, others are experimenting, and a third group is still evaluating what AI applications could mean for their organization. At all three stages, the variables that, if addressed properly, together make up a well-working and business-advancing solution are numerous.

To get a handle on these variables, IT executives and LOB managers often form a special committee to actively consider their organization’s approach to AI. Nobody wants to invest in AI for the sake of AI; the vendors will get rich enough as it is. Also, there is no need to reinvent the wheel; many well-defined use cases exist that are applicable across industries. Many are already noted in the AI reference guide.

Here is a sampling:

  • Fraud analysis and investigation (banking, other industries)
  • Regulatory intelligence (multiple industries)
  • Automated threat intelligence and prevention systems (many industries)
  • IT automation, a sure winner (most industries)
  • Sales process recommendation and automation
  • Diagnosis and treatment (healthcare)
  • Quality management investigation and recommendation (manufacturing)
  • Supply and logistics (manufacturing)
  • Asset/fleet management, another sure winner (multiple industries)
  • Freight management (transportation)
  • Expert shopping/buying advisory or guide

Notes IDC: Many of these can be developed in-house, are available as commercial software, or can be obtained via SaaS in the cloud.

Whatever you think of AI, you can’t avoid it. AI will penetrate your company embedded in the new products and services you buy.

So where does IBM hope your AI effort ends up? A Power9 system, hundreds of GPUs, and PowerAI. Are you surprised?

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.

