Posts Tagged ‘Cloud’

Attract Young Techies to the Z

September 14, 2018

A decade ago DancingDinosaur was at a major IBM mainframe event, looked around at the analysts milling about, and noticed all the gray hair and balding heads, very few women, and, worse, few people who appeared to be under 40; not exactly a crowd that would excite young computer geeks. At the IBM introduction of the Z it had become even worse: more gray or balding heads, mine included, and none of the few Z professional female analysts under 40 that I knew were there at all.

millions of young eager to join the workforce (Image by © Reuters/CORBIS)

An IBM analyst relations person agreed, noting that she was under pressure from IBM to get some young techies at Z events.  Sounded like Mission Impossible to me. But my thinking has changed in the last couple of weeks. A couple of discussions with 20-something techies suggested that Zowe has the potential to be a game changer as far as young techies are concerned.

DancingDinosaur covered Zowe two weeks ago here. It represents the first open source framework for z/OS. As such it provides solutions for development and operations teams to securely manage, control, script, and develop on the mainframe like any other cloud platform.

Or, to put it another way, with Zowe, IBM and partners CA Technologies and Rocket Software are enabling users to access z/OS through a new open-source framework. Zowe, more than anything before it, brings together generations of systems that were not designed to handle global networks of sensors and devices. Now, decades after IBM brought Linux to the mainframe, IBM, CA, and Rocket Software are introducing Zowe as a new open-source software framework that bridges the divide between modern challenges like IoT and the mainframe.

Says Sean Grady, a young (under 30) software engineer at Rocket Software: “Zowe to me is really cool, the first time I could have a sustained mainframe conversation with my peers.” Their first reactions were really cynical, he recalls; Zowe changed that. “My peers know Linux tools really well,” he notes.

The mainframe is perceived as a separate thing, something my peers couldn’t touch, he added. But Linux is something his peers know really well, so through Zowe the mainframe gains tools they know and like. Suddenly, the mainframe is no longer a separate, alien world but a familiar place. They can do the kind of work they like to do, in a way they like to do it, using familiar tools.

And they are well paid, much better than they can get coding here-and-gone mobile apps for some startup. Grady reports his starting offers ran up to $85k, not bad for a guy just out of college. And with a few years of experience now you can bet he’s doing a lot better than that.

The point of Zowe is to enable any developer, but especially new developers who don’t know or care about the mainframe, to manage, control, script, and develop on the mainframe like any other cloud platform. Additionally, Zowe allows teams to use the same familiar, industry-standard, open-source tools they already know to access mainframe resources and services.

The mainframe is older than many of the programmers IBM hopes Zowe will attract. But Zowe opens new possibilities for next-generation applications at mainframe shops desperately needing the new mission-critical applications their customers are clamoring for. Already it appears ready to radically reduce the learning curve for the next generation.

Initial open source Zowe modules will include an extensible z/OS framework that provides new APIs and z/OS REST services to transform enterprise tools and DevOps processes that can incorporate new technology, languages, and workflows. It also will include a unifying workspace providing a browser-based desktop app container that can host both traditional and modern user experiences and is extensible via the latest web toolkits. The framework will also incorporate an interactive and scriptable command-line interface that enables new ways to integrate z/OS in cloud and distributed environments.
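
To make that concrete, here is a rough sketch (not production code, and not an IBM example) of what calling one of the z/OS REST services Zowe builds on, the z/OSMF jobs interface, might look like from Python. The host name, credentials, and certificate handling are placeholders; check your site’s z/OSMF documentation for the exact endpoints and security setup.

```python
# Rough sketch: list a user's jobs through the z/OSMF REST jobs service that
# Zowe builds on. Hostname, port, and credentials are placeholders.
import requests

ZOSMF = "https://zosmf.example.com:443"      # hypothetical z/OSMF host
resp = requests.get(
    f"{ZOSMF}/zosmf/restjobs/jobs",
    params={"owner": "IBMUSER", "prefix": "*"},
    headers={"X-CSRF-ZOSMF-HEADER": "*"},    # CSRF header z/OSMF expects on REST calls
    auth=("IBMUSER", "secret"),              # placeholder credentials
    verify=False,                            # assumes a self-signed cert; don't do this in production
)
resp.raise_for_status()
for job in resp.json():
    print(job.get("jobname"), job.get("jobid"), job.get("status"))
```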

These modules represent just the start. More will be developed over time, enabling development teams to manage and develop on the mainframe like any other cloud platform. Additionally, the modules reduce risk and cost by allowing teams to use familiar, industry-standard, open source tools that can accelerate mainframe integration into their enterprise DevOps initiatives. Just use Zowe to entice new mainframe talent.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com.

Can IBM find a place for Watson?

September 7, 2018

After beating two human Jeopardy champions three times in a row in 2011, IBM’s Watson has been hard-pressed to come up with a comparable winning streak. Initially IBM appeared to expect its largest customers to buy richly configured Power servers to run Watson on prem. When it didn’t get enough takers, the company moved Watson to the cloud, where companies could lease it for major knowledge-driven projects. When that didn’t catch on, IBM started to lease Watson’s capabilities by the drink, promising to solve problems in onesies and twosies.

Jeopardy champs lose to Watson

Today Watson is promising to speed AI success through IBM’s Watson Knowledge Catalog. As IBM puts it: IBM Watson Knowledge Catalog powers intelligent, self-service discovery of data, models, and more; activating them for artificial intelligence, machine learning, and deep learning. Access, curate, categorize and share data, knowledge assets, and their relationships, wherever they reside.

DancingDinosaur has no doubt that Watson is stunning technology and has been rooting for its success since that first Jeopardy round seven years ago. Over that time, Watson and IBM have become a case study in how not to price, package, and market powerful yet expensive technology. The Watson Knowledge Catalog is yet another pricing and packaging experiment.

Based on the latest information online, Watson Knowledge Catalog is priced according to the number of provisioned catalogs and discovery connections. There are two plans available: Lite and Professional. The Lite plan allows 1 catalog and 5 free discovery connections, while the Professional plan provides unlimited numbers of both. Huh? This statement begs for clarification, and there probably is a lot of information and fine print required to answer the numerous questions the description raises, but life is too short for DancingDinosaur to rummage around on the Watson Knowledge Catalog site to look for answers. Doesn’t this seem like something Watson itself should be able to clarify with a single click?

But no, that is too easy. Instead IBM takes the high road, which DancingDinosaur calls the education track. Notes Jay Limburn, Senior Technical Staff Member and IBM Offering Manager: there are two main challenges that might impede you from realizing the true value of your data and slow your journey to adopting artificial intelligence (AI). They are 1) inefficient data management and 2) finding the right tools for all data users.

Actually, the issues start even earlier. In attempting AI most established organizations start at a disadvantage, notes IBM. For example:

  • Most enterprises do not know what and where their data is
  • Data science and compliance teams are handicapped by the lack of data accessibility
  • Enterprises with legacy data are even more handicapped than digitally savvy startups
  • AI projects will expose problems with limited data and poor data quality; many will fail for that reason alone
  • The need to differentiate through monetization increases in importance with AI

These are not new. People have been whining about this since the most rudimentary data mining attempts were made decades ago. If there is a surprise it is that they have not been resolved by now.

Or maybe they finally have with the IBM Watson Knowledge Catalog. As IBM puts it, the company will deliver what promises to be the ultimate data catalog, one that actually illuminates data:

  • Knows what data your enterprise has, where it resides, where it came from, and what it means
  • Provides quick access to it
  • Ensures protection of use
  • Exploits machine learning for intelligence and automation
  • Enables data scientists, data engineers, stewards, and business analysts
  • Is embeddable everywhere for free, with premium features available in paid editions

OK, after seven years Watson may be poised to deliver, and it has little to do with Jeopardy and everything to do with a rapidly growing data catalog market. According to a Research and Markets report, the data catalog market is expected to grow from $210 million in 2017 to $620 million by 2022. How many sales of the Professional version would it take to give IBM a leading share?

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.

Can Zowe Bring Young Developers to the Z?

August 31, 2018

Are you ever frustrated by the Z? As powerful as they are, mainframes remain a difficult nut to crack, particularly for newcomers who have grown up with easier technologies. Even Linux on Z is not as simple or straightforward as on other platforms. This poses a problem for Z-based shops that are scrambling to replace retiring mainframers.

Shopping via smartphone (IBM – Jon Simon/Feature Photo Service)

Certainly other organizations, mainly mainframe ISVs like Compuware and Syncsort, have succeeded in extending the GUI deeper into the Z but that alone is not enough. It remains too difficult for newcomers to take their newly acquired computer talents and readily apply them to the mainframe. Maybe Zowe can change this.

And here’s how: Recent surveys show that flexibility, agility, and speed are key. Single platforms are out; multi-platform and multi-cloud are in. IBM’s reply: let’s bring things together with the announcement of Zowe, pronounced like joey starting with a z. Zowe represents the first open source framework for z/OS. As such it provides solutions for development and operations teams to securely manage, control, script, and develop on the mainframe like any other cloud platform. Launched with partners CA Technologies and Rocket Software along with the support of the Open Mainframe Project, the goal is to drive innovation for the community of next-generation mainframe developers and enable interoperability and scalability between products. Zowe promotes a faster team on-ramp to mainframe productivity, collaboration, knowledge sharing, and communication.

In short, IBM and partners are enabling users to access z/OS using a new open-source framework. Zowe, more than anything before it, brings together generations of systems that were not designed to handle global networks of sensors and devices. Now, decades after IBM brought Linux to the mainframe, IBM, CA, and Rocket Software are introducing Zowe, a new open-source software framework that bridges the divide between modern challenges like IoT and the mainframe.

Zowe has four components:

  1. Zowe APIs: z/OS has a set of Representational State Transfer (REST) operating system APIs. These are made available by the z/OS Management Facility (z/OSMF). Zowe uses these REST APIs to submit jobs, work with the Job Entry Subsystem (JES) queue, and manipulate data sets. Zowe Explorers are visual representations of these APIs that are wrapped in the Zowe web UI application. Zowe Explorers create an extensible z/OS framework that provides new z/OS REST services to enterprise tools and DevOps processes.
  2. Zowe API Mediation Layer: This layer has several key components, including an API Gateway built using Netflix Zuul and Spring Boot technology that forwards API requests to the appropriate service endpoint, and a REST API Catalog that publishes APIs and their associated documentation in a service catalog. There is also a Discovery Service, built on Eureka and Spring Boot technology, acting as the central registration point for the API Gateway: it accepts announcements from REST services and maintains a repository of active services.
  3. Zowe Web UI: Named zLUX, the web UI modernizes and simplifies working on the mainframe and allows the user to create modern applications. This is what will enable non-mainframers to work productively on the mainframe. The UI works with the underlying REST APIs for data, jobs, and subsystems, and presents the information in a full-screen visual mode rather than a command-line interface.
  4. Zowe Command Line Interface (CLI): Allows users to interact with z/OS from a variety of other platforms, such as cloud or distributed systems; submit jobs; issue Time Sharing Option (TSO) and z/OS console commands; integrate z/OS actions into scripts; and produce responses as JSON documents. With this extensible and scriptable interface, you can tie mainframes into the latest distributed DevOps pipelines and build in automation (see the sketch after this list).
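
As an illustration of that scripting angle, here is a hypothetical sketch of driving the Zowe CLI from Python, the kind of automation item 4 points at. It assumes the Zowe CLI is installed and a z/OSMF profile is already configured; command names, flags, and the JSON response layout can vary by release, so treat this as a sketch and check zowe --help on your system.

```python
import json
import subprocess

def zowe(*args):
    """Run a Zowe CLI command and return its parsed JSON response."""
    out = subprocess.run(
        ["zowe", *args, "--response-format-json"],  # ask the CLI for JSON output
        capture_output=True, text=True, check=True,
    )
    return json.loads(out.stdout)

# List jobs owned by a placeholder TSO user ID and print a short summary.
response = zowe("zos-jobs", "list", "jobs", "--owner", "IBMUSER")
for job in response.get("data", []):
    print(job.get("jobname"), job.get("jobid"), job.get("status"))
```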

The point of all this is to enable any developer to manage, control, script, and develop on the mainframe like any other cloud platform. Additionally, Zowe allows teams to use the same familiar, industry-standard, open-source tools they already know to access mainframe resources and services too.

The mainframe may be older than many of the programmers IBM hopes Zowe will attract. But Zowe opens new possibilities for next-generation applications and for mainframe shops desperately needing the new mission-critical applications their customers are clamoring for. This should radically reduce the learning curve for the next generation while making experienced professionals more efficient. Start your free Zowe trial here. BTW, Zowe’s code will be made available under the open-source Eclipse Public License 2.0.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.


Travelport and IBM launch industry AI travel platform

August 24, 2018

Uh oh, if you have been a little sloppy with travel expenses, it’s time to clean up your travel act before AI starts monitoring your reimbursed travel. IBM and Travelport are teaming up to offer the industry’s first AI-based travel platform to intelligently manage corporate travel spend while leveraging IBM Watson capabilities to unlock previously unavailable data insights.

As IBM explains it, the new travel platform will be delivered via the IBM Cloud and exploits IBM Watson capabilities to intelligently track, manage, predict, and analyze travel costs to fundamentally change how companies manage and optimize their travel programs. Typically, each work group submits its own travel expenses, and reconciliation and reimbursement can be handled by different groups.

With annual global business travel spend estimated to reach a record $1.2 trillion this year, as projected by the Global Business Travel Association, corporate travel managers need new ways to reduce costs. That requires consolidating and normalizing all the information. Currently, for businesses to get a full picture of travel patterns, a travel manager might have to sift through data silos from travel agencies, cards, expense systems, and suppliers for end-to-end visibility of spend and compliance across all travel subcategories. This, however, is usually undertaken from a historical view rather than in real time, which is one reason why reimbursement can take so long. As an independent contractor, DancingDinosaur generally has to submit travel expenses at the end of the project and wait forever for payment.

IBM continues: The new platform, dubbed Travel Manager, features advanced artificial intelligence and provides cognitive computing and predictive data analytics using what-if scenarios, while integrating travel and expense data to help travel management teams, procurement category managers, business units, finance, and human resource departments optimize their travel programs, control spend, and enhance the end-traveler experience. Maybe they will even squeeze independent contractors into the workflow.

The special sauce in all of this results from how IBM combines data with Travelport, itself a travel commerce platform, to produce IBM Travel Manager as an AI platform that oversees corporate travel expenses. In the process, IBM Travel Manager gives users complete, unified access to previously siloed information, which, when combined with travel data from the Travelport global distribution system (GDS), can then be used to create real-time predictive analytics recommending how, say, adjustments in travel booking behavior patterns can positively impact a company’s travel budget.

Travelport, itself, is a heavyweight in the travel industry. It relies on technology to make the experience of buying and managing travel better. Through its travel commerce platform it provides distribution, technology, payment and other capabilities for the $7 trillion global travel and tourism industry. The platform facilitates travel commerce by connecting the world’s leading travel providers with online and offline travel buyers in a proprietary (B2B) travel marketplace.

The company helps with all aspects of the travel supply chain from airline merchandising, hotel content and distribution, mobile commerce to B2B payments. Last year its platform processed over $83 billion of travel spend, helping its customers maximize the value of every trip.

IBM Travel Manager combines and normalizes data from diverse sources, allowing for more robust insights and benchmarking than other reporting solutions. It also taps AI to unlock previously unavailable insights from multiple internal and external data sources. The product is expected to be commercially available to customers through both IBM and Travelport.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.

IBM Continues Cranking Up Blockchain

August 16, 2018


Somehow between quantum computing, AI, and hybrid clouds IBM is managing to squeeze in blockchain as an active, growing business. For instance, a previously unnamed collaborative effort between the world’s largest shipping company, Maersk, and IBM has now grown to 92 participants and been dubbed TradeLens.

IBM has 92 participants in the TradeLens blockchain network

DancingDinosaur has long considered blockchain a natural for the Z due to its zero-downtime reliability and high certified level of security (EAL4+). The most recent models include IBM’s automated pervasive encryption. No more wasting time making decisions about what to encrypt; the Z just encrypts it all with minimal overhead penalty. Your applications and workloads won’t even notice, and compliance audits become a breeze.

TradeLens is emerging from its beta to accept early-adopter applications and announced a new custom contract service for executing complex shipping orders with fewer middlemen. “We have seen a lot of skeptics talk about the validity of blockchain solutions,” said Marie Wieck, IBM general manager and head of blockchain. “And I think with over 90 organizations and more than 150 million events captured on the system, we really are seeing the proof,” she added.

The initiative now includes Germany-based Hamburg Sud, which Maersk bought last year for $4 billion, and U.S.-based Pacific International Lines, along with numerous customs authorities, cargo owners and freight forwarders. Collectively, the shipping companies account for more than 20% of the global supply chain market share, with 20 port and terminal operators in Singapore, the U.S., Holland, and more serving 235 marine gateways around the world.

TradeLens, in practice, gives users access to their own blockchain node, similar to the nodes on the bitcoin blockchain that let users send money without the need for banks. In the case of TradeLens, a shipper can cut out as many as five middlemen, even for simple queries such as identifying the location of a shipping container.

At stake is what Transparency Market Research expects will be a $32.9 billion global supply-chain software business by 2026. As far back as 2015, the World Trade Organization estimated that simplifying the global supply chain could reduce costs among users by as much as 17.5%, with developing nations expected to see as much as a 35% increase in exports as they leapfrog over legacy technology platforms.

The cooperative effort between Maersk and IBM still needs to make money. To do so, the two companies have shifted the business model from a stand-alone joint venture to one in which the intellectual property that comprises TradeLens is co-owned and jointly developed.

But the new cooperative structure could unnerve some potential customers. To offset concerns, the CEO of Maersk’s New Jersey-based TradeLens operation, Mike White, says a number of barriers have been put in place, including contractual restrictions on sharing data and technical barriers in the form of the independently managed blockchain nodes.

If successful, TradeLens might literally embody the common refrain among blockchain users that “all ships will rise” when they use a shared, distributed ledger. Facing decreasing global freight rates, Maersk last quarter became just the latest container shipper to cut profit forecasts.

Among competitors aiming to cut those costs and increase profits is the former head of blockchain at accounting firm Deloitte, who earlier this year announced he was raising $100 million to launch a supply chain platform using the ethereum blockchain. Similarly, blockchain startup Fr8 is preparing to raise $60 million via an initial coin offering to build its own blockchain logistics platform.

“The value proposition is for all ecosystem participants,” said White. “The ability to get better access to more real-time data, to have better visibility end-to-end, and to be able to connect one-to-many in a more efficient and effective way, makes the cost of getting that information lower, makes the ability to manage your own business better, and makes the ability to service your customers that much stronger.”

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.


FlashSystem 9100 Includes NVMe and Spectrum Software

July 20, 2018

The new IBM FlashSystem 9100 comes with all the bells and whistles included, especially NVMe and Spectrum Software. For software, IBM includes its full suite of software-defined capabilities for your data, both on premises and across public and private clouds. It also aims to modernize your infrastructure with new capabilities for private and hybrid clouds as well as optimize operations.

FlashSystem 9100 with new capabilities built-in end-to-end

It also includes AI-assisted, next-generation technology for multi-cloud environments. This should allow you to optimize business-critical workloads, streamline your technology infrastructure, and prepare for the emerging era of multi-cloud digitized business.

The IT infrastructure market is changing so quickly and so radically that technology that might still have been under consideration until recently can no longer make it onto the short list. DancingDinosaur, for example, won’t even attempt to create an ROI analysis of hard disk for primary storage. Other than straight-out falsification, the numbers couldn’t work.

The driver behind this, besides advances in technology price/performance and what seems like a return to Moore’s Law levels of gains, is the success of the big hyperscalers, who are able to sustain amazing price and performance levels. DancingDinosaur readers are not hyperscalers, but they are capitalizing on hyperscaler gains in the cloud, and they can emulate hyperscaler strategies in their data centers wherever possible.

IBM puts it a little more conventionally: As more and more organizations move to a multi-cloud strategy, they take on more data-driven needs, such as artificial intelligence (AI), machine learning (ML), and containers, it writes. All of these new needs require a storage solution powerful enough to address them while being built on proven technology and supporting both existing and evolving data centers. IBM’s response to these issues is the expansion of its FlashSystem line to include the new 9100 NVMe end-to-end solution while piling on the software.

Aside from being an all-NVMe storage solution, IBM is leveraging several IBM technologies such as IBM Spectrum Virtualize and IBM FlashCore, as well as software from IBM’s Spectrum family. This combination of software and technology helps the 9100 store up to 2PB of data in a 2U space (32PB in a larger rack). FlashCore also enables consistent microsecond latency, with IBM quoting performance of 2.5 million IOPS, 34GB/s, and 100μs latency for a single 2U array. For storage, the FlashSystem 9100 uses FlashCore modules with an NVMe interface. These 2.5” drives come in 4.8TB, 9.6TB, and 19.2TB capacities with up to 5:1 compression. The drives leverage 64-layer 3D TLC NAND and can be configured with as few as four drives per system. You might not be a hyperscaler, but this is the kind of stuff you need if you hope to emulate one.
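
A quick back-of-the-envelope check of that 2PB-per-2U figure, using the drive capacity and compression ratio quoted above. The 24-slot enclosure count is my assumption, not an IBM number, so treat this as a rough sanity check rather than a spec.

```python
# Rough capacity math for a fully populated 2U enclosure (assumptions noted).
DRIVE_TB = 19.2      # largest FlashCore module capacity quoted above
SLOTS = 24           # assumed number of NVMe drive slots in the 2U enclosure
COMPRESSION = 5      # "up to 5:1" data reduction quoted above

raw_tb = DRIVE_TB * SLOTS
effective_pb = raw_tb * COMPRESSION / 1000
print(f"raw: {raw_tb:.0f} TB, effective: {effective_pb:.1f} PB")
# raw: 461 TB, effective: 2.3 PB -- in the ballpark of the 2PB claim, which
# presumably nets out capacity reserved for RAID, spares, and metadata.
```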

To do this, IBM packs in the goodies. For starters, it is NVMe-accelerated and multi-cloud enabled, and it goes beyond the usual flash array: a 100% NVMe end-to-end enterprise flash array that includes NVMe IBM FlashCore modules and industry-standard NVMe SSDs. It also supports physical, virtual, and Docker environments.

In addition, the system includes IBM Storage Insights for AI-empowered predictive analytics, storage resource management, and support delivered over the cloud. Also, it offers Spectrum Storage Software for array management, data reuse, modern data protection, disaster recovery, and containerization (how it handles Docker). Plus, IBM adds:

  • IBM Spectrum Virtualize
  • IBM Spectrum Copy Data Management
  • IBM Spectrum Protect Plus
  • IBM Spectrum Virtualize for Public Cloud
  • IBM Spectrum Connect
  • FlashSystem 9100 Multi-Cloud Solutions

And just in case you think you are getting ahead of yourself, IBM is adding what it calls blueprints. As IBM explains them: the blueprints take the form of three pre-validated, cloud-focused solution plans.

  1. Data Reuse, Protection and Efficiency solution leverages the capabilities of IBM Spectrum Protect Plus and IBM Spectrum Copy Data Management (CDM) to provide enhanced data protection features for virtual applications with powerful data copy management and reuse functionality both on premises and in the cloud.
  2. Business Continuity and Data Reuse solution leverages IBM Spectrum Virtualize for Public Cloud to extend data protection and disaster recovery capabilities into the IBM Cloud, as well as all the copy management and data reuse features of IBM Spectrum CDM.
  3. Private Cloud Flexibility and Data Protection solution enables simplified deployment of private clouds, including the technology needed to implement container environments, and all of the capabilities of IBM Spectrum CDM to manage copy sprawl and provide data protection for containerized applications.

The blueprints may be little more than an IBM shopping list that leaves you as confused as before and a little poorer. Still, the FlashSystem 9100, along with all of IBM’s storage solutions, comes with Storage Insights, the company’s enterprise, AI-based predictive analytics, storage resource management, and support platform delivered over the cloud. If you try any blueprint, let me know how it works, anonymously of course.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.


Hybrid Cloud to Streamline IBM Z

June 27, 2018

2020 is the year, according to IDC,  when combined IT infrastructure spending on private and public clouds will eclipse spending on traditional data centers. The researcher predicts the public cloud will account for 31.68 percent of IT infrastructure spending in 2020, while private clouds will take a 19.82 percent slice of the spending pie, totaling more than half (51.5 percent) of all infrastructure spending for the first time, with the rest going to traditional data centers.

Source: courtesy of IBM

There is no going back. By 2021 IDC expects the balance to continue tilting further toward the cloud, with combined public and private cloud dollars making up 53.15 percent of infrastructure spending. Enterprise spending on cloud, according to IDC, will grow to over $530 billion as over 90 percent of enterprises use a mix of multiple cloud services and platforms, both on and off premises.

Technology customers want choices. They want to choose their access device, interface, deployment options, cost and even their speed of change. Luckily, today’s hybrid age enables choices. Hybrid clouds and multi-cloud IT offer the most efficient way of delivering the widest range of customer choices.

For Z shops, this shouldn’t come as a complete surprise. IBM has been preaching the hybrid gospel for years, at least since x86 machines began making significant inroads into its platform business. The basic message has always been the same: center the core of your business on the mainframe and then build around it, using x86 if you must, but now try LinuxONE and hybrid clouds, both public and on-premises.

For many organizations a multi-cloud strategy using two or more different clouds, public or on-premises, offers the fastest and most efficient way of delivering the maximum in choice, regardless of your particular strategy. For example, one cloud might handle compute while another handles storage. Or, an organization might use different clouds for different purposes: a cloud for finance, another for R&D, and yet another for DevOps.

The reasoning behind a multi-cloud strategy can also vary. Reasons can range from risk mitigation, to the need for specialized functionality, to cost management, analytics, security, flexible access, and more.

Another reason for a hybrid cloud strategy, which should resonate with DancingDinosaur readers, is modernizing legacy systems. According to Gartner, by 2020, every dollar invested in digital business innovation will require enterprises to spend at least three times that to continuously modernize the legacy application portfolio. In the past, such legacy application portfolios have often been viewed as a problem subjected to large-scale rip-and-replace efforts in desperate, often unsuccessful attempts to salvage them.

With the growth of hybrid clouds, data center managers instead can manage their legacy portfolio as an asset by mixing and matching capabilities from various cloud offerings to execute business-driven modernization. This will typically include microservices, containers, and APIs to leverage maximum value from the legacy apps, which will no longer be an albatross but a valuable asset.

While the advent of multi-clouds or hybrid clouds may appear to complicate an already muddled situation, they actually provide more options and choices as organizations seek the best solution for their needs at their price and terms.

With the Z this may be easier done than it initially sounds. “Companies have lots of records on Z, and the way to get to these records is through APIs, particularly REST APIs,” explains Juliet Candee, IBM Systems Business Continuity Architecture. Start with the IBM Z Hybrid Cloud Architecture. Then, begin assembling catalogs of APIs and leverage z/OS Connect to access popular IBM middleware like CICS. By using z/OS Connect and APIs through microservices, you can break monolithic systems into smaller, more composable and flexible pieces that contain business functions.

Don’t forget LinuxONE, another Z but optimized for Linux and available at a lower cost. With the LinuxONE Rockhopper II, the latest slimmed down model, you can run 240 concurrent MongoDB databases executing a total of 58 billion database transactions per day on a single server. Accelerate delivery of your new applications through containers and cloud-native development tools, with up to 330,000 Docker containers on a single Rockhopper II server. Similarly, lower TCO and achieve a faster ROI with up to 65 percent cost savings over x86. And the new Rockhopper II’s industry-standard 19-inch rack uses 40 percent less space than the previous Rockhopper while delivering up to 60 percent more Linux capacity.
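
To picture the container-native pattern described above, here is a minimal sketch of starting a MongoDB container programmatically with the Docker SDK for Python. The image tag, port mapping, and the availability of an s390x build of the image are assumptions on my part; on a LinuxONE box Docker would need an s390x-compatible image.

```python
import docker

client = docker.from_env()                 # talk to the local Docker daemon
mongo = client.containers.run(
    "mongo:4.0",                           # illustrative tag; an s390x-compatible image is assumed
    name="mongo-demo",
    ports={"27017/tcp": 27017},            # expose MongoDB's default port
    detach=True,
)
print(mongo.name, mongo.status)
# Later, tear it down with: mongo.stop(); mongo.remove()
```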

This results in what Candee describes as a new style of building IT that involves much smaller components, which are easier to monitor and debug. Then, connect it all to IBM Cloud on Z using secure Linux containers. This could be a hybrid cloud combining IBM Cloud Private and an assortment of public clouds along with secure zLinux containers as desired.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.

Please note: DancingDinosaur will be away for the first 2 weeks of July. The next piece should appear the week of July 16 unless the weather is unusually bad.

IBM Introduces a Reference Architecture for On-Premise AI

June 22, 2018

This week IBM announced an AI infrastructure Reference Architecture for on-premises AI deployments. The architecture promises to address the challenges organizations face experimenting with AI PoCs, growing into multi-tenant production systems, and then expanding to enterprise scale while integrating into an organization’s existing IT infrastructure.

The reference architecture includes, according to IBM, a set of integrated software tools built on optimized, accelerated hardware for the purpose of enabling organizations to jump-start AI and deep learning projects, speed time to model accuracy, and provide enterprise-grade security, interoperability, and support. IBM’s graphic above should give you the general picture.

Specifically, IBM’s AI reference architecture should support iterative, multi-stage, data-driven processes or workflows that entail specialized knowledge, skills, and, usually, a new compute and storage infrastructure. Still, these projects have many attributes that are familiar to traditional CIOs and IT departments.

The first of these is that the results are only as good as the data going into them, and model development depends on having a lot of data and having that data in the format expected by the deep learning framework. Surprised? You have been hearing this for decades as GIGO (Garbage In, Garbage Out). The AI process also is iterative: repeatedly looping through data sets and tunings to develop more accurate models, and then comparing new data in the model to the original business or technical requirements to refine the approach. In this sense, the AI reference model is no different from IT 101, an intro course for wannabe IT folks.

But AI doesn’t stay simplistic for long. As the reference architecture puts it, AI is a sophisticated, complex process that requires specialized software and infrastructure. That’s where IBM’s PowerAI Platform comes in. Most organizations start with small pilot projects bound to a few systems and data sets but grow from there.

As projects grow beyond the first test systems, however, it is time to bulk up an appropriate storage and networking infrastructure. This will allow it to sustain growth and eventually support a larger organization.

The trickiest part of AI and the part that takes inspired genius to conceive, test, and train is the model. The accuracy and quality of a trained AI model are directly affected by the quality and quantity of data used for training. The data scientist needs to understand the problem they are trying to solve and then find the data needed to build a model that solves the problem.

Data for AI is separated into a few broad sets: the data used to train and test the models, the data analyzed by the models, and the archived data that may be reused. This data can come from many different sources such as traditional organizational data from ERP systems, databases, data lakes, sensors, collaborators and partners, public data, mobile apps, social media, and legacy data. It may be structured or unstructured in many formats such as file, block, object, Hadoop Distributed File System (HDFS), or something else.

Many AI projects begin as a big data problem. Regardless of how it starts, a large volume of data is needed, and it inevitably needs preparation, transformation, and manipulation. But it doesn’t stop there.

AI models require the training data to be in a specific format; each model has its own and usually different format. Invariably the initial data is nowhere near those formats. Preparing the data is often one of the largest organizational challenges, not only in complexity but also in the amount of time it takes to transform the data into a format that can be analyzed. Many data scientists, notes IBM, claim that over 80% of their time is spent in this phase and only 20% on the actual process of data science. Data transformation and preparation is typically a highly manual, serial set of steps: identifying and connecting to data sources, extracting to a staging server, tagging the data, using tools and scripts to manipulate the data. Hadoop is often a significant source of this raw data, and Spark typically provides the analytics and transformation engines used along with advanced AI data matching and traditional SQL scripts.
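
To make the preparation step less abstract, here is a small sketch of the kind of Spark job the paragraph above alludes to: pull raw CSV data out of HDFS, clean and reshape it, and write it back out for a training framework to consume. The paths, column names, and output format are hypothetical, not taken from IBM’s reference architecture.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("ai-data-prep").getOrCreate()

# Read raw data from HDFS (path and schema are illustrative).
raw = spark.read.csv("hdfs:///data/raw/transactions.csv", header=True, inferSchema=True)

prepared = (
    raw.dropna(subset=["customer_id", "amount"])          # drop incomplete records
       .withColumn("amount", F.col("amount").cast("double"))
       .withColumn("event_date", F.to_date("timestamp"))  # normalize the timestamp
       .select("customer_id", "event_date", "amount", "category")
)

# Write in a columnar format most training-ingestion pipelines can read.
prepared.write.mode("overwrite").parquet("hdfs:///data/prepared/transactions.parquet")
```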

There are two other considerations in this phase: 1) data storage and access, and 2) the speed of execution. For this—don’t be shocked—IBM recommends Spectrum Scale to provide multi-protocol support with a native HDFS connector, which can centralize and analyze data in place rather than wasting time copying and moving data. But you may have your preferred platform.

IBM’s reference architecture provides a place to start. A skilled IT group will eventually tweak IBM’s reference architecture, making it their own.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.

IBM Expands and Enhances its Cloud Offerings

June 15, 2018

IBM announced 18 new availability zones in North America, Europe, and Asia Pacific to bolster its IBM Cloud business and try to keep pace with AWS, the public cloud leader, and Microsoft. The new availability zones are located in Europe (Germany and UK), Asia-Pacific (Tokyo and Sydney), and North America (Washington, DC and Dallas).

IBM cloud availability zone, Dallas

In addition, organizations will be able to deploy multi-zone Kubernetes clusters across the availability zones via the IBM Cloud Kubernetes Service. This will simplify how they deploy and manage containerized applications and add further consistency to their cloud experience. Furthermore, deploying multi-zone clusters will have minimal impact on performance, about 2 ms latency between availability zones.
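
For a concrete sense of what a multi-zone cluster looks like from the outside, here is a small sketch using the official Kubernetes Python client to list which availability zone each worker node sits in. It assumes a kubeconfig for an IBM Cloud Kubernetes Service cluster is already set up locally and that nodes carry the standard Kubernetes topology zone label; older clusters may use the legacy failure-domain label instead.

```python
from kubernetes import client, config

config.load_kube_config()          # uses the kubeconfig downloaded for the cluster
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    labels = node.metadata.labels or {}
    zone = labels.get("topology.kubernetes.io/zone",
                      labels.get("failure-domain.beta.kubernetes.io/zone", "unknown"))
    print(f"{node.metadata.name}: zone={zone}")
```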

An availability zone, according to IBM, is an isolated instance of a cloud inside a data center region. Each zone brings independent power, cooling, and networking to strengthen fault tolerance. While IBM Cloud already operates in nearly 60 locations, the new zones add even more capacity and capability in these key centers. This global cloud footprint becomes especially critical as clients look to gain greater control of their data in the face of tightening data regulations, such as the European Union’s new General Data Protection Regulation (GDPR). See DancingDinosaur June 1, IBM preps z world for GDPR.

In its Q1 earnings IBM reported cloud revenue of $17.7 billion over the past year, up 22 percent over the previous year, but that includes two quarters of outstanding Z revenue that is unlikely to be sustained, at least until the next Z comes out, which is at least a few quarters away. AWS meanwhile reported quarterly revenue up 49 percent to $5.4 billion, while Microsoft recently reported 93 percent growth in Azure revenue.

That leaves IBM trying to catch up the old-fashioned way: by adding new cloud capabilities, enhancing existing ones, and attracting more clients to its cloud capabilities however they may be delivered. For example, IBM announced it is the first cloud provider to let developers run managed Kubernetes containers directly on bare metal servers with direct access to GPUs to improve the performance of machine-learning applications, which is critical to any AI effort. Along the same lines, IBM will extend its IBM Cloud Private and IBM Cloud Private for Data and middleware to Red Hat’s OpenShift Container Platform and Certified Containers. Red Hat already is a leading provider of enterprise Linux to Z shops.

IBM has also expanded its cloud offerings to support the widest range of platforms. Not just Z, LinuxONE, and Power9 for Watson, but also x86 and a variety of non-IBM architectures and platforms. Similarly, notes IBM, users have gotten accustomed to accessing corporate databases wherever they reside, but proximity to cloud data centers still remains important. Distance to data centers can have an impact on network performance, resulting in slow uploads or downloads.

Contrary to simplifying things, the proliferation of more and different types of clouds and cloud strategies complicates an organization’s cloud approach. Already today, companies are managing complex, hybrid public-private cloud environments. At the same time, eighty percent of the world’s data is sitting on private servers. It just is not practical, or even permissible in some cases, to move all the data to the public cloud. Other organizations run very traditional workloads that they’re looking to modernize over time as they acquire new cloud-native skills. The new IBM cloud centers can host data in multiple formats and databases, including DB2, SQLBase, PostgreSQL, or NoSQL, all exposed as cloud services, if desired.

The IBM cloud centers, the company continues, also promise common logging and services between the on-prem environment and IBM’s public cloud environment. In fact, IBM will make all its cloud services, including the Watson AI service, consistent across all its availability zones, and offer multi-cluster support, in effect enabling the ability to run workloads and do backups across availability zones.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.

IBM Continues Quantum Push

June 8, 2018

IBM continued building out its Q Network ecosystem in May with the announcement of North Carolina State University, the first university-based IBM Q Hub in North America. As a hub, NC State will focus on accelerating industry collaborations, learning, skills development, and the implementation of quantum computing.

Scientists inside an open dilution fridge

NC State will work directly with IBM to advance quantum computing and industry collaborations, as part of the IBM Q Network’s growing quantum computing ecosystem. The school is the latest Q Network member. The network consists of individuals and organizations, including scientists, engineers, and business leaders, along with forward thinking companies, academic institutions, and national research labs enabled by IBM Q. Its mission: advancing quantum computing and launching the first commercial applications.

This past November IBM announced a 50-qubit system. Shortly after, Google announced Bristlecone, which at 72 qubits tops that, at least for now. However, qubit count may not be the most important metric to focus on.

Stability, rather than the number of qubits, should be the most important metric. The big challenge today revolves around the instability of qubits. To keep qubit machines stable enough, the systems need to keep their processors extremely cold (approaching absolute zero) and protect them from external shocks. This is not something you want to build into a laptop or even a desktop. Instability leads to inaccuracy, which defeats the whole purpose. Even accidental sounds can cause the computer to make mistakes. For minimally acceptable error rates, quantum systems need an error rate of less than 0.5 percent for every two-qubit operation. To drop the error rate for any qubit processor, engineers must figure out how software, control electronics, and the processor itself can work alongside one another without causing errors.
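
An illustrative (and deliberately simplified) way to see why that 0.5 percent figure matters: if each two-qubit operation succeeds with probability 1 - p, a circuit chaining n such operations succeeds with roughly (1 - p) raised to the n, ignoring error correction and other error sources.

```python
# Simplified model of how gate errors compound across a circuit.
def circuit_success(p_gate_error: float, n_gates: int) -> float:
    return (1 - p_gate_error) ** n_gates

for n in (10, 100, 1000):
    print(n, round(circuit_success(0.005, n), 3))
# 10 0.951
# 100 0.606
# 1000 0.007  -- deep circuits quickly become useless without error correction
```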

Fifty qubits currently is considered the minimum number for serious business work. IBM’s November announcement, however, was quick to point out that this “does not mean quantum computing is ready for common use.” The system IBM developed remains extremely finicky and challenging to use, as are those being built by others. In its 50-qubit system, the quantum state is preserved for 90 microseconds, a record for the industry but still an extremely short period of time.

Nonetheless, 50 qubits have emerged as the minimum number for a (relatively) stable system to perform practical quantum computing. According to IBM, a 50-qubit machine can do things that are extremely difficult to even simulate with the fastest conventional system.

Today, IBM offers the public IBM Q Experience, which provides access to 5- and 16-qubit systems, and the open quantum software development kit, QISKit, maybe the first quantum SDK. To date, more than 80,000 users of the IBM Q Experience have run more than 4 million experiments and generated more than 65 third-party research articles.
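
For a flavor of what QISKit code looks like, here is a minimal sketch that builds a two-qubit Bell-state circuit and runs it on a local simulator. The SDK’s import paths and APIs have changed across releases, so treat the exact calls as illustrative rather than canonical.

```python
from qiskit import QuantumCircuit
from qiskit_aer import AerSimulator   # local simulator backend; packaging varies by release

qc = QuantumCircuit(2, 2)
qc.h(0)                    # put qubit 0 into superposition
qc.cx(0, 1)                # entangle qubit 0 with qubit 1
qc.measure([0, 1], [0, 1]) # measure both qubits into classical bits

counts = AerSimulator().run(qc, shots=1024).result().get_counts()
print(counts)              # expect roughly half '00' and half '11'
```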

Still, don’t expect to pop a couple of quantum systems into your data center. For the immediate future, the way to access and run qubit systems is through the cloud. IBM has put qubit systems in the cloud, where they are available to participants in its Q Network and Q Experience.

IBM has also put some of its conventional systems, like the Z, in the cloud. This raises some interesting possibilities. If IBM has both quantum and conventional systems in the cloud, can the results of one be accessed or somehow shared with the other? Hmm, DancingDinosaur posed that question to IBM managers earlier this week at a meeting in North Carolina (NC State, are you listening?).

The IBMers acknowledged the possibility, although in what form and on what timeframe wasn’t even at the point of being discussed. Quantum is a topic DancingDinosaur expects to revisit regularly in the coming months or even years. Stay tuned.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.

