Posts Tagged ‘analytics’

IBM Enhances Storage for 2019

February 14, 2019

It has been a while since DancingDinosaur last looked closely at IBM’s storage efforts. The latest 4Q18 storage briefing was actually held on Feb. 5, 2019, and was followed by more storage announcements on 2/11 and 2/12. For your sake, this blog will not delve into each of these many announcements. You can, however, find them at the previous link.

Sacramento-San Joaquin River Delta–IBM RESEARCH

As IBM likes to say whenever it is trying to convey the value of data: “data is more valuable than oil.”  Maybe it is time to update this to say data is more valuable than fresh, clean water, which is quickly heading toward becoming the most precious commodity on earth.

IBM CEO Ginni Rometty says it yet another way: “80% of the world’s data, whether it’s decades of underwriting, pricing, customer experience, risk in loans… That is all with our clients. You don’t want to share it. That is gold,” maybe even more valuable than fresh water. Whatever metaphor you choose—gold, clean water, oil, or something else you perceive as priceless—it represents to IBM the value of data. To preserve that value, the data must be economically stored, protected, made accessible, analyzed, and selectively shared. That’s where IBM storage comes in.

And IBM storage has been on a modest multi-year growth trend. Since 2016, IBM reports it has shipped 700 new NVMe systems, 850 VersaStack systems, 3,000 DS8880 systems, and 5,500 PB of capacity; attracted 6,800 new IBM Spectrum (virtualized) storage customers; and sold 3,000 Storwize all-flash systems along with 12,000 all-flash arrays.

The bulk of the 2/5 storage announcements fell into 4 areas:

  1. IBM storage for containers and cloud
  2. AI storage
  3. Modern data protection
  4. Cyber resiliency

Except for modern data protection, much of this may be new to Z and Power data centers. However, some of the new announcements will interest Z shops. In particular, 219-135, a statement of direction: IBM intends to deliver Managed-from-Z, a new feature of IBM Cloud Private for Linux on IBM Z. This will enable organizations to run and manage IBM Cloud Private applications from IBM Linux on Z or LinuxONE platforms. The new capability furthers IBM’s commitment to deliver multi-cloud and multi-architecture cloud-native technologies on the platform of the customer’s choice. Watson, too, will now be available on more platforms through the newly announced Watson Anywhere—a version of IBM’s cognitive platform that can run Watson on premises, in IBM’s cloud, or in any other cloud, be it private or public.

Another interesting addition to the IBM storage line is the FlashSystem 9100. As IBM explains it, FlashSystem 9100 combines the performance of flash and end-to-end Non-Volatile Memory Express (NVMe) with the reliability and innovation of IBM FlashCore technology and the rich features of IBM Spectrum Virtualize—all packed into a 2U enterprise-class storage system. Providing data-driven multi-cloud storage capacity, FlashSystem 9100 is deeply integrated with the software-defined (virtualized) capabilities of IBM Spectrum Storage, allowing organizations to easily add the multi-cloud solutions that best support their business.

Finally, 219-029: IBM Spectrum Protect V8.1.7 and IBM Spectrum Protect Plus V10.1.3 deliver new application support and optimization for long-term data retention. Think of it this way: as the value of data increases, you will want to retain and protect more data in more ways for longer and longer. For this you will want the kind of flexible and cost-efficient storage available through Spectrum Protect.

In addition, IBM formally announced Watson Anywhere at Think—as noted above, a version of Watson that runs on premises, in IBM’s cloud, or in any other cloud, private or public.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at technologywriter.com.

12 Ingredients for App Modernization

January 8, 2019

It is no surprise that IBM has become so enamored with the hybrid cloud. The worldwide public cloud services market is projected to grow 21.4 percent in 2018 to total $186.4 billion, up from $153.5 billion in 2017, according to Gartner.

The fastest-growing segment of the market is cloud system infrastructure services (IaaS), which is forecast to grow 35.9 percent in 2018 to reach $40.8 billion. Gartner expects the top 10 providers, often referred to as hyperscalers, to account for nearly 70 percent of the IaaS market by 2021, up from 50 percent in 2016.

Cloud computing is poised to become a “turbocharged engine powering digital transformation around the world,” states a recent Forrester report, Predictions 2019: Cloud Computing. Overall, the global cloud computing market, including cloud platforms, business services, and SaaS, will exceed $200 billion this year, expanding at more than 20 percent, the research firm predicts.

Venkat’s recipe for app modernization; courtesy of IBM

Hybrid clouds, which include two or more cloud providers or platforms, are emerging as the preferred approach for enterprises. Notes IBM: the digital economy is forcing organizations into a multi-cloud environment. Three of every four enterprises have already implemented more than one cloud. The growth of cloud portfolios in enterprises demands an agnostic cloud management platform—one that not only provides automation, provisioning, and orchestration, but also monitors trends and usage to prevent outages. No surprise here; IBM just happens to offer hybrid cloud management.

As of the start of 2019, the top seven cloud providers are AWS, Azure, Google Cloud, IBM Cloud, VMware Cloud on AWS, Oracle Cloud, and Alibaba Cloud. These top players shifted positions throughout 2018, and expect the shuffling to continue this year and probably for years to come.

Clients, notes Meenagi Venkat, Vice President of Technical Sales & Solutioning at IBM Cloud, are discovering that the real value of cloud comes in a hybrid, multi-cloud world. In this model, legacy applications are modernized with a real microservices architecture and with AI embedded in the application. He does not fully explain where the AI comes from or how it is embedded; maybe I missed something.

Driving this interest for the next couple of years, at least, is application modernization. Venkat wrote what he calls a 12-ingredient recipe for application modernization here. DancingDinosaur will highlight a couple of the ingredients below; click the preceding link to see them all.

To begin, when you modernize a large portfolio of several thousand applications in a large enterprise, you need some common approaches. At the same time, the effort must allow teams to evolve to a microservices-based organization where each microservice is designed and delivered with great independence.

Start by fostering a startup culture. A startup culture that allows for fast failure is one of the most critical ingredients when approaching a large modernization program. The modernization will involve sunsetting some applications, breaking some down, and using partner services in others. A startup culture based on methods such as the IBM Garage Method and Design Thinking will help guide the how-to of the culture shift.

Then, innovate via product design, Venkat continues. A team heavy with developers and no product folks is likely to focus on technical coolness rather than product innovation. Hence, these teams should be led by product specialists who deliver the business case for new services or client experience.

And don’t neglect security. Secure DevOps requires embedding security skills in the scrum teams with a product owner leading the team. Focusing on the product and designing security (and compliance with various regimes) in at the start will allow microservices to scale and engender trust in the data and AI layers. Venkat puts this after design and the startup culture; in truth, it should be a key part of the startup culture.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at technologywriter.com.

Factsheets for AI

December 21, 2018

Depending on when you check in on the IBM website, the primary technology trend for 2019 is quantum computing, or hybrid clouds, or blockchain, or artificial intelligence, or any of a handful of others. Maybe IBM does have enough talented people, resources, and time to do it all well now. But somehow DancingDinosaur is dubious.

There is an old tech industry saying: you can have it right, fast, or cheap—pick two. When it comes to AI, depending on your choices and patience, you could win an attractive share of the projected $83 billion AI industry by 2021, or of the estimated $200 billion AI market by 2025, according to VentureBeat.

IBM sees the technology industry at a pivotal moment in the path to mass adoption of artificial intelligence (AI). Google subsidiary DeepMind is leveraging AI to determine how to refer optometry patients. Haven Life is using AI to extend life insurance policies to people who wouldn’t traditionally be eligible, such as people with chronic illnesses and non-U.S. citizens. And Google self-driving car spinoff Waymo is tapping it to provide mobility to elderly and disabled people.

But despite the good AI is clearly capable of doing, doubts abound over its safety, transparency, and bias. IBM believes part of the problem is a lack of standard practices.

As a result, there’s no consistent, agreed-upon way AI services should be created, tested, trained, deployed, and evaluated, observes Aleksandra Mojsilovic, head of AI foundations at IBM Research and co-director of the AI Science for Social Good program. To clear up the ambiguity surrounding AI, Mojsilovic and colleagues propose voluntary factsheets, more formally called a Supplier’s Declaration of Conformity (DoC). The goal: increase the transparency of particular AI services and engender trust in them.

Such factsheets alone could provide a competitive advantage to AI offerings in the marketplace. They could also provide explainability around susceptibility to adversarial attacks—an issue that, along with fairness and robustness, must be addressed for AI services to be trusted, Mojsilovic continued. Factsheets take away the black-box perception of AI and render an AI system understandable by both researchers and developers.

Several core pillars form the basis for trust in AI systems. The first three are fairness, robustness, and explainability; later in her piece, Mojsilovic introduces a fourth pillar, lineage, which concerns an AI system’s history. Factsheets would answer questions ranging from system operation and training data to underlying algorithms, test setups and results, performance benchmarks, fairness and robustness checks, intended uses, maintenance, and retraining. More granular topics might include the governance strategies used to track the AI service’s data workflow, the methodologies used in testing, and bias mitigations performed on the dataset.
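
To make the idea tangible, here is one way a machine-readable factsheet might be represented in code. It is only a sketch loosely following the fields listed above; the class, field names, and example values are this blog’s invention, not IBM’s specification.

```python
# A hedged illustration of what a machine-readable AI factsheet might contain.
# The structure and example values are hypothetical, not IBM's DoC format.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class AIFactsheet:
    service_name: str
    intended_uses: List[str]
    training_data: str                  # provenance of the training set
    algorithms: List[str]               # underlying model families
    test_setup: str                     # how the service was evaluated
    performance_benchmarks: Dict[str, float]  # metric name -> score
    fairness_checks: List[str]          # bias tests run and their outcomes
    robustness_checks: List[str]        # e.g., adversarial-attack evaluations
    lineage: List[str] = field(default_factory=list)  # retraining / data history

# Hypothetical example instance
factsheet = AIFactsheet(
    service_name="loan-risk-scorer",
    intended_uses=["consumer credit pre-screening"],
    training_data="2012-2017 anonymized underwriting records",
    algorithms=["gradient-boosted trees"],
    test_setup="held-out 2018 applications",
    performance_benchmarks={"AUC": 0.87},
    fairness_checks=["disparate impact ratio within 0.8-1.25"],
    robustness_checks=["feature-perturbation stress test"],
)
print(factsheet.service_name, factsheet.performance_benchmarks)
```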

For natural language processing algorithms specifically, the researchers propose data statements that would show how an algorithm might be generalized, how it might be deployed, and what biases it might contain.

Natural language processing systems aren’t as fraught with controversy as, say, facial recognition, but they’ve come under fire for their susceptibility to bias.  IBM, Microsoft, Accenture, Facebook, and others are actively working on automated tools that detect and minimize bias, and companies like Speechmatics and Nuance have developed solutions specifically aimed at minimizing the so-called accent gap—the tendency of voice recognition models to skew toward speakers from certain regions. But in Mojsilovic’s view, documents detailing the ins and outs of systems—factsheets—would go a long way to restoring the public’s faith in AI.

Fairness, safety, reliability, explainability, robustness, accountability—all agree these are critical. Yet to achieve trust in AI, making progress on these issues alone will not be enough; it must be accompanied by the ability to measure and communicate a system’s performance on each of these dimensions, she wrote. Understanding and evaluating AI systems is an issue of utmost importance for the AI community, one IBM believes industry, academia, and AI practitioners should be working on together.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at technologywriter.com.

Syncsort Expands Ironstream with EView

December 10, 2018

While IBM is focused on battling the hyperscalers for cloud dominance and trying to overcome the laws of physics with quantum computing, a second tier of mainframe ISVs is trying to advance mainframe data center performance. Syncsort, for instance, late in November acquired EView Technology, Raleigh, NC, to integrate mainframe and IBM i data into its enterprise IT management platform, Ironstream.


How EView works with the mainframe

EView would seem a predictable choice for a Syncsort strategic acquisition. It also can be seen as yet another sign that value today lies in efficient data integration and analysis. In this case, Syncsort bolstered its capability to harvest log data originating on IBM i and mainframes through the acquisition of EView Technology, which builds big iron connectors for mainstream systems management tools.

Meanwhile, through multiple acquisitions Syncsort’s Ironstream has emerged as a leading option for forwarding critical security and operational machine data from mainframes and IBM i servers for deeper analysis. This, in turn, enables the data to be streamed and correlated with data from the rest of the enterprise within Splunk and other Security Information and Event Management (SIEM) and IT Operations Analytics (ITOA) products.
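
For readers unfamiliar with how such machine data typically lands in Splunk, the sketch below shows the general pattern of forwarding a record to Splunk’s HTTP Event Collector (HEC). It is only an illustration, not Ironstream’s or EView’s actual implementation; the endpoint, token, and field values are hypothetical placeholders.

```python
# Minimal sketch: send one mainframe log record to Splunk's HTTP Event Collector.
# Host, token, and event fields are hypothetical; this is not Syncsort's code.
import json
import requests

SPLUNK_HEC_URL = "https://splunk.example.com:8088/services/collector/event"  # hypothetical
SPLUNK_TOKEN = "00000000-0000-0000-0000-000000000000"  # hypothetical HEC token

record = {
    "host": "MAINFRAME01",
    "sourcetype": "zos:syslog",
    "event": {"facility": "RACF", "message": "ICH408I USER(...)  ACCESS DENIED"},
}

resp = requests.post(
    SPLUNK_HEC_URL,
    headers={"Authorization": f"Splunk {SPLUNK_TOKEN}"},
    data=json.dumps(record),
    timeout=10,
)
resp.raise_for_status()  # raises if Splunk did not accept the event
```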

For Syncsort, EView was a typical acquisition target: it served mainframe and IBM i customers, and it would expand Ironstream’s functionality. Not surprisingly, each company’s products are architected differently. EView sends its data through a lightweight agent as an intermediary and makes active use of ServiceNow, a ready-built foundation that transforms how a business operates, while Ironstream takes a more direct approach by sending data directly to Splunk.

Each approach has its strengths, says David Hodgson, Syncsort’s Chief Product Officer. One possibility: Syncsort could augment the EView agent with Ironstream while giving customers a choice. Those decisions will be taken up in earnest in January.

Furthermore, in addition to Splunk and the Elastic Stack, Ironstream will now be able to integrate this data with ServiceNow Discovery, Microsoft System Center, and Micro Focus Operations Manager. With the EView acquisition, Syncsort expands its footprint in mainframe data analytics. “ServiceNow in particular is attracting excitement,” said Hodgson. In addition, customers can augment their EView agent with Ironstream, effectively giving customers a new choice.

Adds Josh Rogers, CEO of Syncsort: “The acquisition of EView strengthens and extends the reach of our Ironstream family of products, making data from traditional systems readily available to more of the key management platforms our customers depend on for those insights.”

In addition, EView’s enterprise-proven Intelligent Agent Technology will bolster Syncsort’s ability to offer organizations more options in integrating different data sources with advanced management platforms for a more comprehensive view.

Syncsort’s Ironstream is now part of the growing Syncsort Integrate family of products. It has emerged as an industry leading solution for forwarding critical security and operational machine data from mainframes and IBM i servers for analytic purposes. This enables the data to be streamed and correlated with data from the rest of the enterprise within Splunk and other SIEM and ITOA solutions.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at technologywriter.com.

Are Quantum Computers Even Feasible?

November 29, 2018

IBM has toned down its enthusiasm for quantum computing. Even last spring at Think 2018 it already was backing off a bit. Now the company believes that quantum computing will augment classical computing, potentially opening doors that it once thought would remain locked indefinitely.

First IBM Q computation center

With its Bristlecone announcement Google trumped IBM with 72 qubits. Debating a few dozen qubits more or less may prove irrelevant. A number of quantum physics researchers have recently been publishing papers that suggest useful quantum computing may be decades away.

Mikhail Dyakonov makes that case in a piece titled “The Case Against Quantum Computing,” which appeared last month in IEEE Spectrum (spectrum.ieee.org). Dyakonov does research in theoretical physics at the Charles Coulomb Laboratory at the University of Montpellier, in France.

As Dyakonov explains: In quantum computing, the classical two-state circuit element (the transistor) is replaced by a quantum element called a quantum bit, or qubit. Like the conventional bit, it also has two basic states. But you already know this because DancingDinosaur covered it here and several times since.

But this is what you might not know: With the quantum bit, those two states aren’t the only ones possible. That’s because the spin state of an electron is described as a quantum-mechanical wave function. And that function involves two complex numbers, α and β (called quantum amplitudes), which, being complex numbers, have real parts and imaginary parts. Those complex numbers, α and β, each have a certain magnitude, and, according to the rules of quantum mechanics, their squared magnitudes must add up to 1.

Dyakonov continues: In contrast to a classical bit a qubit can be in any of a continuum of possible states, as defined by the values of the quantum amplitudes α and β. This property is often described by the statement that a qubit can exist simultaneously in both of its ↑ and ↓ states. Yes, quantum mechanics often defies intuition.
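
Written compactly (standard textbook notation, not a quote from Dyakonov’s article), a single-qubit state and its normalization constraint look like this:

```latex
% A single-qubit state as a superposition of the two basis states,
% with complex amplitudes alpha and beta constrained to unit total probability.
\[
  |\psi\rangle \;=\; \alpha\,|{\uparrow}\rangle \;+\; \beta\,|{\downarrow}\rangle,
  \qquad |\alpha|^{2} + |\beta|^{2} = 1 .
\]
```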

So while IBM, Google, and other classical computer providers quibble about 50 qubits or 72 or even 500 qubits, to Dyakonov this is ridiculous. The relevant numbers will be astronomical, as he explains: Experts estimate that the number of qubits needed for a useful quantum computer, one that could compete with your laptop in solving certain kinds of interesting problems, is between 1,000 and 100,000. So the number of continuous parameters describing the state of such a useful quantum computer at any given moment must be at least 2^1,000, which is to say about 10^300. That’s a very big number indeed; much greater than the number of subatomic particles in the observable universe.

Just in case you missed the math, he repeats: A useful quantum computer [will] need to process a set of continuous parameters that is larger than the number of subatomic particles in the observable universe.
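
The arithmetic is easy to check yourself. A minimal sketch, using nothing more than Python’s built-in integer math, of why 1,000 qubits already imply roughly 10^300 continuous parameters:

```python
# Back-of-the-envelope check of Dyakonov's arithmetic: an n-qubit register is
# described by 2**n complex amplitudes, so 1,000 qubits already need on the
# order of 10**300 continuous parameters.
n_qubits = 1_000
amplitudes = 2 ** n_qubits      # number of quantum amplitudes for n qubits
print(len(str(amplitudes)))     # 302 decimal digits, i.e. about 10**300
print(amplitudes > 10 ** 300)   # True -- "about 10^300," as Dyakonov says
```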

Before you run out to invest in the quantum computer with the most qubits you can buy, you would be better served by joining IBM’s Q Experience and experimenting with it on IBM’s nickel. Let IBM wrestle with the issues Dyakonov raises.

Then, Dyakonov concludes: I believe that such experimental research is beneficial and may lead to a better understanding of complicated quantum systems.  I’m skeptical that these efforts will ever result in a practical quantum computer. Such a computer would have to be able to manipulate—on a microscopic level and with enormous precision—a physical system characterized by an unimaginably huge set of parameters, each of which can take on a continuous range of values. Could we ever learn to control the more than 10^300 continuously variable parameters defining the quantum state of such a system? My answer is simple. No, never.

I hope my high school science teacher who enthusiastically introduced me to quantum physics has long since retired or, more likely, passed on. Meanwhile, DancingDinosaur expects to revisit quantum regularly in the coming months or even years.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at technologywriter.com.

IBM’s Multicloud Manager for 2nd Gen Hybrid Clouds

November 15, 2018

A sign that IBM is serious about hybrid cloud is its mid-October announcement of its new Multicloud Manager, which promises an operations console for companies as they increasingly incorporate public and private cloud capabilities with existing on-premises business systems. Meanwhile, research from Ovum suggests that 80 percent of mission-critical workloads and sensitive data are still running on business systems located on-premises.

$1 Trillion or more hybrid cloud market by 2020

Still, the potential of the hybrid cloud market is huge: $1 trillion or more within just a few years, IBM projects. If IBM found itself crowded out by the big hyperscalers—AWS, Google, Microsoft—in the initial rush to the cloud, it is hoping to leapfrog into the top ranks with the next generation of cloud, hybrid clouds.

And this is exactly what Red Hat and IBM hope to gain together. Both believe they will be well positioned to accelerate hybrid multi-cloud adoption by tapping each company’s leadership in Linux, containers, Kubernetes, multi-cloud management, and automation, as well as by leveraging IBM’s core of large enterprise customers and bringing them into the hybrid cloud.

The result should be a mixture of on-premises, off-premises, and hybrid clouds. It also promises to be based on open standards, flexible modern security, and solid hybrid management across anything.

The company’s new Multicloud Manager runs on its IBM Cloud Private platform, which is based on Kubernetes container orchestration technology, an open-source approach to packaging apps in containers and managing them, making them easier and cheaper to run across different cloud environments, from on-premises systems to the public cloud. With Multicloud Manager, IBM is extending those capabilities to interconnect various clouds, even from different providers, creating unified systems designed for increased consistency, automation, and predictability. At the heart of the new solution is a first-of-a-kind dashboard interface for effectively managing thousands of Kubernetes applications and spanning huge volumes of data regardless of where in the organization they are located.
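
To picture what “managing Kubernetes applications across clouds from one console” means at the lowest level, here is a minimal sketch using the open-source Kubernetes Python client. It is not the Multicloud Manager API, just the underlying idea; the kubeconfig context names are hypothetical.

```python
# Sketch: list deployments across several Kubernetes clusters from one place,
# using the official kubernetes Python client. Context names are hypothetical.
from kubernetes import client, config

CLUSTERS = ["onprem-icp", "ibm-cloud", "aws-eks"]  # hypothetical kubeconfig contexts

for ctx in CLUSTERS:
    # Build an API client bound to one cluster's credentials from the local kubeconfig
    api_client = config.new_client_from_config(context=ctx)
    apps = client.AppsV1Api(api_client)
    deployments = apps.list_deployment_for_all_namespaces()
    print(f"{ctx}: {len(deployments.items)} deployments")
```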

Adds Arvind Krishna, Senior Vice President, IBM Hybrid Cloud: “With its open source approach to managing data and apps across multiple clouds” an enterprise can move beyond the productivity economics of renting computing power to fully leveraging the cloud to invent new business processes and enter new markets.

This new solution should become a driver for modernizing businesses. As IBM explains: if a car rental company uses one cloud for its AI services, another for its bookings system, and continues to run its financial processes using on-premises computers at offices around the world, IBM Multicloud Manager can span the company’s multiple computing infrastructures enabling customers to book a car more easily and faster by using the company’s mobile app.

Notes IDC’s Stephen Elliot, Program Vice President:  “The old idea that everything would move to the public cloud never happened.” Instead, you need multicloud capabilities that reduce the risks and deliver more automation throughout these cloud journeys.

Just last month IBM announced a number of companies are starting down the hybrid cloud path by adopting IBM Cloud Private. These include:

New Zealand Police (NZP) is exploring how IBM Cloud Private and Kubernetes containers can help modernize its existing systems as well as quickly launch new services.

Aflac Insurance is adopting IBM Cloud Private to enhance the efficiency of its operations and speed up the development of new products and services.

Kredi Kayıt Bürosu (KKB) provides the national cloud infrastructure for Turkey’s finance industry. Using IBM Cloud Private KKB expects to drive innovation across its financial services ecosystem.

Operating in a multi-cloud environment is becoming the new reality for most organizations, while vendors rush to sell multi-cloud tools: not just IBM’s Multicloud Manager but HPE OneSphere, the RightScale Multi-Cloud platform, Datadog Cloud Monitoring, Ormuco Stack, and more.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at technologywriter.com.

BMC’s AMI Brings Machine Learning to Z

November 9, 2018

On Oct. 18 BMC announced AMI, an automated mainframe intelligence capability that promises higher-performing, self-managing mainframe environments to meet the growing demands created by digital business growth, and to do it through the use of AI-like capabilities.

AMI delivers a self-managing mainframe

BMC’s AMI solutions combine built-in domain expertise, machine learning, intelligent automation, and predictive analytics to help enterprises automatically manage, diagnose, heal, secure, and optimize mainframe processes. BMC doesn’t actually call it AI, but it attributes all the AI buzzwords to it.

BMC cited Gartner saying: by 2020, thirty percent of data centers that fail to apply artificial intelligence and machine learning effectively in support of enterprise business will cease to be operationally and economically viable. BMC is tapping machine learning, in conjunction with its analysis of dozens of KPIs and millions of metrics a day, to proactively identify, predict, and fix problems before they become an issue. In the process, BMC intends to relieve the burden on enterprise teams and free up IT staff to work on high-value initiatives by removing manual processes through intelligent automation. Ultimately, the company hopes to keep its customers, as Gartner put it, operationally and economically viable.
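
To make “predict and fix problems before they become an issue” concrete, here is a toy sketch of one common approach: flagging metric samples that stray far from recent behavior. It is not BMC AMI’s algorithm; the metric, window, and threshold are hypothetical.

```python
# Toy anomaly detector: flag samples more than `threshold` standard deviations
# from the trailing window's mean. Illustrative only, not BMC's method.
from statistics import mean, stdev

def flag_anomalies(samples, window=20, threshold=3.0):
    """Return indices of samples that deviate sharply from the trailing window."""
    anomalies = []
    for i in range(window, len(samples)):
        recent = samples[i - window:i]
        mu, sigma = mean(recent), stdev(recent)
        if sigma > 0 and abs(samples[i] - mu) > threshold * sigma:
            anomalies.append(i)
    return anomalies

# Example: CPU-busy percentage sampled once a minute (hypothetical values)
cpu_busy = [62, 60, 61, 63, 59, 64, 61, 60, 62, 63,
            61, 60, 62, 64, 63, 61, 60, 62, 61, 63, 97]
print(flag_anomalies(cpu_busy))  # -> [20], the sudden spike to 97
```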

In effect, mainframe-based organizations can benefit from BMC’s expertise in collecting deep and broad z/OS operational metrics from a variety of industry data sources, built-in world-class domain expertise, and multivariate analysis.

A lot of this already is available in the Z itself through a variety of tools, particularly zAware, described by IBM as a firmware feature consisting of an integrated set of analytic applications that monitor software running on z/OS and model normal system behavior. Its pattern recognition techniques identify unexpected messages, providing rapid diagnosis of problems caused by system changes.

But BMC is adding two new ingredients that should take this further, Autonomous Solutions and Enterprise Connectors.

Autonomous Solutions promise to enable IT operations that automatically anticipate and repair performance degradations and disruptive outages before they occur, without manual intervention. This set of intelligent, integrated solutions encompasses BMC AMI for Security Management, BMC AMI for DevOps, BMC AMI for Performance and Availability Management, and BMC AMI Cost and Capacity Management.

Enterprise Connectors move business-critical data from the mainframe to the entire enterprise and simplify the enterprise-wide management of business applications. The connectors promise a complete view of enterprise data by streaming mainframe metrics and related information in real time to a variety of data receivers, including leading Security Information and Event Management (SIEM) solutions such as Splunk, IBM QRadar, ArcSight, LogRhythm, McAfee Enterprise Security Manager, and others. Note: BMC’s AMI Data Extractor for IMS is available now; additional extractors will be available early in 2019.

To bolster its mainframe business further, BMC in early October announced the acquisition of the assets of CorreLog, Inc., which provides real-time security management to mainframe customers. When combined with BMC’s offerings in systems, data, and cost management, the merged operation enables end-to-end solutions that ensure the availability, performance, and security of mission-critical applications and data residing on today’s modern mainframe. CorreLog brings capabilities for security and compliance auditing professionals who need more advanced network and system security and improved adherence to key industry standards for protecting data.

The combination of CorreLog’s security offerings with BMC’s mainframe capabilities provides organizations with enhanced security capabilities including:

  • Real-time visibility into security events from mainframe environments, delivered directly into SIEM/SOC systems
  • A wide variety of security alerts, including for IBM IMS and Db2
  • Event log correlation, which provides up-to-the-second security notifications for faster remediation in the event of a breach
  • A 360-degree view of mainframe threat activity

The CorreLog deal is expected to close later this quarter.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at technologywriter.com.

IBM Takes Red Hat for $34 Billion

November 2, 2018

“The acquisition of Red Hat is a game-changer. It changes everything about the cloud market,” declared Ginni Rometty, IBM Chairman. At a cost of $34 billion, 10x Red Hat’s gross revenue, it had better be a game changer. See IBM’s announcement earlier this week here.

IBM Multicloud Manager Dashboard

IBM has been hot on the tail of the top three cloud hyperscalers—AWS, Google, and Microsoft/Azure. Will this change the game? Your guess is as good as anyone’s.

The hybrid cloud market appears to be IBM’s primary target. As the company put it: “IBM will become the world’s #1 hybrid cloud provider, offering companies the only open cloud solution that will unlock the full value of the cloud for their businesses.” IBM projects the value of the hybrid cloud market at $1 trillion within a few years!

Most companies today are only 20 percent along their cloud journey, renting compute power to cut costs. The next chapter of the cloud, noted Rometty, requires shifting business applications to hybrid cloud, extracting more data, and optimizing every part of the business.

Nobody has a lock on this market yet. Not IBM, not Red Hat, not VMware. But one thing seems clear: whoever wins will do it with open source. Red Hat, with $3 billion in open source revenue, has proven that open source can pay. The only question is how quickly it can pay back IBM’s $34 billion bet.

What’s needed is something that promotes data portability and applications across multiple clouds, data security in a multi-cloud environment, and consistent cloud management. This is the Red Hat and IBM party line.  Both believe they will be well positioned to address these issues to accelerate hybrid multi-cloud adoption. To succeed at this, the new entity will have to tap their leadership in Linux, containers, Kubernetes, multi-cloud management, and automation.

IBM first brought Linux to the Z 20 years ago, making IBM an early advocate of open source, collaborating with Red Hat to help grow enterprise-class Linux.  More recently the two companies worked to bring enterprise Kubernetes and hybrid cloud solutions to the enterprise. These innovations have become core technologies within IBM’s $19 billion hybrid cloud business.

The initial announcement made the point Red Hat will join IBM’s Hybrid Cloud team as a distinct unit, as IBM described, preserving the independence and neutrality of Red Hat’s open source development heritage and commitment, current product portfolio, go-to-market strategy, and unique development culture. Also Red Hat will continue to be led by Jim Whitehurst and Red Hat’s current management team.

That camaraderie lasted until the Q&A following the announcement, when a couple of disagreements arose over different answers on relatively trivial points. Are you surprised? Let’s be clear: nobody spends $34 billion on a $3 billion asset and gives it a completely free hand. You can bet IBM will be calling the shots on everything it feels is important. Would you do less?

Dharmesh Thakker, a contributor to Forbes, focused more on Red Hat’s OpenShift family of development software. These tools make software developers more productive and are helping transform how software is created and implemented across most enterprises today. So “OpenShift is likely the focus of IBM’s interest in Red Hat” he observes.

A few years ago, he continued, the pendulum seemed to shift from companies deploying more-traditional, on-premises datacenter infrastructure to using public cloud vendors, mostly Amazon. In the last few years, he continued, we’ve seen most mission-critical apps inside companies continue to run on a private cloud but modernized by agile tools and microservices to speed innovation. Private cloud represents 15-20% of datacenter spend, Thakker reports, but the combo of private plus one or more public clouds – hybrid cloud—is here to stay, especially for enterprises. Red Hat’s OpenShift technology enables on-premises, private cloud deployments, giving IBM the ability to play in the hybrid cloud.

IBM isn’t closing this deal until well into 2019; expect to hear more about this in the coming months.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at technologywriter.com.


IBM Refreshes its Storage for Multi-Cloud

October 26, 2018

IBM has refreshed almost its entire storage portfolio, virtually end to end: from storage services to infrastructure and cloud to storage hardware, especially flash, to management. The announcement, made Oct. 23, covers a wide array of storage products.

IBM Spectrum Discover

Among the most interesting of the announcements was IBM Spectrum Discover. The product automatically enhances and then leverages metadata to augment discovery capabilities. It pulls insight from unstructured data for analytics, governance, and optimization, improving and accelerating large-scale analytics, strengthening data governance, and enhancing storage economics. At a time when data is growing at 30 percent per year, finding the right data fast for analytics and AI can be slow and tedious. IBM Spectrum Discover rapidly ingests, consolidates, and indexes metadata for billions of files and objects, enabling you to more easily gain insights from such massive amounts of unstructured data.
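
Conceptually, the heavy lifting is a metadata catalog built over files and objects. The sketch below is not Spectrum Discover itself, just a minimal Python illustration of the idea; the mount point and the example query are hypothetical.

```python
# Sketch: walk a file tree, capture per-file metadata, and query the catalog.
# Illustrative only; not IBM Spectrum Discover. Paths are hypothetical.
import os
from datetime import datetime, timezone

def index_metadata(root):
    """Yield one metadata record per file under `root`."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            yield {
                "path": path,
                "size_bytes": st.st_size,
                "modified": datetime.fromtimestamp(st.st_mtime, tz=timezone.utc).isoformat(),
                "extension": os.path.splitext(name)[1].lower(),
            }

# Example query: the ten largest files -- a crude stand-in for
# "find the right data fast for analytics."
records = list(index_metadata("/data"))  # hypothetical mount point
largest = sorted(records, key=lambda r: r["size_bytes"], reverse=True)[:10]
print(largest)
```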

As important as Spectrum Discover is, NVMe may attract more attention, in large part due to the proliferation of flash storage and the insatiable demand for ever faster performance. NVMe (non-volatile memory express) is the latest host controller interface and storage protocol, created to accelerate the transfer of data between enterprise and client systems and solid-state drives (SSDs) over a computer’s high-speed Peripheral Component Interconnect Express (PCIe) bus.

According to IBM, NVMe addresses one of the hottest segments of the storage market, driven by new solutions that, as IBM puts it, span the lifecycle of data from creation to archive.

Specifically, it is fueling a major expansion of lower-latency and higher-throughput NVMe fabric support across IBM’s storage portfolio. The company’s primary NVMe products introduced include:

  • New NVMe-based Storwize V7000 Gen3
  • NVMe over Fibre Channel across the flash portfolio
  • NVMe over Ethernet across the flash portfolio in 2019
  • IBM Cloud Object Storage support in 2019

The last two are an IBM statement of direction, which is IBM’s way of saying it may or may not happen when or as expected.

Ironically, the economics of flash have dramatically reversed. Flash storage now reduces cost as well as boosting performance. Until fairly recently, flash was considered too costly for routine storage needs, something to be used selectively only when the increased performance or efficiency justified the cost. Thank Moore’s Law and the economics of mass scale.

Maybe of greater interest to DancingDinosaur readers managing mainframe data centers are the improvements to the DS8000 storage lineup. The IBM DS8880F is designed to deliver extreme performance, uncompromised availability, and deep integration with IBM Z. It remains the primary storage system supporting mainframe-based IT infrastructure, and, IBM notes, it is based on the same fundamental system architecture as IBM Watson, forming a three-tiered architecture that balances system resources for optimal throughput. Furthermore, the new custom flash provides up to double the maximum flash capacity in the same footprint. An update to the zHyperLink solution also speeds application performance by significantly reducing both write and read latency.

In addition, the DS8880F offers:

  • Up to 2x maximum flash capacity
  • New 15.36TB custom flash
  • Up to 8 PB of physical capacity in the same physical space
  • Improved performance for zHyperLink connectivity
  • 2X lower write latency than High Performance FICON
  • 10X lower read latency

Also included is the next generation of High-Performance Flash Enclosures (HPFE Gen2). The DS8880F family delivers extremely low application response times, which can accelerate core transaction processes while expanding business operations into next-gen applications that use AI to extract value from data (see Spectrum Discover above).

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at technologywriter.com.
