
IBM Revamps V5000

April 5, 2019

On April 2nd IBM announced several key enhancements across the Storwize V5000 portfolio, along with new models: the V5010E, V5030E, and V5100. The E stands for Express. To further complicate the story, the new hardware utilizes Broadwell, Intel's 14 nanometer die shrink of its Haswell microarchitecture. Broadwell did not completely replace the full range of Haswell CPUs, but IBM is using it widely in the new V5000 models.

IBM NVMe Flash Core Module

And the results can be impressive. From a scale-out perspective the V5010E supports a single-controller configuration, while the V5030E and V5100 both support up to two-controller clusters. This provides for a maximum of 392 drives in the V5010E and a massive 1,520 drives in either the V5030E or V5100 dual-controller clusters. The V5030E includes a Broadwell-DE 1.9GHz, 6-core processor in each of its two canisters, and each canister supports a maximum of 32GB of RAM. Better still, the V5100 boasts a single Skylake 1.7GHz processor with 8 cores in each canister, with RAM increased to a total of 576GB for the entire controller, or 288GB maximum per canister.
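
To keep the numbers straight, here is a quick Python sketch summarizing the per-model limits as announced; the per-controller totals are simple arithmetic checks on those figures, not additional IBM specs:

```python
# Per-model limits for the new Storwize V5000 lineup, per the April 2019
# announcement. V5010E per-canister RAM is not specified in this post.
V5000_MODELS = {
    "V5010E": {"max_controllers": 1, "max_drives": 392,  "ram_per_canister_gb": None},
    "V5030E": {"max_controllers": 2, "max_drives": 1520, "ram_per_canister_gb": 32},
    "V5100":  {"max_controllers": 2, "max_drives": 1520, "ram_per_canister_gb": 288},
}

def controller_ram_gb(model: str, canisters_per_controller: int = 2) -> int:
    """Total RAM per controller = RAM per canister x canisters (2 per controller)."""
    per_canister = V5000_MODELS[model]["ram_per_canister_gb"]
    return per_canister * canisters_per_controller

# The V5100's 288GB per canister x 2 canisters = 576GB per controller, and the
# V5030E's 32GB x 2 = 64GB, matching the announced figures.
print(controller_ram_gb("V5100"))   # 576
print(controller_ram_gb("V5030E"))  # 64
```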

IBM is calling the next-generation Storwize V5000 platforms Gen3. Gen3 encompasses 8 new machine type models (MTMs) based on 3 hardware models: the V5010E, V5030E, and V5100. The V5100 comes in two variants, a hybrid (HDD and flash) model and the all-flash V5100F. Each of these 4 models is available with a 1-year or 3-year warranty.

The V5000E models are based on the Gen2 hardware, with various enhancements, including more memory options on the V5010E. The V5100 models are all-new hardware and bring the same NVMe Flash Core Modules (FCM) that are available on the V7000 and FlashSystem 9100 products, completing the transition of the Storwize family to all-NVMe arrays. If you haven't seen or heard about the FCM technology IBM introduced last year, FCMs are a family of high-performance flash drives that utilize the NVMe protocol, a PCIe Gen3 interface, and high-speed NAND memory to provide high throughput and IOPS at very low latency. FCM is available in 4.8 TB, 9.6 TB, and 19.2 TB capacities. Hardware-based data compression and self-encryption are built in.

The all-flash (F) variants of the V5000 can also attach SAS expansions to extend capacity using SAS-based flash drives, allowing expansion up to 1,520 drives. Those drives, however, are not interchangeable with the new FCM drives. The E variants allow attachment of SAS 2.5” and 3.5” HDD drives, with the V5010E expandable to 392 drives and the others up to 1,520.

Inbuilt host attachments come in the form of 10GbE ports for iSCSI workloads, with optional 16Gbit Fibre Channel (SCSI or FC-NVMe) as well as additional 10GbE or 25GbE iSCSI. The V5100 models can also use the iSER protocol (an iSCSI translation layer for operation over RDMA transports, such as InfiniBand) over the 25GbE ports for clustering, with plans to support NVMe over Fabrics on Ethernet. In terms of cache memory, the V5000E products are expandable up to 64GB per controller (I/O group) and the V5100 can support up to 576GB per controller. IBM also issued a statement of direction covering all 25GbE port types across the entire Spectrum Virtualize family of products.

As Lloyd Dean, IBM Senior Certified Executive IT Architect, noted, the new lineup for the V5000 is impressive; the quantity of drives and the storage available per model will “blow your mind”. How mind-blowing will depend, of course, on your configuration and IBM's pricing. As usual, IBM talks about affordability, comparative cost, and storage efficiency, but it rarely states a price. It did once, 3 years ago: list price then for the V5010 was $9,250 including hardware, software, and a one-year warranty, according to a published report. Today IBM will likely steer you to cloud pricing, which may or may not be a bargain depending on how the deal is structured and priced. With the cloud, everything is in the details.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at technologywriter.com.

IBM Rides Quantum Volume to Quantum Advantage

March 19, 2019

Recently IBM announced achieving its highest Quantum Volume to date. Of course, nobody else knows what Quantum Volume is. Quantum Volume is both a measurement and a procedure developed, no surprise here, by IBM to determine how powerful a quantum computer is. Read the March 4 announcement here.

Quantum volume is not just about the number of qubits, although that is one part of it. It also includes both gate and measurement errors, device cross talk, as well as device connectivity and circuit compiler efficiency. According to IBM, the company has doubled the power of its quantum computers annually since 2017.

The upgraded processor will be available for use by developers, researchers, and programmers to explore quantum computing using a real quantum processor at no cost via the IBM Cloud. This offer has been out in various forms since May 2016 as IBM’s Q Experience.

Also announced was a new prototype of a commercial processor, which will be the core for the first IBM Q early-access commercial systems.  Dates have only been hinted at.

IBM's recently unveiled IBM Q System One quantum computer, with a fourth-generation 20-qubit processor, has produced a Quantum Volume of 16, roughly double that of the current IBM Q 20-qubit device, which has a Quantum Volume of 8.

The Q volume math goes something like this: a variety of factors determine Quantum Volume, including the number of qubits, connectivity, and coherence time, plus accounting for gate and measurement errors, device cross talk, and circuit software compiler efficiency.
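
Paraphrasing IBM's published definition as a sketch: for each circuit width, find the largest "square" circuit (depth equal to width) the machine can run successfully; Quantum Volume is then 2 raised to the largest achievable width-equals-depth size. A minimal Python illustration, with the per-width achievable depths purely hypothetical:

```python
def quantum_volume(achievable_depth):
    """Quantum Volume, sketched per IBM's definition: log2(QV) is the largest
    n for which an n-qubit, n-layer 'square' circuit runs successfully,
    i.e. max over n of min(n, d(n)).

    achievable_depth: dict mapping circuit width n -> largest depth d(n)
    the device can execute reliably at that width (hypothetical numbers).
    """
    log2_qv = max(min(n, d) for n, d in achievable_depth.items())
    return 2 ** log2_qv

# Hypothetical device: gate errors and cross talk limit depth as width grows.
# Width 4 still supports depth >= 4, so log2(QV) = 4 and QV = 16,
# the figure IBM reports for Q System One.
depths = {2: 12, 3: 8, 4: 5, 5: 3, 6: 2}
print(quantum_volume(depths))  # 16
```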

In addition to producing the highest Quantum Volume to date, IBM Q System One’s performance reflects some of the lowest error rates IBM has ever measured, with an average 2-qubit gate error less than 2 percent, and its best gate achieving less than a 1 percent error rate. To build a fully-functional, large-scale, universal, fault-tolerant quantum computer, long coherence times and low error rates are required. Otherwise how could you ever be sure of the results?

Quantum Volume is a fundamental performance metric that measures progress in the pursuit of Quantum Advantage, the quantum Holy Grail: the point at which quantum applications deliver a significant, practical benefit beyond what classical computers alone are capable of. To achieve Quantum Advantage in the next decade, IBM believes the industry will need to continue to double Quantum Volume every year.

Sounds like Moore’s Law all over again. IBM doesn’t deny the comparison. It writes: in 1965, Gordon Moore postulated that the number of components per integrated function would grow exponentially for classical computers. Jump to the new quantum era and IBM notes its Q system progress since 2017 presents a similar early growth pattern, supporting the premise that Quantum Volume will need to double every year and presenting a clear roadmap toward achieving Quantum Advantage.


Potential use cases, such as precisely simulating battery-cell chemistry for electric vehicles, speeding quadratic derivative models, and many others are already being investigated by IBM Q Network partners. To achieve Quantum Advantage in the 2020s, IBM believes the industry will need to continue doubling Quantum Volume every year.

In time AI should play a role expediting quantum computing.  For that, researchers will need to develop more effective AI that can identify patterns in data otherwise invisible to classical computers.

Until then how should most data centers proceed? IBM researchers suggest 3 initial steps:

  1. Develop quantum algorithms that demonstrate how quantum computers can improve AI classification accuracy.
  2. Improve feature mapping to a scale beyond the reach of the most powerful classical computers.
  3. Classify data through the use of short-depth circuits, allowing AI applications in the NISQ (noisy intermediate-scale quantum) regime and a path forward to achieve quantum advantage for machine learning.
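
To make the feature-mapping step slightly more concrete, here is a toy, purely illustrative Python sketch of the idea behind a quantum feature map: a classical feature x is encoded as a rotation of a single qubit, and the "kernel" between two data points is the squared overlap of the resulting states. (This one-qubit example is classically trivial; real proposals use circuits too large to simulate, which is the whole point.)

```python
import math

def feature_map(x):
    """Encode scalar feature x as a single-qubit state via a Y-rotation:
    |phi(x)> = cos(x/2)|0> + sin(x/2)|1>."""
    return (math.cos(x / 2), math.sin(x / 2))

def kernel(x, y):
    """Quantum kernel = |<phi(x)|phi(y)>|^2, the squared state overlap.
    For this toy map it reduces to cos^2((x - y) / 2)."""
    ax, bx = feature_map(x)
    ay, by = feature_map(y)
    overlap = ax * ay + bx * by
    return overlap ** 2

# Identical points overlap perfectly; points pi apart map to orthogonal states.
print(round(kernel(0.7, 0.7), 6))      # 1.0
print(round(kernel(0.0, math.pi), 6))  # 0.0
```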

Sounds simple, right? Let DancingDinosaur know how you are progressing.


IBM Joins with Harley-Davidson for LiveWire

March 1, 2019

I should have written this piece 40 years ago as a young man fresh out of grad school. Then I was dying for a 1200cc Harley Davidson motorcycle. My mother was dead set against it—she wouldn’t even allow me to play tackle football and has since been vindicated (You win on that, mom.). My father, too, was opposed and wouldn’t help pay for it. I had to settle for a puny little motor scooter that offered zero excitement.

In the decades since I graduated, Harley’s fortunes have plummeted as the HOG (Harley Owners Group) community aged out and few youngsters have picked up the slack. The 1200cc bike I once lusted after probably is now too heavy for me to handle. So, what is Harley to do? Redefine its classic American brand with an electric model, LiveWire.

Courtesy: Harley Davidson, IBM

With LiveWire, Harley expects to remake the motorcycle as a cloud-connected machine and promises to deliver new products for fresh motorcycle segments, broaden engagement with the brand, and strengthen the H-D dealer network. It also boldly proclaimed that Harley-Davidson will lead the electrification of motorcycling.

According to the company, Harley's LiveWire will leverage H-D Connect, a service (available in select markets) built on the IBM Cloud. This will enable it to deliver new mobility and concierge services today and leverage an expanding use of IBM AI, analytics, and IoT to enhance and evolve the rider's experience. To capture this next generation of bikers, Harley is working with IBM to transform the everyday experience of riding through the latest technologies and features IBM can deliver via the cloud.

Would DancingDinosaur, an aging Harley enthusiast, plunk down the thousands it would take to buy one of these? Since I rarely use my smartphone to do anything more than check email and news, I am probably not a likely prospect for LiveWire.

Will LiveWire save Harley? Maybe; it depends on what the promised services will actually deliver. Already, I can access a wide variety of services through my car but, other than Waze, I rarely use any of those.

According to the joint IBM-Harley announcement, a fully cellular-connected electric motorcycle needed a partner that could deliver mobility solutions that would meet riders’ changing expectations, as well as enhance security. With IBM, Harley hopes to strike a balance between using data to create both intelligent and personal experiences while maintaining privacy and security, said Marc McAllister, Harley-Davidson VP Product Planning and Portfolio in the announcement.

So, based on this description, are you ready to jump to LiveWire? You probably need more details. So far, IBM and Harley have identified only three:

  1. Powering The Ride: LiveWire riders will be able to check bike vitals at any time and from any location. Information available includes features such as range, battery health, and charge level. Motorcycle status features will also support the needs of the electric bike, such as the location of charging stations. Also riders can see their motorcycle’s current map location.  Identifying charging stations could be useful.
  2. Powering Security: An alert will be sent to the owner’s phone if the motorcycle has been bumped, tampered with, or moved. GPS-enabled stolen-vehicle assistance will provide peace of mind that the motorcycle’s location can be tracked. (Requires law enforcement assistance. Available in select markets.)
  3. Powering Convenience: Reminders about upcoming motorcycle service requirements and other care notifications will be provided. In addition, riders will receive automated service reminders as well as safety or recall notifications.
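
Neither IBM nor Harley has published an API, so the following Python sketch is entirely hypothetical: it only illustrates the kind of status payload and alert logic the three features above imply. Every field and function name here is invented for illustration.

```python
# Hypothetical H-D Connect-style telemetry; all names invented.
def bike_status(range_miles, battery_health_pct, charge_pct, moved, tampered):
    return {
        "range_miles": range_miles,            # "Powering The Ride" vitals
        "battery_health_pct": battery_health_pct,
        "charge_pct": charge_pct,
        "moved": moved,                        # "Powering Security" inputs
        "tampered": tampered,
    }

def security_alerts(status):
    """Return alert messages if the bike was bumped, tampered with, or moved."""
    alerts = []
    if status["tampered"]:
        alerts.append("Tamper alert: check your motorcycle's location.")
    if status["moved"]:
        alerts.append("Movement alert: GPS tracking available.")
    return alerts

parked = bike_status(98, 100, 87, moved=True, tampered=False)
print(security_alerts(parked))  # ['Movement alert: GPS tracking available.']
```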

“The next generation of Harley-Davidson riders will demand a more engaged and personalized customer experience,” said Venkatesh Iyer, Vice President, North America IoT and Connected Solutions, Global Business Services, IBM. Introducing enhanced capabilities, he continues, via the IBM Cloud will not only enable new services immediately, but will also provide a roadmap for the journey ahead. (Huh?)

As much as DancingDinosaur aches for Harley to come roaring back with a story that will win the hearts of the HOG users who haven’t already drifted away, Harley will need more than the usual buzzwords, trivial apps, and cloud hype.


Meet SUSE Enterprise Linux Server 12

February 25, 2019

A surprising amount of competition has emerged lately for Linux on the mainframe, but SUSE continues to be among the top of the heap. Its newest release, SUSE Linux Enterprise 12, shipped last fall, should secure its position for some time to come.

SUSE touts SLE 12 as the latest version of its reliable, scalable and secure platform for efficiently deploying and managing highly available enterprise-class IT services in physical, virtual, or cloud environments. New products based on SLE 12 feature enhancements should allow for better system uptime, improved operational efficiency, and accelerated innovation. As the foundation for all SUSE data center operating systems and extensions, according to the company, SUSE Linux Enterprise meets the performance requirements of data centers with mixed IT environments while reducing the risk of technological obsolescence and vendor lock-in.

With SLE 12 the company also introduces an updated customer portal, SUSE Customer Center, to make it easier for customers to manage their subscriptions, access patches and updates, and communicate with SUSE customer support. It promises a new way to manage a SUSE account and subscriptions via one interface, anytime, anywhere.

Al Gillen, program vice president for servers and system software at IDC, said, “The industry is seeing growing movement of mission-critical workloads to Linux, with that trend expected to continue well into the future.” For Gillen, the modular design of SLE 12, as well as other mission-critical features like full system rollback and live kernel patching, helps address some of the key reservations customers express, and should help accelerate the adoption of Linux on z.

It’s about time. Linux has been available on the z for 20 years, but only with the introduction of IBM LinuxONE a couple of years ago did IBM get serious about Linux on z.

And it didn’t stop there. IBM ported the Go programming language to LinuxONE too. Go was developed by Google and is designed for building simple, reliable, and efficient software, making it easier for developers to combine the software tools they know with the speed, security, and scale offered by LinuxONE. As expected, IBM has contributed code to the Go community.

Then IBM brought Apple’s Swift programming to the party, first through the IBM Watson iOS SDK, which gives developers a Swift API to simplify integration with many of the Watson Developer Cloud services, all of which are available today and can be integrated with just a few lines of code. As soon as Apple introduced Swift as the new language for OS X and iOS application development, IBM began partnering with Apple to bring the power of Swift open source programming to the z. This was closely tied to Canonical’s Ubuntu port to the z, which has already been released.

With SUSE Linux Enterprise Server for x86_64, IBM Power Systems, and IBM System z, SLE 12 has boosted its versatility, able to deliver business-critical IT services in a variety of physical, virtual, and cloud environments. New features like full system rollback, live kernel patching, and software modules increase data center uptime, improve operational efficiency, and accelerate the adoption of open source innovation. SLE 12 further builds on SUSE’s leadership in Linux container technology and adds the Docker framework, which is now included as an integral part of the operating system.


IBM Enhances Storage for 2019

February 14, 2019

It has been a while since DancingDinosaur last looked closely at IBM’s storage efforts. The latest storage briefing, covering 4Q18, actually was held on Feb. 5, 2019, and was followed by more storage announcements on 2/11 and 2/12. For your sake, this blog will not delve into each of these many announcements. You can, however, find them at the previous link.

Sacramento-San Joaquin River Delta–IBM RESEARCH

As IBM likes to say whenever it is trying to convey the value of data: “data is more valuable than oil.”  Maybe it is time to update this to say data is more valuable than fresh, clean water, which is quickly heading toward becoming the most precious commodity on earth.

IBM CEO Ginni Rometty says it yet another way: “80% of the world’s data, whether it’s decades of underwriting, pricing, customer experience, risk in loans… That is all with our clients. You don’t want to share it. That is gold,” maybe even more valuable than, say, fresh water. Whatever metaphor you choose (gold, clean water, oil, or something else you perceive as priceless), this represents to IBM the value of data. To preserve that value, this data must be economically stored, protected, made accessible, analyzed, and selectively shared. That’s where IBM storage comes in.

And IBM storage has been on a modest multi-year growth trend. Since 2016, IBM reports shipping 700 new NVMe systems, 850 VersaStack systems, 3,000 DS8880 systems, and 5,500 PB of capacity; attracting 6,800 new IBM Spectrum (virtualized) storage customers; and selling 3,000 Storwize all-flash systems, along with 12,000 all-flash arrays shipped.

The bulk of the 2/5 storage announcements fell into 4 areas:

  1. IBM storage for containers and cloud
  2. AI storage
  3. Modern data protection
  4. Cyber resiliency

Except for modern data protection, much of this may be new to Z and Power data centers. However, some of the new announcements will interest Z shops. In particular, 219-135, a statement of direction: IBM intends to deliver Managed-from-Z, a new feature of IBM Cloud Private for Linux on IBM Z. This will enable organizations to run and manage IBM Cloud Private applications from IBM Linux on Z or LinuxONE platforms. The new capability furthers IBM’s commitment to deliver multi-cloud and multi-architecture cloud-native technologies on the platform of the customer’s choice. Watson, too, will now be available on more platforms through the newly announced Watson Anywhere, a version of IBM’s cognitive platform that can run Watson on-premises, in IBM’s cloud, or in any other cloud, private or public.

Another interesting addition to the IBM storage line is the FlashSystem 9100. As IBM explains it, FlashSystem 9100 combines the performance of flash and end-to-end Non-Volatile Memory Express (NVMe) with the reliability and innovation of IBM FlashCore technology and the rich features of IBM Spectrum Virtualize, all packed into a 2U enterprise-class storage system. Providing intensive, data-driven, multi-cloud storage capacity, FlashSystem 9100 is deeply integrated with the software-defined (virtualized) capabilities of IBM Spectrum Storage, allowing organizations to easily add multi-cloud solutions that best support their business.

Finally, 219-029: IBM Spectrum Protect V8.1.7 and IBM Spectrum Protect Plus V10.1.3 deliver new application support and optimization for long-term data retention. Think of it this way: as the value of data increases, you will want to retain and protect more data in more ways for longer and longer. For this you will want the kind of flexible and cost-efficient storage available through Spectrum Protect.
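
Spectrum Protect's actual policy engine is far richer, but the core idea of long-term retention can be sketched in a few lines of Python; the policy names and retention periods below are invented for illustration, not Spectrum Protect settings:

```python
from datetime import date, timedelta

# Hypothetical retention policies: how long each class of data is kept.
RETENTION_DAYS = {
    "operational": 90,        # short-lived working copies
    "compliance": 365 * 7,    # e.g. a seven-year regulatory retention
    "archive": 365 * 30,      # long-term archive
}

def expiry_date(created, data_class):
    """Return the date a backup copy becomes eligible for deletion."""
    return created + timedelta(days=RETENTION_DAYS[data_class])

print(expiry_date(date(2019, 2, 14), "operational"))  # 2019-05-15
```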



12 Ingredients for App Modernization

January 8, 2019

It is no surprise that IBM has become so enamored with the hybrid cloud. The worldwide public cloud services market is projected to grow 21.4 percent in 2018 to total $186.4 billion, up from $153.5 billion in 2017, according to Gartner.

The fastest-growing segment of the market is cloud system infrastructure services (IaaS), which is forecast to grow 35.9 percent in 2018 to reach $40.8 billion. Gartner expects the top 10 providers, often referred to as hyperscalers, to account for nearly 70 percent of the IaaS market by 2021, up from 50 percent in 2016.
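
Gartner's figures check out arithmetically, as a quick back-of-envelope Python sketch shows (the 2017 IaaS base is inferred here from the stated growth rate, not a Gartner-published number):

```python
# Overall public cloud services: $153.5B in 2017 growing 21.4% in 2018.
cloud_2018 = 153.5 * 1.214
print(round(cloud_2018, 1))  # 186.3, i.e. roughly the $186.4B Gartner projects

# IaaS: $40.8B projected for 2018 at 35.9% growth implies a 2017 base of
# about $30B (inferred, not a published figure).
iaas_2017 = 40.8 / 1.359
print(round(iaas_2017, 1))  # 30.0
```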

Cloud computing is poised to become a “turbocharged engine powering digital transformation around the world,” states a recent Forrester report, Predictions 2019: Cloud Computing. Overall, the global cloud computing market, including cloud platforms, business services, and SaaS, will exceed $200 billion this year, expanding at more than 20%, the research firm predicts.

Venkat’s recipe for app modernization; courtesy of IBM

Hybrid clouds, which include two or more cloud providers or platforms, are emerging as the preferred approach for enterprises.  Notes IBM: The digital economy is forcing organizations to a multi-cloud environment. Three of every four enterprises have already implemented more than one cloud. The growth of cloud portfolios in enterprises demands an agnostic cloud management platform — one that not only provides automation, provisioning and orchestration, but also monitors trends and usage to prevent outages. No surprise here; IBM just happens to offer hybrid cloud management.

As of the start of 2019, the top seven cloud providers are AWS, Azure, Google Cloud, IBM Cloud, VMware Cloud on AWS, Oracle Cloud, and Alibaba Cloud. These top players shifted positions throughout 2018, and expect more shifting to continue this year and probably for years to come.

Driving this interest for the next couple of years, at least, is application modernization. Clients are discovering that the real value of cloud comes in a hybrid, multi-cloud world, says Meenagi Venkat, Vice President of Technical Sales & Solutioning at IBM Cloud. In this model, legacy applications are modernized with a real microservices architecture and with AI embedded in the application. He does not fully explain where the AI comes from or how it is embedded; maybe I missed something. Venkat wrote what he calls a 12-ingredient recipe for application modernization here. DancingDinosaur will highlight a couple of the ingredients below; click the preceding link to see them all.

To begin, when you modernize a large portfolio of several thousand applications in a large enterprise, you need some common approaches. At the same time, the effort must allow teams to evolve to a microservices-based organization where each microservice is designed and delivered with great independence.

Start by fostering a startup culture. Fostering a startup culture that allows for fast failure is one of the most critical ingredients when approaching a large modernization program. The modernization will involve sunsetting some applications, breaking some down, and using partner services in others. A startup culture based on methods such as IBM Garage Method and Design Thinking will help bring the how-to of the culture shift.

Then, innovate via product design, Venkat continues. A team heavy with developers and no product folks is likely to focus on technical coolness rather than product innovation. Hence, these teams should be led by product specialists who deliver the business case for new services or client experience.

And don’t neglect security. Secure DevOps will require embedding security skills in the scrum teams, with a product owner leading the team. Focusing on the product and designing in security (and compliance with various regimes) at the start will allow the scaling of microservices and engender trust in the data and AI layers. Venkat puts this after design and the startup culture; in truth, it should be a key part of the startup culture.


Factsheets for AI

December 21, 2018

Depending on when you check in on the IBM website, the primary technology trend for 2019 is quantum computing, or hybrid clouds, or blockchain, or artificial intelligence, or any of a handful of others. Maybe IBM does have enough talented people, resources, and time to do it all well. But somehow DancingDinosaur is dubious.

There is an old tech industry saying: you can have it right, fast, or cheap—pick 2. When it comes to AI, depending on your choices and patience, you could win an attractive share of the projected $83 billion AI industry by 2021, or of the estimated $200 billion AI market by 2025, according to VentureBeat.

IBM sees the technology industry at a pivotal moment in the path to mass adoption of artificial intelligence (AI). Google subsidiary DeepMind is leveraging AI to determine how to refer optometry patients. Haven Life is using AI to extend life insurance policies to people who wouldn’t traditionally be eligible, such as people with chronic illnesses and non-U.S. citizens. And Google self-driving car spinoff Waymo is tapping it to provide mobility to elderly and disabled people.

But despite the good AI is clearly capable of doing, doubts abound over its safety, transparency, and bias. IBM believes part of the problem is a lack of standard practices.

As a result, there’s no consistent, agreed-upon way AI services should be created, tested, trained, deployed, and evaluated, observes Aleksandra Mojsilovic, head of AI foundations at IBM Research and co-director of the AI Science for Social Good program. To clear up the ambiguity surrounding AI, Mojsilovic and colleagues propose voluntary factsheets, more formally called a Supplier’s Declaration of Conformity (DoC). The goal: increasing the transparency of particular AI services and engendering trust in them.

Such factsheets alone could give AI offerings a competitive advantage in the marketplace. Factsheets could also provide explainability around susceptibility to adversarial attacks, issues that must be addressed, along with fairness and robustness, for AI services to be trusted, Mojsilovic continued. Factsheets take away the black-box perception of AI and render an AI system understandable by both researchers and developers.

Three core pillars form the basis for trust in AI systems: fairness, robustness, and explainability. Late in her piece, Mojsilovic introduces a fourth pillar, lineage, which concerns an AI system’s history. Factsheets would answer questions ranging from system operation and training data to underlying algorithms, test setups and results, performance benchmarks, fairness and robustness checks, intended uses, maintenance, and retraining. More granular topics might include the governance strategies used to track the AI service’s data workflow, the methodologies used in testing, and bias mitigations performed on the dataset.
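
IBM has not published a factsheet schema, so the following Python sketch is hypothetical; it simply turns the fields Mojsilovic describes into a simple data structure, with an invented example service filled in:

```python
from dataclasses import dataclass, field

@dataclass
class AIFactsheet:
    """Hypothetical Supplier's Declaration of Conformity for an AI service,
    with fields drawn from the questions a factsheet would answer."""
    service_name: str
    intended_uses: list
    training_data: str           # provenance of the training set
    algorithms: str              # underlying model/algorithm description
    test_results: dict           # test setups, results, benchmarks
    fairness_checks: list        # bias mitigations performed
    robustness_checks: list      # e.g. adversarial-attack susceptibility
    lineage: list = field(default_factory=list)  # system history (4th pillar)

# An entirely invented example service.
sheet = AIFactsheet(
    service_name="loan-risk-scorer",
    intended_uses=["consumer credit pre-screening"],
    training_data="2015-2018 anonymized loan outcomes",
    algorithms="gradient-boosted trees",
    test_results={"accuracy": 0.91},
    fairness_checks=["demographic parity audit"],
    robustness_checks=["input perturbation test"],
)
print(sheet.service_name)  # loan-risk-scorer
```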

For natural language processing algorithms specifically, the researchers propose data statements that would show how an algorithm might be generalized, how it might be deployed, and what biases it might contain.

Natural language processing systems aren’t as fraught with controversy as, say, facial recognition, but they’ve come under fire for their susceptibility to bias. IBM, Microsoft, Accenture, Facebook, and others are actively working on automated tools that detect and minimize bias, and companies like Speechmatics and Nuance have developed solutions specifically aimed at minimizing the so-called accent gap, the tendency of voice recognition models to skew toward speakers from certain regions. But in Mojsilovic’s view, documents detailing the ins and outs of systems (factsheets) would go a long way toward restoring the public’s faith in AI.

Fairness, safety, reliability, explainability, robustness, accountability: all agree that they are critical. Yet to achieve trust in AI, making progress on these issues alone will not be enough; it must be accompanied by the ability to measure and communicate the performance levels of a system on each of these dimensions, she wrote. Understanding and evaluating AI systems is an issue of utmost importance for the AI community, one IBM believes industry, academia, and AI practitioners should be working on together.


Syncsort Expands Ironstream with EView

December 10, 2018

While IBM is focused on battling the hyperscalers for cloud dominance and trying to overcome the laws of physics with quantum computing, a second tier of mainframe ISVs is trying to advance mainframe data center performance. For instance, late in November Syncsort acquired EView Technology, of Raleigh, NC, to integrate mainframe and IBM i data into its enterprise IT management platform, Ironstream.


How EView works with the mainframe

EView would seem a predictable choice for a Syncsort strategic acquisition. It also can be seen as yet another sign that value today lies in efficient data integration and analysis. In this case, Syncsort bolstered its capability to harvest log data originating on IBM i and mainframes through the acquisition of EView Technology, which builds big iron connectors for mainstream systems management tools.

Meanwhile, through multiple acquisitions Syncsort’s Ironstream has emerged as a leading option for forwarding critical security and operational machine data from mainframes and IBM i servers for deeper analysis. This, in turn, enables the data to be streamed and correlated with data from the rest of the enterprise within Splunk and other Security Information and Event Management (SIEM) and IT Operations Analytics (ITOA) products.
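
Syncsort hasn't published Ironstream's internals, but streaming machine data into Splunk generally means posting JSON events to Splunk's HTTP Event Collector (HEC). A hedged Python sketch of what such a forwarder's payload might look like; the log record and source label are invented, while the HEC envelope fields are standard Splunk:

```python
import json

def hec_event(raw_record, host, sourcetype="mainframe:syslog"):
    """Wrap a raw log record in a Splunk HTTP Event Collector envelope."""
    return {
        "event": raw_record,      # the forwarded machine data
        "host": host,             # originating system, e.g. an LPAR name
        "source": "ironstream",   # illustrative label for correlation in Splunk
        "sourcetype": sourcetype,
    }

# An invented z/OS-style log line, wrapped for forwarding; the actual POST
# would go to https://<splunk-host>:8088/services/collector/event with an
# 'Authorization: Splunk <token>' header.
payload = hec_event("IEF403I MYJOB - STARTED", host="SYSA")
print(json.dumps(payload))
```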

For Syncsort, EView was a typical acquisition target: it serves mainframe and IBM i customers, and its technology expands Ironstream's functionality. Not surprisingly, the two companies' products are architected differently. EView sends its data through a lightweight agent as an intermediary and makes active use of the ServiceNow platform, while Ironstream takes a more direct approach, sending data straight to Splunk.

Each approach has its strengths, says David Hodgson, Syncsort’s Chief Product Officer. One possibility: Syncsort could augment the EView agent with Ironstream while giving customers a choice. Those decisions will be taken up in earnest in January.

Furthermore, in addition to Splunk and the Elastic Stack, Ironstream will now be able to integrate this data with ServiceNow Discovery, Microsoft System Center, and Micro Focus Operations Manager. With the EView acquisition, Syncsort expands its footprint in mainframe data analytics. “ServiceNow in particular is attracting excitement,” said Hodgson.

Adds Josh Rogers, CEO of Syncsort: “The acquisition of EView strengthens and extends the reach of our Ironstream family of products, making data from traditional systems readily available to more of the key management platforms our customers depend on for those insights.”

In addition, EView’s enterprise-proven Intelligent Agent Technology will bolster Syncsort’s ability to offer organizations more options in integrating different data sources with advanced management platforms for a more comprehensive view.

Ironstream, meanwhile, is now part of the growing Syncsort Integrate family of products.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at technologywriter.com.

Are Quantum Computers Even Feasible?

November 29, 2018

IBM has toned down its enthusiasm for quantum computing. It was already backing off a bit at Think 2018 last spring. Now the company believes quantum computing will augment classical computing, potentially opening doors it once thought would remain locked indefinitely.

First IBM Q computation center

With its Bristlecone announcement Google trumped IBM with 72 qubits. Debating a few dozen qubits more or less may prove irrelevant. A number of quantum physics researchers have recently been publishing papers that suggest useful quantum computing may be decades away.

The latest comes from Mikhail Dyakonov in a piece titled The Case Against Quantum Computing, which appeared last month in IEEE Spectrum (spectrum.ieee.org). Dyakonov does research in theoretical physics at the Charles Coulomb Laboratory at the University of Montpellier, in France.

As Dyakonov explains: In quantum computing, the classical two-state circuit element (the transistor) is replaced by a quantum element called a quantum bit, or qubit. Like the conventional bit, it also has two basic states. But you already know this because DancingDinosaur covered it here and several times since.

But this is what you might not know: With the quantum bit, those two states aren’t the only ones possible. That’s because the spin state of an electron is described as a quantum-mechanical wave function. And that function involves two complex numbers, α and β (called quantum amplitudes), which, being complex numbers, have real parts and imaginary parts. Those complex numbers, α and β, each have a certain magnitude, and, according to the rules of quantum mechanics, their squared magnitudes must add up to 1.

Dyakonov continues: In contrast to a classical bit, a qubit can be in any of a continuum of possible states, as defined by the values of the quantum amplitudes α and β. This property is often described by the statement that a qubit can exist simultaneously in both of its ↑ and ↓ states. Yes, quantum mechanics often defies intuition.

So while IBM, Google, and other classical computer providers quibble about 50 qubits or 72 or even 500 qubits, to Dyakonov this is ridiculous. The real number will be astronomical, as he explains: Experts estimate that the number of qubits needed for a useful quantum computer, one that could compete with your laptop in solving certain kinds of interesting problems, is between 1,000 and 100,000. So the number of continuous parameters describing the state of such a useful quantum computer at any given moment must be at least 2^1,000, which is to say about 10^300. That's a very big number indeed, much greater than the number of subatomic particles in the observable universe.
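Dyakonov's arithmetic is easy to verify. The short Python sketch below (the amplitudes are chosen arbitrarily for illustration) checks the normalization rule for a single qubit and counts the digits of 2^1,000, the low end of his estimate:

```python
import cmath
import math

# One qubit: two complex amplitudes (alpha, beta) with |alpha|^2 + |beta|^2 = 1.
# An equal superposition of the up and down states, with an arbitrary relative phase:
alpha = 1 / math.sqrt(2)
beta = cmath.exp(1j * math.pi / 4) / math.sqrt(2)  # the phase is also a continuous parameter

norm = abs(alpha) ** 2 + abs(beta) ** 2
print(round(norm, 10))  # 1.0 — the squared magnitudes sum to 1

# An n-qubit register is described by 2**n complex amplitudes.
n = 1000  # low end of the "useful quantum computer" estimate
digits = len(str(2 ** n))
print(digits)  # 302 decimal digits, i.e. roughly 10^300 parameters
```

For comparison, the number of subatomic particles in the observable universe is usually put at around 10^80, a number with a mere 81 digits.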

Just in case you missed the math, he repeats: A useful quantum computer [will] need to process a set of continuous parameters that is larger than the number of subatomic particles in the observable universe.

Before you run out to invest in a quantum computer with the most qubits you can buy, you would be better served joining IBM's Q Experience and experimenting with it on IBM's nickel. Let them wrestle with the issues Dyakonov brings up.

Then, Dyakonov concludes: I believe that such experimental research is beneficial and may lead to a better understanding of complicated quantum systems. I'm skeptical that these efforts will ever result in a practical quantum computer. Such a computer would have to be able to manipulate—on a microscopic level and with enormous precision—a physical system characterized by an unimaginably huge set of parameters, each of which can take on a continuous range of values. Could we ever learn to control the more than 10^300 continuously variable parameters defining the quantum state of such a system? My answer is simple. No, never.

I hope my high school science teacher who enthusiastically introduced me to quantum physics has long since retired or, more likely, passed on. Meanwhile, DancingDinosaur expects to revisit quantum regularly in the coming months or even years.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at technologywriter.com.

IBM’s Multicloud Manager for 2nd Gen Hybrid Clouds

November 15, 2018

A sign that IBM is serious about hybrid cloud is its mid-October announcement of its new Multicloud Manager, which promises an operations console for companies as they increasingly incorporate public and private cloud capabilities with existing on-premises business systems. Meanwhile, research from Ovum suggests that 80 percent of mission-critical workloads and sensitive data are still running on business systems located on-premises.

$1 Trillion or more hybrid cloud market by 2020

Still, the potential of the hybrid cloud market is huge, $1 trillion or more within just a few years, IBM projects. If IBM found itself crowded out by the big hyperscalers—AWS, Google, Microsoft—in the initial rush to the cloud, it is hoping to leapfrog into the top ranks with the next generation of cloud: hybrid clouds.

And this is exactly what Red Hat and IBM hope to gain together. Both believe they will be well positioned to accelerate hybrid multi-cloud adoption by tapping each company's leadership in Linux, containers, Kubernetes, multi-cloud management, and automation, as well as by leveraging IBM's core of large enterprise customers, bringing them into the hybrid cloud.

The result should be a mixture of on-premises, off-premises, and hybrid clouds, based on open standards, flexible modern security, and solid hybrid management across all of them.

The company’s new Multicloud Manager runs on its IBM Cloud Private platform, which is based on Kubernetes container orchestration technology, described as an open-source approach for ‘wrapping’ apps in containers, and thereby making them easier and cheaper to manage across different cloud environments – from on-premises systems to the public cloud. With Multicloud Manager, IBM is extending those capabilities to interconnect various clouds, even from different providers, creating unified systems designed for increased consistency, automation, and predictability. At the heart of the new solution is a first-of-a-kind dashboard interface for effectively managing thousands of Kubernetes applications and spanning huge volumes of data regardless of where in the organization they are located.

Adds Arvind Krishna, Senior Vice President, IBM Hybrid Cloud: “With its open source approach to managing data and apps across multiple clouds” an enterprise can move beyond the productivity economics of renting computing power to fully leveraging the cloud to invent new business processes and enter new markets.

This new solution should become a driver for modernizing businesses. As IBM explains: if a car rental company uses one cloud for its AI services, another for its bookings system, and continues to run its financial processes using on-premises computers at offices around the world, IBM Multicloud Manager can span the company’s multiple computing infrastructures enabling customers to book a car more easily and faster by using the company’s mobile app.

Notes IDC’s Stephen Elliot, Program Vice President:  “The old idea that everything would move to the public cloud never happened.” Instead, you need multicloud capabilities that reduce the risks and deliver more automation throughout these cloud journeys.

Just last month IBM announced a number of companies are starting down the hybrid cloud path by adopting IBM Cloud Private. These include:

New Zealand Police, NZP, is exploring how IBM Cloud Private and Kubernetes containers can help to modernize its existing systems as well as quickly launch new services.

Aflac Insurance is adopting IBM Cloud Private to enhance the efficiency of its operations and speed up the development of new products and services.

Kredi Kayıt Bürosu (KKB) provides the national cloud infrastructure for Turkey’s finance industry. Using IBM Cloud Private KKB expects to drive innovation across its financial services ecosystem.

Operating in a multi-cloud environment is becoming the new reality for most organizations, and vendors are rushing to sell multi-cloud tools: not just IBM's Multicloud Manager but HPE OneSphere, RightScale Multi-Cloud Platform, Datadog Cloud Monitoring, Ormuco Stack, and more.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at technologywriter.com.
