
IBM IBV Sees Shift in Pandemic Consumer Attitude

June 25, 2020

Do you wonder how this pandemic is going to end? Or when? Or what the world will be like when it actually does end, if it does, and how we will even know?

IBM quantum computing researcher

IBM’s Institute for Business Value (IBV), an IBM research group, was asking similar questions. It polled more than 18,000 U.S. adults in May and early June to understand how COVID-19 has affected their perspectives on topics that include remote work, the return to the workplace, where they want to live, how they want to shop, and more.

IBV’s results are not exactly encouraging. For example, it found that consumers are preparing themselves for more permanent changes in behavior because of the pandemic and their fears about future outbreaks. Two of every three respondents said they were concerned about a second wave of COVID-19 hitting later in 2020. More than 60 percent said they believed there were likely to be more global pandemic events like COVID-19 in the future.

The research also suggests that organizations in every industry must pay attention to their customers’ shifting preferences. And they must respond with agility: by adopting technology, rethinking processes and, most importantly, addressing culture in order to emerge from the pandemic smarter and stronger, say the researchers.

DancingDinosaur is not nearly as methodical as the researchers at IBV. But having spent nearly four months being bombarded with solicitations for almost anything that can be squeezed into Zoom, I have been able to form some opinions. The first is how ingenious and creative a lot of marketers have become in repackaging their previously tedious messages for what has almost overnight emerged as a virtual, Zoom-like world.

For decades DancingDinosaur has dodged meetings like a plague, or maybe a pandemic. But some marketers have managed to tease me into attending a few virtual Zoom sessions, which, surprisingly, were informative, useful, and concise. When the pandemic is finally done and gone, marketers may never get DancingDinosaur into a convention center or seminar venue again. Not when it is so easy to click into a meeting and, as importantly, so convenient to click Leave Meeting.

IBV’s research appears to have uncovered some interesting behaviors. For instance, nearly one in five urban residents indicated they would definitely relocate or would consider moving to suburban or rural areas as a result of the pandemic. Fewer than 1 in 10 indicated they now found living in an urban area more appealing. 

That makes sense. If DancingDinosaur were quarantined in a one-bedroom or studio condo for weeks or months, he’d never want to do that again, and he suspects you wouldn’t either, no matter how tempting the restaurants might have been back when you could actually go into them.

Another set of IBV data points bodes badly for combating climate change. Young climate change activist Greta Thunberg, please forgive them. The researchers found 25 percent of respondents said they would use their personal vehicles exclusively as their mode of transport, and an additional 17 percent said they’d use them more than before. A full 60 percent of those who want to use a personal vehicle but don’t own one said they would buy one. The remainder in this group said they would rent a vehicle until they felt safe using shared mobility.

IBV also looked at work-from-home. Before COVID-19 containment measures went into effect, less than 11% of respondents worked from home. As of June 4, that percentage had grown to more than 45%. What’s more, 81% of respondents—up from 75% in April—indicated they want to continue working remotely at least some of the time.  More than half—61%—would like this to become their primary way of working. 

DancingDinosaur spent his entire career working from home. It can be a great life. Of course,  I didn’t have to educate my children at home or on short notice with minimal guidance. They went to public school and summer camp. When they came home from school each day, it made a great excuse for me to take a cookie break with them. I do miss not having my cookie break partners. They married great guys and, if I set any kind of proper example, they now have cookie breaks with them instead.

Alan Radding is DancingDinosaur, a veteran information technology analyst, writer, and ghost-writer still working from home in the Boston area. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/ 

D-Wave and NEC Advance Quantum Computing

June 22, 2020

IBM boasts 18 quantum computer models, differentiated by their qubit counts, but it isn’t the only player staking out the quantum market. Last week D-Wave, another early shipper of quantum systems, announced a joint quantum product development and marketing initiative with NEC, which made a $10 million investment in D-Wave.

D-Wave NEC Quantum Leap

The two companies, according to the announcement,  will work together on the development of hybrid quantum/classical technologies and services that combine the best features of classical computers and quantum computers; the development of new hybrid applications that make use of those services; and joint marketing and sales go-to-market activities to promote quantum computing. Until quantum matures, expect to see more combinations of quantum and classical computing as companies try to figure out how these seemingly incompatible technologies can work together.

For example, the two companies suggest they will create practical business and scientific quantum applications in fields ranging from transportation to materials science to machine learning, using D-Wave’s Leap with new joint hybrid services. Or they might apply D-Wave’s collection of over 200 early customer quantum applications to six markets identified by NEC, such as finance, manufacturing, and distribution.
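For a sense of what such a hybrid service looks like in practice, here is a minimal sketch using D-Wave’s open source Ocean SDK and its Leap hybrid solver. It assumes a configured Leap account and API token; the two-variable QUBO is a toy stand-in for the transportation or materials-science formulations mentioned above.

```python
# A toy hybrid quantum/classical job against D-Wave's Leap cloud service,
# using the Ocean SDK. Assumes `dwave-ocean-sdk` is installed and a Leap
# API token is configured (e.g., via `dwave config create`).
import dimod
from dwave.system import LeapHybridSampler

# Toy QUBO: minimize x0 + x1 - 2*x0*x1, whose optima are x0 == x1.
Q = {('x0', 'x0'): 1.0, ('x1', 'x1'): 1.0, ('x0', 'x1'): -2.0}
bqm = dimod.BinaryQuadraticModel.from_qubo(Q)

# The Leap hybrid solver decides what runs classically and what runs on
# the quantum processing unit -- the hybrid model described above.
sampler = LeapHybridSampler()
result = sampler.sample(bqm)
print(result.first.sample, result.first.energy)
```

The point of the hybrid pattern is that the caller never chooses which piece of the problem the quantum hardware sees; the solver makes that split, which is what lets classical and quantum machines cooperate before quantum matures.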

“We are very excited to collaborate with D-Wave. This announcement marks the latest of many examples where NEC has partnered with universities and businesses to jointly develop various applications and technologies. This collaborative agreement aims to leverage the strengths of both companies to fuel quantum application development and business value today,” said Motoo Nishihara, Executive Vice President and CTO, NEC.

The two companies will also explore the possibility of enabling the use of NEC’s supercomputers on D-Wave’s Leap quantum cloud service.

“By combining efforts with NEC, we believe we can bring even more quantum benefit to the entire Japanese market that is building business-critical hybrid quantum applications in both the public and private sectors,” said Alan Baratz, CEO of D-Wave. He added: “We’re united in the belief that hybrid software and systems are the future of commercial quantum computing. Our joint collaboration will further the adoption of quantum computing in the Japanese market and beyond.”

IBM continues to be the leader in quantum computing, boasting 18 quantum computers of various qubit counts. And they are actually available for use via the internet, where IBM keeps them running and sufficiently cold, a fraction of a degree above absolute zero, to ensure computational stability. Quantum computers clearly are not something you want to buy for your data center.
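Access really is that simple. Here is a minimal sketch using IBM’s open source Qiskit toolkit as it existed in mid-2020: it builds a two-qubit Bell-state circuit and runs it on the bundled local simulator. Pointing the same circuit at one of IBM’s real cloud-hosted machines requires a free IBM Quantum account and API token.

```python
# A minimal Qiskit (circa 2020) example: build an entangled two-qubit
# circuit and run it on the local simulator that ships with Qiskit.
from qiskit import QuantumCircuit, Aer, execute

qc = QuantumCircuit(2, 2)
qc.h(0)          # put qubit 0 into superposition
qc.cx(0, 1)      # entangle qubit 1 with qubit 0
qc.measure([0, 1], [0, 1])

backend = Aer.get_backend('qasm_simulator')
counts = execute(qc, backend, shots=1024).result().get_counts()
print(counts)    # roughly half '00' and half '11': the entangled signature
```

On real hardware the same counts come back noisier, which is exactly why IBM keeps those machines so cold.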

But other companies are rushing into the market. Google operates a quantum computer lab with five machines and Honeywell has six quantum machines, according to published reports. Others include Microsoft and Intel. Plus there are startups: IonQ, Quantum Circuits, and Rigetti Computing. All of these have appeared in earlier DancingDinosaur posts, and this blog just hopes to live long enough to see useful quantum computing come about.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/

IBM Wazi Cloud-Native DevOps for Z

June 12, 2020

In this rapidly evolving world of hybrid and multicloud systems, organizations are required to quickly evolve their processes and tooling to address business needs. Foremost among those needs are development environments that include IBM Z as part of their hybrid solution, says Sanjay Chandru, Director, IBM Z DevOps.

IBM’s goal, then, is to provide a cloud-native developer experience for the IBM Z that is consistent and familiar to all developers. That requires cross-platform consistency in tooling for application programmers on Z, who will need to deliver innovation faster and without the backlogs that have been expected in the past.

Wazi, along with OpenShift, is another dividend from IBM’s purchase of Red Hat. Here is where IBM Wazi for Red Hat CodeReady Workspaces comes in: an add-on to IBM Cloud Pak for Applications. It allows developers to use an industry-standard integrated development environment (IDE), such as Microsoft Visual Studio Code (VS Code) or Eclipse, to develop and test IBM z/OS applications in a containerized, virtual z/OS environment on Red Hat OpenShift running on x86 hardware. The container creates a sandbox.

The combination with IBM Cloud Pak for Applications goes beyond what Zowe, the Open Mainframe Project’s open source framework for z/OS, offers, enabling Z development and operations teams to securely manage, control, script, and develop on the mainframe like any other cloud platform. Developers who are not used to z/OS and IBM Z, which is most developers, can now become productive faster in a familiar and accessible working environment, effectively improving DevOps adoption across the enterprise.

As IBM explained: Wazi integrates seamlessly into a standard, Git-based open tool chain to enable continuous integration and continuous delivery (CI/CD) as part of a fully hybrid DevOps process encompassing distributed and Z systems.
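IBM doesn’t spell out the pipeline internals in its announcement, but one step of such a Git-based toolchain might look like the following sketch, which drives the open source Zowe CLI from Python to push freshly committed source to z/OS and submit a build job. The data-set and member names are hypothetical.

```python
# One illustrative CI step driving the open source Zowe CLI from Python:
# upload source that was just checked out of Git, then submit the build JCL.
# Assumes Zowe CLI is installed with a z/OSMF profile configured; the
# data-set and member names below are hypothetical.
import subprocess

def run(cmd):
    print('$', ' '.join(cmd))
    subprocess.run(cmd, check=True)  # raise, failing the pipeline, on error

# Push the COBOL source from the Git workspace to a z/OS data set.
run(['zowe', 'zos-files', 'upload', 'file-to-data-set',
     'src/payroll.cbl', 'DEV.COBOL.SRC(PAYROLL)'])

# Submit the compile/link job; a bad return code stops the pipeline.
run(['zowe', 'zos-jobs', 'submit', 'data-set', 'DEV.JCL(BUILD)'])
```

A real pipeline would wrap steps like these in whatever CI server the shop already uses; the point is that the mainframe becomes just another stage in an ordinary Git-triggered flow.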

IBM continues: Wazi is offered with deployment choices so that organizations can flexibly rebalance entitlement over time based on business needs. In short, an organization can protect and leverage its IBM Z investments with robust, standard development capabilities that encompass IBM Z and multicloud platforms.


As one large IBM customer put it: “We want to make the mainframe accessible. Use whatever tool you are comfortable with – Eclipse / IDz / Visual Studio Code. All of these things we are interested in to accelerate our innovation on the mainframe.”

An IT service provider added in IBM’s Wazi announcement: “Our colleagues in software development have been screaming for years for a dedicated testing environment that can be created and destroyed rapidly.” Well, now they have it in Wazi.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/

BMC Finalizes Compuware Acquisition 

June 4, 2020

On June 1 BMC completed its acquisition of Compuware. Both were leading mainframe independent software vendors (ISVs) and providers of mainframe application development, delivery, and performance solutions. Recently the mainframe ISV space has seen plenty of action. Just a week ago DancingDinosaur was writing about Syncsort renaming itself Precisely after completing its acquisition of the software and data business of Pitney Bowes, a company best known for its postage meters.


Given IBM’s lackluster performance as a mainframe software application vendor, albeit one somewhat constrained by legalities, a healthy mainframe ISV market is good for everyone who wants to thrive in the mainframe space. And there are others DancingDinosaur hasn’t covered recently, such as DataKinetics, a mainframe performance and optimization provider, and Software Diversified Services (SDS), which specializes in mainframe security.

In some ways DancingDinosaur is saddened that the number of independent mainframe ISVs has dropped by one, but is hopeful that those that remain will be stronger, more innovative, and better for the mainframe space overall. As BMC says in its announcement, customers will benefit from an integrated DevOps toolchain combining mainframe operations management with agile application development and delivery. Everybody with a stake in the mainframe space should wish them success.

As BMC puts it: the strategic combination of the two companies builds on the success of BMC’s Automated Mainframe Intelligence (AMI) and Compuware’s Topaz suite, ISPW technology, and classic product portfolios to modernize mainframe environments. BMC with Compuware now enables automation and intelligent operations with agile development and delivery – empowering the next generation of mainframe developers and operations teams to excel when working with mainframe programming languages, applications, data, infrastructure, and security.

And the industry analysts say in the announcement: “Adding Compuware’s Topaz software development environment to the BMC portfolio is another step in the direction of targeting the enterprise developer. With Topaz, developers take a modern approach to building, testing, and deploying mainframe applications. This move should allow BMC to spread the word that modern tools matter for the mainframe engineer,” wrote Christopher Condo, Chris Gardner, and Diego Lo Giudice at Forrester Research.

In addition, 50 percent of respondents in a 2019 Forrester study reported that they plan to grow their use of the mainframe over the next two years, and 93% of respondents in the 2019 BMC Mainframe Survey believe in the long-term and new-workload strength of the platform.

For the mainframe shop, the newly unified portfolio will enable enterprises to:

  • Leverage the processing power, stability, security, and agile scalability of the mainframe
  • Scale Agile and DevOps methods with a fully integrated DevOps toolchain, allowing mainframe applications to get to market more quickly and efficiently without compromising quality
  • Combine the self-analyzing, self-healing, and self-optimizing power of the BMC AMI suite of products, which increases mainframe availability, efficiency, and security while mitigating risk, with the Compuware Topaz suite, which empowers the next generation of developers to build, analyze, test, deploy, and manage mainframe applications
  • Create a customer experience that meets the business demands of the digital age, jumpstarting their Autonomous Digital Enterprise journey

BMC’s AMI brings an interesting twist: it aims to leverage AI, machine learning, and predictive analytics to achieve a self-managing mainframe. On the security side, key elements include advanced network and system security with improved adherence to PCI DSS, HIPAA, SOX, FISMA, GDPR, ISO 27001, IRS Pub. 1075, NERC, and other industry standards for protecting data. Most helpful should be BMC AMI for Security’s ability to execute out-of-the-box scorecards for frequently audited areas.

Similarly, AMI can address areas like capacity management, optimizing mainframe capacity by addressing bottlenecks before they occur, boosting staff productivity, and delivering a right-sized, cost-optimized mainframe environment. AMI also addresses DevOps for the mainframe through application orchestration tools that automatically capture database changes and communicate them to the database administrator (DBA) while enforcing DevOps best practices.
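BMC doesn’t publish AMI’s algorithms, but the predictive idea is easy to illustrate. The sketch below, a deliberately simple stand-in for what AMI does with machine learning, fits a linear trend to recent utilization samples and estimates when the trend will cross a capacity threshold. The data and threshold are hypothetical.

```python
# A deliberately simple stand-in for predictive capacity management: fit a
# linear trend to daily peak CPU utilization and estimate how many days
# remain before the trend crosses a saturation threshold.
import numpy as np

def days_until_saturation(samples, threshold=90.0):
    """Days until the fitted trend crosses `threshold`, or None if the
    trend is flat or declining. `samples` are daily peak utilization (%)."""
    x = np.arange(len(samples))
    slope, intercept = np.polyfit(x, samples, 1)
    if slope <= 0:
        return None
    crossing_day = (threshold - intercept) / slope
    return max(0.0, crossing_day - (len(samples) - 1))

util = [62, 64, 63, 66, 68, 71, 70, 73, 75, 78]   # hypothetical samples
print(days_until_saturation(util))  # flags a bottleneck roughly a week out
```

The commercial tools layer far richer models on top, but the payoff is the same: the warning arrives before the bottleneck does.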

ISVs also can light a fire under IBM, especially now that it has Red Hat, as in the case of IBM’s Wazi, a cloud-native DevOps tool for the Z. That’s why we want a strong ISV community.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/

Syncsort Now Precisely After Pitney Bowes Acquisition

May 29, 2020

After announcing its acquisition of Pitney Bowes’ software and data business last August and completing the deal in December, Syncsort earlier this month rebranded itself as Precisely. The company, a long-established mainframe ISV, is positioning Precisely as a major player among enterprises seeking to handle large quantities of data in various ways.

Precisely combined and updated the Syncsort and Pitney Bowes product lines to span what the rebranded operation now describes as “the breadth of the data integrity spectrum” by offering data integration, data quality, and location intelligence tools.

The rebranded company’s solution portfolio spans five areas, based on use case:

  • Integrate, its data integration line, features Precisely Connect, Ironstream, Assure, and Syncsort.
  • Verify, its data quality unit, includes Precisely Spectrum Quality, Spectrum Context, and Trillium.
  • Locate, its location intelligence line, touts Precisely Spectrum Spatial, Spectrum Geocoding, MapInfo, and Confirm.
  • Enrich features Precisely Streets, Boundaries, Points of Interest, Addresses, and Demographics.
  • Engage aims to create seamless, personalized, omnichannel communications on any medium, anytime.

Adds Josh Rogers, CEO of Syncsort, now Precisely: “With the combination of Syncsort and Pitney Bowes software and data, we are creating in Precisely a new company that is focused on helping enterprises advance their use of data through expertise across data domains, disciplines and platforms.”

Rogers continued: “Advancements in storage, compute, analytics, and machine learning have opened up a world of possibilities for enhanced decision-making, but inaccuracies and inconsistencies in data have held back innovation and stifled value creation. Achieving data integrity is the next business imperative. Put simply, better data means better decisions, and Precisely offers the industry’s most complete portfolio of data integrity products, providing the link between data sources and analytics that helps companies realize the value of their data and investments.”

Precisely may be onto something by emphasizing the quality of data for decision making, which is just an amplification of the old GIGO (Garbage In, Garbage Out) principle, especially now as the volume, variety, and availability of data skyrocket. When edge devices begin generating new and different data, they will further compound these challenges. Making data-driven decisions has already become increasingly complex for even the largest enterprises.

Despite the proliferation of cloud-based analytics tools, studies published in Forbes, Harvard Business Review, and elsewhere found that 84 percent of CEOs do not trust the data they are basing decisions on, and with good reason: another study found almost half of newly created data records have at least one critical error. Meanwhile, the cost of noncompliance with new governmental regulations, including GDPR and CCPA, has created an even greater urgency for trusted data.
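Those error-rate findings are easy to believe once you try even basic record validation. The sketch below illustrates the kind of rule-based checking that data-quality tools such as Precisely’s Verify line automate at enterprise scale; the rules and records here are hypothetical, and this is not Precisely’s API.

```python
# A generic, illustrative sketch of rule-based data-quality checks: flag
# records where any field fails its validation rule. Not any vendor's API.
import re

RULES = {
    'email':   lambda v: re.fullmatch(r'[^@\s]+@[^@\s]+\.[^@\s]+', v or ''),
    'zip':     lambda v: re.fullmatch(r'\d{5}(-\d{4})?', v or ''),
    'revenue': lambda v: isinstance(v, (int, float)) and v >= 0,
}

def critical_errors(record):
    """Return the list of fields that fail their validation rule."""
    return [f for f, ok in RULES.items() if f in record and not ok(record[f])]

records = [
    {'email': 'jane@example.com', 'zip': '02134', 'revenue': 1200},
    {'email': 'not-an-email',     'zip': '0213',  'revenue': -5},
]
flawed = [r for r in records if critical_errors(r)]
print(f"{len(flawed)}/{len(records)} records have at least one critical error")
```

Scale that up to millions of records, addresses, and geocodes and the case for dedicated data-integrity tooling makes itself.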

Out of the gate, Precisely has more than 2,000 employees and 12,000 customers in more than 100 countries, including 90 of the Fortune 100. The company boasts annual revenue of over $600 million.

Prior to its acquisition Pitney Bowes delivered solutions, analytics, and APIs in the areas of ecommerce fulfillment, shipping and returns; cross-border ecommerce; office mailing and shipping; presort services; and financing.

Syncsort provides data integration and optimization software alongside location intelligence, data enrichment, customer information management, and engagement solutions. Together, the two product lines serve more than 11,000 enterprises and hundreds of channel partners worldwide.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/

Supercomputers Battle COVID-19

May 22, 2020

When the world finally defeats the coronavirus and COVID-19, a small part of the victory will go to massive computer power. As Dario Gil, Director of IBM Research, noted, 16 systems with more than 400 petaflops, 775,000 CPU cores, and 34,000 GPUs, and counting, are among the firepower.

Back in March DancingDinosaur reported here that IBM’s Summit, which the company boasts as the world’s most powerful supercomputer, was able to simulate 8,000 chemical compounds in a matter of days in a hunt for something that could impact the COVID-19 infection.

Writing this today, late in May, we already know that teams of medical researchers, scientists, technology experts, and a vast array of other talents are working almost non-stop to find, develop, test, and mass-produce a cure, with luck in the form of a vaccine. We should also note the countless nurses, doctors, aides, assistants, and hospital, food, and logistics staff of all types, along with those in outside support roles, who keep things working, feed staff, wheel patients around, and otherwise help to save lives.

As Gil explains: high-performance computing systems allow researchers to run very large numbers of calculations in epidemiology, bioinformatics, and molecular modeling–all the required science disciplines that need to be involved in whatever success is ultimately achieved. You can probably throw in chemistry and a few other areas of electronics and engineering as well. Without massive computer horsepower these experiments would take years to complete if worked by hand, or months if handled on slower, traditional computing platforms.
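To make the epidemiology piece concrete, here is a toy version of the kind of calculation Gil is describing: the classic SIR (susceptible-infected-recovered) compartment model. Supercomputers run vastly richer variants of this over enormous parameter sweeps; the parameters below are illustrative only.

```python
# A toy SIR epidemic model integrated with simple Euler steps. Research
# codes add age structure, mobility, stochasticity, and more, and sweep
# thousands of parameter combinations -- hence the need for supercomputers.
def sir(population, infected0, beta, gamma, days, dt=0.1):
    s, i, r = population - infected0, infected0, 0.0
    for _ in range(int(days / dt)):
        new_inf = beta * s * i / population * dt   # S -> I transitions
        new_rec = gamma * i * dt                   # I -> R transitions
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return s, i, r

# beta/gamma = 2.5 here, a reproduction number often cited in early
# COVID-19 estimates; all other numbers are illustrative.
s, i, r = sir(population=1_000_000, infected0=10, beta=0.5, gamma=0.2, days=120)
print(f"susceptible={s:,.0f} infected={i:,.0f} recovered={r:,.0f}")
```

Multiply a loop like that by millions of scenarios, then add molecular docking runs on the side, and the consortium’s 400 petaflops stop looking like overkill.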

These machines—more than 25 U.S.-based supercomputers with more than 400 petaflops of computing power—are now available for free to those working toward a vaccine or treatment against the virus, through the COVID-19 High Performance Computing Consortium.

It was created with government, academia and industry—including competitors, working side by side. IBM is co-leading the effort with the U.S. Department of Energy, which operates the National Laboratories of the United States. Google, Microsoft, Amazon, and Hewlett Packard Enterprise have joined, as well as NASA, the National Science Foundation, Pittsburgh Supercomputing Center, and six National Labs—Lawrence Livermore, Lawrence Berkeley, Argonne, Los Alamos, Oak Ridge, and Sandia, and others. And then there are academic institutions, including MIT, Rensselaer Polytechnic Institute, the University of Texas, Austin, and the University of California, San Diego.

The White House has been getting deservedly bashed for its slowness, incompetence, and narrow-minded bungling. However, Gil reports the White House’s Office of Science and Technology Policy has taken up an effort that can make a real difference. He adds: “I want to offer this promise: IBM will continue to explore everything in our power to use our technology and expertise to drive meaningful progress in this global fight.”

The first thing Gil’s team did was to spread the word to people who might be working on this on any or all fronts—from drug discovery and development with AI-led simulations to genomics, epidemiology and health systems.

He goes on: We need to understand the whole life cycle of this virus, all the gearboxes that drive it—how it encounters and infects the host cell and replicates inside it—with the goal of preventing it from producing viral particles. We need to know the molecular components, the proteins involved in the virus’s biochemistry, and then use computational modeling to see how we can interrupt the cycle. That’s the standard scientific methodology of drug discovery, but we want to amplify it and speed it up.

The virus has been exploding in humans for months, providing an abundance of samples for computer modeling and analysis, Gil continued. Scientists already are depositing samples into public data sources such as GenBank and the Protein Data Bank. There are many unknowns and assumptions, but a lot of proposals involve using the available protein structures to come up with potential molecular compounds that could lead to a therapeutic treatment or a vaccine. Let’s hope they have great success, the sooner the better.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/

Apps and Ecosystem Critical for 5G Edge Success

May 18, 2020

According to the gospel of IBM, Edge computing with 5G creates opportunities in every industry. It brings computation and data storage closer to where data is generated, enabling better data control, reduced costs, faster insights and actions, and continuous operations.

Edge computing IBM Cloud Architecture

By 2025, 75% of enterprise data will be processed more efficiently on devices at the edge, compared to only 10% today. Edge computing will eliminate the need to relay data that is acquired, and often used for decision making, in the field back to a data center for processing and storage.

In short, the combination of 5G and smart devices on the edge aids this growing flow of data and processing through the proliferation of a variety of clouds: private, public, multi, and hybrid. But more is needed.

To get things rolling, IBM announced a handful of applications and tools and an edge ecosystem. As IBM notes: organizations across industries can now fully realize the benefits of edge computing, including running AI and analytics at the edge to achieve insights closer to where the work is done and the results applied. These new solutions include:

  • IBM Edge Application Manager – an autonomous management tool that enables AI, analytics, and IoT enterprise workloads to be deployed and remotely managed, delivering real-time analysis and insight at scale. It aims to let a single administrator manage up to 10,000 edge nodes simultaneously. It is the first product powered by Open Horizon, which IBM has folded into the Linux Foundation.
  • IBM Telco Network Cloud Manager – runs on Red Hat OpenShift and Red Hat OpenStack, a cloud computing platform that virtualizes resources from industry-standard hardware, organizes them into clouds, and manages them to provide new services now and as 5G adoption expands.
  • A portfolio of edge-enabled applications and services, including IBM Visual Insights, IBM Production Optimization, IBM Connected Manufacturing, IBM Asset Optimization, IBM Maximo Worker Insights, and IBM Visual Inspector, all aiming to deliver the flexibility to deploy AI and cognitive applications and services at the edge and at scale.
  • Red Hat OpenShift, which manages containers with automated installation, upgrades, and lifecycle management throughout the container stack—the operating system, Kubernetes cluster services, and applications—on any cloud.
  • Dedicated IBM Services teams for edge computing and telco network clouds that draw on IBM’s expertise to deliver 5G and edge-enabled capabilities across all industries.

In addition, IBM is announcing the IBM Edge Ecosystem, through which an increasingly broad set of ISVs, GSIs and more will be helping enterprises capture the opportunities of edge computing with a variety of solutions built upon IBM’s technology. IBM is also creating the IBM Telco Network Cloud Ecosystem, bringing together a set of partners across the telecommunications industry that offer a breadth of network functionality that helps providers deploy their network cloud platforms. 

These open ecosystems of equipment manufacturers, networking and IT providers, and software providers include Cisco, Dell Technologies, Juniper Networks, Intel, NVIDIA, Samsung, Packet (an Equinix company), Hazelcast, Sysdig, Turbonomic, Portworx, Humio, Indra Minsait, Eurotech, Arrow Electronics, ADLINK, Acromove, Geniatech, SmartCone, CloudHedge, Altiostar, Metaswitch, F5 Networks, and ADVA as members.

Making the promise of edge computing a reality requires an open ecosystem with diverse participants. It also requires open standards-based, cloud native solutions that can be deployed and autonomously managed at massive scale throughout the edge and can move data and applications seamlessly between private data centers, hybrid multiclouds, and the edge. IBM has already enlisted dozens of organizations in what it describes as its open edge ecosystem.  You can try to join the IBM ecosystem or start organizing your own.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/

5G Joins Edge Technology and Hybrid Multicloud

May 11, 2020

At IBM’s virtual Think Conference the first week in May the company made a big play for edge computing and 5G together. 

From connected vehicles to intelligent manufacturing equipment, the proliferation of devices has resulted in unprecedented volumes of data at the edge. IBM is convinced those volumes will compound as 5G networks increase the number of connected mobile devices.

z15 T02 and the LinuxONE III LT2

Edge computing  and 5G networks promise to reduce latency while improving speed, reliability, and processing. This will deliver faster and more comprehensive data analysis, deeper insights, faster response times, and improved experiences for employees, customers, and their customers.

First gaining prominence with the Internet of Things (IoT) a few years back, edge computing is defined by IBM as a distributed computing framework that brings enterprise applications closer to where data is created and often remains, and where it can be processed. This is where decisions are made and actions taken.

5G stands for the Fifth Generation of cellular wireless technology. Beyond higher speed and reduced latency, 5G standards will have a much higher connection density, allowing networks to handle greater numbers of connected devices combined with network slicing to isolate and protect designated applications.

Today, 10% of data is processed at the edge, an amount IBM expects to grow to 75% by 2025. Specifically, edge computing enables:

  • Better data control and lower costs by minimizing data transport to central hubs and reducing vulnerabilities and costs
  • Faster insights and actions by tapping into more sources of data and processing that data there, at the edge
  • Continuous operations by enabling systems that run autonomously, reduce disruption, and lower costs because data can be processed by the devices themselves on the spot and where decisions can be made

In short: the growing number of increasingly capable devices and faster 5G processing have combined to drive the edge computing market beyond what the initial IoT proponents, who didn’t have 5G yet, envisioned. They also weren’t in a position to imagine the growth in the processing capabilities of edge devices in just the past year or two.

But that is starting to happen now, according to IDC: By 2023, half of the newly deployed on-premises infrastructure will be in critical edge locations rather than corporate datacenters, up from less than 10% today.

Also unimagined was the emergence of the hybrid multicloud, which IBM has only recently started to tout. The convergence of 5G, edge computing, and hybrid multicloud, according to the company, is redefining how businesses operate. As more organizations embrace 5G and the edge, modernizing networks to take advantage of the edge opportunity becomes feasible for the first time.

And all of this could play very well with the new z machines, the z15 T02 and the LinuxONE III LT2. These appear sufficiently capable to handle the scale of business edge strategies and hybrid cloud requirements for now. Or opt for the enterprise-class z15 if you need more horsepower.

By moving to a hybrid multicloud model, telcos can process data at both the core and network edge across multiple clouds, perform cognitive operations and make it easier to introduce and manage differentiated digital services. As 5G matures it will become the network technology that underpins the delivery of these services. 

Enterprises adopting a hybrid multicloud model that extends from corporate data centers (or public and private clouds) to the edge is critical to unlock new connected experiences. By extending cloud computing to the edge, enterprises can perform AI/analytics faster, run enterprise apps to reduce impacts from intermittent connectivity, and minimize data transport to central hubs for cost efficiency. 
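To make the data-transport point concrete, here is a minimal, illustrative sketch of that edge pattern: process sensor readings on the device and forward only the anomalies to the central hub. The sensor values and threshold are hypothetical.

```python
# Edge-side anomaly filtering: compute simple statistics on the device and
# forward only outlier readings to the central hub. Values are hypothetical.
from statistics import mean, stdev

def edge_filter(readings, sigma=2.0):
    """Return only readings more than `sigma` standard deviations from the
    local mean; routine readings stay (and can be handled) at the edge."""
    mu, sd = mean(readings), stdev(readings)
    return [x for x in readings if sd and abs(x - mu) > sigma * sd]

readings = [20.1, 19.8, 20.3, 20.0, 35.7, 19.9, 20.2]  # one faulty spike
to_hub = edge_filter(readings)
print(f"forwarding {len(to_hub)} of {len(readings)} readings: {to_hub}")
```

One reading crosses the wire instead of seven; scaled to thousands of devices reporting every second, that is the cost-efficiency argument in miniature.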

It’s time to start thinking about making edge part of your computing strategy.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/ 

IBM Power9 Certified for SAP HANA Enterprise Cloud

April 28, 2020

SAP HANA has again this year been designated a top performer in the cloud-native, multi-tenant, business intelligence segment by Gartner.  Driving its popularity is the broad interest in its wide base of SAP enterprise applications and the SAP Analytics Cloud,  a cloud-native, multi-tenant platform with a broad set of analytic capabilities. 

Behind the SAP cloud, increasingly, are IBM’s POWER9 servers. The SAP-managed, private cloud environment runs on IBM POWER9 systems, specifically the E980, which brings the industry’s largest virtualized server scalability at 24TB, more than enough for even the largest SAP HANA database applications to run entirely in memory, where they perform best. In truth, most HANA users don’t require 24TB, but the capacity is there if they need it.


IBM Power E980

IBM Power Systems has been certified for the SAP HANA Enterprise Cloud as a critical infrastructure platform provider for large in-memory usage. The goal is to simplify the IT infrastructure for the managed, private cloud environment. The service will run on IBM POWER9-based Power Systems E980 servers, which offer the industry’s largest virtualized server scalability for the HANA database. The E980 server lineup starts as small as 2 sockets and runs up to 16 sockets. 

The POWER9, notes IBM, more than provides the IT infrastructure for this mission-critical managed environment. It is a scalable and secured service designed to accelerate a user’s evolution on the path to cloud readiness, explains Vicente Moranta, Vice President, Offering Management for IBM’s Enterprise Linux on Power Systems. It provides capabilities that span the software and hardware stack through a comprehensive menu of functional and technical services, with the level of control in the SAP cloud that clients would expect on premises, all in one privately SAP-managed environment.

SAP HANA Enterprise Cloud users can take advantage of PowerVM, the virtualization engine built into the IBM POWER platform at the firmware level. PowerVM delivers better capabilities while avoiding the noisy-neighbor problem, in which multiple clients on the same box interfere with one another; micro-partitions and other advanced features help here. As a result, it delivers the largest SAP HANA scalability available in a scale-up system.

This combination is the result of a three-year collaboration between IBM Power Systems and SAP to provide virtualization on demand via hypervisor-defined features. These features give an SAP HANA LPAR the ability to match what a client wants, effectively avoiding long acquisition cycles and wasteful over-provisioning. Specifically, it provides what amounts to on-demand, accurately configured virtual systems with fine granularity, avoiding the need for SAP users to revert to bare-metal servers due to virtualization issues. SAP manages this work itself through POWER9 to achieve optimum performance.

The 2019 Information Technology Intelligence Consulting (ITIC) Reliability Update polled over 800 corporations from July through early September 2019, comparing the reliability and availability of over a dozen of the most widely deployed mainstream server platforms. Among them, IBM’s Power Systems, led by the POWER9, topped the field, registering a record low of 1.75 minutes of per-server downtime. Each of the mainstream servers studied delivered a solid five nines (99.999%) of inherent hardware reliability.

Not surprisingly, one server beat them all: the IBM Z mainframe delivered what ITIC called true fault tolerance, with six nines (99.9999%) of uptime for 89% of enterprise users. That translates into 0.74 second of downtime per server due to any inherent flaws in the server hardware. Just imagine how much you could accomplish in that 0.74 second.
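For readers who want the arithmetic behind the “nines,” the sketch below converts an availability percentage into an annual downtime budget. It assumes an annualized basis; the exact measurement window behind ITIC’s specific figures isn’t stated here.

```python
# The arithmetic behind "nines," assuming an annualized measurement basis
# (the window behind the specific ITIC figures above isn't stated here).
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def downtime_per_year(availability_pct):
    """Allowed downtime per year, in seconds, for a given availability %."""
    return SECONDS_PER_YEAR * (1 - availability_pct / 100)

for label, pct in [("five nines", 99.999), ("six nines", 99.9999)]:
    print(f"{label} ({pct}%): {downtime_per_year(pct):,.1f} seconds/year")
# five nines (99.999%):  315.6 seconds/year (about 5.3 minutes)
# six nines (99.9999%):   31.6 seconds/year
```

Each added nine shrinks the downtime budget by a factor of ten, which is why the jump from five to six nines is a bigger engineering feat than it looks.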

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/ 

