
Z Open Terminal Emulation

September 25, 2020

You can spend a lot of time working with the Z and not find much new in terminal emulation. But there actually are a few new things, mainly because times change and people work differently, using different devices and doing different things. Sure, it all goes back to the mainframe, but it is a new world.

Terminal emulator screen

Rocket Software's latest wrinkle in terminal emulation is BlueZone Web, which promises to simplify using the mainframe by enabling users to access host-based applications anywhere and on any type of device. It is part of a broader initiative Rocket calls Open AppDev for Z. From DancingDinosaur's perspective its strength lies in being compliant with Zowe, an open source development framework from the Open Mainframe Project. This makes IBM Z a valuable open platform for an enterprise DevOps infrastructure.

Zowe is the first open source framework for z/OS. It enables DevOps teams to securely manage, control, script, and develop on the mainframe like any other cloud platform. It was launched in a collaboration of initial contributors IBM, CA Technologies, and Rocket Software, and is supported by the Open Mainframe Project. The goal is to cultivate the next generation of mainframe developers, whether or not they have Z experience. Zowe promotes a faster team on-ramp to productivity, collaboration, knowledge sharing, and communication.

This is the critical thing about Zowe: you don't need Z platform experience. Open source developers and programmers can use a wide range of popular open source tools, languages, and technologies, the tools they already know. Sure, it would be nice to find an experienced z/OS developer, but that is increasingly unlikely, making Zowe a much better bet.

According to the Open Mainframe Project, IBM's initial contribution to Zowe was an extensible z/OS framework that provides REST-based services and APIs that allow even inexperienced developers to rapidly use new technology, tools, languages, and modern workflows with z/OS.

IBM continues to invest in the open source environment through Zowe and other open source initiatives. Zowe also has help from Rocket Software, which provides a web user interface, and CA, which handles the command line interface. You can find more about Zowe on the Open Mainframe Project website.
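Zowe's REST-based services sit on top of z/OSMF. As a rough sketch of what that looks like from the open source side, here is how a Python script might assemble a request to the z/OSMF REST files API to list data sets; the host, port, and user shown are hypothetical, and an actual call would need credentials and a reachable z/OSMF instance.

```python
# Sketch: listing data sets through the z/OSMF REST files API that Zowe
# builds on. Host, port, and high-level qualifier here are hypothetical.
from urllib.parse import urlencode

def zosmf_list_datasets_url(host: str, port: int, hlq: str) -> str:
    """Build the REST URL that lists data sets under a high-level qualifier."""
    query = urlencode({"dslevel": f"{hlq}.*"})
    return f"https://{host}:{port}/zosmf/restfiles/ds?{query}"

url = zosmf_list_datasets_url("mainframe.example.com", 443, "MYUSER")
print(url)
# An actual request would add basic auth and the X-CSRF-ZOSMF-HEADER header,
# e.g. with urllib.request or the requests library.
```

The same operation is a one-liner with the Zowe CLI, which wraps these endpoints for scripting.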

IBM introduced Linux, a leading open source technology, to the Z over 20 years ago. Since then it has expanded the range of the Z through open source tools that can be combined with products developed by different communities. That mixing, however, can create unintentional regulatory and security risks. Rocket Open AppDev for Z helps mitigate those risks, offering a solution that provides developers with a package of the open tools and languages they want, along with the security, easy management, and support IBM Z customers require.

"We wanted to solve three common customer challenges that have prevented enterprises from leveraging the flexibility and agility of open software within their mainframe environment: user and system programmer experience, security, and version latency," said Peter Fandel, Rocket's Product Director of Open Software for Z. "With Rocket Open AppDev for Z, we believe we have provided an innovative secure path forward for our customers," he adds. "Businesses can now extend the mainframe's capabilities through the adoption of open source software, making IBM Z a valuable platform for their DevOps infrastructure."

But there is an even bigger question here, one that Rocket turned to IDC to answer: should businesses that run mission-critical workloads on IBM Z or IBM i remain on these platforms and modernize them by leveraging the innovative tools that exist today, or replatform by moving to an alternative on-premises solution, typically x86, or the cloud?

IDC surveyed more than 440 businesses that have either modernized their IBM Z or IBM i platforms or replatformed. The results: modernizers incur lower costs for their modernization initiatives than replatformers; modernizers are more satisfied with the new capabilities of their modernized platform than replatformers; and modernizers achieve a new baseline in which they pay less for hardware, software, and staffing. There is much more of interest in this study, which DancingDinosaur will explore in the weeks or months ahead.

Alan Radding, a veteran information technology analyst, writer, and ghost-writer, is DancingDinosaur. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/.

IBM Roadmap for Quantum Computing

September 18, 2020

IBM started quantum computing in the cloud a few years back by making its small-qubit machines, and now even larger ones, available there. In mid-September, IBM released a quantum computing roadmap intended to take it from today's devices to million-plus-qubit machines. The first benchmark is a 1,000-plus-qubit device, IBM Condor, targeted for the end of 2023. Its latest challenge: going beyond what's possible on conventional computers by running revolutionary applications across industries.

control lots of qubits for long enough with few errors

The key is making quantum computers stable by keeping them cold. To that end IBM is developing a dilution refrigerator larger than any currently available commercially. Such a refrigerator puts IBM on a course toward a million-plus qubit processor. 

The IBM Quantum team builds quantum processors that rely on the mathematics of elementary particles to expand their computational capabilities by running quantum circuits. The biggest challenge facing IBM's team today is figuring out how to control large systems of qubits for long enough, and with minimal errors, to run the complex quantum circuits required by future quantum applications.

IBM has been exploring superconducting qubits since the mid-2000s, increasing coherence times and decreasing errors to enable multi-qubit devices in the early 2010s. Continued refinements allowed it to put the first quantum computer in the cloud in 2016. 

Today, IBM maintains more than two dozen stable systems on the IBM Cloud for clients and the general public to experiment on, including the 5-qubit IBM Quantum Canary processor and the 27-qubit IBM Quantum Falcon processor, on which it recently ran a quantum circuit long enough to declare a Quantum Volume of 64, an IBM-created metric. This achievement also incorporated improvements to the compiler, refined calibration of the two-qubit gates, and upgrades to the noise handling and readout based on tweaks to the microwave pulses.
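For readers unfamiliar with the metric: Quantum Volume is 2^n, where n is the size of the largest "square" random circuit (n qubits, depth n) the machine can run with heavy-output probability above two-thirds. A minimal sketch of just that bookkeeping, with illustrative pass/fail results rather than measured data:

```python
# Sketch of the Quantum Volume bookkeeping: QV = 2**n, where n is the size
# of the largest square circuit (n qubits, depth n) the machine passes.
# The pass/fail results below are illustrative, not measured data.
def quantum_volume(results: dict) -> int:
    """results maps circuit size n -> whether the machine passed at that size."""
    passed = [n for n, ok in results.items() if ok]
    return 2 ** max(passed) if passed else 1

# A 27-qubit machine can still top out at QV 64 if larger square circuits
# accumulate too much error: passing sizes 1..6 but failing at 7 gives 2**6.
illustrative = {n: n <= 6 for n in range(1, 28)}
print(quantum_volume(illustrative))  # 64
```

The full protocol involves running many random model circuits per size and comparing measured output distributions against ideal simulations; this sketch shows only how the final number is derived.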

This month IBM quietly released its 65-qubit IBM Quantum Hummingbird processor to its Q Network members. This device features 8:1 readout multiplexing, meaning it combines readout signals from eight qubits into one, reducing the total amount of wiring and components required for readout and improving its ability to scale.
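The wiring payoff of 8:1 multiplexing is easy to put numbers on; the sketch below is simple arithmetic, not a statement about Hummingbird's actual cabling layout:

```python
# Back-of-the-envelope effect of 8:1 readout multiplexing: combining eight
# qubits' readout signals onto one line cuts the number of readout channels
# (and the attendant wiring and components) by roughly 8x.
import math

def readout_lines(qubits: int, mux_ratio: int = 8) -> int:
    return math.ceil(qubits / mux_ratio)

print(readout_lines(65))    # 9 lines instead of 65 for Hummingbird
print(readout_lines(1121))  # the saving matters even more at Condor scale
```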

Next year, IBM intends to debut a 127-qubit IBM Quantum Eagle processor. Eagle features several upgrades needed to surpass the 100-qubit milestone: through-silicon vias, which allow electrical signals to pass through the substrate, enabling smaller device sizes and a shorter signal path; and multi-level wiring to effectively fan out a large density of conventional control signals while protecting the qubits in a separate layer to maintain high coherence times. The qubit layout will allow IBM to implement the heavy-hexagonal error-correcting code that its team debuted last year as it scales up the number of physical qubits and error-corrected logical qubits.

These design principles established for its smaller processors will set it on a course to release a 433-qubit IBM Quantum Osprey system in 2022. More efficient and denser controls and cryogenic infrastructure will ensure that scaling up the processors doesn’t sacrifice the performance of the individual qubits, introduce further sources of noise, or take too large a footprint.

In 2023, IBM intends to debut the 1,121-qubit Quantum Condor processor, incorporating the lessons learned from previous processors while continuing to lower the critical two-qubit errors so that it can run longer quantum circuits. IBM presents Condor as a milestone that marks its ability to implement error correction and scale up devices complex enough to solve problems more efficiently on a quantum computer than on the world's best supercomputers, achieving the quantum Holy Grail.
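Taking the qubit counts cited above, the roadmap implies a fairly steady generation-over-generation scaling factor:

```python
# IBM's published qubit counts, as cited in the post, and the
# generation-to-generation scale factors they imply.
roadmap = {
    "Falcon": 27,
    "Hummingbird": 65,
    "Eagle": 127,
    "Osprey": 433,
    "Condor": 1121,
}
counts = list(roadmap.values())
factors = [round(b / a, 2) for a, b in zip(counts, counts[1:])]
print(factors)  # roughly 2x to 3.5x more qubits each generation
```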


Pushing Quantum Onto the Cloud

September 4, 2020

Did you ever imagine the cloud would become your quantum computing platform, a place where you would run complex quantum algorithms requiring significant specialized processing across multi-qubit machines available at a click? But that is exactly what is happening.

IBM started it a few years back by making its small-qubit machines, and now even larger ones, available in the cloud. Today Xanadu is offering 8-qubit and 12-qubit chips, with a 24-qubit chip expected in the next month or so, according to the Toronto-based company.

Xanadu quantum processor

As DancingDinosaur has previously reported, there are even more: Google reports a quantum computer lab with five machines and Honeywell has six quantum machines. D-Wave is another along with more startups, including nQ, Quantum Circuits, and Rigetti Computing.

In September, Xanadu introduced its quantum cloud platform, which allows developers to access its gate-based photonic quantum processors with 8-qubit or 12-qubit chips over the cloud.

Photonics-based quantum machines have certain advantages over other platforms, according to the company. Xanadu’s quantum processors operate at room temperature, not low Kelvin temperatures. They can easily integrate into an existing fiber optic-based telecommunication infrastructure, enabling quantum computers to be networked. It also offers scalability and fault tolerance, owing to error-resistant physical qubits and flexibility in designing error correction codes. Xanadu’s type of qubit is based on squeezed states – a special type of light generated by its own chip-integrated silicon photonic devices, it claims.

DancingDinosaur recommends you check out Xanadu’s documentation and details. It does not have sufficient familiarity with photonics, especially as related to quantum computing, to judge any of the above statements. The company also notes it offers a cross-platform Python library for simulating and executing programs on quantum photonic hardware. Its open source tools are available on GitHub.

Late in August IBM unveiled a new milestone on its quantum computing road map, achieving the company's highest Quantum Volume to date. By following the link, you see that Quantum Volume is a metric conceived by IBM to measure and compare quantum computing power. DancingDinosaur is not aware of any other quantum computing vendors using it, which doesn't mean anything of course. Quantum computing is so new and so different, and has so many players joining in with different approaches, that it will be years before we see which metrics prove most useful.

To come up with its Quantum Volume rating, IBM combined a series of new software and hardware techniques to improve overall performance and upgraded one of its newest 27-qubit systems. The company has made a total of 28 quantum computers available over the last four years through the IBM Quantum Experience, which companies join to gain access to its quantum machines and tools, including its software development toolset.

Do not confuse Quantum Volume with Quantum Advantage, the point where certain information processing tasks can be performed more efficiently or cost effectively on a quantum computer versus a conventional one. Quantum Advantage will require improved quantum circuits, the building blocks of quantum applications. Quantum Volume, notes IBM, measures the length and complexity of circuits – the higher the Quantum Volume, the higher the potential for exploring solutions to real world problems across industry, government, and research.

To achieve its Quantum Volume milestone, the company focused on a new set of techniques and improvements that used knowledge of the hardware to optimally run the Quantum Volume circuits. These hardware-aware methods are extensible and will improve any quantum circuit run on any IBM Quantum system, resulting in improvements to the experiments and applications that users can explore. These techniques will be available in upcoming releases and improvements to the IBM Cloud software services and the cross-platform open source software development kit (SDK) Qiskit. The IBM Quantum team has shared details on the technical improvements made across the full stack to reach Quantum Volume 64 in a preprint released on arXiv.

What is most exciting is that the latest quantum happenings are things you can access over the cloud without having to cool your data center to near-zero Kelvin temperatures. If you try any of these, DancingDinosaur would love to hear how it goes.


Here Comes POWER10

August 26, 2020

Early in my writing about computers Intel began regularly introducing a series of x86 processors, including one called Pentium. Another IT analyst friend was drooling over his purchase of a laptop built on the new Pentium. “This is a mainframe in a laptop!” he exclaimed. It wasn’t but sounded exciting.

IBM POWER10

IBM's latest technology announcement is the new POWER10, expected in the second half of 2021. According to IBM's announcement, the new processor delivers 3x performance, based on pre-silicon engineering analysis of integer, enterprise, and floating point environments, on a POWER10 dual-socket server with 2×30-core modules versus a POWER9 dual-socket server with 2×12-core modules. More power for sure, but you won't find DancingDinosaur, apologies to my old friend, even suggesting this is comparable to a mainframe.

The IBM POWER10 was designed for enterprise hybrid cloud computing. The POWER10 uses a design focused on energy efficiency and performance in a 7nm form factor, fabricated by Samsung, with an expected improvement of up to 3x greater processor energy efficiency, workload capacity, and container density than the current POWER9 processor.

This is a processor intended for today’s increasingly complex hybrid cloud workloads. To that end, IBM packed the processor with innovations, including:

    • IBM’s First Commercialized 7nm Processor that is expected to deliver up to a 3x improvement in capacity and processor energy efficiency within the same power envelope as IBM POWER9, allowing for greater performance.
    • Support for Multi-Petabyte Memory Clusters with a new technology called Memory Inception, designed to improve cloud capacity and economics for memory-intensive workloads. Memory Inception enables any of the IBM POWER10 processor-based systems to share memory and drive cost and energy savings.
    • New Hardware-Enabled Security Capabilities including transparent memory encryption designed to support end-to-end security. IBM engineered the POWER10 processor to achieve significantly faster encryption performance with quadruple the number of AES encryption engines per core compared to IBM POWER9 for today's most demanding standards and anticipated future cryptographic standards like quantum-safe cryptography and fully homomorphic encryption (FHE), which lets you perform computation directly on data wherever it lands while it remains encrypted. Sounds ideal for hybrid clouds. It also brings new enhancements to container security.
    • New Processor Core Architectures in the IBM POWER10 processor with an embedded Matrix Math Accelerator, which is extrapolated to provide 10x, 15x, and 20x faster AI inference for FP32, BFloat16, and INT8 calculations per socket, respectively, than the IBM POWER9 processor, improving performance for enterprise AI inference workloads.

Designed over five years, the IBM POWER10 processor is covered by hundreds of new and pending patents.
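The outsized INT8 numbers come largely from reduced precision: an INT8 weight takes a quarter of the bits of an FP32 one, so an accelerator can pack roughly four times as many through the same silicon. The sketch below shows generic symmetric INT8 quantization in plain Python; it is illustrative only and says nothing about how the Matrix Math Accelerator itself is designed.

```python
# Illustrative symmetric INT8 quantization of FP32-style weights -- the kind
# of reduced-precision trick that lets an accelerator pack more math per
# socket. A generic sketch, not IBM's Matrix Math Accelerator design.
def quantize_int8(weights):
    scale = max(abs(w) for w in weights) / 127
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    return [x * scale for x in q]

weights = [0.42, -1.27, 0.05, 0.9]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
# Every INT8 value stays in range, and the round trip is close to the original.
print(q)
print(max(abs(a - w) for a, w in zip(approx, weights)))
```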

With the 7nm processor not shipping until the second half of 2021, you have time to think about this. IBM has not mentioned pricing or packaging. Notes Stephen Leonard, GM of IBM Cognitive Systems: "With IBM POWER10 we advance our stated goal of making Red Hat OpenShift the default choice for hybrid cloud. IBM POWER10 brings hardware-based capacity and security enhancements for containers to the IT infrastructure level." Translation: POWER10 won't come cheap.

With only a vague shipping date and no hint of pricing and packaging, you have time to think about fitting POWER10 into your plans.


5G Will Accelerate a New Wave of IoT Applications and Z

August 10, 2020

Even before the advent of 5G, DancingDinosaur, which had ghostwritten a top book on IoT, believed that IoT and smartphones would lead back to the Z eventually, somehow. Maybe the arrival of 5G and smart edge computing might slow the path to the Z. Or maybe not.

Even transactions and data originating and being processed at the edge will need to be secured, backed up, stored, distributed to the cloud, to other servers and systems, to multiple clouds, and on premises, and further processed and reprocessed in numerous ways. Along the way, they will find their way back to a Z somehow and somewhere, sooner or later.

an edge architecture

"5G is driving change in the Internet of Things (IoT). It's a powerful enabling technology for a new generation of use cases that will leverage edge computing to make IoT more effective and efficient," write Rishi Vaish and Sky Matthews. Rishi Vaish is CTO and VP, IBM AI Applications; Sky Matthews is CTO, Engineering Lifecycle Management at IBM. DancingDinosaur completely agrees, adding only that it won't just stop there.

Vaish and Matthews continue: “In many ways, the narrative of 5G is the interaction between two inexorable forces: the rise in highly reliable, high-bandwidth communications, and the rapid spread of available computing power throughout the network. The computing power doesn’t just end at the network, though. End-point devices that connect to the network are also getting smarter and more powerful.” 

True enough, the power does not just end there; neither does it start there. There is a long line of powerful systems, the z15 and generations of Z before it that handle and enhance everything that happens in whatever ways are desired at that moment or, as is often the case, later. 

And yes, there will be numerous ways to create comparable services using similarly smart and flexible edge devices. But experience has shown that it takes time to work out the inevitable kinks that invariably will surface, often at the least expected and most inopportune moment. Think of it as just the latest manifestation of Murphy’s Law moved to the edge and 5G.

The increasingly dynamic and powerful computational environment taking shape as telcos begin to redesign their networks for 5G will accelerate the uptake of IoT applications and services throughout industry, Vaish and Matthews continue. They expect that 5G will enable new use cases in remote monitoring and visual inspection, autonomous operations in large-scale remote environments such as mines, connected vehicles, and more.

This rapidly expanding range of computing options, they add, requires a much more flexible approach to building and deploying applications and AI models that can take advantage of the most cost-efficient compute resources available.

IBM chimes in: There are many ways that this combination of 5G and edge computing can enable new applications and new innovations in various industries. IBM and Verizon, for example, are developing potential 5G and edge solutions like remote-controlled robotics, near real-time video analysis, and other kinds of factory-floor automation.

The advantage comes from smart 5G edge devices doing the analytics immediately, at the spot where decisions may be best made. Are you sure that decisions made at the edge immediately are always the best? DancingDinosaur would like to see a little more data on that.

In that case, don’t be surprised to discover that there will be other decisions that benefit from being made later, with the addition of other data and analysis. There is too much added value and insight packed into the Z data center to not take advantage of it.


AI Hardware Accelerators and AI Models

August 3, 2020

At times it seems like IBM is just dallying with AI, but at a late July briefing IBM showed just how seriously it is pursuing AI and how difficult the challenge is. It starts with today's computers and supercomputers, essentially bit processors. Call that the easy stuff; at least it's the stuff we are most familiar with.

Neurons come next in the IBM scheme. Here biology and information are tapped for today’s rudimentary AI systems. Next you throw in qubits, which combine physics and information. Now we’re entering the realm of quantum machines.

DancingDinosaur is mesmerized by quantum computing but only understands it at the level of his 40-year-old physics course.

Where all this is going is not toward some mesmerizing future of quantum systems dazzling us with nearly instant solutions to seemingly impossible problems. No, it is, at one level, more prosaic than that, according to Jeffrey Burns, IBM's Director of AI Compute.

As Burns puts it: IBM is building the future of computing. That future, he continues, is a pipeline of innovation for the future of hybrid cloud and AI. We should have known that, except various IBMers have been saying it for several years at least and it just sounded too simple.

Burns breaks it down into four areas: Core Technology, Innovation for AI, Innovation for Hybrid Cloud, and Foundational Scientific Discovery. 

It is tempting to jump right to the Foundational Scientific Discovery stuff; that's the sexy part. It includes new devices and materials, breakthrough data communications, and computational, secured-storage, and persistent-memory architectures.

At the other end is what Burns calls core technology. This encompasses semiconductor devices, processor architecture, novel memory, MRAM, and advanced packaging.

Among the innovations for AI are AI hardware, real-time AI for transaction processing, and hardware and software for federated AI learning to enhance security and privacy.

Finally, there are innovations for hybrid cloud. These include Red Hat RHEL integration, storage and data recovery, high-speed networking, security, and heterogeneous system architecture for hybrid cloud.

But AI and hybrid cloud can advance only as far as hardware can take them, notes Burns. The processing demands at even the first two steps are significant. For example, image recognition training with a dataset of 22K requires 4 GPUs, takes 16 days, and consumes 385 kWh. If you want it faster, you can throw 256 GPUs at it for 7 hours, which still consumes 450 kWh. Or think of it another way, he suggests: one model training run eats the equivalent of about two weeks of home energy consumption.
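Burns's figures are easy to sanity-check. Assuming a typical US home draws about 30 kWh per day (an assumption, not a figure from the briefing), both runs land in the neighborhood of two weeks of home energy, and the faster run actually burns more total GPU-hours:

```python
# Checking the training-energy figures cited above: GPU-hours and the
# ~2-weeks-of-home-energy comparison (30 kWh/day per home is an assumption).
gpu_hours_slow = 4 * 16 * 24    # 4 GPUs for 16 days
gpu_hours_fast = 256 * 7        # 256 GPUs for 7 hours
print(gpu_hours_slow, gpu_hours_fast)  # speed costs extra aggregate work

kwh_per_home_day = 30           # assumed average US household draw
print(385 / kwh_per_home_day)   # ~13 days of home energy for the slow run
print(450 / kwh_per_home_day)   # ~15 days for the fast run
```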

And we've just been talking about narrow AI. Broad AI, Burns continues, brings even more computational demands and greater functionality requirements at the edge. If you're thinking of trying this with your data center, none of this is trivial. Last year IBM invested $2B to create an Artificial Intelligence Hardware Center. Twelve organizations have joined it and it continues to grow, Burns reports. You're welcome to join.

IBM’s idea is to innovate and lead in AI accelerators for training and inferencing, leverage partnerships to drive AI leadership from materials and devices through software, and generate AI application demonstrators with an industry leading roadmap. 

Here is where Burns wants to take it: extending performance by 2.5x per year through 2025, applying approximate computing principles to digital AI cores with reduced precision, as well as analog AI cores (remember analog? Burns sees it playing a big energy-saving role), which could potentially offer another 100x in energy efficiency.
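Compounding 2.5x per year shows why the target matters. Taking 2020 as the baseline year (an assumption; Burns gave the rate, not the starting point):

```python
# What "2.5x per year through 2025" compounds to from a 2020 baseline.
years = 5  # 2020 -> 2025
factor = 2.5 ** years
print(factor)  # ~97.7x -- the same order of magnitude as the 100x that
               # analog AI cores could potentially offer on their own
```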

If you want to try your hand at AI at this level, DancingDinosaur would love to know and throw some digital ink your way.


IBM Elastic Storage System 5000–AI Utility Storage Option

July 27, 2020

The newest storage from IBM is the Elastic Storage System 5000. It promises, as you would expect, leading performance, density, and scalability, but that's not the most interesting part of IBM's July and August storage offerings. In conjunction with the new storage hardware, IBM is tapping AI through IBM Spectrum Storage, a familiar product, to create a smart utility storage option. This allows you to define your current and future storage needs on day one and deploy it all that day, while paying only for the capacity you actually use and activating more only when you need it, paying for it only once it is activated.

AI Elastic Storage Systems via Spectrum 

Will that save you much money? Maybe, but what it mainly will save you is time. That comes from not having to go through the entire process of ordering and installing the extra capacity. DancingDinosaur guesses it will save you money if you usually over-ordered what you needed initially and paid for it then. IBM says some customers actually do that, but there is no longer any good reason to pay for capacity in advance.

Yes, data volumes continue to grow at explosive rates. And yes, a prudent IT manager does not want to suddenly find the company in a position when a lack of sufficient storage is constraining the performance of critical applications. 

But a prudent data center manager should never be in that position. DancingDinosaur was always taught that the selective use of data compression can free up some top-tier storage capacity, even on very short notice. And who doesn't have at least a few old storage arrays hanging around that can be put back into use to ease a sudden crunch, even if only briefly? OK, it won't be the fastest or best storage, but it could work for a short time at least.
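The compression point is easy to demonstrate with the standard library. Here is a quick zlib sketch on a highly repetitive sample; real savings depend entirely on the data, and already compressed media will not shrink:

```python
# A quick look at how much capacity selective compression can free:
# zlib on a compressible sample. Log-like repetitive data compresses well;
# already-compressed media would not.
import zlib

sample = b"transaction log entry: account=0000042 status=OK\n" * 2000
packed = zlib.compress(sample, level=6)
ratio = len(packed) / len(sample)
print(f"{len(sample)} -> {len(packed)} bytes ({ratio:.1%} of original)")
```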

Of course, IBM puts a somewhat different spin on the situation. It explains: For the last 30 years, the standard method has been for organizations to calculate their current application capacity needs and then guess the rest—hoping the guess is enough to meet future needs. The organization then works up an RFQ, RFP or some other tortuous procurement method to get the capacity they need as quickly as possible. The organization then invites all the vendors it knows and some it doesn’t to pitch their solution—at which point the organization usually finds it will need much more capacity than originally thought—or budgeted. 

And IBM continues: Then, only a few months later, the organization realizes its needs have changed—and that what it originally requested is no longer adequate. A year past the initial start, your new capacity is finally in place—and the organization hopes it won’t have to go through the same process again next year. 

Does that sound like you? DancingDinosaur hopes not, at least not in 2020. 

The program makes the most sense, IBM notes, if your storage environment is large, with more than 250 TB of storage, and growing. Depending on how you specified it initially, additional capacity through the elastic storage program is instantly available; you simply provision what you need. From a billing standpoint, it allows you to move what would otherwise be high upfront costs to a predictable quarterly charge directly related to your business activity.
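The billing model can be sketched in a few lines. All capacities and the per-TB rate below are made-up numbers for illustration, not IBM pricing:

```python
# Sketch of the utility-storage billing idea: everything is installed on
# day one, but the quarterly charge tracks only activated capacity.
# Capacities and rate are hypothetical, not IBM pricing.
def quarterly_charge(activated_tb: float, rate_per_tb: float = 20.0) -> float:
    return activated_tb * rate_per_tb

installed_tb = 500                            # physically deployed on day one
activated_by_quarter = [250, 250, 320, 400]   # grows only as needed
bills = [quarterly_charge(tb) for tb in activated_by_quarter]
print(bills)
assert max(activated_by_quarter) <= installed_tb  # can't activate more than installed
```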

DancingDinosaur has long felt that IBM Spectrum Storage delivered a nifty set of capabilities even before AI became a fad. If you can take advantage of it to shortcut the storage acquisition and provisioning process while holding onto a few bucks a little longer, what's not to like?



IBM IBV Sees Shift in Pandemic Consumer Attitude

June 25, 2020

Do you wonder how this pandemic is going to end? Or when. Or what the world will be like when it actually does or if it does, and how we will even know.

IBM quantum computing researcher

IBM's Institute for Business Value (IBV), an IBM research group, was asking similar questions. It polled more than 18,000 U.S. adults in May and early June to understand how COVID-19 has affected their perspectives on topics that include remote work; the return to the workplace; where they want to live; how they want to shop; and more.

IBV’s results are not exactly encouraging. For example, it found that consumers are preparing themselves for more permanent changes in behavior because of the pandemic and their fears about future outbreaks. Two of every three respondents said they were concerned about a second wave of COVID-19 hitting later in 2020. More than 60 percent said they believed there were likely to be more global pandemic events like COVID-19 in the future.

The research also suggests that organizations in every industry must pay attention to their customers’ shifting preferences. And they must respond with agility: by adopting technology, rethinking processes and, most importantly, addressing culture in order to emerge from the pandemic smarter and stronger, say the researchers.

DancingDinosaur is not nearly as methodical as the researchers at IBV. But having spent nearly four months being bombarded with solicitations for almost anything that can be squeezed into Zoom I have been able to form some opinions. The first is how ingenious and creative a lot of marketers have become in repackaging their previously tedious messages for what has almost overnight emerged as a virtual Zoom-like world. 

For decades DancingDinosaur has dodged meetings like a plague, or maybe a pandemic. But some have managed to tease me into attending a few virtual Zooms, which, surprisingly, were informative, useful, and concise. When the pandemic is finally done and gone, marketers may never get DancingDinosaur into a convention center or seminar venue again. Not when it is so easy to click in and, as importantly, so convenient to click Leave Meeting.

IBV’s research appears to have uncovered some interesting behaviors. For instance, nearly one in five urban residents indicated they would definitely relocate or would consider moving to suburban or rural areas as a result of the pandemic. Fewer than 1 in 10 indicated they now found living in an urban area more appealing. 

That makes sense. If DancingDinosaur were quarantined in a one-bedroom or studio condo for weeks or months, he'd never do that again, and he hopes you wouldn't either, no matter how tempting the restaurants might have been when you could actually go into them.

Another set of IBV data points bodes badly for combating climate change. Young climate change activist Greta Thunberg, please forgive them. The researchers found 25 percent of respondents said they would use their personal vehicles exclusively as their mode of transport, and an additional 17 percent said they’d use them more than before. A full 60 percent of those who want to use a personal vehicle but don’t own one said they would buy one. The remainder in this group said they would rent a vehicle until they felt safe using shared mobility.

IBV also looked at work-from-home. Before COVID-19 containment measures went into effect, less than 11% of respondents worked from home. As of June 4, that percentage had grown to more than 45%. What’s more, 81% of respondents—up from 75% in April—indicated they want to continue working remotely at least some of the time.  More than half—61%—would like this to become their primary way of working. 

DancingDinosaur spent his entire career working from home. It can be a great life. Of course, I didn't have to educate my children at home on short notice with minimal guidance. They went to public school and summer camp. When they came home from school each day, it made a great excuse for me to take a cookie break with them. I do miss my cookie break partners. They married great guys and, if I set any kind of proper example, they now have cookie breaks with them instead.

Alan Radding is DancingDinosaur, a veteran information technology analyst, writer, and ghost-writer still working from home in the Boston area. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/ 

