
IBM Advances Commercial Quantum Computing

August 7, 2019

The reason IBM and others are so eager for quantum computing is simple: money. Financial markets handle massive volumes of transactions, nearly $70 trillion worth last year, according to the World Bank, and quantum computing promises to analyze those flows more quickly and accurately.

“These are enormous amounts of money,” mathematician Cornelis Oosterlee of Centrum Wiskunde & Informatica, a national research institute in the Netherlands, told Wired. “Some single trades involve numbers that are scary to imagine”—part of a company’s pension fund, say, or a university endowment.

Of course, this isn’t exactly new. Large organizations already devote inordinate resources to predicting how much their assets will be worth in the future. If they could do this modeling faster, more accurately, or more efficiently, even shaving off a few seconds here or there, well, you can do the arithmetic.

Today these calculations are expensive to run, requiring either an in-house supercomputer or two or a big chunk of cloud processors and time. But if or when quantum computing delivers on even some of its theoretical promise to make these analyses faster, more accurate, more efficient, and cheaper, that’s something IBM could build into its next generation of systems.

And it is not just IBM. From Google on down to startups, developers are working on machines that could one day beat conventional computers at various tasks, such as classifying data through machine learning or inventing new drugs—and running complex financial calculations. In a step toward delivering on that promise, researchers affiliated with IBM and J.P. Morgan recently figured out how to run a simplified risk calculation on an actual quantum computer.

Using one of IBM’s machines, located in Yorktown Heights, New York, the researchers demonstrated they could simulate the future value of a financial product called an option. Currently, many banks use what’s called the Monte Carlo method to simulate prices of all sorts of financial instruments. In essence, the Monte Carlo method models the future as a series of forks in the road. A company might go under; it might not. President Trump might start a trade war; he might not. Analysts estimate the likelihood of such scenarios, then generate millions of alternate futures at random. To predict the value of a financial asset, they produce a weighted average of these millions of possible outcomes.
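The Monte Carlo idea described above can be sketched in a few lines of Python. This is an illustrative toy, not any bank’s production method: it prices a hypothetical European call option under a standard geometric Brownian motion assumption, and all the parameters are made up.

```python
import math
import random

def monte_carlo_call_price(s0, strike, rate, vol, years, n_paths, seed=42):
    """Estimate a European call option's value by averaging many random futures."""
    rng = random.Random(seed)
    total_payoff = 0.0
    for _ in range(n_paths):
        # One random "fork in the road": sample a terminal stock price
        # under geometric Brownian motion.
        z = rng.gauss(0.0, 1.0)
        s_t = s0 * math.exp((rate - 0.5 * vol**2) * years
                            + vol * math.sqrt(years) * z)
        total_payoff += max(s_t - strike, 0.0)  # payoff if the option pays off
    # Discount the average payoff back to today's money.
    return math.exp(-rate * years) * total_payoff / n_paths

# Illustrative inputs: stock at 100, strike 105, 2% rate, 20% volatility, 1 year.
price = monte_carlo_call_price(s0=100, strike=105, rate=0.02,
                               vol=0.2, years=1.0, n_paths=100_000)
```

With 100,000 random paths the estimate typically lands within a few cents of the analytic Black-Scholes value for the same inputs; the whole game, classical or quantum, is how many samples it takes to reach a given accuracy.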

Quantum computers are particularly well suited to this sort of probabilistic calculation, says Stefan Woerner, who led the IBM team. Classical (or conventional) computers—the kind most of us use—are designed to manipulate bits. Bits are binary, having a value of either 0 or 1. Quantum computers, on the other hand, manipulate qubits, which represent an in-between state. A qubit is like a coin flipping in the air—neither heads nor tails, neither 0 nor 1 but some probability of being one or the other. And because a qubit has unpredictability built in, it promises to be a natural tool for simulating uncertain outcomes.
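The coin-flip analogy can be made concrete with a toy simulation. This is a sketch only: real qubit amplitudes are complex numbers, and real quantum speedups come from interference between amplitudes, not from randomness alone.

```python
import math
import random

# A single qubit's state can be written as two amplitudes (a, b)
# with a^2 + b^2 = 1 (ignoring complex phases for this toy).
# Measuring it yields 0 with probability a^2 and 1 with probability b^2,
# like a coin that stays spinning in the air until you look at it.
def measure(amp0, amp1, rng):
    assert abs(amp0**2 + amp1**2 - 1.0) < 1e-9, "amplitudes must normalize"
    return 0 if rng.random() < amp0**2 else 1

rng = random.Random(0)
# An equal superposition: both amplitudes 1/sqrt(2), so 50/50 outcomes.
a = 1.0 / math.sqrt(2.0)
samples = [measure(a, a, rng) for _ in range(10_000)]
frac_ones = sum(samples) / len(samples)  # should hover around 0.5
```

Each measurement collapses the "coin" to a definite 0 or 1; only the statistics over many shots reveal the underlying probabilities, which is exactly the shape of a Monte Carlo problem.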

Woerner and his colleagues ran their Monte Carlo calculations using three of the 20 qubits available on their quantum machine. The experiment was too simplistic to be useful commercially, but it’s a promising proof of concept; once bigger and smoother-running quantum computers are available, the researchers hope to execute the algorithm faster than conventional machines.

But this theoretical advantage is just that: theoretical. Existing machines remain too error-ridden to compute consistently. In addition, financial institutions already have ample computing power available, onsite or in the cloud, and they will have even more as graphics processing units (GPUs), which can execute many calculations in parallel, come online. A quantum computer might well beat an individual chip, but it’s unclear whether it could beat a fleet of high-performance GPUs in a supercomputer.

Still, it’s noteworthy that the IBM team was able to implement the algorithm on actual hardware, says mathematician Ashley Montanaro of the University of Bristol in the UK, who was not involved with the work. Academics first developed the mathematical proofs behind this quantum computing algorithm in 2000, but it remained a theoretical exercise for years. Woerner’s group took that 19-year-old recipe and figured out how to run it on actual quantum hardware.

Now they’re looking to improve their algorithm by using more qubits. The most powerful quantum computers today have fewer than 200 qubits; practitioners suggest it may take thousands to consistently beat conventional methods.

But demonstrations like Woerner’s, even with their limited scope, are useful because they apply quantum computers to problems organizations actually want to solve. And that is what it will take if IBM expects to build quantum computing into a viable commercial business.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at technologywriter.com. 

