
The Rush to Quantum Computing

March 9, 2018

Are you excited about quantum computing? Are you taking steps to get ready for it? Do you have an idea of what you would like to do with quantum computing or a plan for how to do it? Except for the most science-driven organizations or those with incomprehensibly complex challenges to solve, DancingDinosaur can’t imagine this is the most pressing IT issue you are facing today.

Yet leading IT vendors are making astounding gains, moving quantum computing forward further and faster than the industry was projecting even a few months ago. This past November IBM announced a 50-qubit system. Earlier this month Google announced Bristlecone, a 72-qubit processor that claims to top it. With Bristlecone, Google trumps IBM for now. However, qubit count may not be the most important metric to focus on.

Never heard of quantum supremacy? You are going to hear a lot about it in the coming weeks, months, and even years as the vendors battle for the quantum supremacy title. Here is how Wikipedia defines it: quantum supremacy is the potential ability of quantum computing devices to solve problems that classical computers cannot. In computational complexity-theoretic terms, this generally means providing a super-polynomial speedup over the best known or possible classical algorithm. If this doesn’t send you racing to dig out your old college math book, you were a better student than DancingDinosaur. In short, supremacy means beating the current best conventional algorithms. And you can’t just beat them; you have to do it using less energy, or running faster, or in some other way that demonstrates your approach’s advantage.

The issue revolves around the instability of qubits; the hardware that runs them needs to be extremely sturdy. Industry sources note that quantum computers need to keep their processors extremely cold (near absolute zero) and protect them from external shocks. Even accidental sounds can cause the computer to make mistakes. To operate in even remotely real-world settings, quantum processors also need an error rate of less than 0.5 percent for every two qubits. Google’s best came in at 0.6 percent using its much smaller 9-qubit hardware. Its latest blog post didn’t state Bristlecone’s error rate, but Google promised to improve on its previous results. To drop the error rate for any qubit processor, engineers must figure out how the software, control electronics, and the processor itself can work together without introducing errors.
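To get a feel for why fractions of a percent matter, here is a back-of-the-envelope sketch in Python. It assumes gate errors are independent, a simplification for illustration, not how vendors actually characterize their hardware:

```python
def success_probability(error_rate: float, num_gates: int) -> float:
    """Probability that num_gates gates all run error-free,
    assuming each gate fails independently."""
    return (1.0 - error_rate) ** num_gates

# The 0.5% threshold vs. the 0.6% Google reported on its 9-qubit hardware
for p in (0.005, 0.006):
    print(f"error rate {p:.1%}: "
          f"100 gates -> {success_probability(p, 100):.1%}, "
          f"1,000 gates -> {success_probability(p, 1000):.2%}")
```

Even at the 0.5 percent threshold, a circuit of 1,000 two-qubit gates runs error-free less than 1 percent of the time, which is why error rates, not raw qubit counts, dominate the engineering effort.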

50 qubits currently is considered the minimum number for serious business work. IBM’s November announcement, however, was quick to point out that this “does not mean quantum computing is ready for common use.” The system IBM developed remains extremely finicky and challenging to use, as are those being built by others. In its 50-qubit system, the quantum state is preserved for 90 microseconds, a record length for the industry but still an extremely short period of time.

Nonetheless, 50 qubits have emerged as the minimum number for a (relatively) stable system to perform practical quantum computing. According to IBM, a 50-qubit machine can do things that are extremely difficult to simulate without quantum technology.
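A rough way to see why: a classical machine simulating n qubits must track 2^n complex amplitudes, so the memory demand doubles with every added qubit. A minimal Python sketch (the 16 bytes per amplitude assumes double-precision complex numbers):

```python
# State-vector memory needed to simulate n qubits classically:
# 2**n complex amplitudes at 16 bytes each (two 64-bit floats).
for n in (30, 40, 50):
    bytes_needed = (2 ** n) * 16
    print(f"{n} qubits: {bytes_needed / 2**30:,.0f} GiB")
```

At 30 qubits the state vector fits in 16 GiB; at 50 qubits it would take roughly 16 million GiB, about 16 petabytes, far beyond any conventional machine.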

The problem touches on one of the defining attributes of quantum systems. As IBM explains, where conventional computers store information as either a 1 or a 0, quantum computers exploit two phenomena, entanglement and superposition, to process information differently. Conventional computers store numbers as sequences of 0s and 1s in memory and process them using only the simplest mathematical operations, add and subtract.

Quantum computers can digest 0s and 1s too but have a broader array of tricks. That’s where entanglement and superposition come in. For example, contradictory things can exist concurrently. Quantum geeks often cite a riddle dubbed Schrödinger’s cat, in which the cat can be alive and dead at the same time because quantum systems can handle multiple, contradictory states. That can be very helpful if you are trying to solve huge data- and compute-intensive problems like a Monte Carlo simulation. After decades of work on quantum computing, IBM finally has, in the new 50-qubit system, something to offer businesses facing complex challenges that can benefit from quantum superposition.
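For readers who prefer the math to the cat, here is a minimal state-vector sketch in Python/NumPy showing superposition and entanglement. It is just the linear algebra, not a representation of how IBM’s hardware works:

```python
import numpy as np

# Two-qubit basis order: |00>, |01>, |10>, |11>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
CNOT = np.array([[1, 0, 0, 0],                 # controlled-NOT,
                 [0, 1, 0, 0],                 # control on first qubit
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.array([1, 0, 0, 0], dtype=complex)  # start in |00>
state = np.kron(H, np.eye(2)) @ state          # superpose the first qubit
state = CNOT @ state                           # entangle: a Bell state

print(np.abs(state) ** 2)                      # [0.5, 0, 0, 0.5]
```

The final probabilities show that a measurement yields 00 or 11 with equal odds and never 01 or 10: the two qubits are perfectly correlated, which is the entanglement IBM’s explanation refers to.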

Still, don’t bet on using quantum computing to solve serious business challenges very soon. An entire ecosystem of programmers, vendors, programming models, methodologies, useful tools, and a host of other things has to fall into place first. IBM, Google, and others are making stunningly rapid progress. Maybe DancingDinosaur will actually be alive to see quantum computing as just another tool in a business’s problem-solving toolkit.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.

IBM POWER8 Tops STAC-A2 Benchmark in Win for OpenPOWER

June 25, 2015

In mid-March the Securities Technology Analysis Center (STAC) released the first audited STAC-A2 benchmark results for a server using the IBM POWER8 architecture. STAC provides technology research and testing tools based on community-source standards. The March benchmark results showed that an IBM POWER8-based server can deliver more than twice the performance of the best x86 server when running standard financial industry workloads.

IBM Power System S824

This is not IBM just blowing its own horn. The STAC Benchmark Council consists of over 200 major financial firms and other algorithm-driven enterprises as well as more than 50 leading technology vendors. Its mission is to explore technical challenges and solutions in financial services and to develop technology benchmark standards useful to financial organizations.

The POWER8 system not only delivered more than twice the performance of the nearest x86 system but also set four new performance records for financial workloads, two of which apparently were new public records. This marked the first time the IBM POWER8 architecture had gone through STAC-A2 testing.

The community-developed STAC-A2 benchmark suite represents a class of financial risk analytics workloads characterized by Monte Carlo simulation and Greeks computations. The Greeks computations cover theta, rho, delta, gamma, cross-gamma, model vega, and correlation vega, together referred to as the Greeks. Quality is assessed for single assets by comparing the Greeks obtained from the Monte Carlo simulation with Greeks obtained from a Heston closed-form formula for vanilla puts and calls. Suffice it to say, this is an extremely CPU-intensive set of computations. For more detail, click here.
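To give a flavor of the pattern STAC-A2 exercises, here is a simplified Python sketch: price a vanilla European call by Monte Carlo and check one Greek (delta) against a closed form. For brevity it assumes the Black-Scholes model rather than the Heston model the benchmark actually specifies, and the parameter values are purely illustrative:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0   # illustrative inputs
n_paths = 1_000_000

# Simulate terminal prices under Black-Scholes (not Heston) dynamics
Z = rng.standard_normal(n_paths)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
disc = np.exp(-r * T)

mc_price = disc * np.maximum(ST - K, 0.0).mean()    # call price
mc_delta = disc * ((ST > K) * ST / S0).mean()       # pathwise delta estimator

# Closed-form delta for the same call, to check the Monte Carlo estimate
d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
print(f"MC price {mc_price:.4f}, MC delta {mc_delta:.4f}, "
      f"closed-form delta {norm.cdf(d1):.4f}")
```

The real benchmark scales this same pattern to many correlated assets, full Heston dynamics, and the entire family of Greeks at once, which is what makes it such a demanding test of CPU, memory, and compiler.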

In this case, results were compared to other publicly released results of warm runs on the Greeks benchmark (STAC-A2.β2.GREEKS.TIME.WARM). The two-socket POWER8 server, outfitted with two 12-core 3.52 GHz POWER8 processor cards, achieved:

  • 2.3x the performance of the comparable x86 setup, an Intel white box with two Xeon E5-2699 v3 (Haswell EP) processors @ 2.30GHz.
  • 1.7x the performance of the best-performing x86 solution, an Intel white box with two Intel Xeon E5-2699 v3 processors (Haswell EP) @ 2.30GHz and one Intel Xeon Phi 7120A coprocessor.
  • Only 10% less performance than the best-performing solution, a Supermicro server with two 10-core Intel Xeon E5-2690 v2 @ 3.0GHz (Ivy Bridge) and one NVIDIA K80 GPU accelerator.

The POWER8 server also set new records for path scaling (STAC-A2.β2.GREEKS.MAX_PATHS) and asset capacity (STAC-A2.β2.GREEKS.MAX_ASSETS). Compared to the best four-socket x86-based solution, a server composed of four Xeon E7-4890 v2 (Ivy Bridge EX) processors running at 2.80 GHz, the POWER8 server delivered:

  • Double the throughput.
  • A 16 percent increase in asset capacity.

The STAC test system consisted of an IBM Power System S824 server with two 12-core 3.52 GHz POWER8 processor cards, equipped with 1TB of DRAM and running Red Hat Enterprise Linux version 7. The solution stack included the IBM-authored STAC-A2 Pack for Linux on Power Systems (Rev A), which used IBM XL, a suite for C/C++ developers that includes the C++ compiler, the Mathematical Acceleration Subsystem (MASS) libraries, and the Engineering and Scientific Subroutine Library (ESSL).

POWER8 processors are based on high-performance, multi-threaded cores, with each core of the Power System S824 server running up to eight simultaneous threads at 3.52 GHz. With POWER8, IBM can also tap the innovations of the OpenPOWER Foundation, including CAPI and a variety of accelerators that have started to ship.

The S824 also brings a very high-bandwidth memory interface that runs at 192 GB/s per socket, almost three times the speed of a typical x86 processor. These factors, along with a balanced system structure including a large 8MB-per-core L3 cache, are the primary reasons why financial computing workloads run significantly faster on POWER8-based systems than on alternatives, according to IBM.

Sumit Gupta, vice president of HPC and OpenPOWER operations at IBM, reports that STAC-A2 gives a much more accurate view of expected performance than micro-benchmarks or simple code loops do. This is especially important when the challenge is big data.

In his blog on the topic, Gupta elaborated on the big data challenge in the financial industry and the POWER8 advantages. STAC-A2 is a set of standard benchmarks that help estimate the relative performance of full systems running complete financial applications. This enables clients in the financial industry to evaluate how systems will perform on real applications. “Those are the kind of results that matter—real results for real client challenges,” Gupta wrote.

Gupta went on to note that the S824 also has a very high-bandwidth memory interface. Combined with the large L3 cache noted above, it can run financial applications noticeably faster than alternatives. Combine the STAC results with data recently published by Cabot Partners and you have convincing proof that IBM POWER8-based systems have taken the performance lead in the financial services space (and elsewhere). The Cabot Partners report evaluates functionality, performance, and price/performance across several industries, including life sciences, financial services, oil and gas, and analytics, while referencing standard benchmarks as well as application-oriented benchmark data.

Having sat through numerous briefings on POWER8 performance, DancingDinosaur felt reassured, but he doesn’t have to actually run these workloads. It is encouraging, however, to see proof in the form of third-party benchmarks like STAC and reports from Cabot Partners. Check out Cabot’s OpenPOWER report here.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing on Technologywriter.com and here.

