
IBM Q Network Promises to Commercialize Quantum

December 14, 2017

The dash to quantum computing is well underway and IBM is preparing to be one of the leaders. When IBM gets there it will find plenty of company; HPE, Dell/EMC, Microsoft, and more are staking out quantum claims. In response, IBM is speeding the build-out of its quantum ecosystem, the IBM Q Network, which it announced today.

IBM’s 50 qubit system prototype

IBM already introduced its third generation of quantum computers in November, a prototype 50-qubit system. IBM promises online access to IBM Q systems by the end of 2017, with a series of planned upgrades during 2018. IBM is focused on making advanced, scalable universal quantum computing systems available to clients to explore practical applications.

Further speeding the process, IBM is building a quantum computing ecosystem of big companies and research institutions. The result, dubbed IBM Q Network, will consist of a worldwide network of individuals and organizations, including scientists, engineers, business leaders, and forward thinking companies, academic institutions, and national research labs enabled by IBM Q. Its mission: advancing quantum computing and launching the first commercial applications.

Two particular goals stand out: Engage industry leaders to combine quantum computing expertise with industry-oriented, problem-specific expertise to accelerate development of early commercial uses. The second: expand and train the ecosystem of users, developers, and application specialists that will be essential to the adoption and scaling of quantum computing.

The key to getting this rolling is the groundwork IBM laid with the IBM Q Experience, which it initially introduced in May 2016 as a 5-qubit system. A free 16-qubit upgrade to the Q Experience followed in May 2017. The IBM effort to make a commercial universal quantum computer available for business and science applications has increased with each successive rev, culminating today in a prototype 50-qubit system delivered via the IBM Cloud platform.

IBM opened public access to its quantum processors over a year ago to serve as an enablement tool for scientific research, a resource for university classrooms, and a catalyst for enthusiasm. Since then, participants have run more than 1.7M quantum experiments on the IBM Cloud.

To date IBM has been pretty easygoing about access to its quantum computers, but now that it has a 20-qubit system in place and a 50-qubit system coming, the company has become a little more restrictive about who can use them. Participation in the IBM Q Network is the only way to access these advanced systems, and it involves a commitment of money, intellectual property, and an agreement to share and cooperate, although IBM implied at an early briefing that it could be flexible about what was shared and what could remain an organization’s proprietary IP.

Another reason to participate in the Quantum Experience is QISKit, an open source quantum computing SDK anyone can access. Most DancingDinosaur readers who want to participate in IBM’s Q Network will do so as either partners or members. Another option, a Hub, is targeted at bigger, more ambitious early adopters. Hubs, as IBM puts it, provide access to IBM Q systems, technical support, educational and training resources, community workshops and events, and opportunities for joint work.
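For readers who haven’t tried it, the flavor of a basic Q Experience exercise is easy to convey. The sketch below simulates the classic two-qubit Bell state using plain NumPy matrix math rather than the actual QISKit API (a deliberate simplification), just to show what these early quantum experiments compute:

```python
import numpy as np

# Single-qubit gates as 2x2 matrices
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I = np.eye(2)                                   # identity (leave qubit alone)

# CNOT on two qubits (control = qubit 0, target = qubit 1)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# Start in |00>, put qubit 0 into superposition, then entangle the pair
state = np.zeros(4)
state[0] = 1.0
state = np.kron(H, I) @ state   # Hadamard on qubit 0
state = CNOT @ state            # CNOT entangles the two qubits

# Measurement probabilities: all the weight sits on |00> and |11>
probs = np.abs(state) ** 2
print(probs)   # ~[0.5, 0, 0, 0.5] -- the Bell state
```

On real hardware the measured counts would be noisy rather than an exact 50/50 split, which is part of what the Q Experience lets users observe firsthand.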

The Q Network has already attracted significant interest from organizations at every level and across a variety of industry segments, including automotive, financial, electronics, chemical, and materials players from across the globe. Initial participants include JPMorgan Chase, Daimler AG, Samsung, JSR Corporation, Barclays, Hitachi Metals, Honda, Nagase, Keio University, Oak Ridge National Lab, Oxford University, and University of Melbourne.

As noted at the top, other major players are staking out their quantum claims, but none seem as far along or as comprehensive as IBM:

  • Dell/EMC is aiming to solve complex, life-impacting analytic problems like autonomous vehicles, smart cities, and precision medicine.
  • HPE appears to be focusing its initial quantum efforts on encryption.
  • Microsoft, not surprisingly, expects to release a new programming language and computing simulator designed for quantum computing.

As you would expect, IBM also is rolling out IBM Q Consulting to help organizations envision new business value through the application of quantum computing technology and provide customized roadmaps to help enterprises become quantum-ready.

Will quantum computing actually happen? Your guess is as good as anyone’s. I first heard about quantum physics in high school 40-odd years ago. It was baffling but intriguing then. Today it appears more real but still nothing is assured. If you’re willing to burn some time and resources to try it, go right ahead. Please tell DancingDinosaur what you find.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

IBM Taps PCM to Advance Neuron-based Cognitive Computing

August 19, 2016

Just a couple of months ago DancingDinosaur reported a significant IBM advance in phase change memory (PCM). Then earlier this month IBM announced success in creating randomly spiking neurons using phase-change materials to store and process data. According to IBM, this represents a significant step toward achieving energy-efficient, ultra-dense, integrated neuromorphic technologies for application in cognitive computing.

IBM Phase Change Neurons

This also represents a big step toward a cognitive computer. According to IBM, scientists have theorized for decades that it should be possible to imitate the versatile computational capabilities of large populations of neurons, as the human brain does. With PCM it appears to be happening sooner than the scientists expected. “We have been researching phase-change materials for memory applications for over a decade, and our progress in the past 24 months has been remarkable,” said IBM Fellow Evangelos Eleftheriou.

As the IBM researchers explain: Phase-change neurons consist of a chip with large arrays of phase-change devices that store the state of artificial neuronal populations in their atomic configuration. In the graphic above individual devices are accessed by means of an array of probes to allow for precise characterization, modeling and interrogation. The tiny squares are contact pads that are used to access the nanometer-scale, phase-change cells (not visible). The sharp probes touch the contact pads to change the phase configuration stored in the cells in response to the neuronal input. Each set of probes can access a population of 100 cells. The chip hosts only the phase-change devices that are the heart of the neurons. There are thousands to millions of these cells on one chip that can be accessed (in this particular graphic) by means of the sharp needles of the probe card.

Not coincidentally, this seems to be dovetailing with IBM’s sudden rush to cognitive computing overall, one of the company’s recent strategic initiatives that has lately moved to the forefront.  Just earlier this week IBM was updating industry analysts on the latest with Watson and IoT and, sure enough, cognitive computing plays a prominent role.

As IBM explains it, the artificial neurons designed by IBM scientists in Zurich consist of phase-change materials, including germanium antimony telluride, which exhibit two stable states, an amorphous one (without a clearly defined structure) and a crystalline one (with structure). These artificial neurons do not store digital information; they are analog, just like the synapses and neurons in our biological brain, which is what makes them so tempting for cognitive computing.

In the published demonstration, the team applied a series of electrical pulses to the artificial neurons, which resulted in the progressive crystallization of the phase-change material, ultimately causing the neurons to fire. In neuroscience, this function is known as the integrate-and-fire property of biological neurons. This forms the foundation for event-based computation and, in principle, is similar to how our brain triggers a response when we touch something hot.
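For the curious, integrate-and-fire behavior is simple enough to sketch in a few lines of Python. This is a toy model with made-up parameter values, not IBM’s device physics: each input pulse partially “crystallizes” an accumulator until a threshold is crossed, at which point the neuron fires and resets to its amorphous state:

```python
# Minimal integrate-and-fire neuron: each input pulse nudges the
# internal state toward "crystallization"; crossing the threshold
# fires the neuron and resets it. A sketch of the behavior described
# in the article, with illustrative parameter values.

def integrate_and_fire(pulses, step=0.2, threshold=1.0):
    """Return the indices of the pulses at which the neuron fires."""
    state, spikes = 0.0, []
    for i, pulse in enumerate(pulses):
        state += step * pulse          # partial crystallization per pulse
        if state >= threshold:         # fully crystallized: fire
            spikes.append(i)
            state = 0.0                # reset to the amorphous state
    return spikes

# Ten identical pulses with a 0.2 step fire the neuron on every 5th pulse
print(integrate_and_fire([1] * 10))   # [4, 9]
```

The analog twist in IBM’s devices is that the accumulated state lives in the atomic configuration of the phase-change material itself rather than in a stored number.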

Even a single neuron can exploit this integrate-and-fire property to detect patterns and discover correlations in real-time streams of event-based data. To that end, IBM scientists have organized hundreds of artificial neurons into populations and used them to represent fast and complex signals. Moreover, the artificial neurons have been shown to sustain billions of switching cycles, which would correspond to multiple years of operation at an update frequency of 100 Hz. The energy required for each neuron update was less than five picojoules and the average power less than 120 microwatts (for comparison, 60 million microwatts power a 60 watt lightbulb).
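Those endurance figures are easy to sanity-check. At a 100 Hz update rate, a billion switching cycles last only a few months, but tens of billions do indeed stretch into multiple years:

```python
# Back-of-the-envelope check of the endurance claim: how long do
# N switching cycles last at a 100 Hz update frequency?
SECONDS_PER_YEAR = 365 * 24 * 3600
update_hz = 100

for cycles in (1e9, 10e9):
    years = cycles / update_hz / SECONDS_PER_YEAR
    print(f"{cycles:.0e} cycles -> {years:.1f} years")
```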

The examples the researchers have provided so far seem pretty conventional.  For example, IoT sensors can collect and analyze volumes of weather data collected at the network edge for faster forecasts. Artificial neurons could be used to detect patterns in financial transactions that identify discrepancies. Even data from social media can be used to discover new cultural trends in real time. To make this work, large populations of these high-speed, low-energy nano-scale neurons would most likely be used in neuromorphic coprocessors with co-located memory and processing units, effectively mixing neuron-based cognitive computing with conventional digital computing.

Makes one wonder if IBM might regret spending millions to dump its chip fabrication capabilities.  According to published reports, Samsung is very interested in this chip technology and wants to put the new processing power to work fast. The processor, reportedly dubbed TrueNorth by IBM, uses 4,096 separate processing cores to form one standard chip. Each can operate independently, and all are designed for low power consumption. Samsung hopes the chip can help with visual pattern recognition for use in autonomous cars, which might be just a few years away. So, how is IBM going to make any money from this with its chip fab gone and commercial cognitive computers still off in the future?

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.


IBM Extends Moore’s Law with First 7nm Test Chip

July 17, 2015

In an announcement last week, IBM effectively extended Moore’s Law for at least another generation of chips, maybe two.  This contradicts what leading vendors, including IBM, have been saying for years about the imminent diminishing returns of Moore’s Law, which postulated that the number of transistors on a chip would double every 18-24 months. Moore’s Law drove the price/performance curve the industry has been experiencing for the past several decades.
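The compounding behind that curve is worth a quick look. Assuming a clean doubling cadence (a simplification of Moore’s actual observation), two decades of 24-month doublings works out to roughly a thousandfold gain:

```python
# How transistor counts compound under Moore's Law: growth factor
# after a given number of years at a given doubling cadence.
def moores_law_growth(years, doubling_months=24):
    return 2 ** (years * 12 / doubling_months)

print(moores_law_growth(20))              # 1024.0 at a 24-month cadence
print(round(moores_law_growth(20, 18)))   # ~10,321 at an 18-month cadence
```

The gap between those two numbers shows why the 18-versus-24-month question mattered so much to the industry’s planning.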

Post-silicon R&D infographic, courtesy of IBM (click to enlarge)

The announcement, ironically, coincides with IBM’s completion of the sale of its semiconductor fabrication business to GLOBALFOUNDRIES, which IBM paid to take the costly facilities off its hands. To pull off the 7nm achievement, IBM partnered with a handful of players, including a public-private partnership with New York State and a joint development alliance with GLOBALFOUNDRIES, Samsung, and equipment suppliers. The team is based at SUNY Poly’s NanoTech Complex in Albany.

To achieve the higher performance, lower power, and scaling benefits promised by 7nm technology, the IBM researchers turned to two main innovations: the use of silicon germanium (SiGe) channel transistors and extreme ultraviolet (EUV) lithography integration at multiple levels, in effect bypassing conventional semiconductor manufacturing approaches.

Don’t expect to see new systems featuring these 7nm chips very soon. The announcement made no mention of any timetable for producing commercial products based on this technology. As Timothy Prickett Morgan, who writes extensively on IBM POWER Systems technology, observed: the use of silicon germanium for portions of the transistors cuts back on power consumption for the very fast switching necessary for improving circuit performance, and the circuits are etched using extreme ultraviolet (EUV) lithography. These technologies may be difficult and expensive to put into production.

In the meantime, IBM notes that microprocessors utilizing 22nm and 14nm technology run today’s servers, cloud data centers, and mobile devices, and 10nm technology is already well on the way to becoming mature. The 7nm chips promise even more: at least a 50% power/performance improvement for the next generation of mainframe and POWER systems that will fuel the Big Data, cloud, and mobile era, and soon you can add the Internet of Things too.

The z13 delivers unbeatable performance today. With the zEC12 IBM boasted of the fastest commercial chip in the industry, 5.5 GHz on a 32 nm wafer. It did not make that boast with the z13. Instead the z13 runs on a 22 nm core at 5 GHz but still delivers a 40% total capacity improvement over the zEC12.

It does this by optimizing the stack top to bottom with 600 processors and 320 separate channels dedicated just to drive I/O throughput. The reason for not cranking up the clock speed on the z13, according to IBM, was the plateauing of Moore’s Law. The company couldn’t get enough boost for the tradeoffs it would have had to make. Nobody seems to be complaining about giving up that one-half GHz. Today the machine can process 2.5 billion transactions a day.

The ride up the Moore’s Law curve has been very enjoyable for all. Companies took the additional processing power to build onto the chip more capabilities that otherwise would have required additional processors.  The result: more performance and more capabilities at lower cost. But all good things come to an end.

This 7nm breakthrough doesn’t necessarily restore Moore’s Law. At this point, the best we can guess is that it temporarily moves the price/performance curve to a new plane. Until we know the economics of mass fabrication in the 7nm silicon germanium world, we can’t tell whether we’ll see a doubling as before, just a half or a quarter of that, or maybe even a tripling. We just don’t know.

For the past decade, Morgan reports, depending on the architecture, the thermal limits of systems imposed a clock speed limit on processors, and aside from some nominal instruction per clock (IPC) improvements with each recent microarchitecture change, clock speeds and performance for a processor stayed more or less flat. This is why vendors went parallel with their CPU architectures, in effect adding cores to expand throughput rather than increasing clock speed to boost performance on a lower number of cores. Some, like IBM, also learned to optimize at every level of the stack. As the z13 demonstrates, lots of little improvements do add up.

Things won’t stop here. As Morgan observes, even as the sale to GLOBALFOUNDRIES was being finalized, IBM Research and the Microelectronics Division were working with GLOBALFOUNDRIES, Samsung, and chip-making equipment suppliers, who collaborate through the SUNY Polytechnic Institute’s Colleges of Nanoscale Science and Engineering in nearby Albany, on a path to 10 nm and then 7 nm processes.

The next step, he suggests, could possibly be at 4 nm, but no one is sure whether this can be done in a way that is economically feasible. If it can’t, IBM already has previewed the possibility of other materials that show promise.

Moore’s Law has been a wonderful ride for the entire industry. Let’s wish them the best as they aim for ever more powerful processors.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

