• Scientists at MIT are developing "brain on a chip" hardware for neuromorphic computing.
  • Such chips would process facts, patterns and learning tasks at lightning speed and could fast-forward the development of humanoids and autonomous driving technology.
  • Last year the market for chips that enable machine learning was worth approximately $4.5 billion, according to Intersect360.

While the pace of machine learning has quickened over the last decade, the underlying hardware enabling machine-learning tasks hasn't changed much: racks of traditional processing chips, such as central processing units (CPUs) and graphics processing units (GPUs), combined in large data centers.

But on the cutting edge of processing is an area called neuromorphic computing, which seeks to make computer chips work more like the human brain — so they are able to process multiple facts, patterns and learning tasks at lightning speed. Earlier this year, researchers at the Massachusetts Institute of Technology unveiled a revolutionary neuromorphic chip design that could represent the next leap for AI technology.

The secret: a design that creates an artificial synapse for “brain on a chip” hardware. Today’s digital chips make computations based on binary, on/off signaling. Neuromorphic chips instead work in an analog fashion, exchanging bursts of electric signals at varying intensities, much like the neurons in the brain. This is a breakthrough, given that there are “more than 100 trillion synapses that mediate neuron signaling in the brain,” according to the MIT researchers.
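To make that contrast concrete, here is a minimal Python sketch of the analog idea; the weights, intensities and firing threshold below are illustrative values, not parameters from the MIT design:

```python
import numpy as np

# Digital chip: every signal is strictly binary, on (1) or off (0).
digital_signals = np.array([1, 0, 1, 1])  # shown only for contrast

# Neuromorphic model: each artificial synapse has a continuously tunable
# weight (its conductance), and incoming "spikes" arrive at varying
# intensities rather than as pure on/off pulses.
synapse_weights = np.array([0.80, 0.30, 0.55, 0.10])   # illustrative values
spike_intensities = np.array([0.9, 0.2, 0.7, 1.0])     # illustrative values

# A neuron accumulates its weighted inputs and fires once a threshold
# is crossed, loosely mimicking how synapses mediate neuron signaling.
membrane_potential = float(np.dot(synapse_weights, spike_intensities))
fires = membrane_potential > 1.0   # arbitrary threshold for illustration

print(f"potential={membrane_potential:.2f}, fires={fires}")
```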

The MIT research, published in the journal Nature Materials in January, demonstrated a new design for a neuromorphic chip built from silicon germanium. Think of a window screen, and you have an approximation of what this chip looked like at the microscopic level. The structure created pathways that allowed the researchers to precisely control the intensity of the electric current flowing through them. In one simulation, the MIT team found its chip could recognize samples of human handwriting with 95 percent accuracy.
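The window-screen structure is essentially a crossbar lattice, a layout common in neuromorphic designs. The sketch below models a generic crossbar rather than the published silicon-germanium circuit: applying voltages along the rows and collecting currents on the columns performs a weighted sum in a single analog step.

```python
import numpy as np

# Crossbar model: each crossing point in the lattice is a programmable
# resistor (an artificial synapse) whose conductance G[i, j] can be set
# precisely, echoing the controlled current pathways described above.
G = np.array([[0.9, 0.1, 0.4],
              [0.2, 0.8, 0.3],
              [0.5, 0.6, 0.7],
              [0.1, 0.3, 0.9]])   # illustrative conductances

# Input voltages applied along the four row wires.
v = np.array([0.5, 1.0, 0.2, 0.8])

# By Ohm's and Kirchhoff's laws, each column wire collects a current equal
# to a weighted sum of the inputs: a matrix-vector multiply, the core
# operation of a neural network, computed in a single analog step.
i = G.T @ v
print(i)
```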

“Supercomputer-based artificial neural network operation is very precise and very efficient. However, it consumes a lot of power and requires a large footprint,” said lead researcher Jeehwan Kim, professor and principal investigator in MIT’s Research Laboratory of Electronics and Microsystems Technology Laboratories.

Eventually, such a chip design could lead to processors capable of carrying out machine learning tasks with dramatically lower energy demands. It could fast-forward the development of humanoids and autonomous driving technology.

Other advantages are cost savings and improved portability. It's thought that small neuromorphic chips would consume much less power, perhaps up to 1,000 times less, while efficiently processing millions of computations simultaneously, something currently possible only with large banks of supercomputers.

"That's exactly what people are envisioning: a larger category of problems can be done on a single chip, and over time that migrates into something very portable," said Addison Snell, CEO of Intersect360 Research, an industry analyst firm that tracks high-performance computing.

The current market for chips that enable machine learning is quite large. Last year, according to Intersect360, the market was worth approximately $4.5 billion. Neuromorphic chips represent a tiny sliver of it: Deloitte expects fewer than 10,000 neuromorphic chips to be sold this year, compared with more than 500,000 GPUs in 2018.

GPUs were developed initially by Nvidia in the 1990s for computer-based gaming. Eventually, researchers discovered they were highly effective at supporting machine-learning tasks via artificial neural networks, which are run on supercomputers and allow for the training and inference tasks that make up the main segments of any AI workflow. (If you want to build an image-recognition system that knows what is and what isn’t a tiger, you first feed the network millions of images labeled by humans as “tigers” or “not tigers,” which trains the computer algorithm. Next time the system is shown a photo of a tiger, it will be able to infer that the image is indeed a tiger.)
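As a rough illustration of that training-then-inference workflow, the toy Python example below uses logistic regression on synthetic feature vectors as a stand-in for a real image-recognition network; all data, labels and hyperparameters are fabricated for demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: synthetic 8-number "image features" with made-up
# tiger / not-tiger labels (a real system would use labeled photos).
X = rng.normal(size=(200, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Training: fit the weights to the labeled examples by gradient descent.
w = np.zeros(8)
for _ in range(500):
    p = 1 / (1 + np.exp(-X @ w))        # predicted probability of "tiger"
    w -= 0.1 * X.T @ (p - y) / len(y)   # nudge weights toward the labels

# Inference: score a new, unlabeled example with the trained weights.
x_new = rng.normal(size=8)
p_new = 1 / (1 + np.exp(-x_new @ w))
print("tiger" if p_new > 0.5 else "not tiger")
```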

The evolution of machine learning
But in recent years small start-ups and big companies alike have been modifying their chip architecture to meet the demands of new artificial intelligence workloads, including autonomous driving and speech recognition. Two years ago, according to Deloitte, almost all the machine-learning tasks that involved artificial neural networks made use of large banks of GPUs and CPUs. This year new chip designs, such as FPGAs (field programmable gate arrays) and ASICs (application-specific integrated circuits), make up a larger share of machine-learning chips in data centers.

“These new kinds of chips should increase dramatically the use of machine learning, enabling applications to consume less power and at the same time become more responsive, flexible and capable,” according to a Deloitte market analysis published this year.

Neuromorphic chips represent the next level, especially as chip architecture based on the premise of shrinking transistors has begun to slow down. Although neuromorphic computing has been around since the 1980s, it’s still considered an emerging field — albeit one that has garnered more attention from researchers and tech companies over the last decade.

“The power and performance of neuromorphic computing is far superior to any incremental solution we can expect on any platform,” said Dharmendra S. Modha, IBM chief scientist for brain-inspired computing.

A 64-chip array of IBM’s TrueNorth chips, which represents 64 million neurons.
IBM

Modha initiated IBM’s own project into neuromorphic chip design back in 2004. Funded in part by the Defense Advanced Research Projects Agency, the years-long effort by IBM researchers resulted in TrueNorth, a neuromorphic chip the size of a postage stamp that draws just 70 milliwatts of power, or the same amount required by a hearing aid.

“We don’t envision that neuromorphic computing will replace traditional computing, but I believe it will be the key enabling technology for self-driving cars and for robotics,” Modha said.

For computing at the edge — like the reams of data a self-driving car must process in real time to prevent crashing — small, portable neuromorphic chips would represent a boon. Indeed, the ultimate end game is taking a deep neural network and embedding it onto a single chip. Current neuromorphic technology is far from that, however.

The MIT research spearheaded by Kim has taken about three years so far and is continuing, thanks to a $125,000 grant from the National Science Foundation.

"People have been pursuing neuromorphic computing for decades. We're getting closer to where such chips are possible," said Intersect360's Snell. "But in the near term the market will be more geared toward what can be done with traditional processing elements."

From left, MIT researchers Scott H. Tan, Jeehwan Kim and Shinhyun Choi have unveiled a neuromorphic chip design that could represent the next leap for AI technology. The secret: a design that creates an artificial synapse for "brain on a chip" hardware.
Kuan Qiao

Source: CNBC
Author: Andrew Zaleski