Pushing Processors Past Moore’s Law

Executive Summary

Is it really time to say goodbye to Moore’s Law? And if so, what comes next to ensure that compute power continues to increase? Currently, Intel is manufacturing its fastest chips on a 32-nanometer process, with plans to make its next generation of chips at 22 nanometers in the second half of next year. At 32 nanometers, the distance between the lines etched on the chip is far less than the width of a human hair, which is about 10,000 nanometers. Making those lines thinner is what chip manufacturers call moving down the process node. It’s an effort to cram more transistors onto a chip in order to keep pushing Moore’s Law: giving the world more computing for less cost.
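The scaling described above can be made concrete with some back-of-the-envelope arithmetic. The numbers below are illustrative assumptions, not industry data: transistor density doubling every 24 months, and each process node shrinking linear features by roughly a factor of 1/√2 (which is why 32 nanometers is followed by roughly 22).

```python
def transistors_after(years, start=1_000_000_000, doubling_period=2.0):
    """Projected transistor count after `years`, assuming a doubling
    every 24 months (illustrative starting count of one billion)."""
    return int(start * 2 ** (years / doubling_period))

def node_after(shrinks, start_nm=32.0):
    """Feature size after `shrinks` node steps of ~1/sqrt(2) each."""
    return start_nm * (0.5 ** 0.5) ** shrinks

print(transistors_after(4))   # one billion today -> four billion in four years
print(round(node_after(1)))   # 32 nm -> ~23 nm, close to the 22 nm target
```

One shrink of 1/√2 halves the area of each transistor, which is how density doubles without the chip itself growing.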

Moving down the process node also makes chips more power-efficient, but it’s becoming increasingly difficult, and therefore expensive, to keep making the lines on the chip thinner. Already, reaching 22 nanometers has required advances in manufacturing techniques such as immersion lithography and double patterning to etch ever-thinner lines. It’s also hard, at that size, to ensure each chip works, which means more and more chips can’t be used. Yields go down.
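The yield problem can be sketched with the textbook Poisson die-yield model. This model and its numbers are an assumption for illustration, not something from the article: the fraction of usable chips falls exponentially as defect density rises, which is what happens early in a new process node.

```python
import math

def die_yield(defects_per_cm2, die_area_cm2):
    """Poisson yield model: fraction of good dies = exp(-D * A),
    where D is defect density and A is die area."""
    return math.exp(-defects_per_cm2 * die_area_cm2)

mature = die_yield(0.5, 1.0)   # mature process: ~61% of dies usable
early  = die_yield(2.0, 1.0)   # immature process: ~14% usable
```

The same model shows why bigger dies hurt: doubling die area at a fixed defect density squares the yield fraction, so cutting-edge large chips suffer most.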

All of the above has big implications for the world of electronics — we’re going to stop seeing performance gains and cost reductions for our chips. For memory manufacturers it means the cost of memory may not fall as rapidly as it once did; historically, the amount of memory a dollar buys has doubled every 24 months, a memory-market analogue of Moore’s Law. We saw the clock speeds of chips plateau around 2005, and we now rely on parallel processing across multiple cores to improve performance. The trend toward using graphics processors is another way of exploiting parallelism, which is the accepted way forward for computing gains absent Moore’s Law. Those in the GPU industry have already written the eulogy for Moore’s Law.
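The limits of the multicore path can be sketched with Amdahl’s law, a standard model (not from this article) for how much speedup extra cores can deliver when part of a program must still run serially:

```python
def amdahl_speedup(cores, parallel_fraction):
    """Amdahl's law: overall speedup on `cores` processors when only
    `parallel_fraction` of the work can be parallelized."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# Even a program that is 95% parallel gets well under 8x from 8 cores:
print(round(amdahl_speedup(8, 0.95), 2))   # ~5.93x
# and no number of cores can beat 1 / 0.05 = 20x for that program.
```

This is why simply shoving more cores onto a chip eventually stops paying off: the serial fraction, however small, sets a hard ceiling.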

So, if we can’t continue relying on current manufacturing techniques to eke out performance gains, and if we assume we’ll eventually hit both design and power constraints as we shove more and more cores onto a chip, how will we keep our advancements in computing and personal electronics from grinding to a halt?

Companies like IBM, Hewlett-Packard, Samsung and Toshiba, as well as many universities and government labs, are all investigating the next generation of processors, with approaches ranging from far-off techniques like quantum or biological computing to nearer-term technologies like memristors and low-voltage nanomagnetic materials.

Memristors: In April, Hewlett-Packard said it had made a breakthrough in a new type of circuit element that could replace the transistor with switches that aren’t limited to zeros and ones. A memristor is like a chip that can think in color rather than in binary black and white (transistors can’t even think in shades of gray), because memory and logic can be stored on the same device. HP first announced its discovery of the memristor two years ago, and in April said its device can now switch (or process information) at the same speed as today’s silicon chips. That means we’re looking at a new type of computing that could process more information and might be better suited for artificial intelligence.
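The “not limited to zeros and ones” idea can be sketched numerically. Below is a toy version of a linear-drift memristor model; the resistance values, the drift constant and the drive current are all illustrative assumptions, not HP’s actual device parameters. The key behavior is that the device’s resistance depends on the charge that has flowed through it, so it remembers its history rather than holding a fixed 0 or 1.

```python
R_ON, R_OFF = 100.0, 16_000.0   # ohms: fully doped vs. undoped (toy values)

def memristance(w):
    """Resistance for internal state w in [0, 1], interpolating between
    the low-resistance (doped) and high-resistance (undoped) extremes."""
    return R_ON * w + R_OFF * (1.0 - w)

def apply_current(w, amps, seconds, k_per_coulomb=50.0):
    """Linear dopant drift (toy units): the state integrates the charge
    pushed through the device, clamped to [0, 1]."""
    return min(1.0, max(0.0, w + k_per_coulomb * amps * seconds))

w = 0.1
before = memristance(w)                      # high resistance
w = apply_current(w, amps=1e-3, seconds=10)  # 0.01 C of charge flows
after = memristance(w)                       # lower: the device "remembered"
```

Because the stored state is continuous rather than binary, a single element serves as both memory and a knob for computation, which is the property the article describes as thinking in color.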

Commercializing this technology would be a big deal, but we’re still a long way out on memristors, both because there’s more engineering work to do and because transistors are so ingrained in our devices. Transistors have been around for more than 60 years, and whole industries are dedicated to building them in billion-dollar factories supplied by large equipment companies. We also know how to program them to do a variety of tasks, from reading my tire pressure and warning me when it gets too low to delivering my Facebook pages. No one knows how to program a memristor today, and initially memristors would likely be cost-prohibitive because the manufacturing economies of scale aren’t there.

Quantum computing: The same programming and manufacturing issues face an even more distant technology, which aims to build faster computers using quantum processors built from quantum bits, or qubits, which could explore multiple computational paths simultaneously to reach the right answer. Research in this area is widespread and currently open to different ways of achieving quantum computing: some researchers are focused on building quantum computers using quantum dots, while others are looking at ion traps or nuclear magnetic resonance (for a detailed look, check out the roadmap published by Los Alamos National Laboratory). However, building a computer that can do more than simple calculations, hold its state long enough to perform complex calculations, move qubits around on a processor and address multiple other problems will require years of research. While a company called D-Wave says it has already built a working quantum computer, others estimate that quantum computing will reach commercial viability in decades rather than years.
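What “multiple paths simultaneously” means can be sketched with a toy statevector for a single qubit. This is standard textbook quantum mechanics, not tied to any of the hardware approaches above: a qubit holds amplitudes for |0⟩ and |1⟩ at once, measurement probabilities are the squared amplitudes, and amplitudes can interfere.

```python
import math

def hadamard(state):
    """Apply the Hadamard gate to a one-qubit state (amp0, amp1)."""
    a, b = state
    s = 1.0 / math.sqrt(2.0)
    return (s * (a + b), s * (a - b))

state = (1.0, 0.0)        # definitely |0>
state = hadamard(state)   # equal superposition: both outcomes at once
p0, p1 = state[0] ** 2, state[1] ** 2   # each ~0.5 if measured now
state = hadamard(state)   # the two paths interfere back to |0>
```

That final interference step is the essence of quantum algorithms: the paths are steered so that wrong answers cancel and the right one reinforces, rather than the machine literally trying every answer and reading them all out.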

Nanomagnetic materials: Nanomagnetic materials use a current to change the polarization of a nanomagnet as a means of storing information. For details, let me refer you to Cornell’s page on the topic:

The recent discovery that the electron-spin polarized current flowing to or from a thin-film ferromagnet can reversibly switch the magnetic orientation of another nearby nanomagnet by a “spin-transfer” process is opening up the prospect of a new means for ultra-high-density information storage. It also could lead to the development of new nanoscale components for high-frequency electronics. Research concerned with the injection and manipulation of electron spin in normal metal and semiconductor nanostructures could lead to the development of quantum computer elements as well as to a number of other “spintronics” applications.

However, before we get too focused on processors, it’s worth noting that even if memristors, quantum computing and new materials are farther off than we think, computers can still get faster by improving the interconnects on the chip (using optics instead of wires) and by improving memory to allow more information to be stored on the chip. Stacking technologies, already in place for memory, allow chips to be placed vertically, and research is ongoing into stacked integrated circuits that do the thinking inside computers. Tom Theis, Director of Physical Sciences for IBM Research, points to IBM’s research into phase-change memory, racetrack memory and other intersections between memory and storage as examples of ways memory can get faster, while HP has an entire research lab dabbling in photonics.

So, even as manufacturing challenges make it more difficult to meet the dictates of Moore’s Law on a single chip, multicore processors and parallel programming are still boosting performance today. In the medium term, new interconnects and memory advances can also increase system performance. However, pushing for more transistors to keep the old model in place will drive computing that isn’t economically or ecologically sustainable. Moving down the process node results in chips that consume less energy, but thinking beyond silicon, beyond CMOS and beyond the transistor could point us toward chips that are far less energy-intensive to manufacture and that use less energy for computing. So in the long run, we’re going to have to look to memristors, quantum computing and nanomagnetic materials for the gadgets that are decades out. Maybe then we can do better than Moore’s Law. After almost five decades, it’s had a good run.
