Smaller, Smarter, Speedier, Stacked: Engineering Next-Gen Computing

At Georgia Tech, engineers are finding new ways to shrink transistors, make systems more efficient, and design better computers to power technologies not yet imagined.

Asif Khan holds a silicon wafer in Georgia Tech’s cleanroom facility. Khan is trying to build new kinds of computer memory using fundamentally different mechanisms to store data. (Photo: Candler Hobbs)

The power of modern computing is hard to overstate.

Your smartphone has more than 100,000 times the power of the computer that guided Apollo 11 to the moon. It’s about 5,000 times faster than 1980s supercomputers. And that’s just processing power.

Apple’s original iPod promised “1,000 songs in your pocket” in 2001. Today’s average smartphone has enough memory to store 25,000 songs, along with thousands of photos, apps, and videos.

This exponential leap in capability traces back to a prediction made in 1965 by Intel co-founder Gordon Moore. He suggested the number of transistors — tiny electronic switches — on a computer chip would double roughly every two years. Moore’s Law, as it became known, has served as a benchmark and guiding principle for the tech industry, influencing the trajectory of innovation for nearly six decades.

But the pace of transistor miniaturization has slowed. Headlines regularly declare Moore’s Law dead.

Arijit Raychowdhury sees it differently.

He said Moore’s Law was never just about shrinking transistors. It was about making computing better.

“Moore’s Law is fundamentally economic,” said Raychowdhury, Steve W. Chaddick School Chair of Electrical and Computer Engineering (ECE). “It’s not about the physics of making transistors smaller. It’s about the business imperative to deliver better performance, lower power consumption, smaller form factors, or reduced costs.”

Read the full story in Helluva Engineer magazine.