Nov 6, 2018 | Atlanta, GA
The density of transistors in microchips can no longer be expected to double every two years, and only a handful of researchers are dedicated to solving the end of Moore’s law from all angles. They gathered at Georgia Tech for the second annual Center for Research into Novel Computing Hierarchies (CRNCH) Summit on Nov. 2.
More than 80 faculty members, industry partners, and students attended the all-day summit, which featured 10 plenary talks on the future of computing and concluded with a panel discussion of the ideas presented.
The day reflected how much CRNCH has shaped the agenda for new computing hierarchies across the entire stack since it was founded in 2016 by Tom Conte, a professor in the schools of Computer Science (SCS) and Electrical and Computer Engineering (ECE).
“Three years ago, when Moore’s law was on its death bed, most architects went into mourning, but Tom was different,” College of Computing Dean Zvi Galil said during his opening remarks. “Immediately after the funeral, he created CRNCH to actually do something.”
CRNCH has grown ever since. The center now has more than 30 dedicated faculty members from the College of Computing, the College of Engineering, the College of Sciences, and the Georgia Tech Research Institute. This year, SCS Professor Vivek Sarkar was named co-director to offer his expertise in high-performance computing and programming languages. CRNCH’s first research lab, the Rogues Gallery, a collection of novel hardware, continues to add new machines and has pioneered industry connections with partners such as Emu Technologies.
“Everyone is coming up with new architectures, but what happens when these architectures meet real programmers and real students?” Jason Riedy, a research scientist in the School of Computational Science and Engineering, said of the Rogues Gallery. “We’re trying to break the idea of ‘it’s too much work’ to try something new.”
CRNCH is all about trying something new, “the crazier the better,” as Conte said. The plenary talks covered a range of innovations that could propel computing forward, from architecture and artificial intelligence to quantum computing and algorithms.
Some research highlights:
- Dana Randall, professor in SCS and co-executive director of the Institute for Data Science and Engineering, suggested using emergent computation, which she defines as predictable macro-scale changes to a system as the user modifies a few parameters.
- Moinuddin Qureshi, professor in ECE, highlighted the potential of quantum computing.
“Quantum computing is exotic and seems like something only physics and mathematics can touch, but it’s very important for the field that engineers are aware of this,” Qureshi said. “The real metric for success is reliability. If we can come up with good architectures and compilers that can reduce the error rate, that can be a big win.”
Another goal of the summit was to invite collaboration, and many presenters detailed how novel architectures could benefit their research, including:
- David Womble, director of Artificial Intelligence Programs at Oak Ridge National Laboratory, discussed the lab’s AI efforts in areas such as bioscience, material synthesis, and additive manufacturing, endeavors that require thinking about computing in a new way to succeed.
- Jim Ang, manager of Physical & Computational Sciences at Pacific Northwest National Laboratory, presented the history and the future of high performance computing in the Department of Energy’s national labs.
- Jason Poovey, a research scientist at the Georgia Tech Research Institute, emphasized how the Center for Health Analytics and Informatics (CHAI) needs new scalable architectures for the health data it works with.
A keynote from Peter Kogge, a distinguished professor at Notre Dame and IEEE Computer Society Pioneer Award winner, perhaps best summed up the challenges and opportunities of the post-Moore era that CRNCH is eager to take on.
“Post-Moore is going to be heavily heterogeneous, lots of new technologies will show up first as accelerators, clock rates will continue to go flat, meaning we will have billions of threads, and power efficiency is paramount,” he said. “We need some new thinking to get around this.”