Updated September 21, 2023, at 2:47 p.m.
Google researcher Denny Zhou discussed the emergence of reasoning in large language models at a Harvard Machine Learning Foundations talk Friday afternoon at the Science and Engineering Complex.
Zhou, the founder and lead of the reasoning team at Google DeepMind, outlined how he trains large language models to shrink the gap between machine and human intelligence.
The lecture, titled “Teach Language Models to Reason,” is part of the Kempner Seminar Series organized by the Harvard Machine Learning Foundations Group, which is composed of faculty, graduate students, and postdoctoral fellows at the University who research machine learning.
Zhou talked about his approach to investigating reasoning in AI technology, which he started five years ago.
“The first thing I tried was to combine deep-learning models with, first of all, a lot of neurological machines,” he said.
The approach is composed of four elements: “chain-of-thought,” adding intermediate reasoning steps before a final answer; “self-consistency,” sampling answers repeatedly and selecting the most frequent one; “least-to-most,” breaking a problem into subproblems and solving them individually; and “instruction finetuning,” calibrating a model to handle new tasks without task-specific training.
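Of the four elements, “self-consistency” is the most mechanical: sample several reasoning chains from the model, then keep whichever final answer appears most often. A minimal sketch of that voting step (the sampled answers below are hypothetical, not from Zhou's talk, and a real system would draw them from an LLM):

```python
from collections import Counter

def self_consistency(sampled_answers):
    """Return the most frequent final answer among sampled reasoning chains.

    In practice each element would be the final answer extracted from one
    chain-of-thought sample drawn from a language model at temperature > 0.
    """
    return Counter(sampled_answers).most_common(1)[0][0]

# Hypothetical example: three of four sampled chains converge on "18".
samples = ["18", "20", "18", "18"]
print(self_consistency(samples))  # prints "18"
```

The intuition is that correct answers tend to be reachable by many distinct reasoning paths, while errors scatter, so majority voting over samples filters out one-off mistakes.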
Though “good ideas have really amazing performance,” Zhou said AI still has a long way to go in comparison to human thinking.
Zhou said he is unconvinced that AI’s integration into modern society will live up to its expectations. Though some might say that super intelligence will emerge in five or 10 years, Zhou said, “I just want to see a self-driving car coming in 10 years, and I cannot imagine that in this moment.”
In a post-talk interview, Zhou elaborated that the AI technology behind self-driving cars would be “very difficult to scale up” because the data needed to train the models is specific to each city, so new data would have to be collected for every city.
Human intelligence, Zhou said, still surpasses AI capabilities.
“Humans are humans. If you know how to drive cars in one city, you have no problem to drive cars in other cities,” he said. “That is very different from the kinds of techniques used to do self-driving cars.”
Zhou shared his hopes for the development of LLMs with reasoning capabilities and their contributions to human society.
“I expect lots of AI models will greatly improve our experience of using different softwares,” he said.
He cited ChatGPT’s ability to “write better text” and larger models’ capacity to “help write code.”
“Larger models will make our world more productive,” Zhou said.
—Staff writer Camilla J. Martinez can be reached at email@example.com.
—Staff writer Tiffani A. Mezitis can be reached at firstname.lastname@example.org.