Google DeepMind Researcher Discusses Emergence of Reasoning in AI at Harvard ML Foundations Talk

Denny Zhou, a researcher at Google, discussed large language models at a talk at the Science and Engineering Complex in Allston. By Julian J. Giordano
By Camilla J. Martinez and Tiffani A. Mezitis, Crimson Staff Writers

Updated September 21, 2023, at 2:47 p.m.

Google researcher Denny Zhou discussed the emergence of reasoning in large language models at a Harvard Machine Learning Foundations talk Friday afternoon at the Science and Engineering Complex.

Zhou, the founder and lead of the reasoning team at Google DeepMind, outlined how he trains large language models to shrink the gap between machine and human intelligence.

The lecture, titled “Teach Language Models to Reason,” was part of the Kempner Seminar Series organized by the Harvard Machine Learning Foundations Group, which is made up of faculty, graduate students, and postdoctoral fellows at the University who research machine learning.

Zhou talked about his approach to investigating reasoning in AI technology, which he started five years ago.

“The first thing I tried was to combine deep-learning models with, first of all, a lot of neurological machines,” he said.

The approach is composed of four elements: “chain-of-thought,” or adding thoughts before a final answer; “self-consistency,” sampling repeatedly and selecting the most frequent answer; “least-to-most,” breaking down problems into different parts and solving them individually; and “instruction finetuning,” calibrating an AI to assess new problems without training.
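The “self-consistency” step can be sketched in a few lines. The sketch below is an illustration of the general technique as described in the talk, not Zhou’s implementation; the `mock_sampler` is a stand-in for a real model call, which in practice would sample an LLM’s chain-of-thought output at a nonzero temperature.

```python
import itertools
from collections import Counter

def self_consistency(sample_answer, prompt, n=5):
    """Sample n reasoning paths and return the most frequent final answer."""
    answers = [sample_answer(prompt) for _ in range(n)]
    return Counter(answers).most_common(1)[0][0]

# Stand-in for a real LLM call: cycles through canned sampled answers.
_canned = itertools.cycle(["18", "18", "21", "18", "17"])

def mock_sampler(prompt):
    return next(_canned)

print(self_consistency(mock_sampler, "3 boxes of 6 eggs each: how many eggs?"))
# Prints "18", the majority answer across the five samples.
```

The idea is that incorrect reasoning paths tend to disagree with one another, while correct ones converge on the same answer, so a simple majority vote filters out many mistakes.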

Though “good ideas have really amazing performance,” Zhou said AI still has a long way to go in comparison to human thinking.

Zhou said he is unconvinced that AI’s integration into modern society will live up to expectations. Though some might say that super intelligence will emerge in five or 10 years, Zhou said, “I just want to see a self-driving car coming in 10 years, and I cannot imagine that in this moment.”

In a post-talk interview, Zhou elaborated that the AI technology behind self-driving cars would be “very difficult to scale up” because the data needed to train such models is specific to each city, so separate data would have to be collected city by city.

Human intelligence, Zhou said, still surpasses AI capabilities.

“Humans are humans. If you know how to drive cars in one city, you have no problem to drive cars in other cities,” he said. “That is very different from the kinds of techniques used to do self-driving cars.”

Zhou shared his hopes for the development of LLMs with reasoning capabilities and their contributions to human society.

“I expect lots of AI models will greatly improve our experience of using different softwares,” he said.

He cited ChatGPT’s ability to “write better text” and larger models’ capacity to “help write code.”

“Larger models will make our world more productive,” Zhou said.

—Staff writer Camilla J. Martinez can be reached at

—Staff writer Tiffani A. Mezitis can be reached at

