‘Hyped Just About Right’: How the AI Boom is Reshaping Research at Harvard

By Hewson Duffy, Crimson Staff Writer

In the early 2010s, Psychology professor Joshua Greene got a glimpse of the future. It was a video of a neural network playing classic Atari games like Pong and Space Invaders just like a human — an early demonstration of the work behind Google DeepMind’s 2015 breakthrough in the quest to teach machines how to learn.

“I was stunned,” Greene says. “There was nothing like that before.”

Immediately, Greene, who studies the psychology of morality, began wondering about the implications for human labor and automated warfare.

“It was kind of like a nuclear bomb going off in my head when I realized the potential of this new wave of AI,” he says.

By 2016, Greene was teaching a seminar devoted in large part to the moral implications of human- and superhuman-level AI.

In the ensuing years, he has watched the rest of Harvard catch up.

Through the late 2010s, machine learning was already a growing discipline for computer scientists and adjacent researchers. But the inflection point came in 2022 with the arrival of ChatGPT.

Since then, a surge in both academic and corporate funding for AI — or anything remotely related enough to catch the tide — has begun to reshape research at Harvard.

At the University level, new faculty committees, research programs, and a whole new institute are scrambling to understand the technology.

Yet as tech giants race to create ever-larger AI models, even Harvard can no longer match the scale of their funding, and some Computer Science professors have left the University entirely to continue their research in industry.

Professors across other disciplines, meanwhile, have begun using AI to speed up their research, aid their classroom instruction, and design new methodologies altogether. While faculty are stunned by the pace of advancement and apprehensive about the technology’s risks, many are excited about the benefits it might herald, too.

To them, the wave of AI Greene saw coming is here — and they’re rushing to meet it.

‘A Different Ecosystem’

Even before the release of ChatGPT, Harvard was gearing up to turn its institutional gaze onto AI.

In December 2021, Mark Zuckerberg, a Harvard dropout, and Priscilla Chan ’07 pledged $500 million to found the Kempner Institute for the Study of Natural and Artificial Intelligence — named after Zuckerberg’s mother and maternal grandparents. At the time, obsession with AI was still mostly confined to Silicon Valley and computer science departments. Popular chatbots had major flaws and, at least in the public’s view, certainly weren’t catching up to humans anytime soon.

Mark E. Zuckerberg, left, and Priscilla Chan ’07 speak at the launch of the Kempner Institute for the Study of Natural and Artificial Intelligence in September 2022. By Mayesha R. Soshi

But while the wheels of academia turned slowly, AI research was moving at breakneck speed. As the Kempner Institute gathered faculty and built up administrative infrastructure over the 2022-23 academic year, ChatGPT launched and became the fastest-growing consumer application in history, reaching 100 million users in just two months. By early 2023, Google and Meta had diverted massive amounts of funding to join an AI arms race with OpenAI.

It wasn’t until the fall of 2023 that the Kempner Institute became noticeable on Harvard’s campus. Posters for Kempner-sponsored talks on cutting-edge machine learning and neuroscience research began appearing on the glass doors and concrete columns of the Science and Engineering Complex in Allston.

Blog posts on the Institute’s website explained research advances made by faculty affiliates. And walking the SEC’s halls were the 22 recipients of the Kempner Institute’s inaugural graduate fellowship, which covers students’ tuition, stipend, and health insurance for four years, and provides them access to Kempner faculty mentorship and computing power.

The stream of funding began to reach undergraduates, too. A Kempner undergraduate research program awarded its first grants this spring, and in June a small cohort of students will move into Harvard housing and begin full-time research under Kempner-affiliated faculty as part of KRANIUM, the Institute’s new summer research program.

By the end of this semester — two and a half years after its announcement — the Kempner Institute had moved into a large wood-paneled space filled with whiteboards on the top floor of the SEC and seemed at last to be operating at full capacity.

Yet at this point, even $500 million had nothing on industry investment. As lavish and well-funded as the Kempner Institute might seem at Harvard, the past two years had seen frontier AI research become orders of magnitude more capital-intensive.

In 2021, the largest AI models cost a few million dollars to produce, but by now the cost of training a frontier model may exceed $1 billion.

This ballooning scale has dramatically changed the relationship between academic and industry research. Rather than train the next generation of models themselves, AI researchers at Harvard must ask smaller questions — or different ones entirely.

Elise Porter, the executive director of the Kempner Institute, puts it in stark terms.

“The problems that industry is trying to solve — it’s a different ecosystem than the problems that we’re trying to solve,” she says.

The difference is materially evident in the relative size of computing clusters. Training and testing AI models requires massive amounts of computing power, typically via clusters of high-powered graphics processing units, or GPUs.

Last summer, the Kempner Institute ordered almost 400 state-of-the-art GPUs — likely an eight-figure purchase, as each unit was selling for around $30,000 at the time. When the cluster is fully set up — the second half of the GPUs only arrived in May due to hardware issues and extreme demand — it will be one of the largest available in academia, Porter says.

Meanwhile, Meta announced in January that it would buy 350,000 of the same processors, a staggering investment likely worth over $10 billion. Google and OpenAI have made similar moves.

As the discrepancy between academic and industry funding has grown, some professors have moved from one to the other. Last July, Computer Science and Applied Math professor Yaron Singer left Harvard to focus on his AI security startup, Robust Intelligence, and Computer Science professor Boaz Barak went on leave to work at OpenAI this spring.

Still, Porter emphasizes that the Kempner Institute can contribute to knowledge without training the largest known models. She points to experiments — some of which are currently being run on the Kempner Institute’s cluster — exploring how models change as they grow bigger and ingest more data (so-called “scaling laws”). While these experiments might take only tens or hundreds of GPUs to run, their results could inform how corporate labs build their largest models.

The Kempner Institute for the Study of Natural and Artificial Intelligence purchased nearly 400 advanced graphics processing units to bolster its resources for training generative AI models. By Addison Y. Liu

Even in industry, Porter says, companies “still have to have proof of concept” before spending tens or hundreds of millions on creating a single large model.

These proofs of concept, however, require substantial computing resources. The goal of research at the Kempner Institute, Porter says, is to find the “sweet spot” of questions whose answers are “meaningful and create advancement” — while being testable without billion-dollar computing clusters.

‘Felt Like Magic’

But AI’s reach at Harvard extends far beyond the Kempner Institute, and its funding doesn’t just come from Zuckerberg. Over the past year, Harvard has established three University-wide working groups on AI, dedicated a section of its website to the topic, and sponsored many more standalone events.

Over the summer, Harvard Law School announced an Initiative on AI and the Law in partnership with the Berkman Klein Center for Internet & Society, and this fall, the Medical School accepted its first applications for a new AI in Medicine PhD track.

Through one of the working groups, Harvard established another summer research program — separate from the Kempner Institute-affiliated one — to fund projects studying, or even just using, generative AI. According to multiple faculty, the resulting GenAI Research Program awarded funding with seemingly unprecedented speed.

Carole T. Voulgaris, a Graduate School of Design professor who hired two summer research assistants through the GenAI Research Program, says the process of securing funding “felt like magic.”

After receiving an email about the program, she filled out a form describing her proposed project — using ChatGPT to extract structured data from narrative text within Federal Transit Administration annual reports. A week or two later, she says, she received a list of interested students, and “shortly thereafter” two students accepted her offer.

“It’s faster than anything ever goes in academia broadly,” she says.

“The University’s working groups have been working to make new resources available wherever possible, as quickly as possible,” Vice Provost for Research John H. Shaw wrote in an emailed statement. “In this case, the GenAI R&S Working Group recognized an unmet need for supporting faculty and students across the University who wished to engage in related research over the summer.”

Professors have also used Harvard funding to develop educational chatbots for classes like Computer Science 50, Economics 50, and Physical Sciences 2.

According to Logan S. McCarty, who serves as the associate dean for science education in the office that funded many of the chatbots, the results have been “very promising.”

The Science and Engineering Complex, located at 150 Western Avenue in Allston, houses the administration of the School of Engineering and Applied Sciences. By Julian J. Giordano

“Students are very willing to ask what feel like dumb questions to the AI that they might not be willing to ask a human TF,” he says.

“I think we’ve seen positive effects in almost all cases,” McCarty adds — before clarifying this is distinct from using AI as a “shortcut.”

(Those shortcuts are not uncommon: In The Crimson’s 2024 senior survey, almost a third of respondents reported using an AI model to complete an assignment without being allowed to do so.)

So where’s all this money coming from?

At the beginning of the year, McCarty says, “none of this was part of our budget.” But “when something like AI comes along, we try to collectively find” funding to divert. In the case of the course chatbots, money came from discretionary funding, as well as “course innovation funds.”

Some professors praised the University’s swift response.

“Harvard has an impressive presence in AI,” wrote Computer Science professor Ariel Procaccia in an email, citing both the Kempner Institute and new AI initiatives within SEAS. “I expect that these initiatives will lead to impactful, large-scale collaborations around AI.”

“I’m really happy with Harvard,” Astronomy professor V. Ashley Villar says. “They’ve just really embraced what I think will be revolutionary.”

‘A Fundamental Building Block’

Last spring, a colleague told Harvard Kennedy School professor Gautam Nair a statistic he found extraordinary: Teams at Microsoft who used large language models, or LLMs, in their day-to-day work were vastly more productive than those who did not.

Struck by this fact, Nair began to use LLMs “quite extensively.” AI now helps him with everything from writing code for statistical analyses to outlining scholarly articles.

He’s not the only one. Professors from across the Faculty of Arts and Sciences and many of Harvard’s graduate schools have begun incorporating AI into both their research and pedagogy.

Before ChatGPT turned generative AI into a buzzword, researchers were already rushing to adopt machine learning — the broader family of pattern-recognition techniques that underpins facial recognition and Google Translate.

“Machine learning has very quickly been integrated across the social sciences and life sciences,” Greene says. “It’s increasingly driving a lot of cutting edge research — really anything where there’s just a lot of data.”

In psychology, he guesses, roughly half the talks given in the department now “have some kind of machine learning component.”

Villar, the Astronomy professor, compares the increasing use of machine learning to the adoption of basic statistics. Machine learning, she says, is “a fundamental building block at this point of so much of what we do in science.”

The advent of ChatGPT only accelerated this trend, but with a new twist — beyond simply using machine learning to analyze their data, some professors began to treat AI as an object of study itself.

In psychology, according to Greene, “There’s people studying large language models, especially ChatGPT, as if it were a child or a chimp or some new intelligence that we’re trying to understand.”

Professors are also using AI just like students do. Recently, Nair used an LLM to devise, and write the code for, a new classification scheme for congressional speeches — one that could save his research team hundreds of hours of manual labeling. McCarty, who taught a Gen Ed course on generative AI called “Rise of the Machines,” says he used ChatGPT to help him understand technical computer science papers in preparation for the course.

For others, better generative models are unlocking previously inaccessible questions. Graduate School of Design faculty member Allen Sayegh, who studies people’s reactions to the built environment, is experimenting with using AI to generate virtual reality “immersive spaces” and adapt them to users’ reactions in real time.

Over at the SEC, Procaccia recently published results suggesting that large language models can effectively generate policies, grounded in human opinions, that would enjoy wide support. In an email, Procaccia touted AI’s “potential to bolster democracy by providing new ways of finding consensus, supporting deliberation and enhancing political decision making.”

But how much of this potential is well-grounded — and how much is just hype?

Though professors widely note the limitations of current AI, particularly potential biases and hallucinations, many also express awe at the pace of advancement.

The field of machine learning, Villar says, “has suffered from booms of fads.” She brings up how some astronomers will build a complicated, AI-powered model to analyze their data when “basic linear algebra” would have performed better.

But even if some researchers are just “following the current trend,” she says, “the success rate of these things for being transformative is high enough that that’s okay.”

OpenAI CEO Sam Altman, right, speaks at a packed event in Memorial Church in May. The excitement surrounding Altman’s visit to campus demonstrated broad interest in artificial intelligence from the student body. By Emily L. Ding

AI is “hyped just about right these days,” wrote Harvard Kennedy School professor Mathias Risse in an email.

Yet as much as faculty say they’re excited about the possibilities AI could bring, they are just as quick to express concerns over an uncertain future.

“As a researcher, it’s a sense of excitement,” Greene, the psychologist, says. “But as humans who care about the world, plenty of people I know are concerned about the destructive effects of AI.”

For the moment, though, he’s not too worried about machines coming for his job — or those of his colleagues.

“Relatively speaking, I think scientists are more likely to have their work complemented by AI technology rather than displaced,” he says.

“Knock on wood — for now,” Greene adds.

—Magazine Chair Hewson Duffy can be reached at hewson.duffy@thecrimson.com.
