
Harvard Releases First Guidelines for ‘Responsible Experimentation with Generative AI Tools’

Harvard University Information Technology offices are located on Memorial Drive in Cambridge. By Julian J. Giordano
By Rahem D. Hamid and Claire Yuan, Crimson Staff Writers

Harvard announced initial guidelines for the use of generative artificial intelligence programs such as ChatGPT in an email to University affiliates on Thursday.

The first University-wide message to address the rising use of AI on campus, the email — sent from Provost Alan M. Garber ’76, Executive Vice President Meredith L. Weenick ’90, and Information Technology Vice President and Chief Information Officer Klara Jelinkova — emphasized the protection of confidential data, reiterated academic integrity policies, and warned against AI phishing attempts.

“The University supports responsible experimentation with generative AI tools, but there are important considerations to keep in mind when using these tools, including information security and data privacy, compliance, copyright, and academic integrity,” they wrote.

The five-point message instructs members of the Harvard community to “protect confidential data” — defined as all information that is not already public — and informs them that they are “responsible” for any content they produce that includes AI-generated material, as AI models can violate copyright laws and spread misinformation.

“Review your AI-generated content before publication,” the email urged.

While the administrators wrote that the guidelines are not new but instead “leverage existing University policies,” there is currently no policy in the Faculty of Arts and Sciences on AI’s impact on academic integrity.

In a recent survey of FAS faculty, almost half of respondents said they believed AI would have a negative impact on higher education, while nearly 57 percent said they did not have an explicit or written policy on AI usage in the classroom.

The use of AI platforms has already begun making its way into Harvard’s classrooms.

Computer Science 50: “Introduction to Computer Science,” the University’s flagship introductory coding class, plans to incorporate artificial intelligence into its instruction for the first time this coming fall semester. Students will be allowed to use AI to find bugs in their code, seek feedback on design and error messages, and answer individual questions.

Administrators wrote that the University will “continue to monitor developments and incorporate feedback from the Harvard community to update our guidelines accordingly.”

—Staff writer Rahem D. Hamid can be reached at rahem.hamid@thecrimson.com.

—Staff writer Claire Yuan can be reached at claire.yuan@thecrimson.com. Follow her on Twitter @claireyuan33.


Tags
College, Central Administration, FAS, University, Technology, Front Middle Feature, Featured Articles, Artificial Intelligence