A Message About AI & Campus Services

Dear Colleagues,

I’m following up on the University’s recent email (see below) about the rapidly expanding use of publicly available artificial intelligence (AI). While AI is not new, the growth of easily accessible generative AI tools has the potential to permeate all facets of life, including the workplace. Today’s message is the beginning of a conversation we will have to better understand AI and what it can, and should, mean for Campus Services and the broader University. Our approach will be to balance the exciting and innovative possibilities with sensible and appropriate implementations that enhance the way our employees deliver services to the Harvard community.

For now, it’s important that we adhere to the University’s guidelines. Employees should:

  • Use care when handling confidential information, including financial and employee-related data. Please don’t enter confidential information into publicly available tools like ChatGPT.
  • Continue to own your work by ensuring any content (e.g., websites, reports, presentations) generated with the support of AI is accurate.
  • Stay vigilant for suspicious emails and other digital communications, and don’t fall for increasingly sophisticated phishing scams.
  • Consult with CSIT before purchasing any new AI technology.

While our employees have consistently done well in these areas, it’s important to understand that AI adds a layer of complexity to each of them. If you have any questions about how artificial intelligence may affect our workplace, reach out to your primary IT contact or to ben_gaucherin@harvard.edu directly. Please pass this message along to others as you deem appropriate.

We will strive to maintain an ongoing dialogue so that we can use these tools to maximum effect while ensuring employees have the knowledge to grow alongside them. I hope you’re eager to participate in this conversation as we move forward.

Sincerely,

Sean

—————————————————————————————————————————————————————————–

Original University Message Sent July 13th

Dear Members of the Harvard Community,

We write today with initial guidelines on the use and procurement of generative artificial intelligence (AI) tools, such as OpenAI’s ChatGPT and Google Bard. The University supports responsible experimentation with generative AI tools, but there are important considerations to keep in mind when using these tools, including information security and data privacy, compliance, copyright, and academic integrity.

Generative AI is a rapidly evolving technology, and the University will continue to monitor developments and incorporate feedback from the Harvard community to update our guidelines accordingly.

Initial guidelines for use of generative AI tools

  • Protect confidential data: You should not enter data classified as confidential (Level 2 and above), including non-public research data, into publicly available generative AI tools, in accordance with the University’s Information Security Policy. Information shared with generative AI tools using default settings is not private and could expose proprietary or sensitive information to unauthorized parties.
  • You are responsible for any content that you produce or publish that includes AI-generated material: AI-generated content can be inaccurate, misleading, or entirely fabricated (sometimes called “hallucinations”) or may contain copyrighted material. Review your AI-generated content before publication.
  • Adhere to current policies on academic integrity: Review your School’s student and faculty handbooks and policies. We expect that Schools will develop and update their policies as we better understand the implications of using generative AI tools. In the meantime, faculty should be clear with the students they teach and advise about their policies on permitted uses, if any, of generative AI in classes and on academic work. Students are also encouraged to ask their instructors for clarification about these policies as needed.
  • Connect with HUIT before procuring generative AI tools: The University is working to ensure that tools procured on behalf of Harvard have the appropriate privacy and security protections and provide the best use of Harvard funds.
    • If you have procured or are considering procuring generative AI tools or have questions, contact HUIT at ithelp@harvard.edu

It is important to note that these guidelines are not new University policy; rather, they draw on existing University policies. You can find more information about generative AI, including a survey to collect data on its potential use, on the HUIT website, which will be updated as new information becomes available.

Sincerely,

Alan M. Garber

Provost

Meredith Weenick

Executive Vice President

Klara Jelinkova

Vice President and University Chief Information Officer