AI Use Policies and Guidance

AI and Machine Learning are transforming research practices, providing increased capabilities and efficiencies. However, navigating the complex landscape of AI tools requires careful planning and adherence to best practices to ensure security, compliance, and ethical integrity.


Ensure Data Security: Contact HMS IT

  • Objective: Evaluate risks and secure your data.
  • Action: Submit a request to HMS IT detailing your project and the AI tools you intend to use.
  • Outcome: HMS IT will assess the security implications and guide you on how to safeguard your data.


Navigate Procurement with HMS

  • Objective: Smooth purchasing and contract navigation.
  • Action: Before making any purchases, consult with HMS Procurement to ensure you are in compliance.
  • Outcome: Compliant and secure acquisition of AI tools. HMS Procurement may also be aware of existing licensing negotiations and other relevant information.


Adhere to Regulations and Standards

  • Sponsor or Funding Agency Regulations: Consult relevant agencies to understand and adhere to their requirements.
  • Data Privacy Standards: Ensure compliance with data protection and privacy laws applicable to your region and field of research.
  • Harvard Information Security Policy: In accordance with the University’s Information Security Policy, you should not enter data classified as confidential (Level 2 and above), including non-public research data, into publicly available generative AI tools. Information shared with generative AI tools using default settings is not private and could expose proprietary or sensitive information to unauthorized parties. HUIT has created a chart comparing available generative AI tools, including which levels of confidential data each is approved to handle.


Proper Acknowledgment in Publications

  • Objective: Maintain transparency and integrity in your scholarly publications.
  • Action: Review our Publication Guide on the proper acknowledgment of AI tools used in your research.
  • Outcome: Ensure ethical integrity and transparency in publications.


Plagiarism Detectors

  • We did not include plagiarism detectors in our Tools list, as evidence has shown significant false-positive rates (i.e., incorrectly flagging fully human-written text as AI-generated) and bias against non-native English speakers.
  • If using a plagiarism detector, please review and consider the following guidance:
    • Before using the software, establish a clear plan for plagiarism detection.
    • Openly acknowledge the possibility of false positives at the outset of any conversation.
    • Treat the results of plagiarism detection software as suggestive rather than accusatory, and always assume positive intent.



Need More Help?

Reach out to us at



This resource is provided to assist researchers in navigating the complex landscape of AI and Machine Learning tools within the realm of scholarly research. While we strive to offer accurate and up-to-date information, the field of AI is rapidly evolving, and practices and tools may change over time. It is the responsibility of the individual researcher and their respective teams to ensure that they are in compliance with all relevant laws, regulations, and institutional policies.

The steps outlined in this guide are intended to serve as a general framework, and may not cover all specific scenarios or requirements. Researchers are encouraged to conduct thorough due diligence, seek expert advice, and engage with relevant institutional departments, such as HMS IT and HMS Procurement, to ensure that all aspects of data security, procurement, regulatory compliance, and ethical standards are adequately addressed.

Harvard Medical School and its affiliates do not endorse any specific AI or Machine Learning tools, and the inclusion of any tools or practices in this guide does not imply endorsement. The responsibility for ensuring the security, compliance, and ethical integrity of the use of AI tools in research rests solely with the individual researcher and their respective teams.