Harvard Guidance on Use of AI in Research
- HMS: https://it.hms.harvard.edu/about/policies/responsible-use-generative-ai
- Harvard: https://huit.harvard.edu/news/ai-guidelines
AI in Grant Proposal/Peer Review Process
- NIH Peer Review: https://grants.nih.gov/grants/guide/notice-files/NOT-OD-23-149.html
- NSF Reviews and Proposals: https://new.nsf.gov/news/notice-to-the-research-community-on-ai
AI in Authorship and Publications
- Committee on Publication Ethics (COPE) Authorship Guidelines: https://publicationethics.org/cope-position-statements/ai-author
- World Association of Medical Editors (WAME) Recommendations for Scholarly Publications: https://wame.org/page3.php?id=106
AI Tools Available at Harvard
- HUIT Generative AI Tools Comparison: https://huit.harvard.edu/ai/tools
- Data classified at Level 3 and below can be used with approved tools; see the HUIT tools comparison above for which tools are approved at each level.
AI Best Practices
- You should not enter data classified as confidential (Level 2 and above), including non-public research data, into publicly available generative AI tools, in accordance with the University’s Information Security Policy.
- AI tools cannot be listed as authors on a paper.
- The NIH prohibits its scientific peer reviewers from using natural language processors, large language models, or other generative AI technologies for analyzing and formulating peer review critiques for grant applications and R&D contract proposals.
- NSF reviewers are prohibited from uploading any content from proposals, review information, or related records to non-approved generative AI tools.
- Authors should disclose when AI tools are used and describe how the tools were used.