
Use of Generative AI in Research: Data Privacy Concerns

This guide will help you navigate the tools and information you need to consider when using generative artificial intelligence (AI) in your research.

Data Privacy

Review the data use policies of any generative AI tool you plan to use. Only input data that are appropriate to share publicly and externally to UNLV. Exercise caution when working with private, sensitive, or identifiable information, and avoid sharing any student information (which could be a FERPA violation), proprietary data, human subject data, controlled or regulated information, third-party copyrighted materials, or any materials whose rights you do not own or manage.

Some generative AI tools (for example, ChatGPT Enterprise) have data use policies that may make them HIPAA compliant. Before inputting data into such tools, it is recommended that you seek guidance from the Office of Research Integrity.

Peer Review

Uploading a manuscript, in whole or in part, to a generative AI tool as part of your review process may breach the requirement to maintain the confidentiality of the content. For example, NIH has issued clear guidance prohibiting the use of generative AI in the NIH peer review process.

If you suspect that text or an image has been created using generative AI and the author has not disclosed this, you may want to alert the editor to this concern. While tools are available to detect generated images (see the Detection page of this guide), note that uploading that text or those images to such tools during the peer review process may itself breach the confidentiality of the content.
