EC Guidelines on the Responsible Use of Generative AI in Research

As the adoption of generative AI technology continues to expand across various fields, including science, these recommendations address both the promising opportunities and potential challenges associated with its proliferation. Rooted in the principles of research integrity, the guidelines offer clear directives to researchers, research organizations, and funders to ensure a unified approach across Europe. They draw upon established frameworks such as the European Code of Conduct for Research Integrity and guidelines on trustworthy AI.


The transformative impact of AI on research is undeniable, streamlining processes and expediting discoveries. However, while generative AI tools offer efficiency in generating text, images, and code, researchers are cautioned to remain vigilant about their limitations, including risks such as plagiarism, inadvertent disclosure of sensitive information, and biases inherent in the models.

Key highlights from the guidelines include:

  • Encouragement for researchers to abstain from using generative AI tools in sensitive activities such as peer review or evaluations, and to uphold privacy, confidentiality, and intellectual property rights.
  • Emphasis on research organizations facilitating the responsible deployment of generative AI and actively monitoring its development and use within their domains.
  • A call for funding organizations to support applicants in using generative AI transparently.

Given the dynamic nature of generative AI technology, these guidelines will undergo periodic updates informed by feedback from the scientific community and stakeholders, ensuring their relevance and effectiveness in guiding responsible AI usage in research endeavors.
