Responsible use of GenAI: A Step-by-Step Guide

The 14 guiding questions below will help you decide whether using GenAI is appropriate in a given instance.

To help you reflect deeply on each guiding question, the guide provides some general risk-assessment considerations as a starting point. Each guiding question should be answered with "Yes"; if you answer "No", the use of GenAI may pose serious risks.
For some criteria, a "No" implies that you need not, cannot, or must not use GenAI.

You may add criteria or alter them based on your use case.

This guide is also available as a PDF.

Planning & Input

  • Do you possess the foundational skills needed to accomplish the task, e.g., coding, critical reading, literature searching, scientific writing?
  • Do you have the subject-matter expertise and task-specific skills to write effective prompts for GenAI tools and to vet the outputs produced by the tools?
  • Are you allowed to use GenAI for the task? For instance, some publishers ban AI-generated images and most forbid use of AI to generate peer reviews.

FURTHER CONSIDERATIONS AND SUPPORT

  • Every semester, tutors from the Medical and Natural Sciences Library offer the course “How to use AI tools for searching, reading, and writing: Critical thinking and sustainable best practices” within the Transferable Skills Program.
  • You can also contact support_med.ub@unibe.ch if you want courses tailored to your research groups.
  • Will using GenAI save you time?
  • Will it help you produce higher quality output?
  • Can you afford the tool?
  • If data are confidential, will the GenAI tool keep them protected and secure?
  • If data are non-sensitive or open, can you foresee any harms that might result from sharing them with the GenAI tool? If yes, do you have a plan for mitigating the harm (e.g., running the tool on a local server or local device)?
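
One way to keep confidential inputs on your own hardware, as suggested above, is to run an open model locally instead of sending data to a third-party service. The sketch below is a minimal illustration, assuming the Hugging Face transformers library (and a backend such as PyTorch) is installed; the model name is a placeholder and should be replaced with an instruction-tuned model whose licence, size, and quality suit your task.

    # Minimal sketch: running an open-weight model locally so prompts and data
    # never leave your machine. Requires transformers and a backend such as torch.
    from transformers import pipeline

    # "gpt2" is only a small placeholder that is quick to download; substitute a
    # model that fits your task, hardware, and licence requirements.
    generator = pipeline("text-generation", model="gpt2")

    prompt = "Summarise the main limitations of retrospective cohort studies."
    result = generator(prompt, max_new_tokens=100, do_sample=False)
    print(result[0]["generated_text"])

Running the model locally means your prompts are not transmitted to an external provider, but you remain responsible for the model's licence terms and for vetting its outputs.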

FURTHER CONSIDERATIONS AND SUPPORT

  • If you are uploading external information (e.g., published articles, slides) to the GenAI tool, do you have the copyright holders' permission (1, 2) to do so?
  • Have you made sure your prompts will not inadvertently reveal proprietary or confidential information (e.g., breach the confidentiality agreement you accepted as a peer reviewer)?

FURTHER CONSIDERATIONS AND SUPPORT

  • For answers to questions about copyrights and licenses, contact: openscience@unibe.ch.
  • Read the terms and conditions to determine whether there is an opt-out that disallows use of your inputs for training the LLM.
  • Have you considered the harms caused to research participants, patients, the public, and your career if your use of GenAI produced false or poor-quality information?
  • Will you have the time and resources to check all the information produced by GenAI to make sure it is not false or of poor quality?
  • Are you documenting your GenAI use in sufficient detail so you can write honest and transparent disclosure statements? (A simple logging sketch follows the considerations below.)

FURTHER CONSIDERATIONS AND SUPPORT

  • For further guidance on properly documenting use of AI tools, refer to COPE’s statement on authorship and AI tools (3).
  • If you have chosen your journal(s), read their latest ethical AI guidelines.
  • Refer to the guidance from publishers on how to disclose AI use in research (e.g., Springer Nature’s AI Policy, T&F AI Policy).
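
Documenting your GenAI use as you go makes such disclosure statements much easier to write. The sketch below shows one possible approach in plain Python with no external dependencies; the field names and the example entry are illustrative suggestions, not a mandated format.

    # Sketch of a simple GenAI usage log to support honest, transparent
    # disclosure statements. One JSON record is appended per interaction.
    import json
    from datetime import datetime, timezone
    from pathlib import Path

    LOG_FILE = Path("genai_usage_log.jsonl")

    def log_genai_use(tool, version, purpose, prompt, output_summary, human_checks):
        """Append one record describing a single GenAI interaction."""
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "tool": tool,                      # name of the GenAI tool
            "version": version,                # model/tool version, if known
            "purpose": purpose,                # what the output was used for
            "prompt": prompt,                  # the exact prompt submitted
            "output_summary": output_summary,  # brief description of the output
            "human_checks": human_checks,      # how the output was verified
        }
        with LOG_FILE.open("a", encoding="utf-8") as f:
            f.write(json.dumps(record, ensure_ascii=False) + "\n")

    # Illustrative entry (the tool name is hypothetical).
    log_genai_use(
        tool="ExampleChat",
        version="2025-01",
        purpose="language polishing of the Methods section",
        prompt="Improve the clarity of the following paragraph: ...",
        output_summary="Reworded paragraph; no factual content added",
        human_checks="Compared against the original; terminology checked by hand",
    )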

Tool & Availability

  • Have you tested comparable tools and ensured your choice best meets your purpose?
  • What capabilities does your model have, e.g., retrieval-augmented generation (RAG)? (A minimal RAG sketch follows this list.)
  • Are the models suited to the task?
  • Have you considered other task-specific, less resource-intensive tools you could use instead of general-purpose GenAI tools?
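
Retrieval-augmented generation (RAG), mentioned above, means the tool first retrieves passages relevant to your question from a document collection and then adds them to the prompt sent to the language model. The sketch below shows only the retrieval step and uses scikit-learn's TF-IDF purely as a self-contained stand-in for the neural embeddings and vector stores that production tools typically use; the documents and question are invented.

    # Minimal sketch of the retrieval step in retrieval-augmented generation (RAG):
    # find the passage most similar to the question and prepend it to the prompt.
    # TF-IDF keeps the example self-contained; real tools typically use neural
    # embeddings and a vector store. Requires scikit-learn.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    documents = [
        "Retrospective cohort studies are prone to selection bias.",
        "Randomised controlled trials reduce confounding through randomisation.",
        "Case reports describe single patients and cannot establish causality.",
    ]
    question = "Which study design is most prone to selection bias?"

    vectorizer = TfidfVectorizer()
    doc_vectors = vectorizer.fit_transform(documents)
    question_vector = vectorizer.transform([question])

    # Rank the documents by similarity to the question and keep the best match.
    scores = cosine_similarity(question_vector, doc_vectors)[0]
    best_doc = documents[int(scores.argmax())]

    # The retrieved passage is then added to the prompt sent to the model.
    augmented_prompt = f"Context: {best_doc}\n\nQuestion: {question}\nAnswer:"
    print(augmented_prompt)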

FURTHER CONSIDERATIONS AND SUPPORT

  • Will you use the free or paid version of the tool? What factor influenced your decision?
  • Can you adopt and implement tools with data servers in Europe, locally made GenAI, or open-source models?
  • Have you considered the effect of AI use on the environment (4)?
  • Have you tested the tool for internal consistency by rephrasing your prompt or challenging its answers through Socratic questioning, e.g., “Upon what assumptions are your conclusions based?” or “Are you aware of gaps or biases in your knowledge base?” (A scripted version of this check is sketched after the considerations below.)
  • Have you tested GenAI's answers for accuracy under a sufficient range of conditions?

FURTHER CONSIDERATIONS AND SUPPORT

  • First, test the model/tool under low-risk conditions with at least two independent auditors. (Use publicly available work and outputs for your test.)
  • Allocate enough time and effort to vet the tools properly.
  • Consider creating a Standard Operating Procedure (SOP) for working with AI tools that includes task-specific standardised prompts and quality control checklists.
  • Have you read and understood the latest UniBE guidelines for procuring and handling AI tools and data protection?
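
Consistency checks like the ones described above can be scripted and included in such an SOP, so that standardised prompts and their rephrasings are tested the same way each time and the collected answers can be reviewed by independent auditors. The sketch below is illustrative only: ask_model is a placeholder you would replace with your tool's actual API call, and the rephrasings are examples.

    # Sketch of a low-risk consistency test: send several rephrasings of the same
    # question, collect the answers, and review them side by side (ideally with a
    # second, independent auditor). ask_model is a placeholder for a real API call.
    from typing import Callable, List

    def consistency_check(ask_model: Callable[[str], str],
                          rephrasings: List[str],
                          repeats: int = 3) -> List[dict]:
        """Collect answers to rephrased prompts for side-by-side review."""
        results = []
        for prompt in rephrasings:
            for run in range(repeats):
                answer = ask_model(prompt)
                results.append({"prompt": prompt, "run": run + 1, "answer": answer})
        return results

    if __name__ == "__main__":
        # Placeholder model call; replace with your tool's API.
        def ask_model(prompt: str) -> str:
            return f"(model answer to: {prompt})"

        rephrasings = [
            "What are the main limitations of this study design?",
            "Which weaknesses of this study design should I report?",
            "Upon what assumptions are your conclusions about this design based?",
        ]
        for row in consistency_check(ask_model, rephrasings, repeats=2):
            print(row["run"], row["prompt"], "->", row["answer"])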

Data Source

  • Are you certain that using data from unknown sources for your task will not create a risk for you, your research participants, or your research output?
  • Have you thought about how relying on data from unknown sources could compromise the quality of your research output?
  • Have you thought about how choosing an LLM trained on a large corpus of unknown and potentially illegally obtained data could compromise the integrity of your research output?
  • Have you considered if your reputation is put at risk if you use unethically sourced data?

FURTHER CONSIDERATIONS AND SUPPORT

Stay abreast of the news and regulations regarding AI copyright infringement.

Output

  • Do you know how scholars in your field usually mitigate prevailing biases? Can you implement a similar or equally effective method for de-biasing outputs?
  • Have you tested how the output changes by comparing examples from different demographic groups?

FURTHER CONSIDERATIONS AND SUPPORT

  • GenAI is not neutral but reflects prevailing biases (2).
  • List known biases and lacunae in your field (e.g., publication bias, gender bias, demographic bias) and draft a plan to mitigate them, for instance by incorporating data from other sources.
  • Libraries that assess and mitigate bias include Fairlearn and FairTest (github.com/columbia/fairtest). (A minimal Fairlearn sketch follows this list.)
  • Is the topic sensitive or likely to affect groups of people differently? (Evaluate likely unequal effects, preferably with input from people from the affected groups.)
  • Have you tested GenAI's answers for consistency under a sufficient range of conditions?
  • Are you following the latest AI guidelines published by your journal or publisher of choice?
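
As a concrete illustration of the Fairlearn library mentioned above, the sketch below compares a simple adequacy metric across two demographic groups. The ratings and group labels are invented purely for illustration; in practice they would come from human ratings of GenAI outputs produced for comparable prompts about different groups.

    # Sketch: comparing an output-quality metric across demographic groups with
    # Fairlearn's MetricFrame. The ratings below are invented for illustration.
    # Requires fairlearn (which also installs scikit-learn).
    from fairlearn.metrics import MetricFrame
    from sklearn.metrics import accuracy_score

    # 1 = output judged adequate by a human rater, 0 = judged inadequate.
    y_true = [1, 1, 1, 1, 1, 1, 1, 1]   # expectation: every output should be adequate
    y_pred = [1, 1, 1, 0, 1, 0, 0, 1]   # whether the GenAI output actually was
    groups = ["A", "A", "A", "A", "B", "B", "B", "B"]  # demographic group per example

    frame = MetricFrame(
        metrics=accuracy_score,
        y_true=y_true,
        y_pred=y_pred,
        sensitive_features=groups,
    )
    print(frame.by_group)      # adequacy rate per group
    print(frame.difference())  # gap between the best- and worst-served group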

FURTHER CONSIDERATIONS AND SUPPORT

Be aware that GenAI will not provide consistent answers to the same prompt. Thoroughly test the range of answers it provides, e.g., by applying the same prompt multiple times, until you are sure that any answer in that range is sufficient for your purposes.

  • Can you affirm that you will not use GenAI to fabricate or manipulate research data and results (2, 5)?
  • Can you affirm that you did not use GenAI to generate image-based data, e.g., Western blots?
  • If the GenAI output contained references, have you verified that these references exist and that the GenAI output correctly represents the original authors' meaning?
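
Checking that references supplied by GenAI actually exist can be partly automated. The sketch below queries the public Crossref API by DOI (assuming the references include DOIs; the DOI shown is a placeholder). A successful lookup only confirms that the record exists: you still need to read the source to confirm that it supports what the GenAI output claims.

    # Sketch: verifying that a DOI cited in GenAI output resolves to a real record
    # via the public Crossref API. This confirms existence only, not that the
    # source supports the claim. Requires the requests library.
    from typing import Optional
    import requests

    def lookup_doi(doi: str) -> Optional[str]:
        """Return the registered title for a DOI, or None if no record is found."""
        response = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
        if response.status_code != 200:
            return None
        titles = response.json()["message"].get("title", [])
        return titles[0] if titles else None

    # The DOI below is a placeholder; substitute the DOIs from the GenAI output.
    doi = "10.xxxx/placeholder"
    title = lookup_doi(doi)
    if title is None:
        print(f"{doi}: no Crossref record found; check the reference manually")
    else:
        print(f"{doi}: registered title is '{title}'; compare it with the citation")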

FURTHER CONSIDERATIONS AND SUPPORT

GenAI systems should empower human beings, allowing them to make informed decisions and fostering their fundamental rights. Therefore, proper human oversight and verification are needed at every stage (1, 2, 5).

 

References

 

Disclaimer

We are actively working on expanding this GenAI decision guide. If you would like to participate, share insights, or provide recommendations, please contact the Medical and Natural Sciences Library: support_med.ub@unibe.ch or frnat.ub@unibe.ch.