Using AI-generated outputs

There are several restrictions on how you may use the outputs from an AI system. You should not publicise AI outputs:

  • that have not been checked for bias, whether introduced by the AI process itself or arising from the input data or prompts
  • that are sensitive, especially with regard to formally protected characteristics, without internal review by the relevant FREC
  • without clearly describing their AI origin, together with the relevant data management, bias and validation methodologies.

Checking AI outputs

The person who used an AI tool to generate a response is responsible for checking its outputs.

However, for outputs that already exist (for example, from a previous study), checking is the responsibility of the person who wishes to use or publicise the output, whether internally or externally. If you re-use a response in this way, you must cite its origin.

All input data, prompts and outputs should be checked (validated) for factual accuracy and bias. It is not always possible to remove all traces of bias from input or output data, but you are expected to consider carefully the effects of factual error and bias and what can be done to minimise them. This consideration should be recorded in any document (internal or external) that reports the use of AI tools.
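
As an illustration only, the checks and their outcomes could be recorded in a simple structured form alongside the report. The sketch below is a hypothetical Python example; the record type and its field names (output_id, bias_review_notes and so on) are assumptions, not a prescribed format.

    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class AIOutputCheck:
        """Hypothetical record of the validation performed on one AI output."""
        output_id: str              # identifier for the AI output being checked
        checked_by: str             # person responsible for the check
        check_date: date
        source_cited: bool          # for re-used outputs: has the origin been cited?
        factually_verified: bool    # has the output been independently verified?
        bias_review_notes: str = ""            # biases considered in data, prompts, process
        mitigations: list[str] = field(default_factory=list)  # steps taken to reduce error/bias

    # Example entry that could accompany a document reporting AI-tool use
    record = AIOutputCheck(
        output_id="summary-001",
        checked_by="J. Researcher",
        check_date=date(2024, 3, 1),
        source_cited=True,
        factually_verified=True,
        bias_review_notes="Prompt wording reviewed; known training-data bias noted.",
        mitigations=["facts cross-checked against primary sources"],
    )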

Trust in AI outputs

ALL outputs from all Gen AI tools must be independently verified for truthfulness.

Most Gen AI models are built to produce a statistically likely output based on their prompts and training data. Their outputs can appear convincing even when there is no factual basis for them. Consequently, ‘facts’ provided by these tools may appear trustworthy, but that appearance can be false, so outputs must always be verified.

Read more in the ‘Strengths and weaknesses of Generative AI’ section.