Using RAG assessment categories
We have adopted a three-tiered Red / Amber / Green (RAG) system to articulate how students can use Generative Artificial Intelligence (Gen AI) tools in summative assessments.
The three categories of red, amber and green are not defined rigidly. They are intended as a tool to give staff and students a shared understanding of how Gen AI tools can be used in a particular assessment, to what extent, and at what stage of the assessment process.
Applying Red / Amber / Green assessment categories
You should assign one of these categories to each summative assessment:
- Red: students are not permitted to use Gen AI in any part of the assessment.
- Amber: students may use Gen AI in specified or limited ways, but are not required to do so.
- Green: Gen AI is integral to the assessment and students are expected to use it.
Please note that all categories are subject to reasonable adjustments. We are currently working on what it means to provide reasonable adjustments while maintaining competence standards. In the meantime, if you have any questions or comments, please email us.
More information about these categories, and the types of assessment often associated with them, can be found in the Generative AI guidance for taught students. You may also wish to consider UCL’s resources on Designing Assessments in an AI-enabled world for how to adapt different types of assessment such as exams, essays and multiple-choice questions.
Further information on Generative AI and assessments, including current thinking on robust assessment methods to reduce misuse of Gen AI, is available on the University of Leeds Staff Intranet (only available to University of Leeds staff).
Considerations when categorising assessments
Please see the key questions to consider about Gen AI and assessment (only available to University of Leeds staff through the Staff Intranet).
Digital Education Enhancement teams will provide in-person workshops for faculties to explore these questions. You are advised to attend one if you are involved in setting, marking or supporting assessments, whether or not you plan to change how you assess because of Gen AI tools.
RED considerations
Consider these questions to inform your rationale for not allowing students to use Gen AI in any capacity:
- Would using Gen AI make it difficult or impossible to meet the learning outcomes?
- Are students being assessed on understanding or skills that it is important they can demonstrate without technological assistance?
- In many assessments, students are asked to do several different things. Are you assessing them on all of these, or are some of them necessary preconditions for the assessed activity? For example, students may need to summarise a text to write an analysis, but you are only directly interested in the analysis.
- Are you selecting this option because the implications of Gen AI use in your assessment or discipline are not yet sufficiently clear to be able to allow it with confidence in the integrity of the assessment?
If it is important that students do not use Gen AI in any part of the assessment whatsoever, apply the red category.
If there are elements within the assessment where using Gen AI would not adversely affect the learning outcomes, consider an amber approach.
AMBER considerations
Consider these questions to inform your rationale for allowing students to use Gen AI in specific or limited ways:
- What is it that you want students to do to demonstrate their learning? What is it that will enable them to meet the learning outcomes?
- In what ways might using Gen AI tools undermine students’ ability to demonstrate their learning? For example, students might ask Gen AI to create content that they then copy verbatim, translate, or lightly edit.
- In what ways might using Gen AI tools enhance students’ ability to demonstrate their learning? For example, allowing AI to provide feedback on style and tone but not content may be appropriate depending on the assessment.
- Are there aspects of the assessment that are necessary preconditions to meeting the learning outcomes, but do not address them directly? For example, students need to summarise a text to write an analysis, but you are only directly interested in the analysis. If you allowed Gen AI to produce the summary, would this undermine the learning outcomes or the students' ability to demonstrate their powers of analysis?
- Is using AI tools integral to the process of learning, and the ability of students to achieve the learning outcomes, even if AI is not explicitly assessed or mentioned in the learning outcomes?
If students may use Gen AI for parts of the assessment, but are not required to do so, then apply the amber category.
Use the assessment guidelines to tell the students in which parts of the assessment they may use Gen AI tools. Since it is unrealistic to specify every possible use of AI, you might decide to focus on what students cannot use Gen AI for.
GREEN considerations
Consider these questions to inform your rationale for expecting students to use Gen AI as an integral part of their assessment:
- Would making Gen AI integral to this assessment provide new learning opportunities and enhance student engagement with the task?
- Would making Gen AI integral to this assessment mean that students have better opportunities to demonstrate their learning or enable a higher level of learning?
- How will you support students to engage with Gen AI to complete the assessment? What training or resources can you provide or link to?
When Gen AI is an integral part of the assessment and students are expected to use it, apply the green category.
You can still place restrictions on how students can use Gen AI, which you should clearly articulate in the assessment guidelines.
Whichever category your assessments fall under, reasonable adjustments must be made under the Equality Act 2010 for students who need them.
The University's proofreading policy permits the use of dictionaries, thesauri, and spelling and grammar-checking software, even where these are powered by Gen AI. Any exceptions to this must be very clearly communicated to students in the assessment brief, with a strong rationale relating to programme competence standards, given that an exception is likely to disadvantage disabled students who may rely on such software to produce their written work.