Purpose: To ensure that AI systems do not create, reinforce or legitimise unjustified disparities in how individuals or groups are assessed, treated or affected, leading to outcomes that are fair, proportionate and respectful of human dignity.
| Scope (ORG / TECH / BOTH) | Measure |
|---|---|
| A. Diversity and representativeness | |
| BOTH | Data used to design, train or configure the tool is reviewed for representativeness across relevant groups and contexts (a minimal check is sketched after the table) |
| ORG | The organisation considers intersectional diversity across roles, locations, languages and protected characteristics |
| BOTH | Multiple stakeholder perspectives are included in the design and review of the tool |
| ORG | If there are limitations in data diversity or coverage, they are documented and communicated clearly |
| B. Objective and consistent assessment | |
| BOTH | Data used by the tool to generate outputs is transparent, proportionate and consistent |
| ORG | Subjective or ad-hoc judgements are minimised in how tool outputs are interpreted or acted upon |
| TECH | Tool configurations and thresholds are reviewed for unintended disparity impact |
| ORG | Decisions influenced by AI outputs can be justified, and the reasoning is documented |
| ORG | The organisation avoids labelling, penalising or profiling individuals based solely on AI outputs |
| C. Monitoring, review and correction | |
| TECH | Outputs are monitored for patterns of bias or unequal impact over time (see the monitoring sketch after the table) |
| ORG | Mechanisms exist for users to challenge, review or correct outcomes perceived as unfair or discriminatory |
| BOTH | Identified bias triggers corrective action (e.g., adjustment to data or organisational practices) |
Source: AIOLIA deliverable 3.1
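To make the representativeness review in measure A concrete, the minimal sketch below compares each group's share of training records against a reference population share and flags shortfalls that should be documented and communicated. The group names, reference shares and the 0.8 tolerance are illustrative assumptions, not values taken from the deliverable.

```python
# Minimal sketch of a data representativeness review (measure A):
# compare each group's share of the training records against a
# reference population share and flag under-represented groups.
# Group names, reference shares and the 0.8 tolerance are
# illustrative assumptions, not values from the deliverable.
from collections import Counter

training_records = [  # hypothetical per-record group labels
    "group_a", "group_a", "group_a", "group_a",
    "group_b", "group_b", "group_c",
]
reference_shares = {"group_a": 0.45, "group_b": 0.35, "group_c": 0.20}
TOLERANCE = 0.8  # flag groups at < 80% of their reference share

counts = Counter(training_records)
total = sum(counts.values())
for group, expected in reference_shares.items():
    observed = counts.get(group, 0) / total
    if observed < TOLERANCE * expected:
        print(f"{group}: observed share {observed:.2f} vs reference "
              f"{expected:.2f} -- document and communicate this gap")
```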
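Similarly, for the ongoing monitoring in measure C, one sketch of an approach is to compute each group's positive-outcome rate per review period and compare the lowest rate to the highest. The 0.8 trigger below follows the common "four-fifths" disparate-impact convention and is used purely as an illustration; the deliverable does not prescribe a threshold, and the outcome log is hypothetical.

```python
# Minimal sketch of ongoing output monitoring (measure C): compute
# each group's positive-outcome rate per review period and flag
# periods where the lowest-to-highest rate ratio falls below 0.8
# (an illustrative trigger, not a threshold from the deliverable).
from collections import defaultdict

# hypothetical log of (period, group, positive_outcome) tuples
outcomes = [
    ("2025-Q1", "group_a", True), ("2025-Q1", "group_a", True),
    ("2025-Q1", "group_b", True), ("2025-Q1", "group_b", True),
    ("2025-Q2", "group_a", True), ("2025-Q2", "group_a", True),
    ("2025-Q2", "group_b", True), ("2025-Q2", "group_b", False),
]

stats = defaultdict(lambda: [0, 0])  # (period, group) -> [positives, total]
for period, group, positive in outcomes:
    stats[(period, group)][0] += int(positive)
    stats[(period, group)][1] += 1

for period in sorted({p for p, _ in stats}):
    rates = {g: pos / tot for (p, g), (pos, tot) in stats.items() if p == period}
    ratio = min(rates.values()) / max(rates.values())
    flag = "  <- trigger corrective review" if ratio < 0.8 else ""
    print(f"{period}: rates={rates} ratio={ratio:.2f}{flag}")
```

A flagged period would then feed the corrective-action mechanism in measure C, for example an adjustment to data, thresholds or organisational practices.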