Purpose: To ensure that AI tools are designed, deployed and operated in ways that actively reduce harm, minimise foreseeable risks, and avoid causing physical, psychological or institutional damage.
| Organisational / Technical | Measure |
|---|---|
| A. Harm identification and risk awareness | |
| BOTH | Potential harms associated with the AI system (direct and indirect) have been identified and documented before the tool is deployed |
| ORG | Harm assessments consider multiple dimensions of impact (e.g. safety, dignity, fairness) |
| BOTH | Risks are reviewed not only at the design stage but also when contexts, data or usage patterns change |
| ORG | Organisation has a clear, accessible definition of “harm” in the system’s specific use context |
| B. Human responsibility and intervention | |
| BOTH | Human staff are aware of duties to intervene when the tool produces outputs that may cause harm |
| ORG | Clear guidance exists on when and how human intervention should occur in response to harmful or unsafe outcomes |
| ORG | Responsibility for monitoring harm indicators and responding to them is explicitly assigned |
| C. Monitoring, learning and continuous improvement | |
| TECH | Ongoing monitoring tracks indicators related to harm and unintended consequences over an extended time period |
| BOTH | Feedback from affected users or stakeholders is systematically collected and reviewed |
| ORG | Identified harms, near-misses or emerging risks lead to documented corrective actions and system improvements |
| ORG | Risk mitigation practices are periodically reassessed |
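The monitoring measure in section C can be made concrete in many ways; the sketch below is one minimal, hypothetical illustration (not part of the deliverable) of tracking a harm indicator over a rolling window of recent outputs and flagging when human intervention, as assigned under section B, should be triggered. The class name, threshold, and window size are all assumptions chosen for the example.

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class HarmIndicatorMonitor:
    """Rolling-window monitor for a single harm indicator (hypothetical example)."""
    threshold: int          # flagged events per window before escalation
    window_size: int = 100  # number of most recent outputs considered
    events: deque = field(default_factory=deque)

    def record(self, flagged: bool) -> bool:
        """Record one output; return True if the escalation threshold is reached."""
        self.events.append(flagged)
        if len(self.events) > self.window_size:
            self.events.popleft()  # drop the oldest output from the window
        return sum(self.events) >= self.threshold

# Example: escalate to a human reviewer once 3 flagged outputs
# occur within the last 10 outputs.
monitor = HarmIndicatorMonitor(threshold=3, window_size=10)
for flagged in [False, True, False, True, True]:
    needs_intervention = monitor.record(flagged)
print(needs_intervention)
```

In practice the indicators themselves (what counts as a "flagged" output) would come from the harm definitions established under section A, and the escalation path from the responsibilities assigned under section B.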
Source: AIOLIA deliverable 3.1