Purpose: To ensure that AI tools do not, intentionally or unintentionally, manipulate users’ behaviour, emotions, choices or perceptions in ways that undermine autonomy, dignity or informed decision-making.
| Organisational / Technical | Measure |
|---|---|
| A. Identification of manipulation risks | |
| BOTH | Potential manipulation risks arising from the tool design, outputs or interactions have been identified and documented |
| ORG | Risk analysis considers psychological, emotional and behavioural influences, not only technical performance |
| BOTH | Attention has been given to asymmetries of power, knowledge or vulnerability between the system and its users |
| ORG | The organisation has defined what constitutes unacceptable manipulation in its specific operational context |
| B. Design safeguards against undue influence | |
| TECH | System design avoids techniques intended to covertly steer user behaviour (e.g., deceptive framing, emotional pressure) |
| BOTH | Outputs are framed to inform and support decision-making rather than to pressure, persuade or exploit cognitive biases |
| TECH | The system does not personalise influence strategies in ways that exploit individual vulnerabilities without justification |
| ORG | Legitimate behavioural influence is clearly distinguished from manipulative practices and documented |
| C. Transparency and user agency | |
| BOTH | Users are informed when system outputs are intended to influence decisions or behaviours |
| ORG | Users maintain meaningful choice and are not penalised for rejecting, ignoring or questioning system suggestions |
| ORG | The organisation has processes to assess whether users experience system interactions as coercive or misleading |
| D. Oversight, monitoring and correction | |
| BOTH | Human oversight exists to review system outputs for manipulative effects, especially in sensitive or high-impact contexts |
| ORG | User feedback and complaints related to perceived manipulation are systematically collected and reviewed |
| ORG | Identified manipulation risks trigger corrective actions, design changes or restrictions on system use |
Source: AIOLIA deliverable 3.1