Freedom of expression and non-censorship

Purpose: To ensure that AI systems and tools do not unduly restrict legitimate expression; that any restrictions on content, behaviour or expression are justified, transparent and contestable; and that automated moderation supports rather than erodes freedom of expression.

Organisational / Technical Measures (each item is tagged ORG for organisational measures, or BOTH for combined organisational and technical measures)
A. Defining legitimate boundaries
ORG The organisation has clearly defined what types of content, behaviour or expression are restricted, and why
ORG Restrictions can be justified on legal, ethical or safety grounds, rather than on the organisation's reputation or convenience
BOTH The scope of restrictions is documented and communicated to relevant users or stakeholders
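The items in A imply that each restriction should carry a documented, machine-checkable justification. The sketch below is one illustrative way to record and validate such entries; the field names, allowed bases and helper function are assumptions for illustration, not part of the AIOLIA deliverable.

```python
# Minimal sketch of a machine-readable restriction-policy entry, with a check
# that each restriction states its basis and how it was communicated.
# All field names and categories here are illustrative assumptions.

ALLOWED_BASES = {"legal", "ethical", "safety"}  # per the checklist: not reputational or convenience

def validate_restriction(entry: dict) -> list:
    """Return a list of problems with a restriction entry; an empty list means valid."""
    problems = []
    if not entry.get("description"):
        problems.append("missing description of restricted content or behaviour")
    if entry.get("basis") not in ALLOWED_BASES:
        problems.append("basis must be one of: " + ", ".join(sorted(ALLOWED_BASES)))
    if not entry.get("communicated_to"):
        problems.append("no record of users or stakeholders the restriction was communicated to")
    return problems

policy = {
    "description": "Threats of violence against identifiable persons",
    "basis": "legal",
    "communicated_to": ["end users", "moderators"],
}
print(validate_restriction(policy))  # prints [] — the entry is valid
```

A structure like this makes the "documented and communicated" requirement auditable: an entry whose basis is, say, "reputational" fails validation rather than silently passing review.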
B. Human review and contestability
BOTH Humans can review, override or correct automated restrictions on expression
ORG Clear mechanisms exist for affected individuals to report, question or contest moderation or restriction decisions
ORG Human review involves consideration of contextual, cultural or situational factors that automated systems may have missed
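The review-and-contestability items in B could be reflected in a simple appeals record that captures contextual notes and allows a human to override the automated outcome. This is a hedged sketch; the `Appeal` type, statuses and function are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Appeal:
    """One contested moderation decision awaiting human review (illustrative)."""
    content_id: str
    user_reason: str          # why the affected individual contests the restriction
    context_notes: str = ""   # cultural or situational context the automated system may have missed
    status: str = "pending"   # pending -> upheld | overturned

def human_review(appeal: Appeal, overturn: bool, notes: str) -> Appeal:
    """Record a human reviewer's contextual assessment and final decision."""
    appeal.context_notes = notes
    appeal.status = "overturned" if overturn else "upheld"
    return appeal

appeal = Appeal(content_id="post-123", user_reason="satire, not a genuine threat")
human_review(appeal, overturn=True, notes="local idiom misread by classifier")
print(appeal.status)  # prints overturned
```

The point of the sketch is the data flow, not the fields: the human decision, with its contextual reasoning, is recorded alongside the automated one, so overrides are traceable.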
C. Transparency and monitoring
ORG Users are informed when and why expression is restricted using clear and accessible explanations
BOTH The organisation monitors patterns of restriction to detect systemic over-censorship or bias
ORG Policies governing expression and moderation are periodically reviewed as contexts, risks or norms evolve
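Monitoring for systemic over-censorship or bias, as item C requires, can start from something as simple as comparing restriction rates across user groups or content categories. Below is a minimal sketch under assumed inputs; the grouping, threshold and function names are illustrative choices, not a prescribed method.

```python
from collections import Counter

def restriction_rates(decisions):
    """decisions: iterable of (group, was_restricted) pairs -> per-group restriction rate."""
    totals, restricted = Counter(), Counter()
    for group, was_restricted in decisions:
        totals[group] += 1
        restricted[group] += int(was_restricted)
    return {g: restricted[g] / totals[g] for g in totals}

def flag_disparities(rates, threshold=2.0):
    """Flag groups restricted at more than `threshold` times the lowest group's rate."""
    baseline = min(rates.values())
    if baseline == 0:
        return []  # a zero baseline needs manual review rather than a ratio test
    return [g for g, r in rates.items() if r / baseline > threshold]

decisions = [("A", True), ("A", False), ("A", False), ("A", False),
             ("B", True), ("B", True), ("B", True), ("B", False)]
rates = restriction_rates(decisions)
print(flag_disparities(rates))  # prints ['B'] — group B is restricted 3x as often as group A
```

A flagged disparity is a trigger for the human review in section B, not proof of bias by itself; the appropriate grouping variables and thresholds depend on the deployment context.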

Source: AIOLIA deliverable 3.1