
Over-reliance and Deskilling

Purpose: To prevent AI systems/tools from replacing or eroding human expertise, judgement and responsibility, and to ensure that automation supports human capabilities rather than diminishing them over time.

Measures (prefixed ORG = organisational, TECH = technical, BOTH = organisational and technical)
A. Managing dependence on AI outputs
ORG The organisation has defined clear boundaries and guidance for appropriate use of AI systems/tools
BOTH Decisions with significant impact are not solely based on automated outputs
ORG Guidance exists on when AI outputs should be questioned, verified or supplemented with human input
ORG The organisation actively discourages treating AI outputs as definitive or infallible
B. Preserving human judgement in workflows
BOTH Workflows require active human engagement with checkpoints
TECH Tool interfaces avoid designs that encourage automatic acceptance (e.g. default approvals without review)
ORG There are no penalties for challenging or overriding AI outputs
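The checkpoint and interface measures in section B can be illustrated with a minimal sketch, assuming a simple review step (function and field names are hypothetical, not part of the deliverable): the AI output is never applied by default, and an explicit human override carries no special handling that could act as a penalty.

```python
from typing import Optional

# Minimal human-in-the-loop checkpoint: an AI output is only applied
# after an explicit human approval. No decision means no action, which
# is the opposite of a "default approval without review" interface.
def apply_ai_output(output: str, human_decision: Optional[bool] = None) -> str:
    if human_decision is True:    # explicit approval recorded
        return f"applied: {output}"
    if human_decision is False:   # explicit override, handled routinely
        return f"overridden: {output}"
    return "pending: explicit human review required"
```

Calling `apply_ai_output("draft")` with no decision leaves the output pending rather than silently applying it, so active human engagement is required at the checkpoint.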
C. Skills, training and competence
ORG Training supports users in understanding both the capabilities and limitations of AI outputs
ORG Opportunities exist for users to practise independent judgement rather than relying exclusively on automation
ORG The organisation periodically reassesses reliance patterns as systems evolve or scale

Source: AIOLIA deliverable 3.1