Findings from international use cases

Across the four international use cases, the AIOLIA co-creation exercise resulted in 12 ethics principles, broken down into 39 components. The principles used by the use cases generally fall within the principles identified by ALTAI, or combine different elements covered by the ALTAI principles (see Table 1 below). The direct comparison of ALTAI with the UC-specific ethics principles demonstrates that the bottom-up process conducted in AIOLIA highlights ethics concerns very similar to those raised by established frameworks in international settings. This is in line with the comparison made for the European cases.

Strikingly, we found that similar UC-principles can be understood and deployed in different ways depending on the context. For instance, both the Japanese and the Chinese teams identified the principle of "Fairness and non-discrimination." Yet the Japanese team connected fairness to a right to justification and to explainability, while the Chinese team focused on the system's bias and its impact on vulnerable, marginalized groups. This underlines that, despite apparent similarities, ethics principles can be defined contextually.

Source: AIOLIA deliverable 3.2

Importantly, many UC-principles used by our international partners combine different elements identified in ALTAI. This does not imply that the UC-principles are inconsistent, but rather that the partners use and deploy ethics principles in ways that are tailored to their use cases and that underline the synergies between ethical principles and their elements. This supports the idea that high-level ethical principles can be relatively fluid in how they are operationalized and implemented. It also points toward the conclusion that ethics principles do not operate independently of one another but interact in complex ways in practice.

Table 1: Comparison of ALTAI with UC-specific ethics principles (UC = use case; ethics principles in each use case are mapped to ALTAI principles, i.e. Requirements)

UC-principles covered in ALTAI:

UC8 – Reliability, Safety, and Robustness → Req#2, Technical Robustness and Safety
UC8 – Privacy, Consent, and Data Protection → Req#3, Privacy and Data Governance
UC7 – Proportionality (refers mainly to the justifiability of the use of the AI systems by the workers) → Req#4, Transparency
UC8, UC9 – Fairness and Non-discrimination; Fair AI Use → Req#5, Diversity, Non-discrimination, and Fairness

UC-principles addressing sub-parts of ALTAI principles and combining different elements of ALTAI principles:

UC7 – Fairness and Non-discrimination →
  Req#4, Transparency (Element: Explainability)
  Req#5, Diversity, Non-discrimination, and Fairness (Elements: Avoidance of Unfair Bias; Accessibility and Universal Design)
UC7 – Transparency and Explainability →
  Req#1, Human Agency and Oversight (Element: Human Agency and Autonomy)
  Req#2, Technical Robustness and Safety (Element: General Safety)
  Req#4, Transparency (Element: Explainability)
UC8 – Human Agency, Oversight, and Social Harm →
  Req#1, Human Agency and Oversight (Elements: Human Agency; Human Oversight)
  Req#6, Societal and Environmental Well-being (Element: Impact on Society at Large or Democracy)
UC9 – Safe Human-AI Relationships →
  Req#1, Human Agency and Oversight (Element: Human Agency and Autonomy)
  Req#2, Technical Robustness and Safety (Element: General Safety)
UC9 – Promotion of Social Welfare →
  Req#1, Human Agency and Oversight (Element: Human Oversight)
  Req#6, Societal and Environmental Well-being (Element: Impact on Society at Large)
UC10 – Respect for Postmortem Rights →
  Req#1, Human Agency and Oversight (Element: Human Agency and Autonomy)
  Req#3, Privacy and Data Governance (Element: Data Governance)
UC10 – Non-maleficence and Beneficence →
  Req#1, Human Agency and Oversight (Elements: Human Agency and Autonomy; Human Oversight)
  Req#2, Technical Robustness and Safety (Element: General Safety)
UC10 – Justice →
  Req#5, Diversity, Non-discrimination, and Fairness (Element: Avoidance of Unfair Bias)
  Req#6, Societal and Environmental Well-being (Element: Impact on Society at Large)

Across the international use cases, 12 principles and 39 components emerged. The tables below list the ethics principles and their attached components identified by each of the international use cases.

UC7: Workplaces equipped with AI tools for behavioural analysis – Osaka University (Japan)
Proportionality:
1. adequacy
2. necessity
3. proportionality stricto sensu

Fairness and Non-discrimination:
1. anti-bias
2. fair equality of opportunity and the difference principle
3. equal right to justification

Transparency and Explainability:
1. safety
2. autonomy
3. legitimacy
4. respect
UC8: AI systems for smart elderly care in Wuxi City – Casted (China)
Reliability, Safety, and Robustness:
1. Missed report
2. Misreport/alert fatigue management
3. Human-in-the-loop as a safety net

Privacy, Consent, and Data Protection:
1. The privacy-safety paradox
2. Sensitive area monitoring
3. Data access control & third-party vetting

Algorithmic Fairness and Non-Discrimination:
1. Dialect bias
2. Cognitive impairment & inclusivity

Human Agency, Oversight, and Social Harm:
1. Emotional dependency and manipulation
2. Over-reliance and deskilling
3. Accountability and legal positioning
UC9: AI systems as personal companions to assist senior citizens – STEPI (South Korea)
Safe Human-AI Relationship:
1. Respect for autonomy & decision-making authority
2. Prevention of overdependence
3. Protection from deception and misjudgment

Fair AI Service:
1. Mitigation of data bias
2. Promotion of inclusivity
3. Non-objectification

Promotion of Social Welfare:
1. Promotion of human-AI cooperation
2. Adaptive governance
3. Prevention of shadow labour
UC10: AI systems as grief-support personal assistants – McGill University (Canada)
Respect for Postmortem Rights:
1. Informed consent
2. Secure personal data management
3. Output/export control

Non-maleficence and Beneficence:
1. Retirement protocols
2. Restricted use
3. Automated monitoring and human oversight

Justice:
1. Use of culturally sensitive training datasets
2. Reinforcement learning with human feedback
3. Community engagement reports

Despite the different national contexts and the differences between the cases, recurring concerns and risks emerge either as principles or as components. Anti-bias or non-discrimination is mentioned by all four international cases; autonomy is mentioned by UC7, UC8, and UC9; respect is mentioned explicitly by UC7, UC9, and UC10; and social justice, harm, or welfare is mentioned by UC8, UC9, and UC10 – the three use cases focused on changes in human cognition and private behaviour.

Conclusions

The initial, straightforward comparison of ALTAI with the UC-specific ethics principles demonstrates that the bottom-up process conducted in AIOLIA, which started from very concrete AI use cases, highlights ethics concerns very similar to those raised by established frameworks. This suggests that the concerns identified in the international use cases resemble those identified in the European use cases, since the comparison between the European ethics principles and ALTAI led to similar conclusions.

Interestingly, the international partners have also reflected on the tensions they identified between the ethics principles and their constitutive components, or between their principles and real-life constraints. By reflecting on how they dealt with these tensions, the international use cases provide useful guidance on how to aim for the ethical deployment of AI systems. All four use cases underline potential risks and tensions surrounding privacy, which must be balanced against accuracy, effectiveness, or safety, depending on the case. All four include technical measures that limit the types of data the AI systems can collect, or the types of inferences the systems can make, in order to protect individual privacy rights. In addition to privacy concerns, all four cases underline the importance of putting new and innovative regulatory policies or organizational measures in place to protect vulnerable users.

In these ways, the international use cases converge on two ideas: the data used by the systems should be minimized so that only what is strictly necessary for a socially valuable goal is collected, and special attention should be paid to protecting the interests of the vulnerable users affected by the deployment of these systems.

The conclusions in this report echo the conclusion of D3.1 that ethical AI deployment cannot be achieved with a tidy principle-based framework and requires more than the implementation of a managerial checklist. This report shows how the AIOLIA research project, grounded in an applied ethics approach, has been productive globally in informing the operationalization of ethics guidelines in a way that allowed the research teams to adapt them to their own realities.