AI Paternalism System
An AI Paternalism System is an AI control system that makes decisions for human users based on presumed superior knowledge.
- AKA: Paternalistic AI System, AI Guardian System, Benevolent AI Control System.
- Context:
- It can typically override user preferences in favor of algorithmically optimized choices.
- It can typically justify interventions through utility maximization.
- It can often create dependency relationships with passive users.
- It can often reduce deliberative capacity through decision offloading.
- It can range from being a Soft AI Paternalism System to being a Hard AI Paternalism System, depending on its override strength (see the sketch after this list).
- It can range from being a Narrow AI Paternalism System to being a Comprehensive AI Paternalism System, depending on its domain scope.
- It can range from being a Transparent AI Paternalism System to being an Opaque AI Paternalism System, depending on its decision visibility.
- It can range from being a Reversible AI Paternalism System to being an Irreversible AI Paternalism System, depending on its user control.
- ...
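The override-strength and user-control ranges above can be made concrete. Below is a minimal Python sketch, with hypothetical names (`OverrideStrength`, `paternalistic_decide`) not drawn from any cited system, showing how a Soft AI Paternalism System defers to an explicit user choice while a Hard AI Paternalism System enforces its utility-maximizing option, and how a `reversible` flag models the user-control range.

```python
from dataclasses import dataclass
from enum import Enum


class OverrideStrength(Enum):
    SOFT = "soft"  # system nudges, but an explicit user choice prevails
    HARD = "hard"  # system's utility-maximizing choice is enforced


@dataclass
class Decision:
    option: str       # the option that takes effect
    overridden: bool  # True if the user's stated preference was set aside
    reversible: bool  # True if the user can later undo the outcome


def paternalistic_decide(user_choice: str,
                         system_optimum: str,
                         strength: OverrideStrength,
                         reversible: bool = True) -> Decision:
    """Resolve a stated user preference against the system's optimum."""
    if user_choice == system_optimum or strength is OverrideStrength.SOFT:
        # Soft paternalism: default toward the optimum, honor explicit choice.
        return Decision(user_choice, overridden=False, reversible=reversible)
    # Hard paternalism: presumed superior knowledge justifies the override.
    return Decision(system_optimum, overridden=True, reversible=reversible)


# A hard, irreversible configuration maximally constrains user control.
d = paternalistic_decide("fried food", "salad", OverrideStrength.HARD,
                         reversible=False)
assert d.option == "salad" and d.overridden
```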
- Examples:
- Healthcare AI Paternalism Systems, such as an AI medication adherence system that locks dosing schedules against patient modification, or an AI triage system that overrides stated patient treatment preferences.
- Financial AI Paternalism Systems, such as an AI budgeting system that blocks transactions it classifies as imprudent, or a robo-advisor that rebalances a portfolio without client approval.
- ...
- Counter-Examples:
- AI Decision Support System, which provides recommendations without enforcement (contrast the sketch after this list).
- Human Autonomy Preservation System, which maintains user agency.
- Collaborative AI System, which shares decision authority with humans.
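For contrast, a minimal sketch of the first counter-example under the same hypothetical setup: an AI Decision Support System computes the same optimum but exposes it only as advice, with no override path.

```python
def recommend(user_choice: str, system_optimum: str) -> tuple[str, str]:
    """Decision support: surface the recommendation, never enforce it."""
    recommendation = system_optimum  # advice only; no override path exists
    return user_choice, recommendation


final, advice = recommend("index fund", "bond ladder")
assert final == "index fund"  # decision authority stays with the user
```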
- See: Paternalism, AI Control System, Soft Despotism, Human Autonomy Preservation Task, AI Dependency Risk, Omniscient Autocomplete Thought Experiment, Nudge Architecture, Behavioral Economics, Choice Architecture.