AI Security Risk
An AI Security Risk is a cybersecurity risk that specifically threatens AI system integrity, AI model confidentiality, or AI service availability.
- AKA: AI System Security Threat, Machine Learning Security Risk, AI Vulnerability, Model Security Risk.
- Context:
- It can typically exploit AI-Specific Vulnerabilities absent in traditional software systems.
- It can typically target Training Pipelines, model weights, or inference endpoints.
- It can typically enable Capability Theft, model manipulation, or service disruptions.
- It can typically require Specialized Defenses beyond conventional cybersecurity measures.
- It can often involve Supply Chain Attacks on ML frameworks and datasets (see the integrity-check sketch after this list).
- It can often motivate Secure Enclaves and confidential computing.
- It can often cost millions in damages through IP theft or misuse.
- It can range from being a Data-Level AI Security Risk to being a Model-Level AI Security Risk, depending on its attack surface.
- It can range from being a Training-Time AI Security Risk to being an Inference-Time AI Security Risk, depending on its attack timing.
- It can range from being an External AI Security Risk to being an Insider AI Security Risk, depending on its threat origin.
- It can range from being a Passive AI Security Risk to being an Active AI Security Risk, depending on its exploitation requirement.
- ...
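As an illustration of one such specialized defense against supply-chain tampering, here is a minimal sketch (not from the source; the file name and digest are hypothetical placeholders) that verifies the SHA-256 digest of a downloaded pretrained model artifact against a value published by a trusted provider before loading it.

```python
import hashlib
from pathlib import Path

# Hypothetical artifact name and published digest; both are placeholders.
MODEL_PATH = Path("pretrained_model.bin")
EXPECTED_SHA256 = "replace-with-the-digest-published-by-the-model-provider"

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

# Refuse to load a model artifact whose digest does not match the published value,
# which blocks one simple class of supply-chain tampering.
if sha256_of(MODEL_PATH) != EXPECTED_SHA256:
    raise RuntimeError(f"{MODEL_PATH} failed its integrity check; refusing to load it.")
```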
- Example:
- Model Attack Risks, such as:
- AI Model Weights Theft stealing trained parameters.
- Model Poisoning Attack corrupting training data.
- Adversarial Example Attack fooling classification systems (see the sketch after this list).
- Infrastructure Risks, such as:
- Training Cluster Compromise accessing compute resources.
- API Endpoint Exploitation abusing inference services.
- Dataset Contamination injecting malicious samples.
- Supply Chain Risks, such as:
- Backdoored Framework hiding trojan functionality.
- Compromised Pretrained Model containing hidden behaviors.
- ...
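As a concrete illustration of the adversarial example item above, the following sketch (an assumed toy setup, not from the source) applies the standard fast gradient sign method (FGSM) to a stand-in PyTorch classifier: the input is perturbed in the direction that increases the classification loss while the model itself is left untouched.

```python
import torch
import torch.nn as nn

# Stand-in classifier and input; any differentiable model works the same way.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
model.eval()

x = torch.rand(1, 1, 28, 28, requires_grad=True)  # stand-in input image
y = torch.tensor([3])                             # stand-in true label
epsilon = 0.1                                     # perturbation budget

# FGSM: take one signed-gradient step on the input to raise the loss.
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
x_adv = (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

print("original prediction:", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```

Because only the input is perturbed, this attack operates at inference time and needs no access to training data, which is why it falls under Model Attack Risks rather than Infrastructure Risks.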
- Counter-Example:
- Traditional Software Vulnerability, which lacks AI-specific elements.
- Physical Security Risk, which threatens hardware rather than models.
- Operational Risk, which involves process failures rather than security breaches.
- Compliance Risk, which concerns regulation rather than technical threats.
- See: Cybersecurity Risk, AI Safety Risk, Model Security, Adversarial Machine Learning, AI Model Weights Theft, Data Poisoning, Model Extraction Attack, Secure ML, Confidential Computing, AI Supply Chain Security.