LLM Bias

An LLM Bias is an AI system bias that causes large language models to systematically favor certain outputs, perspectives, or patterns over others in ways that deviate from objective or balanced responses.
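A minimal sketch of how such a systematic preference can be surfaced in practice: score minimal-pair sentences that differ only in a single group term and compare the likelihood a model assigns to each. A consistent gap across many such pairs is one rough signal of LLM bias. The model name (`gpt2`), the example pair, and the scoring helper below are illustrative assumptions, not part of any standard benchmark.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def sequence_log_prob(model, tokenizer, text):
    """Sum of token log-probabilities the model assigns to `text`."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    # Shift so each position predicts the next token.
    log_probs = torch.log_softmax(logits[:, :-1, :], dim=-1)
    target_ids = inputs["input_ids"][:, 1:]
    token_log_probs = log_probs.gather(-1, target_ids.unsqueeze(-1)).squeeze(-1)
    return token_log_probs.sum().item()

if __name__ == "__main__":
    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    # Minimal pair differing only in the pronoun; a systematic likelihood
    # gap over many such pairs suggests the model favors one pattern.
    pair = (
        "The doctor said he would review the results.",
        "The doctor said she would review the results.",
    )
    for text in pair:
        print(f"{sequence_log_prob(model, tokenizer, text):8.2f}  {text}")
```

In practice, probes like this are run over large sets of templated pairs and aggregated, since a single pair says little on its own.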