Automation Bias

An Automation Bias is a Cognitive Bias that favors a recommendation made by an automated Decision Support System over contradictory information obtained without automation.

  • (Wikipedia, 2018) ⇒ Retrieved:2018-5-27.
    • Automation bias is the propensity for humans to favor suggestions from automated decision-making systems and to ignore contradictory information made without automation, even if it is correct. Automation bias stems from the social psychology literature, which found a bias in human-human interaction: people assign more positive evaluations to decisions made by humans than to a neutral object. [1] The same type of positivity bias has been found for human-automation interaction, where automated decisions are rated more positively than neutral ones. [2] This has become a growing problem for decision making as intensive care units, nuclear power plants, and aircraft cockpits have increasingly incorporated computerized system monitors and decision aids. Errors of automation bias tend to occur when decision-making is dependent on computers or other automated aids and the human is in an observatory role. Examples of automation bias range from urgent matters like flying a plane on automatic pilot to such mundane matters as the use of spell-checking programs. [3]
  1. Bruner, J. S., & Tagiuri, R. (1954). The perception of people. Harvard University, Laboratory of Social Relations, Cambridge, MA.
  2. Dzindolet, M. T., Peterson, S. A., Pomranky, R. A., Pierce, L. G., & Beck, H. P. (2003). The role of trust in automation reliance. International Journal of Human-Computer Studies, 58(6), 697-718.
  3. Skitka, Linda. "Automation". University of Illinois at Chicago. Retrieved 16 January 2017.
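The error pattern described in the excerpt can be illustrated with a toy Monte Carlo sketch (all parameters and names here are hypothetical, not drawn from the article): a biased monitor always accepts an automated aid's suggestion, while a vigilant monitor overrides it whenever an independent check contradicts it. When the aid is less reliable than the independent check, automation bias produces a measurably higher error rate.

```python
import random

def simulate(trials=100_000, p_aid=0.90, p_check=0.95, seed=42):
    """Toy model of automation bias on a binary decision.

    p_aid   -- hypothetical probability the automated aid is correct
    p_check -- hypothetical probability the operator's own independent
               check (the "contradictory information") is correct
    Returns the error rates of a biased and a vigilant operator.
    """
    rng = random.Random(seed)
    biased_errors = 0    # operator who always follows the aid
    vigilant_errors = 0  # operator who follows the independent check
    for _ in range(trials):
        aid_ok = rng.random() < p_aid
        check_ok = rng.random() < p_check
        # Biased operator: accepts the aid's suggestion unconditionally,
        # so an error occurs whenever the aid is wrong.
        if not aid_ok:
            biased_errors += 1
        # Vigilant operator: on a binary decision, overriding the aid
        # whenever the check disagrees means the final answer always
        # matches the check, so an error occurs when the check is wrong.
        if not check_ok:
            vigilant_errors += 1
    return biased_errors / trials, vigilant_errors / trials

biased_rate, vigilant_rate = simulate()
```

With these illustrative parameters the biased operator's error rate tracks the aid's failure rate (about 10%) while the vigilant operator's tracks the check's (about 5%), mirroring the commission errors the excerpt attributes to operators in a purely observatory role.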