- (Walsh, 2016) ⇒ Toby Walsh. (2016). “Turing's Red Flag.” In: Communications of the ACM, 59(7). doi:10.1145/2838729
This is not the first time in history that a technology has come along that might disrupt and endanger our lives. Concerned about the impact of motor vehicles on public safety, the U.K. parliament passed the Locomotive Act in 1865. This required a person to walk in front of any motorized vehicle with a red flag to signal the oncoming danger. Of course, public safety wasn't the only motivation for this law as the railways profited from restricting motor vehicles in this way. Indeed, the law clearly restricted the use of motor vehicles to a greater extent than safety alone required. And this was a bad thing. Nevertheless, the sentiment was a good one: until society had adjusted to the arrival of a new technology, the public had a right to be forewarned of potential dangers.
Inspired by such historical precedents, I propose that a law be enacted to prevent AI systems from being mistaken for humans. In recognition of Alan Turing's seminal contributions to this area, I am calling this the Turing Red Flag law.
Turing Red Flag law: An autonomous system should be designed so that it is unlikely to be mistaken for anything besides an autonomous system, and should identify itself at the start of any interaction with another agent.
There are two parts to this proposed law. The first part states that an autonomous system should not be designed to act in a way that makes it likely to be mistaken for a human, or to suggest there is a human in the loop when there is not. Of course, it is not impossible to think of situations where it might be beneficial for an autonomous system to be mistaken for something other than an autonomous system. An AI system pretending to be human might, for example, create more engaging interactive fiction. More controversially, robots pretending to be human might make better caregivers and companions for the elderly. However, there are many more reasons we don't want computers intentionally or unintentionally fooling us.
The second part of the law states that autonomous systems need to identify themselves at the start of any interaction with another agent. Note that this other agent might even be another AI. This is intentional. ...