Artificial intelligence and smart robots are on course to transform industries and reshape everyday human interaction. But what happens when they fail?
Smart robots may malfunction, or a self-driving car could lock its passenger inside. Hollywood has already explored human relationships with AI in Her — but what if we developed an emotional attachment to an AI and it then went rogue?
Philosophers, engineers and computer scientists wrestle with questions like these. Since robots can, by design, operate indefinitely, what are the ethics of building in a kill switch?
For this reason, European lawmakers proposed mandating kill switches to prevent robots from causing harm. The proposal also included creating a legal status called “electronic persons” for intelligent robots to help address liability issues if robots misbehave.
Kill switches would serve two purposes: keeping information out of the wrong hands and stopping AI or robots that go rogue. Microsoft's chatbot Tay is a good example of how quickly biases can multiply in AI. It was easy to shut down the teenage-persona Twitter bot after users taught it racist language, but that might not always be the case.
Because AI doesn’t experience human emotion, it makes decisions and executes tasks based purely on the logic it has been given. In theory, a system that pursues its goals without regard for human values could cause real harm.