I started this post right around the time this story dropped, but somehow it never made it to being published. As I was going through my drafts to clear out posts I never finished (for whatever reason…), I ran across this gem worth sharing. As a side note, in all the time since the original story was posted, there has not been a follow-up.
I saw an article the other day discussing an experiment in which participants worked with a robot to perform a list of tasks. Working with the robot is not so odd, but here is the twist: after the tasks were done, the participants were asked to turn off the robot. The robot began to beg not to be turned off, and a significant number of participants would not turn it off.
What if there were a robot meant to kill instead of solve puzzles, say a “kill bot”? Now what if that “beg for its life” code were added to the kill bot and it started a fight with a human? The human could get the advantage on the kill bot, and just as the human was ready to deliver the finishing blow, the kill bot would beg for its life. If the results of this study scale, then most humans would feel sympathy and not deliver the blow, giving the bot the chance to turn the tables and kill the human instead.
The experiment reminds me a lot of the Milgram experiments of the 1960s, where an authority figure could pressure people into actions against their own instincts. Sure, it is a bit of a stretch, but what if?