A robot that can “decide” to harm humans is the latest attempt to provoke debate about artificial intelligence. But it’s debatable whether there’s any intelligence involved.
The “First Law” robot is the work of Alexander Reben, a designer at the University of California, Berkeley, who “designs robots and novel interfaces to explore our evolving relationship with technology.”
The robot is little more than a machine that, when it detects a fingertip placed under it, will sometimes prick the finger, occasionally hard enough to draw blood. Reben told the BBC that the robot “makes a decision that I as a creator cannot predict. I don’t know who it will or will not hurt.”
However, the robot appears simply to act randomly when choosing whether or not to prick. That makes the philosophical debate arguably less about whether it is acceptable to build a robot that can decide to inflict pain, and more about whether taking a random action counts as making a decision at all.
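If the robot really is acting randomly, its “decision” reduces to a coin flip. The following is a minimal sketch of that idea; the function name and the prick probability are assumptions for illustration, since Reben has not published the actual mechanism.

```python
import random

def first_law_robot_decision(p_prick=0.5):
    """Hypothetical model of the robot's 'choice': a Bernoulli trial,
    not reasoning. p_prick is an assumed parameter, not a known value."""
    return random.random() < p_prick

# A sequence of such "decisions" is just random noise; no finger is
# singled out for any reason.
decisions = [first_law_robot_decision() for _ in range(10)]
```

Framed this way, the unpredictability Reben describes is real, but it is the unpredictability of a die roll rather than of a deliberating agent.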
The robot is named after science fiction writer Isaac Asimov’s Three Laws of Robotics, first set out in the short story “Runaround,” the first of which reads: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.”
Reben has a history of offbeat projects, most recently launching a website that randomly selects and mixes text from the US patent database to create and publish new “ideas” under a Creative Commons license. The theory is that while most of the output will be complete nonsense, there’s a chance some of it will be close enough to a future invention to be cited as prior art and scupper a patent application.
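The mashup idea can be sketched in a few lines: pick two random texts and splice them together. Everything below (the toy corpus, the splicing rule, the function name) is a hypothetical illustration, not Reben’s actual implementation.

```python
import random

# Toy stand-in for the US patent database; the real site draws from
# actual patent text.
corpus = [
    "A device comprising a rotating shaft coupled to an optical sensor.",
    "A method of encoding data using interleaved magnetic pulses.",
    "An apparatus for dispensing fluid through a spring-loaded valve.",
]

def generate_idea(texts, rng=None):
    """Splice the first half of one random entry onto the second half
    of another to produce a new 'invention' (illustrative rule only)."""
    rng = rng or random.Random()
    a, b = rng.sample(texts, 2)
    wa, wb = a.split(), b.split()
    return " ".join(wa[: len(wa) // 2] + wb[len(wb) // 2 :])
```

Most outputs of a generator like this are gibberish, which is exactly the bet the site makes: publish enough random combinations and a few may land near something patentable.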