Isaac Asimov and the Three Laws of Robotics

First Law: A robot may not injure a human being, or, through inaction, allow a human being to come to harm.

Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

[Via Neatorama]

5 Responses to Isaac Asimov and the Three Laws of Robotics

  1. There is nearly a whole sub-genre of science fiction devoted to misapplications of, and loopholes in, the Three Laws. But don't blame Asimov for the laws: his editor, John W. Campbell, formulated them after telling Asimov they were already implicit in his robot stories.

  2. The interesting thing about the laws is that for a robot to truly "understand" them, it would have to understand concepts, which means it would have to be an A.I. The problem is that such an A.I. would be self-aware, and you cannot really program the laws into something like that. It doesn't work the way people think it would: you cannot 'program' these laws into a being that has no choice but to follow them. The 'robot' would have to follow the laws as a free-thinking being, just as a human would, so the laws would not be absolute in the way they are meant to be. Of course, I think most readers here realize that these laws could never truly be implemented.
