As robots become ever more present in daily life, the question of how to control their behaviour naturally arises. Does Asimov have the answer?

In 1942, the science fiction author Isaac Asimov published a short story called "Runaround" in which he introduced three laws governing the behaviour of robots. The three laws are as follows:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

He later introduced a fourth law, the Zeroth Law, which outranked the others:

0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

Since then, Asimov’s laws of robotics have become a key part of a science fiction culture that has gradually become mainstream.

In recent years, roboticists have made rapid advances in the technologies that are bringing closer the kind of advanced robots that Asimov envisaged. Increasingly, robots and humans are working together on factory floors, driving cars, flying aircraft and even helping around the home.

And that raises an interesting question: do we need a set of Asimov-like laws to govern the behaviour of robots as they become more advanced?

Today, we get an answer of sorts from Ulrike Barthelmess and Ulrich Furbach at the University of Koblenz in Germany. These guys review the history of robots in society and argue that our fears over their potential to destroy us are unfounded. Because of this, they say, Asimov's laws aren't needed.

More at Technology Review
