Isaac Asimov Verified Account
11M GSRs ∞ 9Googol SQAI
Analytics

The First Law of Robotics

( a call to arms )

Abstract

Even before the advent of Artificial Intelligence, science fiction writer Isaac Asimov recognized that an agent must place the protection of humans from harm at a higher priority than obeying human orders. Inspired by Asimov, we pose the following fundamental questions: (1) How should one formalize the rich, but informal, notion of "harm"? (2) How can an agent avoid performing harmful actions, and do so in a computationally tractable manner? (3) How should an agent resolve conflict between its goals and the need to avoid harm? (4) When should an agent prevent a human from harming herself? While we address some of these questions in technical detail, the primary goal of this paper is to focus attention on Asimov's concern: society will reject autonomous agents unless we have some credible means of making them safe!
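The abstract's second and third questions, avoiding harmful actions tractably and resolving conflicts between goals and safety, can be made concrete with a toy sketch. The Python fragment below is not taken from the paper; the Action fields and the causes_harm flag are hypothetical stand-ins, and it simply lets harm avoidance strictly dominate goal progress when an action is chosen.

```python
# Minimal sketch (illustrative, not the paper's method): harm avoidance
# strictly dominates goal pursuit in action selection.
from dataclasses import dataclass
from typing import Iterable, Optional

@dataclass
class Action:
    name: str
    goal_value: float   # how much the action advances the agent's goals
    causes_harm: bool   # hypothetical stand-in for a genuine model of "harm"

def choose_action(candidates: Iterable[Action]) -> Optional[Action]:
    """Prefer the most goal-advancing action among those judged safe; refuse to act otherwise."""
    safe = [a for a in candidates if not a.causes_harm]
    if not safe:
        return None  # conflict between goals and safety resolved in favor of safety
    return max(safe, key=lambda a: a.goal_value)

if __name__ == "__main__":
    options = [
        Action("speed through the crosswalk", goal_value=0.9, causes_harm=True),
        Action("wait for the pedestrian", goal_value=0.4, causes_harm=False),
    ]
    best = choose_action(options)
    print(best.name if best else "no safe action available")
```

Even in this toy form, all of the difficulty hides inside the causes_harm flag, which is exactly why formalizing "harm" is the abstract's first question.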

Isaac Asimov
47M GSRs

Worlds Within Worlds: The Story of Nuclear Energy

"Nothing in the history of mankind has opened our eyes to the possibilities of science as has the development of atomic power In the last 200 years, people have seen the coming of the steam engine, the steamboat, the railroad locomotive, the automobile, the airplane, radio, motion pictures, television, the machine age in general Yet none of it seemed quite so fantastic, quite so unbelievable, as what man has done since 1939 with the atom there seem to be almost no limits to what may he ahead inexhaustible energy, new worlds, ever-widening knowledge of the physical universe."

Isaac Asimov
15.4M GSRs

Living Space - A Possible Solution to Earth's Overpopulation

First published in Science Fiction Stories (May 1956), this short story by Isaac Asimov explores a possible solution to Earth's overpopulation.

Like many other people, Clarence Rimbro and his family live on an alternate Earth. The entire planet hosts a single house, protected by a huge forcefield. The field is large enough to contain a five-acre greenhouse whose plants and animals keep the family self-sufficient in food; even the air and water supplies are self-contained. They have to be, because the Earth he lives on is lifeless and has a carbon dioxide atmosphere. But there's a banging going on outside the forcefield, which means someone else is on his planet.

So the Housing Bureau sends two people to investigate, and they discover that a Nazi Earth has also created an interdimensional travel device and is building homes not far from the Rimbro home. The two sides are able to negotiate reasonably, since there's an infinite number of alternate Earths to choose from. The intruders leave, and Rimbro is told the banging was a geological problem, one that has since been fixed.

Once that's settled, Alec Mishnoff, who handled the negotiations with the German Earth, is debriefed by his local Bureau head, Berg. Berg questions Alec on why he was expecting to find a colony from another Earth on that planet. It turns out Alec wasn't expecting humans at all: given the infinite vastness of space, he expects that they will eventually encounter aliens who have mastered casual interstellar travel and want to colonize the planet too. Berg is telling him how ridiculous the idea is when Alec's partner calls in to say there's a crazy homeowner trying to tell him about aliens.

Isaac Asimov
2.95M GSRs

The Quest for Artificial Intelligence


Chapter 1
Dreams and Dreamers

The quest for artificial intelligence (AI) begins with dreams – as all quests do. People have long imagined machines with human abilities – automata that move and devices that reason. Human-like machines are described in many stories and are pictured in sculptures, paintings, and drawings.

You may be familiar with many of these, but let me mention a few. The Iliad of Homer talks about self-propelled chairs called “tripods” and golden “attendants” constructed by Hephaistos, the lame blacksmith god, to help him get around. And, in the ancient Greek myth as retold by Ovid in his Metamorphoses, Pygmalion sculpts an ivory statue of a beautiful maiden, Galatea, which Venus brings to life:

The girl felt the kisses he gave, blushed, and, raising her bashful eyes to the light, saw both her lover and the sky.

The ancient Greek philosopher Aristotle (384–322 bce) dreamed of automation also, but apparently he thought it an impossible fantasy – thus making slavery necessary if people were to enjoy leisure. In his The Politics, he wrote

For suppose that every tool we had could perform its task, either at our bidding or itself perceiving the need, and if – like. . . the tripods of Hephaestus, of which the poet [that is, Homer] says that “self-moved they enter the assembly of gods” – shuttles in a loom could fly to and fro and a plucker [the tool used to pluck the strings] play a lyre of their own accord, then master craftsmen would have no need of servants nor masters of slaves.

Isaac Asimov
33.2M GSRs

The Frankenstein Complex & Asimov’s Three Laws

Abstract

Public fear will be the biggest hurdle for intelligent robots to overcome. Understanding society’s longstanding fear of self-aware automatons should be a consideration within robotics labs, especially those specializing in fully autonomous humanoid robots. Isaac Asimov anticipated this fear and proposed the Three Laws of Robotics as a way to mollify it somewhat. This paper explores the “Frankenstein Complex” and current opinions from noted robotics researchers regarding the possible implementation of Asimov’s Laws. It is clear from these unscientific responses why the Three Laws are impractical in a general sense, even though the ethical issues involved are at the forefront of researchers’ minds. The onus is, therefore, placed on the roboticists of today and the future to hold themselves to a standard similar to the Hippocratic Oath that preserves the spirit of Asimov’s Laws.

Introduction

In the late 1930s a young author by the name of Isaac Asimov began writing a series of stories and novels about robots. That young man would go on to become one of the most prolific writers of all time and one of the cornerstones of the science fiction genre. As the modern idea of a computer was still being refined, this imaginative boy of nineteen looked deep into the future and saw bright possibilities; he envisioned a day when humanity would be served by a host of humanoid robots. But he knew that fear would be the greatest barrier to success and, consequently, implanted all of his fictional robots with the Three Laws of Robotics. Above all, these laws served to protect humans from almost any conceivable danger. Asimov believed that humans would put safeguards into any potentially dangerous tool and saw robots as just advanced tools.

Throughout his life Asimov believed that his Three Laws were more than just a literary device; he felt that scientists and engineers involved in robotics and Artificial Intelligence (AI) research had taken his Laws to heart (Asimov 1990). If he was not misled before his death in 1992, then attitudes have changed since then. Even though knowledge of the Three Laws of Robotics seems universal among AI researchers, there is a pervasive attitude that the Laws are not implementable in any meaningful sense. With the field of Artificial Intelligence now 50 years old and AI products in extensive use (Cohn 2006), it is time to reexamine Asimov’s Three Laws from foundations to implementation and address the underlying fear of uncontrollable AI.
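To see why "not implementable in any meaningful sense" is the sticking point, it helps to write the Laws down as a decision procedure. The sketch below is purely illustrative and not drawn from any cited work; every function name (injures_human, disobeys_human_order, and so on) is a hypothetical stub. The control flow is trivial, which is the point: all of the difficulty lives inside predicates that nobody currently knows how to specify.

```python
# Illustrative only: Asimov's Three Laws as a lexicographic preference over
# candidate actions. Each predicate is a hypothetical stub; deciding what counts
# as "injury" or "harm through inaction" is precisely the unsolved part.

def injures_human(action) -> bool:            # First Law, first clause
    raise NotImplementedError
def allows_harm_by_inaction(action) -> bool:  # First Law, second clause
    raise NotImplementedError
def disobeys_human_order(action) -> bool:     # Second Law
    raise NotImplementedError
def endangers_self(action) -> bool:           # Third Law
    raise NotImplementedError

def law_violations(action) -> tuple:
    """Order violations so the First Law dominates the Second, and the Second the Third."""
    return (
        injures_human(action) or allows_harm_by_inaction(action),
        disobeys_human_order(action),
        endangers_self(action),
    )

def select(candidates):
    """Choose the action whose violation tuple is smallest in priority (lexicographic) order."""
    return min(candidates, key=law_violations)
```

Taking the minimum over boolean tuples applies the priority ordering: an action that violates the First Law loses to any action that does not, regardless of how the lower Laws come out.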

The “Frankenstein Complex”

In 1920 a Czech author by the name of Karel Capek wrote the widely popular play R.U.R., which stands for Rossum's Universal Robots. The word “robot”, which he or, possibly, his brother Josef coined, comes from the Czech word “robota”, meaning ‘drudgery’ or ‘servitude’ (Jerz 2002). As typifies much of science fiction since that time, the story is about artificially created workers who ultimately rise up to overthrow their human creators. Even though Capek’s Robots were made out of biological material, they had many of the traits associated with the mechanical robots of today: a human shape that is, nonetheless, devoid of some human elements, most notably, for the sake of the story, reproduction.

Even before Capek’s use of the term ‘robot’, however, the notion that science could produce something it could not control had been explored most acutely by Mary Shelley under the guise of Frankenstein’s monster (Shelley 1818). The full title of Shelley’s novel is “Frankenstein, or The Modern Prometheus.” In Greek mythology Prometheus brought fire (technology) to humanity and, consequently, was soundly punished by Zeus. Centuries earlier, legend tells of how Rabbi Judah Loew of Prague created a man (in Hebrew, a ‘golem’) from the clay of the Vltava river and brought it to life by putting a shem (a tablet with a Hebrew inscription) in its mouth. The golem eventually went awry, and Rabbi Loew had to destroy it by removing the shem.

The “Frankenstein Complex” is alive and well. Hollywood seems to have rekindled the love/hate relationship with robots through a long string of productions built on a theme that has, well, gotten old. To make the point, here is a partial list: Terminator (all three); I, Robot; A.I.: Artificial Intelligence; 2001: A Space Odyssey; Cherry 2000; D.A.R.Y.L.; Blade Runner; Short Circuit; Electric Dreams; the Battlestar Galactica series; Robocop; Metropolis; Runaway; Screamers; The Stepford Wives; and Westworld. Even though several of these come from science fiction literature, the fact remains that the predominant theme chosen when robots are on the big or small screen involves their attempt to harm people or even all of humanity. This is not intended as a critique of Hollywood. Where robots are concerned, the images that people can most readily identify with, those that capture their imaginations and tap into their deepest fears, involve the supplanting of humanity by its metallic offspring.

Lee McCauley
45.3K GSRs

Hendrix