22 September 2022 | Piramid

3 Laws of Robotics

The laws proposed by Asimov are designed to protect humans from their interactions with robots. The Three Laws of Robotics are rules devised by science fiction author Isaac Asimov, who wanted to create an ethical framework for humans and robots. The laws first appeared in his short story "Runaround" (1942) and later became highly influential in the science fiction genre. They have since found relevance in discussions about technology, including robotics and AI.

In the face of all these problems, Asimov's laws offer little more than founding principles for someone wanting to create robot code today. We would need to supplement them with a much more comprehensive set of laws. That said, without significant developments in AI, implementing such laws would remain an impossible task. And that is before we even consider the potential for harm should humans fall in love with robots. The other big problem with the laws is that significant advances in AI are needed before robots could actually follow them. The goal of AI research is sometimes described as developing machines that can think and act rationally and like a human. So far, imitating human behavior has not been well researched in the field of AI, and the development of rational behavior has focused on limited, well-defined areas.

In the 1990s, Roger MacBride Allen wrote a trilogy set in Asimov's fictional universe.

Each title carries the prefix "Isaac Asimov's," because Asimov had approved Allen's outline before his death. These three books, Caliban, Inferno and Utopia, introduce a new set of Three Laws. The so-called New Laws are similar to Asimov's originals, with the following differences: the First Law is modified to remove the "inaction" clause, the same modification made in "Little Lost Robot"; the Second Law is modified to require cooperation instead of obedience; the Third Law is modified so that it is no longer superseded by the Second (i.e., a "New Law" robot cannot be ordered to destroy itself); and finally, Allen adds a Fourth Law instructing the robot to do "whatever it likes," as long as this does not conflict with the first three laws. The philosophy behind these changes is that "New Law" robots should be partners rather than slaves of humanity, according to Fredda Leving, who designed these New Law robots. According to the introduction to the first book, Allen devised the New Laws in discussion with Asimov himself. However, the Encyclopedia of Science Fiction states, "With Asimov's permission, Allen rethought the Three Laws and developed a new set." [25]

Randall Munroe has discussed the Three Laws in various instances, perhaps most directly in one of his comics entitled The Three Laws of Robotics, which considers the consequences of every distinct ordering of the three existing laws. Here is the computer scientist's point of view: the laws have never worked, not even in fiction. Asimov's robot books "all deal with how these laws go wrong, with different consequences."

In a 2007 editorial in the journal Science on "Robot Ethics," SF author Robert J. Sawyer argues that since the U.S. military is a major source of funding for robotics research (and already uses armed unmanned aerial vehicles to kill enemies), it is unlikely that such laws would be built into its designs. [49] In a separate essay, Sawyer generalizes this argument to cover other industries.

Authors other than Asimov have often created additional laws. Lyuben Dilov's 1974 novel Icarus's Way (also known as The Trip of Icarus) introduces a Fourth Law of Robotics: "A robot must establish its identity as a robot in all cases." Dilov justifies the fourth safeguard as follows: "The last law has put an end to the expensive aberrations of designers to give psychorobots as humanlike a form as possible, and to the resulting misunderstandings." [30] Marc Rotenberg, president and executive director of the Electronic Privacy Information Center (EPIC) and professor of privacy law at Georgetown Law, argues that the laws of robotics should be expanded to include two new laws.

Brendan Dixon weighs in: it is even worse than he says! "Laws" are ambiguous, even for a human being. What does it mean, for example, not to "harm" someone? That turns out to be quite tricky to pin down. Here is an interesting data point: I recently learned that the Code of Hammurabi (circa 1750 BC, engraved on a stele now in the Louvre) was the most copied ancient book of law. And yet, in all the legal cases we have discovered, it is never cited or applied. Why? Because ancient law codes tried to teach users how to think about making wise decisions, rather than encoding specific rules about which decisions to make.
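As a toy illustration of the ambiguity Dixon points to, here is a minimal sketch of what encoding the First Law as a literal rule might look like. Everything in it (the Action fields, the risk threshold, the vaccination scenario) is hypothetical and invented purely for illustration; the point is that every line of the harm check smuggles in a moral judgment that the rule itself cannot justify.

```python
from dataclasses import dataclass

@dataclass
class Action:
    """A hypothetical action a robot might take."""
    description: str
    physical_injury: bool      # easy to test for
    emotional_distress: bool   # already ambiguous: is hurting feelings "harm"?
    long_term_risk: float      # probability of future harm; where is the cutoff?

def violates_first_law(action: Action) -> bool:
    """Naive 'do not harm a human' check. Each branch is a moral
    judgment call dressed up as a rule."""
    if action.physical_injury:
        return True
    if action.emotional_distress:
        return True
    if action.long_term_risk > 0.1:   # arbitrary threshold
        return True
    return False

# A vaccination causes a small injury (the needle stick) but prevents
# far greater harm; the flat rule forbids it, which is plainly wrong.
vaccinate = Action("administer vaccine", physical_injury=True,
                   emotional_distress=False, long_term_risk=0.0)
print(violates_first_law(vaccinate))  # True: the rule misfires
```

The rule does exactly what it says, yet still gives the wrong answer, because "harm" is not a predicate that can be looked up; it has to be weighed case by case.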

This highlights the challenge for robotics: robots would require us to encode the decisions they are supposed to make as rules, but moral decisions require wisdom, a mind trained in thought, to handle each case properly.

David Langford has proposed a tongue-in-cheek set of laws. [51] Jack Williamson's short story "With Folded Hands" (1947), later rewritten as the novel The Humanoids, deals with robot servants whose prime directive is to serve and obey, and guard men from harm. While Asimov's robot laws are designed to protect humans from harm, the robots in Williamson's story have taken these instructions to the extreme: they protect people from everything, including unhappiness, stress, unhealthy lifestyles and any action that could be potentially dangerous. All that is left for humans to do is to sit with folded hands. [26] Asimov's fans point out that the laws were implicit in his earlier stories.

Consider "robots" made of DNA and proteins, which could be used in surgery to correct genetic disorders. In theory, these devices really should follow Asimov's laws. But in order to follow orders via DNA signals, they would essentially have to become an integral part of the human they were working on. That integration would then make it difficult to determine whether the robot was independent enough to fall under the laws or operated outside of them. And on a practical level, it would be impossible for such a robot to determine whether the orders it received would, if carried out, cause harm to the human.

Eric Holloway offered: I find it ironic that although there are supposedly objective moral laws for robots, humans themselves have no objective moral laws.

Robots that obey Asimov's Three Laws (Asenion robots) can suffer an irreversible mental collapse if they are forced into situations in which they cannot obey the First Law, or if they discover that they have unknowingly violated it. The first example of this failure mode appears in the story "Liar!", which introduced the First Law itself and introduces failure by dilemma: in this case, the robot will hurt humans if it tells them something and hurt them if it does not. [44]

This failure mode, which often ruins the positronic brain beyond repair, plays a significant role in Asimov's SF mystery novel The Naked Sun. Here, Daneel describes activities that violate one of the laws while supporting another as overloading certain circuits in a robot's brain, the equivalent of pain in humans. The example he uses is forcefully ordering a robot to perform a task outside its normal parameters, one that it has been told to forgo in favor of a robot specialized in that task. [45] The Third Law fails because it leads to permanent social stratification, with a vast amount of potential for exploitation built into this system of laws.