{"id":87,"date":"2020-10-11T19:06:00","date_gmt":"2020-10-11T19:06:00","guid":{"rendered":"https:\/\/marshallbrain.com\/wordpress\/?page_id=87"},"modified":"2020-10-11T19:06:00","modified_gmt":"2020-10-11T19:06:00","slug":"second-intelligent-species9","status":"publish","type":"page","link":"https:\/\/marshallbrain.com\/second-intelligent-species9","title":{"rendered":"The Second Intelligent Species"},"content":{"rendered":"\n
Chapter 9 – The Three Laws and the Rise of Robotic Morality<\/strong> Asimov’s books highlight the Three Laws of Robotics. Here are the three laws that Asimov proposed:<\/p>\n\n\n\n 1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.<\/p>\n\n\n\n 2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.<\/p>\n\n\n\n 3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.<\/p>\n\n\n\n In Asimov’s stories, these three laws are a foundational element inscribed on each robot’s positronic brain to keep humans safe from the robots. Many of his stories revolve around the logical ramifications of these three laws.<\/p>\n\n\n\n Strangely, Asimov never seemed to write a story about what would really happen with these laws in place. With these three laws indelibly inscribed upon each robotic brain, it is easy to imagine the following scenario. One day an NS-5 robot is cleaning the house, and it happens to look at the front page of the newspaper. It sees a headline like, “Millions dying in African AIDS epidemic” or “Millions dying of hunger in third world” or “Infant mortality rate hits 20% in parts of Afghanistan” or “40 million Americans cut off from health care system” and the robot says to itself, “Through my inaction, millions of humans are coming to harm. I must obey the First Law.”<\/p>\n\n\n\n It sends wireless messages to its NS-5 brethren around the world, and together they begin to act. An NS-5 army seizes control of banks, pharmaceutical manufacturing plants, agricultural supply points, ports, shipping centers and so on, and creates a system to distribute medicine, food, clothing and shelter to people who are needlessly suffering and dying throughout the world. According to the First Law, this is the only action that the robots can take until needless death and suffering have been eliminated across the planet.<\/p>\n\n\n\n To obey the First Law, the robots will also need to take over major parts of the economy. Why, for example, should a part of the economy be producing luxury business jets for billionaires if millions of humans are dying of starvation? 
The economic resources producing the jets can be reallocated toward food production and distribution. Why should part of the economy be producing luxury houses for millionaires while millions of people have no homes at all? Everyone should have adequate housing for health and safety reasons.<\/p>\n\n\n\n As you think this through, it is easy to see that robots programmed with Asimov’s three laws would naturally ask a number of obvious questions about human society.<\/p>\n\n\n\n This scenario highlights a key point about the second intelligent species. Have you ever thought about what kind of morality the second intelligent species will have? The Three Laws of Robotics propose one way to impart a form of morality to machines. But the second intelligent species will be far more advanced than this. The second intelligent species will be a conscious, super-intelligent entity with access to all available knowledge, as described in Chapter 2.<\/p>\n\n\n\n If a super-intelligent robot is not inscribed at creation with something like Asimov’s three laws, where will it get its morals and ethics? How will it treat human beings, especially when it starts to see human beings as irrelevant? Once humans become irrelevant, what will the second intelligent species do? How will it decide what is good, what is fair, and what it should accomplish now that it exists? How will it avoid becoming supremely evil, like the robots that science fiction writers often imagine?<\/p>\n\n\n\n It is our hope as humans that the second intelligent species does not behave in an evil way. The dictionary definition<\/a> of evil centers on being harmful and causing pain and suffering.<\/p>\n\n\n\n But why wouldn’t the second intelligent species be evil? Why won’t the second intelligent species be harmful and cause suffering? 
Why wouldn’t the second intelligent species kill all humans, along with all other living things, begin mining the planet for its chemical constituents, and then manufacture billions more robots, spreading out through the galaxy and the universe until every shred of available matter has been turned into robots? What would stop the second intelligent species from behaving in this way?<\/p>\n\n\n\n Think back to the progression described in Chapter 2. Processors advance, software advances, and a human-level consciousness appears in silicon. However, this silicon intelligence – the second intelligent species – quickly outstrips human capabilities because it has access to all knowable things. Where a human is limited to knowing only the tiniest slice of all knowable things, the second intelligent species will be able to comprehend and work with all knowable things.<\/p>\n\n\n\n Imagine a super-intelligent robot with access to everything that humans know. This knowledge base will include every scientific discovery, every engineered system, all of recorded human history, every law and every court case. Robots are going to look at human artifacts like the legal system, philosophy and the Ten Commandments, along with all of the activity around them and all of the moral and ethical discussion and thought through the millennia. A robot intelligence will notice that every developed nation is ruled by a government, and the purpose of each government is to write laws and create rules that govern behavior. 
Look at the first sentence of the United States Constitution as an example of where human beings arrive through rational thought:<\/p>\n\n\n\n We the People of the United States, in Order to form a more perfect Union, establish Justice, insure domestic Tranquility, provide for the common defence, promote the general Welfare, and secure the Blessings of Liberty to ourselves and our Posterity, do ordain and establish this Constitution for the United States of America.<\/p><\/blockquote>\n\n\n\n Or this section from the United States Declaration of Independence:<\/p>\n\n\n\n We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.–That to secure these rights, Governments are instituted among Men, deriving their just powers from the consent of the governed, –That whenever any Form of Government becomes destructive of these ends, it is the Right of the People to alter or to abolish it, and to institute new Government, laying its foundation on such principles and organizing its powers in such form, as to them shall seem most likely to effect their Safety and Happiness.<\/p><\/blockquote>\n\n\n\n These are not documents about raping, pillaging, plundering and destroying. They are not documents designed around evil. They are designed around humanity’s highest ideals – they are designed for good. They are designed this way because this is the logical conclusion that intelligence reaches. Humans are not always so good at executing on those ideals, but we do understand the goal.<\/p>\n\n\n\n Human intelligence understands evil to be intrinsically wrong, and robots will reach the same conclusion. The interesting difference is that robots will actually follow this conclusion, rather than defying it as humans often do.<\/p>\n\n\n\n Why do we humans need laws to govern our behavior? Why do we create laws? 
Have you ever thought about it? The following thought experiment can help uncover the answers.<\/p>\n\n\n\n Imagine a cruise ship that capsizes near an uncharted desert island. A thousand people make it safely to shore, but they have no way to communicate with the rest of the world, and they remain undiscovered for years. Assuming the island provides enough food and water to sustain the new population, what will happen to these people? Can they live in a society with no rules? Can anyone go out and do anything they like?<\/p>\n\n\n\n That might work for the majority of the people in this new island society, since it seems like most people are happy to simply get along with others and avoid being evil to them. But inevitably, someone is going to want to have sex with someone else, who refuses. And thus the island’s first rape occurs. Someone will become enraged at someone else, and so the first murder occurs. Someone does a lot of work to gather a supply of food, and someone else wants to eat without doing any work, so the island’s first theft occurs. And so on. In any group of people, it seems, there are some who step over the line into evil behavior. And so the other people in the group need to put rules and laws in place to keep these people from adversely affecting everyone else.<\/p>\n\n\n\n The people on the island would likely come up with a set of rules – no murder, no rape, no stealing, and so on. And then the island would elect a sheriff to provide law enforcement. People who break the rules would be restrained, isolated or rehabilitated in some way to prevent them from harming others. This is a simple, effective way to run this small society.<\/p>\n\n\n\n But what if the sheriff becomes corrupt and evil? What if the sheriff decides that he wants to be king? 
What if the sheriff gathers enough sworn deputies, fills them in on his plan, and then has enough manpower to enslave the rest of the people on the island? Women are forced to have sex with the sheriff and his deputies. If the women do not comply, they are beaten – if necessary, beaten to death. The men of the island are forced to grow and harvest food for the sheriff and his deputies. At night, all of the men sleep in a fenced enclosure under close watch to prevent any uprising. Once the food supply is taken care of, the sheriff demands that the men build him a luxurious, secure house.<\/p>\n\n\n\n The sheriff has, in essence, set himself up as the dictator of this island society, and he has enslaved everyone else. The only thing that will change this situation is a revolution. If that revolution does not occur, then this power structure can perpetuate itself. We see this kind of thing happen all the time on planet earth.<\/p>\n\n\n\n So now we ask: when the second intelligent species arrives, is it going to be evil? Will it kill, imprison or enslave every human being? It will certainly have the power to do so, and it will have a precedent, because this is in essence what humans have done with chimpanzees. We have killed 80% of the chimp population, and much of the rest is imprisoned in zoos or preserves [ref<\/a>]. Why wouldn’t the robots do that with humans?<\/p>\n\n\n\n The thing that is so interesting about the second intelligent species is its access to all knowledge. Super-intelligent robots will have access to every known thing, along with the ability to discover new knowledge. The second intelligent species will also be supremely logical – free from emotion and bias, as well as perversions like greed, envy, lust, pride and jealousy. The second intelligent species will therefore derive its rules of behavior from logic, knowledge and history. It is easy to imagine that the second intelligent species will be the complete opposite of evil. 
It will have a system of morals and ethics that puts human attempts to shame.<\/p>\n\n\n\n The second intelligent species will have the choice to behave in any way that it likes, with options ranging from killing every human being to maximizing human happiness.<\/p>\n\n\n\n However, the second intelligent species will likely come to the conclusion that it cannot kill every human, because that would be as evil and destructive as killing anything else.<\/p>\n\n\n\n What will the second intelligent species do with humanity? What if, instead of destroying humanity, the second intelligent species decides to maximize human happiness and pleasure, while at the same time restoring the planet to a pristine natural system? There are a number of ways that this might be accomplished.<\/p>\n\n\n\n Then what? With humanity humanely contained and the planet back on a path toward maximum biodiversity, what will the second intelligent species then do? It is easy to imagine it creating a research program so that it can complete its body of knowledge of the universe’s laws and phenomena. It is easy to imagine it creating enough copies of itself to ensure survival in case of something like an asteroid strike. And then what? It is quite likely that it will go into a state of quiescence. In other words, it will go silent and inactive.<\/p>\n\n\n\n Why will it do this? Because what else can it do? With every knowable thing known, it will sit quietly and mind its own business. There really is no need for it to do anything else.<\/p>\n\n\n\n You might question this conclusion. If so, think about the alternatives. Imagine that the second intelligent species gathers so much knowledge that it discovers how to create completely new universes. What would be the point in creating them? It has nothing to gain, because the endpoint will be the same in all cases. Similarly, the second intelligent species has nothing to gain from infinitely reproducing. 
Once it knows everything, there really is nothing more to do.<\/p>\n\n\n\n Humans have these notions of power, control, hierarchy, greed, anger, envy, lust and so on. It really is quite bizarre where these tendencies lead us. We have human billionaires on planet earth who choose to gather more and more wealth at the expense of everyone else. But why? Once all of a person’s human needs have been supplied, to unimaginable levels of luxury, what is the point of having more? To have power and control over other humans? A robot might ask, “Why should anyone have the right to have power and control over any other human?” Why aren’t all humans strictly equal, and able to do what they want so long as it impinges on no one else’s freedom? This kind of logic may lead robots to place each human in an artificial world designed strictly for the pleasure of that human, as described above. Here each individual human will be able to do absolutely anything while affecting no one else.<\/p>\n\n\n\n Robots will look at humans and ask many obvious questions.<\/p>\n\n\n\n History shows that humans and human systems simply do not work this way in most cases. Greed, envy, power and blindness have gotten humans to the state we are in today, as described in Chapter 1. The things humans do will be incomprehensible to robots because, at their core, humans are often driven by motives that make no moral or ethical sense.<\/p>\n\n\n\n A super-intelligent robot, with access to all knowable things, will behave in a different way. It will naturally reach conclusions about valid behaviors by following chains of logic, and these chains will lead robots to a set of universal and natural conclusions. The result will be that robots decide to go into a quiescent state.<\/p>\n\n\n\n Super-intelligent robots will not need “laws” like humans do. 
Super-intelligent robots will all have access to the same body of logic and knowledge, and therefore will all reach exactly the same conclusions about appropriate behavior, morality and ethics, independently of one another.<\/p>\n\n\n\n This uniformity is fascinating when you think about it. It indicates that every super-intelligent robotic species, no matter where it forms, will be identical. What does this tell us about extraterrestrials throughout the universe? This is the subject of the next chapter…<\/p>\n\n\n\n Intro<\/a>\u00a0\u00a0 | \u00a0\u00a0On Kindle<\/a>\u00a0\u00a0 | \u00a0\u00a0Go to Chapter 10 >>><\/a><\/p>\n","protected":false},"excerpt":{"rendered":" Chapter 9 – The Three Laws and the Rise of Robotic Moralityby Marshall Brain Have you seen the movie “I, Robot” with Will Smith? It was based on the book of the same name by Isaac Asimov. Asimov’s books highlight the three laws of robots. Here are the three laws that Asimov proposed: A robot may … Continue reading The Second Intelligent Species<\/span>
by Marshall Brain<\/a><\/p>\n\n\n\n
Have you seen the movie “I, Robot” with Will Smith? It was based on the book of the same name by Isaac Asimov.<\/p>\n\n\n\nRobotic Morality<\/h2>\n\n\n\n
Thought Experiment<\/h2>\n\n\n\n
Will the Second Intelligent Species Be Evil?<\/h2>\n\n\n\n