{"id":41,"date":"2020-10-11T00:59:43","date_gmt":"2020-10-11T00:59:43","guid":{"rendered":"https:\/\/marshallbrain.com\/wordpress\/?page_id=41"},"modified":"2020-10-17T09:51:54","modified_gmt":"2020-10-17T09:51:54","slug":"manna3","status":"publish","type":"page","link":"https:\/\/marshallbrain.com\/manna3","title":{"rendered":"Manna – Two Views of Humanity’s Future – Chapter 3"},"content":{"rendered":"\n

by Marshall Brain

If you have not read chapter 1 yet, please start there.

No one really thought of the Manna software as a robot at all. To them, Manna was just a computer program running on a PC. When most normal people thought about robots, they thought about independent, autonomous, thinking robots like the ones they saw in science fiction films. C-3PO and R2-D2 were powerful robotic images, and people would not believe they were looking at a robot until robots looked like C-3PO.

The mechanical chassis for a C-3PO type robot had been around since the turn of the century. Honda did the trailblazing with its ASIMO robot, and once Honda had proven the concept many other manufacturers followed Honda’s lead. ASIMO could walk up and down stairs, kick a ball and so on, and it looked completely natural. The problem was that ASIMO needed a human operator pushing a joystick to tell it what to do.

The thing that held robots back was vision. Nearly everything a person does is aided by vision — so much so that we take vision completely for granted. But if you close your eyes and try to do anything, you realize just how important vision is.

For example, when you enter a room where the light is dim, you think in your head, “I need to turn on the lights.” You use your eyes to look on the wall for a light switch. When you find it you use your eyes to guide your hand to the switch. You then use your eyes to figure out what kind of switch it is. Is it a toggle switch? A push-button switch? A dimmer switch with a knob? A dimmer switch with a slider? None of the above? Once you figure it out, you use your eyes to guide your fingers to manipulate the switch in the appropriate way. Or maybe you look at the wall and there is no switch to be found. Now you start looking for a lamp in the room. Is it a touch lamp? Or is the switch on the base of the lamp? Maybe the switch is near the bulb, and you have to push it or twist it or pull a chain… Your vision guides you every step of the way. It is nearly impossible to do anything in a complex environment without vision. And turning on a lamp is a very simple thing. It gets a lot more complicated when you are trying to run through a forest, ride your bicycle down a busy street or find your way to a particular address in a large subdivision.

Without vision, robots could not move around or manipulate objects. All of the other hardware was there. Legs and balance systems to allow bipedal motion had been in place for decades. Robotic fingers and hands with very fine motor control were easy to create. AI software to set goals and make decisions was getting more powerful every day. Everything was there but the vision system.

You could see that society was ready for the robots to arrive. The first real robotic system installed in a human position of trust was in the airline industry. The terrorist attack on the World Trade Center in 2001 had been a wake-up call. Then there was a run of six airline accidents, all attributed to pilot or ATC error, which made everyone nervous. Then the unthinkable happened. Two airline pilots, both sleeper agents for an Asian terrorist organization, flew their planes into massive U.S. targets almost simultaneously and killed nearly 50,000 people. One hit a basketball arena full of spectators, and the other ripped through the Democratic national convention. That was the end of human pilots in the cockpit.

As it turned out, the transition to robotic planes was remarkably easy. Airplanes were already controlled by autopilots while en route. Radar systems on the ground and in the planes were already taking off and landing the planes automatically. An airplane did not need a vision system — its “vision” was radar, and radar had been around for more than half a century. There was also a secondary backup system that gave airplanes a form of consciousness. Airplanes could detect their exact location using GPS systems. These GPS systems were married to very detailed digital maps of the ground and the airspace over the ground. The maps told the airplane where every single building and structure was on the ground. So even if the autopilot failed and told the plane to go somewhere unsafe, a “conscious” plane would refuse to fly there. It was, quite literally, impossible for a conscious plane to fly into a building — the plane “knew” that flying into a building was “wrong.” If the autopilot went insane, the conscious plane shut it off and radioed for help. If all the engines failed or fell off, the plane knew what was on the ground in the vicinity and did its best to crash into an unpopulated area.
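The story leaves the mechanics of that cross-check abstract. A minimal sketch of the idea might look like the following; everything in it is invented for illustration (the data layout, the names, and the 150-meter clearance figure come from this sketch, not from the text):

```python
# Illustrative sketch of a "conscious plane" veto layer: check a commanded
# path against a map of known structures and refuse anything unsafe.
# All names and numbers here are hypothetical, not from the story.
from dataclasses import dataclass
from typing import List

@dataclass
class Structure:
    lat: float
    lon: float
    radius_m: float    # simplified footprint: a circle around the structure
    height_m: float

@dataclass
class Waypoint:
    lat: float
    lon: float
    altitude_m: float

def ground_distance_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    # Crude flat-earth approximation (~111 km per degree) is enough for a sketch.
    return (((lat1 - lat2) * 111_000) ** 2 + ((lon1 - lon2) * 111_000) ** 2) ** 0.5

def path_is_safe(path: List[Waypoint], structures: List[Structure],
                 clearance_m: float = 150.0) -> bool:
    """False if any waypoint comes too close to a known structure while flying
    at or below that structure's height plus the clearance margin."""
    for wp in path:
        for s in structures:
            too_close = ground_distance_m(wp.lat, wp.lon, s.lat, s.lon) < s.radius_m + clearance_m
            too_low = wp.altitude_m < s.height_m + clearance_m
            if too_close and too_low:
                return False
    return True

def review_autopilot_command(path: List[Waypoint], structures: List[Structure]) -> str:
    # The "conscious" layer vetoes the autopilot instead of obeying it blindly.
    return "EXECUTE" if path_is_safe(path, structures) else "REJECT_AND_RADIO_FOR_HELP"
```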

Soon there were no human airline pilots and no human air traffic controllers in the system. Everything about flying through the air was automated. The cockpit was stripped out of airplanes and the space became a lounge or a seating area. With human beings out of the loop, the safety record of the airline industry improved and people came to trust the airlines again. No one cared at all that there was no human pilot in the cockpit — people actually trusted machines more than human beings.

The first breakthrough in true computer vision came from a university. The newest video game consoles came out, and these consoles had extremely powerful CPUs able to process 10 trillion operations per second. By adding 100 gigabytes of RAM to the console and then networking 1,000 of these video game consoles together, a university research team created a machine able to process 10 quadrillion operations per second on 100 trillion bytes of RAM. They had created a $500,000 machine with processing power approaching that of a human brain. With that much processing power and memory on tap, the researchers were finally able to start creating real vision processing algorithms.
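The cluster arithmetic works out as stated; a quick back-of-the-envelope check, using only the per-console figures given in the paragraph above:

```python
# Back-of-the-envelope check of the cluster figures quoted above.
consoles = 1_000
ops_per_console = 10e12    # 10 trillion operations per second per console
ram_per_console = 100e9    # 100 gigabytes of RAM added to each console

total_ops = consoles * ops_per_console   # 1e16 -> 10 quadrillion ops per second
total_ram = consoles * ram_per_console   # 1e14 -> 100 trillion bytes of RAM

print(f"{total_ops:.0e} ops/s on {total_ram:.0e} bytes of RAM")
# 1e+16 ops/s on 1e+14 bytes of RAM
```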

Within a year they had two demonstration projects that got a lot of media attention. The first was an autonomous humanoid robot that, given an apartment number, could walk through a city, find the building, ride the elevator or walk up the steps and knock on that apartment door. The second was a car that could drive itself door-to-door in rush hour traffic without any human intervention. By combining the walking robot and the self-driving car, the researchers demonstrated a completely robotic delivery system for a pizza restaurant. In a widely reported publicity stunt, the research team ordered a pizza and had it delivered by robot to their lab 25 minutes later.

A network of 1,000 video game consoles was not exactly portable, so the demonstration robots that this research team created did not have the brain onboard. The robots talked to the VBrain through wireless connections. However, this research team had proven that machine vision was possible and workable in some of the most complex and real-world tasks imaginable.

The more significant breakthrough came a few years later. Researchers at a chip company had followed the work of the vision team, and they realized that the 64-bit floating point operations in the video game console were not the optimal unit of calculation for a vision processing machine. Instead, they created a new computer architecture that could handle the problem much more efficiently. This realization made massively parallel chip designs for vision very easy to manufacture. The chip company released its first vision processing module — a 10 petaop custom vision processor — shortly thereafter. The OEM price for the module was $8,000.

That module opened the floodgates. Within a year, hundreds of manufacturers were showing prototype robots. There were delivery robots, cleaning robots, cooking robots, construction robots, baggage handling robots, welding robots, landscaping robots, truck-driving robots, retail robots, taxi robots, security robots, etc.

Take something as simple as painting a room. You could stick one of the new painting robots in the room with 5 gallons of paint. Two hours later the entire room was perfectly painted. You didn’t have to cover the furniture or even move the furniture. The robot did everything, and the job was perfect. Not one drop of paint was spilled, not one streak could be seen on the molding. Every line, every corner, every painted surface was faultless. There were also new robots to frame a house, side it, stack bricks and put on the roof.

The automotive industry demonstrated cars with the vision and control systems built right into the vehicle. The new robotic cars could drive themselves door to door, drop off the passengers and then drive down the block to park themselves. It meant you could read or watch TV on your way to work, and the car did all the driving. There was no reason to have a “driver’s seat” and a steering wheel in these new vehicles, so the interior of a car became much more functional — the front seat could face the back of the car, and it could fold out into a bed. The automated cars promised to reduce traffic congestion, dramatically improve highway safety and make the drive to work much more comfortable. There were also automated taxis and robotic trucks.

In the retail and fast food industries, the number of prototype robots boggled the mind. Robots could empty a customer’s cart, scan the tags on the products and put them into bags. Robots could stock items on the shelves. Robots could sweep the floors and clean the restrooms. Within two years, Burger-G was demonstrating and debugging a completely robotic Burger-G restaurant at the same location where they had first deployed Manna. Instead of telling human employees what to do, Manna told the robots what to do.

All of the hardware and general intelligence for these robots had been in place for a decade. What was missing was vision. As soon as the inexpensive vision module became available, the number of robots in the marketplace exploded.

The effect that the robotic explosion had on the employment landscape was startling. Most large retailers began replacing human employees with robots as fast as they could. The robots stocked the shelves, swept the floors, helped customers with questions and carried the customers’ purchases out to their cars. Every fast food restaurant was doing the same thing. Construction sites started to switch to robots for every repetitive task: framing, siding, roofing, painting, etc. Robotic cars and trucks took to the highways and accident rates started to decline. It was easy to see that the completely robotic airport, amusement park, grocery store and factory were on the way.

The switchover to robots was proceeding with remarkable speed, and for some reason it seemed like no one had really thought about the effects of the transition. All of these people being replaced by the robots needed some form of income to survive, but the job pool was shrinking. The American “service economy” was what replaced the “factory economy”, and America now had about half of its workers wrapped up in low-paying service sector jobs. These were the jobs perfectly suited for the new robots. The question was, what would happen to the half of the population being displaced from their service sector jobs?

