Within a year they had two demonstration projects that got a lot of media attention. The first was an autonomous humanoid robot that, given an apartment number, could walk through a city, find the building, ride the elevator or walk up the steps, and knock on that apartment door. The second was a car that could drive itself door-to-door in rush hour traffic without any human intervention. By combining the walking robot and the self-driving car, the researchers demonstrated a completely robotic delivery system for a pizza restaurant. In a widely reported publicity stunt, the research team ordered a pizza and had it delivered by robot to their lab 25 minutes later.

A network of 1,000 video game consoles was not exactly portable, so the demonstration robots that this research team created did not have the brain onboard. The robots talked to the VBrain through wireless connections. However, this research team had proven that machine vision was possible and workable for some of the most complex real-world tasks imaginable.
The more significant breakthrough came a few years later. Researchers at a chip company had followed the work of the vision team, and they realized that the 64-bit floating point operations in the video game console were not the optimal unit of calculation for a vision processing machine. Instead, they created a new computer architecture that could handle the problem much more efficiently. This realization made massively parallel chip designs for vision very easy to manufacture. The chip company released its first vision processing module — a 10 petaop custom vision processor — shortly thereafter. The OEM price for the module was $8,000.
That module opened the floodgates. Within a year, hundreds of manufacturers were showing prototype robots. There were delivery robots, cleaning robots, cooking robots, construction robots, baggage handling robots, welding robots, landscaping robots, truck-driving robots, retail robots, taxi robots, security robots, etc.
Take something as simple as painting a room. You could stick one of the new painting robots in the room with 5 gallons of paint. Two hours later the entire room was perfectly painted. You didn’t have to cover the furniture or even move the furniture. The robot did everything, and the job was perfect. Not one drop of paint was spilled, not one streak could be seen on the molding. Every line, every corner, every painted surface was faultless. There were also new robots to frame a house, side it, stack bricks and put on the roof.
The automotive industry demonstrated cars with the vision and control systems built right into the vehicle. The new robotic cars could drive themselves door to door, drop off the passengers and then drive down the block to park themselves. It meant you could read or watch TV on your way to work, and the car did all the driving. There was no reason to have a “driver’s seat” and a steering wheel in these new vehicles, so the interior of a car became much more functional — the front seat could face the back of the car, and it could fold out into a bed. The automated cars promised to reduce traffic congestion, dramatically improve highway safety and make the drive to work much more comfortable. There were also automated taxis and robotic trucks.
In the retail and fast food industries, the number of prototype robots boggled the mind. Robots could empty a customer’s cart, scan the tags on the products and put them into bags. Robots could stock items on the shelves. Robots could sweep the floors and clean the restrooms. Within two years, Burger-G was demonstrating and debugging a completely robotic Burger-G restaurant at the same location where they had first deployed Manna. Instead of telling human employees what to do, Manna told the robots what to do.
All of the hardware and general intelligence for these robots had been in place for a decade. What was missing was vision. As soon as the inexpensive vision module became available, the number of robots in the marketplace exploded.
The effect that the robotic explosion had on the employment landscape was startling. Most large retailers began replacing human employees with robots as fast as they could. The robots stocked the shelves, swept the floors, helped customers with questions and carried the customers’ purchases out to their cars. Every fast food restaurant was doing the same thing. Construction sites started to switch to robots for every repetitive task: framing, siding, roofing, painting, etc. Robotic cars and trucks took to the highways and accident rates started to decline. It was easy to see that the completely robotic airport, amusement park, grocery store and factory were on the way.

The switchover to robots was proceeding with remarkable speed, and for some reason it seemed like no one had really thought about the effects of the transition. All of the people being replaced by the robots needed some form of income to survive, but the job pool was shrinking. The American “service economy” was what replaced the “factory economy”, and America now had about half of its workers wrapped up in low-paying service sector jobs. These were the jobs perfectly suited for the new robots. The question was, what would happen to the half of the population being displaced from their service sector jobs?