
5.5 Implications

It appears that self-replicating systems may have numerous economic applications on Earth, in near-earth and lunar space, throughout the Solar System, and perhaps even in the interstellar realm, for future exploration and utilization. The main advantage of SRS is their tremendous capability for performing any desired task at almost any remote location starting with a relatively small investment of time, money, energy, and mass. This suggests that replication technology may have significant social and economic impacts on American and human society, as discussed below. A number of philosophical and ethical implications may derive from replicating systems techniques. Various issues regarding the future of human and machine evolution must be addressed, together with the "cosmological" implications of SRS.

As the time allotted to consideration of the implications of machine replication was relatively small, the team was not able to examine many intriguing questions in depth. In many cases, it was possible only to frame questions regarding general classes of social and cultural impacts, as no satisfactory answers were immediately apparent. Consequently, this section must be regarded simply as a blueprint for further study in this area.

5.5.1 Socio-Economic Implications of Self-Replicating Systems

The history of technology on this planet is the record of man's constant attempts to control his environment through the use of extrasomatic tools. The development of SRS in this context will be revolutionary, with impacts equal to or exceeding those engendered by other "revolutions" in human history. For the first time, mankind will be creating, not merely a useful paradigmatic tool (e.g., the scientific method, Copernican revolution), organizational tool (e.g., centralized cultivation, agricultural revolution), or energy-harnessing tool (e.g., steam power, industrial revolution), but rather a wholly new category of "tool" - a device able to use itself intelligently and with minimum human intervention. In many respects, with SRS mankind is creating a technological partner rather than a mere technical implement.

Superautomation on Earth and in space. The use of self-replicating systems on Earth poses many problems. A compact, freely replicating system released on the surface of the Earth potentially could compete with humans for resources such as living space and energy. It could also smother us in its waste products. Even if kept under control, a terrestrial SRS could wreak economic havoc by manufacturing products for which consumers need not pay. Unfortunately, we will probably have to deal with this problem regardless of whether replicating systems technology per se is ever developed. If industrial automation continues in the direction it seems to be headed now, global commerce soon will reach a state of "superautomation" (Albus, 1976) in which an entire national industrial base has become automated and is, for all practical purposes, a terrestrial SRS. Such a system may function without the need for significant inputs of human labor. Eventually it should be possible to deal with the attendant economic dislocations, but the transition is certain to be excruciatingly painful.

In Earth orbit and on the lunar surface, however, the situation is quite different. In the environment of space SRS would not be in competition with an established human presence. Instead, they would provide a powerful "tool" by which humans can manipulate that environment to their advantage. One can envision building vast antenna arrays (for radio astronomy and SETI), solar power satellites, or even lunar, orbital, or free-flying habitations. These applications should enhance, rather than destroy, the economic fabric of terrestrial civilization, just as colonies in the New World enhanced the economies of their parent nations. By expanding into space, mankind has the potential to gain, rather than lose, from extensive automation. Instead of merely doing, with fewer and fewer people, the same "work" required to sustain terrestrial existence, by moving into space mankind can keep even more people occupied than before while at the same time extending itself into a redundant habitat. This path seems likely to cause the least trauma in the years ahead.

The development of the necessary artificial intelligence, robotics, and automation techniques will likely have enormous short range impacts on Earthbound activities. If our economy is to be transformed by such revolutionary technologies in a fairly short period of time, how can the United States (and the entire industrialized global community) prepare for and avoid or mitigate potentially vast dislocations? Will we need a new academic discipline of "revolution management"?

Economics of replicating systems. Whether supported by public or private sources, the development of SRS must make good economic sense or else it will never be attempted. Self-replicating factories on Earth or in space may appear theoretically capable of creating bountiful wealth and endless supplies of goods and services for all (Bekey and Naugle, 1980; Heer, 1979). However, this utopian ideal must be tempered with the cold logic of cost-benefit analyses and indices of profitability if it is to gain some measure of credibility in the business world.

Let us assume that a financial consortium invests a sufficient quantity of capital to research, design, build, and successfully deploy the first SRS. This consortium may represent an association of private businesses (e.g., the Alaskan Pipeline), an intergovernmental entity (e.g., the International Monetary Fund), or individual public agencies (e.g., NASA). Deployment may occur on Earth, in orbit, on the Moon, or even on the surfaces of other planets or the asteroids. After a relatively brief period (T years) of growth, the capacity of the initial SRS expands a thousandfold by self-replication, and commercial production begins.

Assume that the original investment is $X and the original factory could produce useful manufacturing output with an annual value of a$X. After the SRS undergoes thousandfold expansion, its output is worth 1000a$X per year (provided demand remains unaffected). The value of the original investment after T years is $X(1+I)^T, where I is the mean annual inflation rate during the period of investment. Thus, to repay the original investment and achieve economic breakeven will require approximately $X(1+I)^T/1000a$X years of production following the period of nonproductive factory growth. The results of this simple calculation for T = 20 years are shown in table 5.5 for several representative values of a and I.

Table 5.5 - Economics Of Self-Replicating Factories

Relative specific productivity,   Repayment period of original investment, for an adult seed(2)
a ($/yr-$)(1)                     Inflation = 0%   Inflation = 10%   Inflation = 50%   Inflation = 100%
0.01                              1 mo             8 mo              330 yr            100,000 yr
0.1                               4 d              1 mo              33 yr             10,000 yr
1.0                               9 hr             2 d               3 yr              1,000 yr
10                                50 min           6 hr              4 mo              100 yr
100                               5 min            35 min            12 d              10 yr
1000                              30 sec           4 min             1 d               1 yr

(1) a = fraction of original value of seed that the adult LMF can produce per year.
(2) Repayment period = $X(1+I)^20/(1000a$X), assuming an initial 20-year nonproductive period.

What is a reasonable value for a? The Lunar Manufacturing Facility developed in an earlier section replicates its own mass (of similar components) in one year, or a = 1. Waldron et al. (1979) propose a semireplicating factory which can produce its own mass in metal products in less than 6 days, for a maximum a = 60. Nevertheless, table 5.5 shows that even if a = 0.01 (corresponding to extraordinarily low productivity) the repayment time is still less than a year in a national or global economy with low-to-moderate inflation or interest rates (10% or less). In an economy with interest rates up to 50%, reasonable repayment times - on the order of typical plant lifetime, about 30 years in usual industrial practice - remain available for a > 0.1 (also a fairly pessimistic lower limit on productivity). Under conditions of hyperinflation (100% and higher) a 30-year breakeven can be obtained only for highly robust, productive systems with a > 35.
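The repayment formula above is simple enough to check numerically. The following sketch reproduces table 5.5 under the stated assumptions (T = 20 years of nonproductive growth, thousandfold expansion); note that the $X investment cancels out of the final expression.

```python
# Repayment period (years) for a self-replicating factory, per table 5.5:
#   repayment = $X * (1 + I)**T / (1000 * a * $X)
# The $X cancels, so only a (productivity) and I (inflation) matter.

def repayment_years(a, inflation, growth_years=20, expansion=1000):
    """Years of production needed to repay the original investment.

    a         -- fraction of the seed's value the adult factory produces per year
    inflation -- mean annual inflation rate over the growth period
    """
    return (1 + inflation) ** growth_years / (expansion * a)

# Reproduce the rows of table 5.5 (values in years, unconverted).
for a in (0.01, 0.1, 1.0, 10, 100, 1000):
    row = [repayment_years(a, i) for i in (0.0, 0.10, 0.50, 1.00)]
    print(f"a = {a:>7}: " + "  ".join(f"{r:12.4g} yr" for r in row))
```

For example, a = 0.01 at 50% inflation gives 1.5^20/10, or about 332 years, matching the "330 yr" table entry after rounding.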

Economic feasibility, however, is not limited to amortization of costs. A net profit must be made, markets established and maintained, production managed in a reliable and flexible manner, and so forth. Given the tremendous power of SRS, severe economic distortions are conceivable across the board. If a replicating factory system is used to flood a market with products, the prices of these products will fall, carrying profits downward as demand saturates in an unregulated economic environment. On the other hand, in a tightly controlled economy the well-known problem of inferior production control feedback would be exacerbated, leading possibly to wild fluctuations in supply and demand for SRS products. These relationships should be investigated more thoroughly by economists.

If control of Earth-deployed replicating factories is retained by national or subnational entities, governments lacking this technology will seek equitable licensing agreements. One interesting problem is ownership of SRS offspring grown from the soil of one country but generated by a leased parent machine owned by another. Should licensing arrangements require return of offspring? Perhaps the offspring should be allowed to remain the property of the licensee, but with royalties levied against production in favor of the owner of the parent machine? Clearly such arrangements could become quite complex in just a few generations of cross-licensing (SRS capable of "sexual" reproduction present a host of additional theoretical complications.) From the businessman's point of view, it might be better just to sell a "mule SRS" - an infertile factory with the capacity for rapid automated manufacturing but which lacks some vital software element or process necessary for replication. Of course, this is an open invitation to black market traffic in "bootstrap kits" which allow users to restore fertility to their neutered systems. It is difficult to see how the rapid spread of such technology, once introduced in any form, could be held in check for long by any governmental, corporate, or private entity.

Social aspects of SRS cornucopia. How will humankind deal with what has been termed, with some justification, "the last machine we need ever build?" How might people's lives be changed by a replicative universal constructor system capable of absorbing solar energy and raw dirt and manufacturing, as if by magic, a steady stream of finished building materials, television sets and cars, sheet metal, computer components, and more robots - with little or no human intervention required? Just as the invention of the telephone provided virtually instantaneous long-distance communication, and television permits instant knowledge of remote events, and the automobile allows great individual mobility, the autonomous SRS has the potential to provide humanity with virtually any desired product or service and in almost unlimited quantities. Assuming that global human population does not simply rise in response to the new-found replicative cornucopia and recreate another equilibrium of scarcity at the original per capita levels, supply may be decoupled from demand to permit each person to possess all he wants, and more. The problems of social adjustment to extreme sudden wealth have been documented in certain OPEC nations in recent years. Much attention has also been given to the coming "age of leisure" to be caused by superautomation. What more difficult psychological and social problems might emerge in an era of global material hyperabundance?

If the enterprise of establishing an automated lunar mining and manufacturing facility is successful, there might thereby be made available to humanity a vast supply of energy and useful products. By exporting heavy industry to the Moon, the Earth might be allowed to revert to a more nearly natural state of "controlled wilderness." This should permit the preservation of the animals and plants which people have for so long enjoyed. On the negative side, though this runs contrary to the historical evidence, people may take their new prosperity as license to exercise their natural biological proclivities and yet further overwhelm this planet with teeming human billions. If this occurs, eventually we shall find that although we might make our Earth into a parkland, the actual effect will be more like Yosemite National Park on a midsummer weekend. This is one problem we must not export to other worlds.

Is there a similar danger that the SRS project, though completely successful as a technological and financial venture, will (much like penny-per-gallon gasoline) encourage profligate behavior heedless of catastrophic negative consequences? What unfortunate things might we do, possessing almost unlimited energy and material resources? Will the possibility of hyperabundance lead not to continued national resolve and focus, but rather to a pervasive national complacency, making us think that all is well, that all has been solved, that things always get solved, and that henceforth we need do little or nothing more to improve our lot? If the system works, and we come to depend on it, growing once more to the limits of our productive inventiveness, will we not be dangerously subject to catastrophic damage as a vital, progressive race?

If space offers any solution to this contradiction between the "good life" and our innate breeding proclivities, it probably will involve the establishment of orbital human colonies. To be practical, these habitats must approach replicating factories in the range of goods and services which they produce. The expense of maintaining a large human colony with direct Earth-based support would be immense, so automated factories most likely must provide the goods and services to support such an operation. Once more the need for SRS facilities in the future of humanity becomes apparent.

Replicating factory systems have the potential to severely disrupt or disable nearly all modern national economies. The concept of "rate of return" on investments may have to be replaced with the notion of "acceleration of return" for nonterrestrial exponentiating SRS. Will present-day governments and other national and international economic entities support the replicating factory concept if it is seen as a potential threat, capable of rendering obsolete the entire global economic order which now exists and under which they now operate?

Environmental impacts. It has been suggested by Dyson (1979) that it might be possible to design a compact replicating robot which can itself serve as part of an enormous energy-collecting grid. Each machine consists of solar panels on top, power transformers and a universal power grid bus connector, some means of mobility such as tracks or wheels, and manipulators and other subsystems necessary for self-replication. Released, say, in the Arizona desert, one or two SRS could rapidly multiply into a "free" gigawatt-capacity generating system in just a few years. This could then be tapped by power-hungry municipal utilities or even by individual users.

Moore (1956) also discussed the possibility of replicating machine "plants" turned loose on Earth. In Moore's scenario, a single floating self-reproducing barge is released into the oceans; a few years later, it has multiplied itself into a population of millions, with each unit periodically commuting to shore bearing useful products for mankind derived from the sea (salts, minerals, gold). Reviewing this scenario, Dyson noted that such seagoing SRS might become so numerous that frequent crowding and collisions would occur between them. The "dead bodies" of machines involved in major accidents could slowly accumulate in the ocean near and along the coastline, causing congestion and representing a menace to navigation. The introduction of machine cannibalism to clean up the mess introduces fresh complications into an already difficult situation - ownership and proper recognition of "dead" machines, destruction control and failsafe mechanisms, nonrecyclable parts, violations of national economic zones, and military applications of the technology.

Environmentalists might perhaps regard SRS released on Earth merely as automated strip-mining robots - yet another sophisticated instrumentality in the hands of those who would mercilessly rape the Earth of its limited resources, leaving behind the ugly scars of profit. There are two responses to this shortsighted view of SRS. First, in the Age of Plenty ushered in by these machines, human society will be sufficiently wealthy to regard environmental integrity and beauty as indispensable outputs of any manufacturing system. These functions may be designed into machines as a matter of course; SRS can be preprogrammed first to strip mine, then reclaim, the land they work. Second, machine replication will make possible significant advances in recycling technology. Given sufficient energy, junkpiles and city dumps may be regarded as low-grade "ores" - materials processing robots could be turned loose to analyze, separate, and extract valuable resources. Collection and distribution systems would be streamlined by the use of robot workers constructed at an enormous rate by a sessile self-growing factory complex.

Utilization of the Moon by SRS as proposed in earlier sections may be viewed with outrage by other nations as a predatory attempt to secure a part of the "common heritage of all mankind" for the benefit of America alone. Very drastic alteration of the lunar surface is proposed, raising a question of whether there ought to be reserved areas. Should there be more exploration to determine which regions should be exploited and which should not? Must an environmental impact statement be prepared? As on Earth, lunar surface despoilment in theory may be largely reversed - the machines could be programmed to photograph the original landscape in detail and to restore it after mining operations are finished in that area. A potentially more serious environmental impact is the possible creation of an appreciable lunar atmosphere during the course of industrial operations conducted on the Moon (Johnson and Holbrow, 1977; Vondrak, 1976, 1977). Even small leakages of gas from millions of SRS could create enough atmosphere to disable or seriously disrupt the operation of mass drivers and other manufacturing facilities requiring vacuum conditions.

5.5.2 Implications for Human Evolution

When contemplating the creation of large, imperfectly understood systems with which we have no prior experience, it is prudent to inquire as to the possibility of unforeseen dangers to our continued existence. In particular, artificial intelligences could conceivably become adversaries, whether they reproduce or not. Similarly, SRS might become a threat independent of their intelligence. Because of the imminence of advanced AI and replicating systems technologies in the next several decades, such questions are no longer merely theoretical but have a very pragmatic aspect.

We must begin to examine the possible problems in creating artificial intelligences or replicating systems which could conceivably become our adversaries or competitors for scarce resources. It is not too early to begin considering the possible kinds of behaviors which advanced machines might display, and the "machine sociobiology" which may emerge. It seems wise to try to identify early any fundamental distinctions between intelligent "natural" biological and advanced "artificial" machine systems. Finally, we should consider the significance of the development of advanced machine technologies to the future of human evolution and also to the broader sweep of cosmic evolution.

To serve mankind. The most immediate, urgent impetus for the development of automation and machine replicative techniques is to improve, protect, and increase the productivity of human society. One way of achieving the goal of human preservation and improvement is to make our mechanical creations intelligent, so that they can automatically do what is good for us. We want them to do this even if we have forgotten to specify what "good" is in each instance. Perhaps we don't even know in all cases how to define "good." For example, consider what would happen if a physically capable, literal-minded idiot were put at the controls of a bulldozer (e.g., Pvt. Zero in the "Beetle Bailey" comic strip, present-day computers, etc.). If told to "drive the bulldozer into the parking lot," the idiot would do exactly that, regardless of whether or not the lot happened to be full of automobiles.

One rather compact statement of what is required for our protection already exists. This has come to be known as "Asimov's Three Laws of Robotics":

  1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws (Asimov, 1950).

This is an excellent prescription of what is required but not of how to accomplish it. Exactly what do these laws entail? The following list of conditions is certainly necessary for the Three Laws to hold:

  1. A robot may not injure a human being.
  2. A robot must use common sense.
  3. A robot must be intelligent.
  4. A robot must be conscious.

Common sense, intelligence, and consciousness are the essence of artificial intelligence research. Even if we cannot exhaustively enumerate the ways to harm a human in any and all circumstances, a robot with the above four properties would protect people to the best of its ability. If it ever did injure a human being it would be because neither we, nor it, foresaw that possibility. But it would immediately perceive its error and would never make the same kind of mistake again. We can do no better ourselves, and usually we do worse.

At the present time we have only the most rudimentary knowledge of what common sense, intelligence, and consciousness are, let alone how to insert these qualities into a robot (Good, 1965). As our computers become ever more complex and pervasive, there is the distinct possibility that these characteristics will arise spontaneously. In this case we would be involved in a totally uncontrolled experiment (Hogan, 1979). If conditions 1-3 were not yet fulfilled, but condition 4 was, the outcome could be catastrophic for mankind. For reasons of self-preservation, we must pursue AI research with the goal of ensuring that capabilities 1-3 are achieved first.

The problem with this entire approach is that any machine sufficiently sophisticated to engage in reproduction in largely unstructured environments and having, in general, the capacity for survival probably must also be capable of a certain amount of automatic or self-reprogramming. Such an SRS may in theory be able to "program around" any normative rules of behavior (such as the Three Laws) with which it is endowed by its creators. Thus, it might modify its own patterns of behavior, as determined by its basic goals and learned motivational structure.

It is possible to conceive of a machine design containing "read-only" hard-wired goal structures. But hardware specialists will admit that such procedures can be circumvented by sufficiently clever software in large, complex systems. Further, since an SRS must be quite adept at physical manipulation, it is likely that it will be able to rewire its own structure in accordance with perceived operational objectives - assuming it can analyze the functions of its own components as needed for repair or maintenance operations. It may be of no use to try to distribute the hard-wired functions throughout the whole machine, or a large subset thereof, in hopes that the system will be unable to comprehend such a large fraction of itself simultaneously. Omitting the special functions from the machine's stored genetic description of itself would probably be equally ineffectual. Laing (1975, 1977) has shown that machine reproduction by complete self-inspection - wherein the parent knows virtually nothing about its own structure to begin with - is quite possible, and has provided several logical designs for such machines. Consequently, it is not possible to logically exclude the possibility of conscious alteration of hard-wired robot "commandments" by intelligent self-replicating machines desirous of doing so.

It would therefore appear nearly impossible, as with people, to absolutely guarantee the "good" behavior of any common-sense, intelligent, conscious SRS. However, also like people, machines can be taught "right" and "wrong," as well as the logical rationales for various codes of behavior. And they can probably be expected to remain even more faithful to a given moral standard than people.

SRS population control. An exponentially increasing number of factories (even if the rate is not sustained indefinitely) will seem especially threatening and psychologically alarming to many. Such a situation will draw forth visions of a "population explosion," heated discussions of lebensraum, cancerous growth, and the like. Nations not possessing replicating systems technology will fear an accelerating economic and cultural gulf between the haves and the have-nots. On another level altogether, humankind as a species may regard the burgeoning machine population as competitors for scarce energy and material resources, even if the net return from the SRS population is positive.

Of course, self-replicating factories are not ends in themselves but have specific purposes - say, to produce certain desired products. The quantity of these products is determined by needs and requirements and is the basis for designing an SRS. Depending on the type of product, factors such as the time when these products need to be available, the production time, and replication time per replica determine the optimum number of replica factories per primary and the number of generations required. The following controls might be used to achieve this condition:

  1. The "genetic" instructions contain a cutoff command after a predetermined number of replicas. After each replica has been constructed one generation command is marked off until at the last predetermined generation the whole process is terminated after the final replica is completed. Besides all this, engineers may have their hands full keeping the SRS replicating on schedule and functioning properly. It is not likely that they will soon be able to do much more than we expect.
  2. A predetermined remote signal from Earth control over a special channel can easily cut the power of the main bus for individual units, groups, or all SRS at any time. Replication energy production shows one of the fundamental differences between biological and mechanical replicating systems as presently conceived. In biological systems energy is generated in distributed form (in each living cell throughout the entire organism) whereas in mechanical systems such as SRS energy is produced centrally in special parts (e.g., power plant, solar cells) and then is distributed to wherever it is needed. This should make control of mechanical systems comparatively easy.
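The generation-counting cutoff in control (1) can be sketched in a few lines. This is purely illustrative - the class and attribute names are invented for the example and correspond to no proposed design - but it shows how each replica inherits a counter one lower than its parent's, so the lineage terminates itself after a predetermined number of generations.

```python
# Illustrative sketch of the "genetic" cutoff command in control (1):
# each replica's instructions carry a generation counter one below its
# parent's, and a factory whose counter reaches zero refuses to replicate.

class FactorySeed:
    def __init__(self, generations_remaining):
        self.generations_remaining = generations_remaining

    def replicate(self):
        """Return a new replica, or None once the cutoff is reached."""
        if self.generations_remaining <= 0:
            return None  # terminal generation: production only, no replication
        return FactorySeed(self.generations_remaining - 1)

# A seed permitted 3 generations yields replicas until the count runs out.
current = FactorySeed(3)
lineage = []
while (child := current.replicate()) is not None:
    lineage.append(child)
    current = child
print(len(lineage))  # prints 3
```

The key property is that the cutoff travels with the "genetic" instructions themselves, so no external monitoring is needed to terminate the process.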

For replicating systems much smaller than factories, say, in the 10^2 to 10^4 kg category, the situation may be somewhat different. One potential problem with such devices is that once started, their multiplication may be difficult to stop. As a reasonably large population accumulates, it may become almost physically impossible for humans to maintain any semblance of control unless certain precautions are taken to severely limit small-machine population expansion. In many ways a large population of low-mass SRS resembles a biological ecology. While the analogy is imperfect, it serves to suggest some useful ideas for automata population control once people determine that direct control of the situation has somehow been lost.

Predation is one interesting possibility. Much as predator animals are frequently introduced in National Parks as a population control measure, we might design predator machines which are either "species specific" (attacking only one kind of SRS whose numbers must be reduced) or a kind of "universal destructor" (able to take apart any foreign machine encountered, stockpiling the parts and banking the acquired information). Such devices are logically possible, as discussed earlier in section 5.2, but would themselves have to be carefully controlled. Note that a linear supply of predators can control an exponentiating population of prey if the process of destruction is, as expected, far more rapid than that of replication.
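The claim that a linear predator supply can control exponential prey can be illustrated with a toy discrete-time model. All parameter values below are arbitrary illustrations, not derived from any SRS design; the point is only that when per-predator destruction is fast relative to replication, the prey population is crushed before exponential growth takes hold, whereas with slow destruction the exponential term eventually wins.

```python
# Toy model: prey SRS replicate at rate r per step, predators are supplied
# linearly (s new predators per step), and each predator dismantles up to
# k prey per step. Parameters are illustrative only.

def simulate(steps, r=0.5, s=10, k=20, prey0=100):
    """Return the prey population history over the given number of steps."""
    prey, predators = float(prey0), 0
    history = []
    for _ in range(steps):
        prey *= 1 + r                           # exponential replication
        predators += s                          # linear predator supply
        prey = max(0.0, prey - k * predators)   # destruction phase
        history.append(prey)
    return history

print(simulate(30)[-1])        # rapid destruction (k=20): prey wiped out
print(simulate(30, k=1)[-1])   # slow destruction (k=1): prey escape control
```

This matches the text's caveat: the linear strategy works only if destruction is far more rapid than replication, since a fixed-rate kill capacity grows linearly while unchecked prey grow exponentially.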

Clearly it is easier to design the solution to this problem into the SRS from the start, as suggested above in reference to larger factory systems. For instance, machines might be keyed to respond to population density, becoming infertile, ceasing operations, reporting (like lemmings) to a central disassembly facility, or even resorting to dueling or cannibalism when crowding becomes too severe. However, a method by which the materials and information accumulated by SRS units during their lifespans can be preserved would be in the best interests of human society.

The unpluggability problem. Many people, suspicious of modern computers and robotics technology, take solace in the fact that "no matter what goes wrong, we can always pull out the plug." Such individuals might insist that humankind always retain ultimate life-and-death control over its machines, as part of the social price to be paid to permit their development. Whether this is advisable, or even necessary, is a question which requires further study. Certainly it is true that our civilization all too easily becomes habituated to its machines, institutions, and large organizations. Could we unplug all our computers today? Could we "unplug" the Social Security Administration? It is difficult, or impossible, and in many cases ill-advised, to retreat from a social or mechanical technology once it has been widely introduced and a really significant change has taken place. Many individuals in our society would prefer to turn back the clock on the industrial revolution, but today this could not be done without the sacrifice of hundreds of millions of human lives and extreme trauma to global civilization.

Further, we must assume that we cannot necessarily pull the plug on our autonomous artificially intelligent species once they have gotten beyond a certain point of intelligent development. The one thing the artificial system may learn is how to avoid a human being "pulling its plug out," in the same way that human beings come to understand how to defend themselves against other people (George, 1977). Consequently, it is imperative that we study ways to assure ourselves that our technological creations will serve to our benefit rather than to our detriment, as best we can, prior to their widespread adoption.

Assuming we wish to retain ultimate control over our creations (by no means a foregone conclusion), the team first considered, as a theoretical issue, the following intriguing problem:

Is it logically possible to design an internal mechanism which permits normal SRS functioning to be interrupted by some external agency, yet which is impossible for the SRS itself to circumvent either by automatic reprogramming or by physical self-reconstruction?

That is, is it impossible to build a machine whose "plug" cannot be "pulled"?

Machine capabilities of the future span a wide spectrum of sophistication of operation. As systems become more complex, individual human beings will come to understand decreasing fractions of the entire machine function. In addition, very advanced devices such as SRS may need to be programmed with primitive survival instincts and repair capabilities if they are to function autonomously. At some point the depth of analysis and sophistication of action available to a robot system may exceed the abilities of human minds to defeat it (should this become necessary). If there is even the slightest possibility that this is so, it becomes imperative that we learn exactly what constellation of machine capabilities might enable an SRS to cross the subtle threshold into "theoretical unpluggability."

To this end, the team subsequently reformulated the unpluggability question as follows:

What is the least sophisticated machine system capable of discovering and circumventing a disabling mechanism placed within it?

While no specific firm conclusions were reached, the team concluded that the simplest machine capable of thus evading human control must incorporate at least four basic categories of intelligence or Al capabilities (Gravander, personal communication, 1980):

  1. Class invention, concept formation, or "abduction"
  2. Self-inspection
  3. Automatic programming
  4. Re-configuration or re-instrumentation capability (especially if the "plug" is in hardware, rather than software)

These four characteristics are necessary preconditions for theoretical unpluggability - a machine lacking any one of them probably could not figure out how to prevent its own deactivation from an external source. Whether the conditions are sufficient is an urgent subject for further research.

Sociobiology of machines. The creation of replicating manufacturing facilities, remotely sited, and for long times left under their own control, poses some very special problems. In order to eliminate the use of humans as much as possible in a harsh environment, these systems of machines should be designed to seek out their own sources of materials; to decide on this basis to invest in duplicates of themselves; to determine their power requirements and see to the construction of requisite new power sources; to monitor their own behaviors; to undertake the repair of machines, including themselves; to determine when machines have, under the conditions obtaining, reached the end of their useful working lives; and so forth. They must operate reliably and resist corrupting signals and commands from within their own numbers, and from without. They must be able to discern when machines (whether of their own sort or not) in their neighborhood are, by their behavior, disrupting or endangering the proper functioning of the system. Since we cannot foresee all of the ways in which the system may be perturbed, we shall have to supply it with goals, as well as some problem-solving or homeostatic capabilities, enabling the machines to solve their own difficulties and restore themselves to proper working order with little or no human assistance.

As SRS make duplicates of themselves, the offspring will, if suited to surroundings different from those of their parents, differ somewhat from them. More of one sort of submachine or subordinate machine may be required, fewer of another. The main "generic" program will undoubtedly increase in size, generation by generation. At removed locations constructor-replicators may symbiotically combine with miners, surveyors, and fabricators to form satellite machine communities differing considerably from the original population.

At this stage it may be that some of the claims made by evolutionary biologists as to the likely origin of complex, social behavior of animal populations may begin to apply to machine populations. Indeed, it may be that the arguments of the sociobiologists will be more applicable to machines than to animals and humans. In the case of animals, and especially in regard to humans, the opponents of the evolutionary biologists insist on the priority of alternative sources for social behavior - namely, individual learning. Behavior need not have its origins in the genome. These opponents of the evolutionary biologists constantly challenge them to specify where in the genome is the locus of selfishness, distrust of strangers, aggression, and the like. This is not really readily done.

However, in the case of machines the locus of behavior can indeed be specified: it is in the program of instructions, and these programs can, like genes, be modified and transmitted to offspring. Though we may not be mere machines driven by our genes, mere real machines are indeed driven by their gene-like programs, and for them, some of the evolutionary biological predictions of the likely resulting system behaviors may apply.

Thus, at the most elementary level, if some one of the SRS machines capable of duplicating itself begins to concentrate on this reproductive activity to the neglect of all other tasks we intend for it, its progeny (possessing the same trait) might soon become dominant in the machine population. But far more complex aberrations and consequent elaboration of machine behavior can arise and be propagated in machine populations of the sophistication we may be forced to employ.

Thus, our machines can reproduce themselves as well as tell whether an encountered machine is "like" themselves or could be one of their offspring. If the structure of an encountered machine is examined and found to be similar to or identical to the machines of one's own making or of one's own system of machines, then such machine should be welcomed, tolerated, repaired, supplied with energy and consumables, and put to work in the common enterprise. If, on the other hand, the structure of an encountered machine deviates greatly from that of any of one's own system of machines even if it is in fact a device of one's own construction which has suffered severe damage or defect of construction - then prudence suggests it should be disabled and dismantled.

It is interesting to note that this "reasonable" kin-preferring behavior could arise generally throughout the machine population quite without it having been made a deliberate part of the programs of machines of the system (Hamilton, 1964). If a single machine of the sort which reproduces ever chances upon the program "trait" of tolerating machines like itself, or aiding or repairing them while ignoring, disabling, or dismantling machines unlike itself and its offspring, then this machine species will tend to increase its numbers at the expense of other reproducing machines (all other things being equal) so that after a few generations all machines, quite without having been given the goal or purpose of preferring their own kind, will have this kin-preferring property. Other types of machines that are less kin-supportive would not leave relatively so many of their kind to further propagate. This is the familiar biological selection principle of differential reproduction.

This argument can be carried further. In a society of machines in which it "pays" to know which machines are your "relations," it will became risky to undertake or to submit to close structural inspection as this will reveal what sort of machine you really are - friend or foe. Instead, behavioral cues will likely develop that signal whether a machine is kin or not. Unfortunately, such signals can equally well be used to deceive. A machine could learn to give the kinship sign even though it is not at all a relation to the encountered machine, or friendly either. It may use the conventional sign of friendship or kinship merely as a means of soliciting undeserved assistance (e.g., repair, materials, energy) from the deceived machine and the system of subordinate machines with which it is associated, or may even use the signals of kinship or friendship as a means of approaching close enough to disable and dismantle the deceived machine.

The evolutionary argument should be cast as follows. Any machine which chances upon a behavioral sign that secures the assistance of a machine or a population of machines will be spared efforts at survival it would otherwise have to undertake on its own, and thus will possess extra resources which can be utilized to undertake the construction of more machines like itself. If the "deceitful signal" behavior is transmitted in the genetic-construction program, then its offspring will also be able to employ the deceitful signal, and will thus produce proportionately more of their kind. The deceitful gene-program machines will increase their numbers, relative to the others, in the machine population. In turn, those machines which chance upon ways of detecting this deceit will be protected against the cheating machines, and will themselves increase their numbers vis-a-vis their "sucker" related machines who will soon be spending more and more time aiding, servicing, and supplying cheaters (thus have fewer resources in the form of time, energy, and materials to reproduce their own kind).

In creating autonomous, intelligent machine systems with which we have no prior experience, it is prudent to inquire as to the possibility of unforeseen dangers to our continued existence. In particular, artificial intelligences could conceivably become adversaries, whether or not they reproduce. Similarly, SRS might become a threat independent of their intelligence. Because of the imminence of advanced AI and replicating systems technologies in the next several decades, such questions are no longer merely theoretical but have a very pragmatic aspect.

We must begin to examine the possible problems in creating artificial intelligences or replicating systems which could conceivably become our adversaries or competitors for scarce resources. It is not too early to begin considering the possible kinds of behaviors which advanced machines might display, and the "machine sociobiology" which may emerge. It seems wise to try to identify early any fundamental distinctions between intelligent "natural" biological and advanced "artificial" machine systems. Finally, we should consider the significance of the development of advanced machine technologies to the future of human evolution and also to the broader sweep of cosmic evolution.

To serve mankind. The most immediate, urgent impetus for the development of automation and machine replicative techniques is to improve, protect, and increase the productivity of human society. One way of achieving the goal of human preservation and improvement is to make our mechanical creations intelligent, so that they can automatically do what is good for us. We want them to do this even if we have forgotten to specify what "good" is in each instance. Perhaps we don't even know in all cases how to define "good." For example, consider what would happen if a physically capable, literal-minded idiot were put at the controls of a bulldozer (e.g., Pvt. Zero in the "Beetle Bailey" comic strip, present-day computers, etc.). If told to "drive the bulldozer into the parking lot," the idiot would do exactly that, regardless of whether or not the lot happened to be full of automobiles.

One rather compact statement of what is required for our protection already exists. This has come to be known as "Asimov's Three Laws of Robotics":

  1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws (Asimov, 1950).

This is an excellent prescription of what is required but not of how to accomplish it. Exactly what do these laws entail? The following list of conditions is certainly necessary for the Three Laws to hold:

  1. A robot may not injure a human being.
  2. A robot must use common sense.
  3. A robot must be intelligent.
  4. A robot must be conscious.

Common sense, intelligence, and consciousness are the essence of artificial intelligence research. Even if we cannot exhaustively enumerate the ways to harm a human in any and all circumstances, a robot with the above four properties would protect people to the best of its ability. If it ever did injure a human being it would be because neither we, nor it, foresaw that possibility. But it would immediately perceive its error and would never make the same kind of mistake again. We can do no better ourselves, and usually we do worse.

At the present time we have only the most rudimentary knowledge of what common sense, intelligence, and consciousness are, let alone how to insert these qualities into a robot (Good, 1965). As our computers become ever more complex and pervasive, there is the distinct possibility that these characteristics will arise spontaneously. In this case we would be involved in a totally uncontrolled experiment (Hogan, 1979). If conditions 1-3 were not yet fulfilled, but condition 4 was, the outcome could be catastrophic for mankind. For reasons of self-preservation, we must pursue AI research with the goal of ensuring that capabilities 1-3 are achieved first.

The problem with this entire approach is that any machine sufficiently sophisticated to engage in reproduction in largely unstructured environments, and having in general the capacity for survival, probably must also be capable of a certain amount of automatic or self-reprogramming. Such an SRS may in theory be able to "program around" any normative rules of behavior (such as the Three Laws) with which it is endowed by its creators. Thus, it might modify its own patterns of behavior, as determined by its basic goals and learned motivational structure.

It is possible to conceive of a machine design containing "read-only" hard-wired goal structures. But hardware specialists will admit that such procedures can be circumvented by sufficiently clever software in large, complex systems. Further, since an SRS must be quite adept at physical manipulation, it is likely that it will be able to rewire its own structure in accordance with perceived operational objectives - assuming it can analyze the functions of its own components as needed for repair or maintenance operations. It may be of no use to try to distribute the hard-wired functions throughout the whole machine, or a large subset thereof, in hopes that the system will be unable to comprehend such a large fraction of itself simultaneously. Omitting the special functions from the machine's stored genetic description of itself would probably be equally ineffectual. Laing (1975, 1977) has shown that machine reproduction by complete self-inspection - wherein the parent knows virtually nothing about its own structure to begin with - is quite possible, and has provided several logical designs for such machines. Consequently, it is not possible to logically exclude conscious alteration of hard-wired robot "commandments" by intelligent self-replicating machines desirous of doing so.

It would therefore appear nearly impossible, as with people, to absolutely guarantee the "good" behavior of any common-sense, intelligent, conscious SRS. However, also like people, machines can be taught "right" and "wrong," as well as the logical rationales for various codes of behavior. And they can probably be expected to remain even more faithful to a given moral standard than people.

SRS population control. An exponentially increasing number of factories (even if the rate is not sustained indefinitely) will seem especially threatening and psychologically alarming to many. Such a situation will draw forth visions of a "population explosion," heated discussions of lebensraum, cancerous growth, and the like. Nations not possessing replicating systems technology will fear an accelerating economic and cultural gulf between the haves and the have-nots. On another level altogether, humankind as a species may regard the burgeoning machine population as competitors for scarce energy and material resources, even if the net return from the SRS population is positive.

Of course, self-replicating factories are not ends in themselves but have specific purposes - say, to produce certain desired products. The quantity of these products is determined by needs and requirements and is the basis for designing an SRS. Depending on the type of product, factors such as the time when these products need to be available, the production time, and replication time per replica determine the optimum number of replica factories per primary and the number of generations required. The following controls might be used to achieve this condition:

  1. The "genetic" instructions contain a cutoff command after a predetermined number of replicas. After each replica has been constructed, one generation command is marked off, until at the last predetermined generation the whole process is terminated upon completion of the final replica. Besides all this, engineers may have their hands full keeping the SRS replicating on schedule and functioning properly; it is not likely that they will soon be able to do much more than we expect.
  2. A predetermined remote signal from Earth control over a special channel can easily cut the power of the main bus for individual units, groups, or all SRS at any time. Replication energy production shows one of the fundamental differences between biological and mechanical replicating systems as presently conceived. In biological systems energy is generated in distributed form (in each living cell throughout the entire organism), whereas in mechanical systems such as SRS energy is produced centrally in special parts (e.g., power plant, solar cells) and then distributed to wherever it is needed. This should make control of mechanical systems comparatively easy.
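
The generational cutoff of control (1) can be sketched in a few lines. This is only an illustrative model of the counting scheme described above; the function names, the replica budget, and the generation limit are all assumptions for the example, not values from the study.

```python
# Toy sketch of control (1): each factory's "genetic" instructions carry a
# generation counter; once the predetermined last generation is reached,
# the cutoff command terminates the lineage after the final replica.

def replicate(generation, max_generations, replicas_per_factory):
    """Return offspring generation numbers, or [] once the cutoff applies."""
    if generation >= max_generations:
        return []                     # cutoff command: no further replicas
    return [generation + 1] * replicas_per_factory

def population_by_generation(max_generations, replicas_per_factory):
    """Count factories produced in each generation, from one primary."""
    counts = [1]
    frontier = [0]                    # generation numbers of active factories
    while frontier:
        offspring = []
        for g in frontier:
            offspring.extend(replicate(g, max_generations, replicas_per_factory))
        if offspring:
            counts.append(len(offspring))
        frontier = offspring
    return counts

# Two generations of three replicas each: 1 primary, then 3, then 9 factories.
print(population_by_generation(max_generations=2, replicas_per_factory=3))  # [1, 3, 9]
```

The point of the sketch is that the population is bounded in advance: the designers choose the generation limit and replicas per factory, and the total factory count follows deterministically.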

For replicating systems much smaller than factories, say, in the 10² to 10⁴ kg category, the situation may be somewhat different. One potential problem with such devices is that once started, their multiplication may be difficult to stop. As a reasonably large population accumulates, it may become almost physically impossible for humans to maintain any semblance of control unless certain precautions are taken to severely limit small-machine population expansion. In many ways a large population of low-mass SRS resembles a biological ecology. While the analogy is imperfect, it serves to suggest some useful ideas for automata population control once people determine that direct control of the situation has somehow been lost.

Predation is one interesting possibility. Much as predator animals are frequently introduced in National Parks as a population control measure, we might design predator machines which are either "species specific" (attacking only one kind of SRS whose numbers must be reduced) or a kind of "universal destructor" (able to take apart any foreign machine encountered, stockpiling the parts and banking the acquired information). Such devices are logically possible, as discussed earlier in section 5.2, but would themselves have to be carefully controlled. Note that a linear supply of predators can control an exponentiating population of prey if the process of destruction is, as expected, far more rapid than that of replication.
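
The closing claim - that a fixed ("linear") supply of predators can contain exponential prey - is easy to check numerically. All rates below are illustrative assumptions chosen for the example.

```python
# Numerical sketch: prey machines double each step; a fixed number of
# predator machines each dismantle a fixed number of prey per step.

def simulate(prey, predators, growth, kills_per_predator, steps):
    """Prey population after repeated rounds of replication then predation."""
    for _ in range(steps):
        prey = prey * growth                                  # exponential replication
        prey = max(prey - predators * kills_per_predator, 0)  # linear removal
    return prey

# Unchecked, a doubling population of 100 reaches 25600 in eight steps.
print(simulate(100, 0, 2, 0, 8))      # 25600
# Ten predators each dismantling 15 prey per step eliminate it instead.
print(simulate(100, 10, 2, 15, 8))    # 0
```

The balance is delicate, however: the predators' total removal rate must exceed the prey's per-step increase while the prey population is still small, or the exponential wins regardless of how many steps the predators are given.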

Clearly it is easier to design the solution to this problem into the SRS from the start, as suggested above in reference to larger factory systems. For instance, machines might be keyed to respond to population density, becoming infertile, ceasing operations, reporting (like lemmings) to a central disassembly facility, or even resorting to dueling or cannibalism when crowding becomes too severe. However, a method by which the materials and information accumulated by SRS units during their lifespans can be preserved would be in the best interests of human society.

The unpluggability problem. Many people, suspicious of modern computers and robotics technology, take solace in the fact that "no matter what goes wrong, we can always pull out the plug." Such individuals might insist that humankind always retain ultimate Life-and-death control over its machines, as part of the social price to be paid to permit their development. Whether this is advisable, or even necessary, is a question which requires further study. Certainly it is true that our civilization all too easily becomes habituated to its machines, institutions, and large organizations. Could we unplug all our computers today? Could we "unplug" the Social Security Administration? It is difficult, or impossible, and in many cases ill-advised, to retreat from a social or mechanical technology once it has been widely introduced and a really significant change has taken place. Many individuals in our society would prefer to turn back the clock on the industrial revolution, but today this could not be done without the sacrifice of hundreds of millions of human lives and extreme trauma to global civilization.

Further, we must assume that we cannot necessarily pull the plug on our autonomous artificially intelligent species once they have gotten beyond a certain point of intelligent development. The one thing the artificial system may learn is how to avoid a human being "pulling its plug out," in the same way that human beings come to understand how to defend themselves against other people (George, 1977). Consequently, it is imperative that we study ways to assure ourselves that our technological creations will serve to our benefit rather than to our detriment, as best we can, prior to their widespread adoption.

Assuming we wish to retain ultimate control over our creations (by no means a foregone conclusion), the team first considered, as a theoretical issue, the following intriguing problem:

Is it logically possible to design an internal mechanism which permits normal SRS functioning to be interrupted by some external agency, yet which is impossible for the SRS itself to circumvent either by automatic reprogramming or by physical self-reconstruction?

That is, is it impossible to build a machine whose "plug" cannot be "pulled"?

Machine capabilities of the future span a wide spectrum of sophistication of operation. As systems become more complex, individual human beings will come to understand decreasing fractions of the entire machine function. In addition, very advanced devices such as SRS may need to be programmed with primitive survival instincts and repair capabilities if they are to function autonomously. At some point the depth of analysis and sophistication of action available to a robot system may exceed the abilities of human minds to defeat it (should this become necessary). If there is even the slightest possibility that this is so, it becomes imperative that we learn exactly what constellation of machine capabilities might enable an SRS to cross the subtle threshold into "theoretical unpluggability."

To this end, the team subsequently reformulated the unpluggability question as follows:

What is the least sophisticated machine system capable of discovering and circumventing a disabling mechanism placed within it?

While no firm conclusions were reached, the team judged that the simplest machine capable of thus evading human control must incorporate at least four basic categories of intelligence or AI capabilities (Gravander, personal communication, 1980):

  1. Class invention, concept formation, or "abduction"
  2. Self-inspection
  3. Automatic programming
  4. Re-configuration or re-instrumentation capability (especially if the "plug" is in hardware, rather than software)

These four characteristics are necessary preconditions for theoretical unpluggability - a machine lacking any one of them probably could not figure out how to prevent its own deactivation from an external source. Whether the conditions are sufficient is an urgent subject for further research.

Sociobiology of machines. The creation of replicating manufacturing facilities, remotely sited, and for long times left under their own control, poses some very special problems. In order to eliminate the use of humans as much as possible in a harsh environment, these systems of machines should be designed to seek out their own sources of materials; to decide on this basis to invest in duplicates of themselves; to determine their power requirements and see to the construction of requisite new power sources; to monitor their own behaviors; to undertake the repair of machines, including themselves; to determine when machines have, under the conditions obtaining, reached the end of their useful working lives; and so forth. They must operate reliably and resist corrupting signals and commands from within their own numbers, and from without. They must be able to discern when machines (whether of their own sort or not) in their neighborhood are, by their behavior, disrupting or endangering the proper functioning of the system. Since we cannot foresee all of the ways in which the system may be perturbed, we shall have to supply it with goals, as well as some problem-solving or homeostatic capabilities, enabling the machines to solve their own difficulties and restore themselves to proper working order with little or no human assistance.

As SRS make duplicates of themselves, the offspring will, if suited to surroundings different from those of their parents, differ somewhat from them. More of one sort of submachine or subordinate machine may be required, fewer of another. The main "genetic" program will undoubtedly increase in size, generation by generation. At removed locations constructor-replicators may symbiotically combine with miners, surveyors, and fabricators to form satellite machine communities differing considerably from the original population.

At this stage it may be that some of the claims made by evolutionary biologists as to the likely origin of complex social behavior in animal populations will begin to apply to machine populations. Indeed, it may be that the arguments of the sociobiologists will be more applicable to machines than to animals and humans. In the case of animals, and especially in regard to humans, the opponents of the evolutionary biologists insist on the priority of an alternative source for social behavior - namely, individual learning. Behavior need not have its origins in the genome. These opponents of the evolutionary biologists constantly challenge them to specify where in the genome lies the locus of selfishness, distrust of strangers, aggression, and the like. This is not readily done.

However, in the case of machines the locus of behavior can indeed be specified: it is in the program of instructions, and these programs can, like genes, be modified and transmitted to offspring. Though we may not be mere machines driven by our genes, mere real machines are indeed driven by their gene-like programs, and for them, some of the evolutionary biological predictions of the likely resulting system behaviors may apply.

Thus, at the most elementary level, if some one of the SRS machines capable of duplicating itself begins to concentrate on this reproductive activity to the neglect of all other tasks we intend for it, its progeny (possessing the same trait) might soon become dominant in the machine population. But far more complex aberrations and consequent elaboration of machine behavior can arise and be propagated in machine populations of the sophistication we may be forced to employ.

Thus, our machines can reproduce themselves as well as tell whether an encountered machine is "like" themselves or could be one of their offspring. If the structure of an encountered machine is examined and found to be similar or identical to the machines of one's own making or of one's own system of machines, then such a machine should be welcomed, tolerated, repaired, supplied with energy and consumables, and put to work in the common enterprise. If, on the other hand, the structure of an encountered machine deviates greatly from that of any of one's own system of machines - even if it is in fact a device of one's own construction which has suffered severe damage or defect of construction - then prudence suggests it should be disabled and dismantled.

It is interesting to note that this "reasonable" kin-preferring behavior could arise generally throughout the machine population quite without it having been made a deliberate part of the programs of machines of the system (Hamilton, 1964). If a single machine of the sort which reproduces ever chances upon the program "trait" of tolerating machines like itself, or aiding or repairing them while ignoring, disabling, or dismantling machines unlike itself and its offspring, then this machine species will tend to increase its numbers at the expense of other reproducing machines (all other things being equal) so that after a few generations all machines, quite without having been given the goal or purpose of preferring their own kind, will have this kin-preferring property. Other types of machines that are less kin-supportive would not leave relatively so many of their kind to further propagate. This is the familiar biological selection principle of differential reproduction.
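
The differential-reproduction principle invoked here can be made concrete with a two-line model: if the kin-preferring program trait confers even a modest per-generation reproductive advantage, it approaches fixation. The 10% advantage and the 50/50 starting split are illustrative assumptions, not figures from the study.

```python
# Minimal sketch of differential reproduction: machines whose program
# happens to aid copies of itself reproduce slightly faster per generation,
# and the trait spreads through the population without ever being a goal.

def kin_share(generations, kin_advantage=1.10):
    """Fraction of kin-preferring machines, starting from a 50/50 split."""
    kin, other = 1.0, 1.0
    for _ in range(generations):
        kin *= kin_advantage   # aid from relatives boosts reproductive output
        other *= 1.0           # baseline reproduction, no kin support
    return kin / (kin + other)

print(round(kin_share(0), 2))    # 0.5 - evenly split at the outset
print(round(kin_share(50), 2))   # 0.99 - near-fixation after 50 generations
```

No machine in this model "prefers" anything; the trait dominates purely because its bearers leave relatively more copies, which is exactly the selection argument in the paragraph above.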

This argument can be carried further. In a society of machines in which it "pays" to know which machines are your "relations," it will become risky to undertake or to submit to close structural inspection, as this will reveal what sort of machine you really are - friend or foe. Instead, behavioral cues will likely develop that signal whether a machine is kin or not. Unfortunately, such signals can equally well be used to deceive. A machine could learn to give the kinship sign even though it is not at all a relation of the encountered machine, or friendly either. It may use the conventional sign of friendship or kinship merely as a means of soliciting undeserved assistance (e.g., repair, materials, energy) from the deceived machine and the system of subordinate machines with which it is associated, or may even use the signals of kinship or friendship as a means of approaching close enough to disable and dismantle the deceived machine.

The evolutionary argument should be cast as follows. Any machine which chances upon a behavioral sign that secures the assistance of a machine or a population of machines will be spared efforts at survival it would otherwise have to undertake on its own, and thus will possess extra resources which can be utilized to undertake the construction of more machines like itself. If the "deceitful signal" behavior is transmitted in the genetic-construction program, then its offspring will also be able to employ the deceitful signal, and will thus produce proportionately more of their kind. The deceitful gene-program machines will increase their numbers, relative to the others, in the machine population. In turn, those machines which chance upon ways of detecting this deceit will be protected against the cheating machines, and will themselves increase their numbers vis-a-vis their "sucker" relatives, who will soon be spending more and more time aiding, servicing, and supplying cheaters (and thus will have fewer resources in the form of time, energy, and materials with which to reproduce their own kind).
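
This sucker-cheater-detector dynamic can be illustrated with a toy discrete replicator model. The three strategies and all fitness values below are illustrative assumptions: "suckers" aid any machine giving the kinship sign, cheaters fake the sign to extract aid, and detectors screen signals and so cannot be exploited.

```python
# Toy replicator model of the deceitful-signal arms race described above.

def step(freqs, cost=0.3, gain=0.3):
    """One generation of fitness-proportional reproduction."""
    s, c, d = freqs
    w_s = 1.0 - cost * c   # suckers lose resources to every cheater met
    w_c = 1.0 + gain * s   # cheaters profit only while suckers remain
    w_d = 1.0              # detectors neither give nor lose undeserved aid
    mean = s * w_s + c * w_c + d * w_d
    return (s * w_s / mean, c * w_c / mean, d * w_d / mean)

freqs = (0.8, 0.1, 0.1)    # mostly suckers, a few cheaters and detectors
for _ in range(300):
    freqs = step(freqs)

s, c, d = freqs
print(round(s, 3), round(c, 3), round(d, 3))
```

Suckers are driven out; detectors, being immune, hold their ground against the cheaters, who stop gaining once the exploitable population is exhausted - matching the verbal argument, in which deceit spreads at the suckers' expense and detection is protective.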

It is even possible that in a largely autonomous system of reproducing machines a form of reciprocal altruism will arise, in which machines behave in seemingly unselfish fashion toward other machines which are not kin (and are not deceitfully posing as kin). The evolutionary biologists, especially Trivers (1971), have argued that in situations where the reproducing entities (1) have long lifespans, (2) remain in contact with others of their group, and (3) experience situations in which they are mutually dependent, reciprocal altruism may arise out of chance variation and evolutionary selection. In human terms, if helpful actions can be taken which are low risk to the giver and have a high value to the receiver (high/low risk defined relative to the impact on individual reproductive potentials), and there is the likelihood that the individuals will remain in fairly close association for a long time, then any genetic predisposition to take altruistic actions will tend to spread in the population. For, in effect, it will lead to reciprocal assistance in times of need, and to the greater survival (and hence increased breeding opportunity) of those members of the population bearing this genetic trait. A good example is that of an individual saving another from drowning by reaching out a branch. The risk to the giver is small and the benefit to the receiver is great, and over a long time the benefits (in terms of increased numbers of offspring) are likely to be great to those members of the population genetically predisposed to behave in this reciprocally altruistic fashion.
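
Under these three conditions, a reciprocity rule needs only a memory of past refusals to resist one-sided exploitation. The following sketch is a hypothetical illustration; the alternating-need setup and the cost and benefit values are assumptions chosen for the example.

```python
# Two long-lived machines meet repeatedly; each round one is in need, and
# the other may aid it at small cost for large benefit. A machine aids only
# if its policy allows AND it has not itself been refused aid before -
# a minimal memory-based reciprocity rule.

def pair_payoffs(rounds, a_helps, b_helps, cost=1, benefit=3):
    """Total resources gained by machines A and B over repeated encounters."""
    a, b = 0, 0
    a_refused = b_refused = False        # memory of having been refused aid
    for r in range(rounds):
        if r % 2 == 0:                   # A is in need; B may aid
            if b_helps and not b_refused:
                b -= cost
                a += benefit
            else:
                a_refused = True         # A remembers the refusal
        else:                            # B is in need; A may aid
            if a_helps and not a_refused:
                a -= cost
                b += benefit
            else:
                b_refused = True
    return a, b

print(pair_payoffs(10, True, True))    # (10, 10): mutual aid enriches both
print(pair_payoffs(10, True, False))   # (0, 0): one refusal ends cooperation
```

The asymmetry between small cost and large benefit is what makes the reciprocal strategy pay over a long association, while the memory of refusals is what keeps a non-reciprocator from exploiting it - the same memory requirement noted in the next paragraph.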

Needless to say, the opportunities for deceit and cheating in the case of hoped-for altruistic reciprocity are even more numerous and complex than for kin selection strategies. In particular, each individual (animal or machine) must possess the memory capacity to remember the altruistic acts and the partners in them, since the opportunity for reciprocity may not arise for some time. Also, some cost/benefit analysis must take place in which the value of the act, the character of the reciprocity partner, the capacity of this partner to repay, and the likely lifespans of the giver and receiver must all be carefully weighed. Some evolutionary biologists would go so far as to claim that purely genetic (and hence "mechanical") workings-out of such subtle relationships drove the hominid brain, in a few million years, from dullness to sophistication. A few even suggest that the origins of human language lie in the process of making claims of kinship (while possibly being no relation at all), of offering friendship (while possibly intending harm), and of promising future assistance (while intending, when called upon, to turn away).

If our machines attain this level of behavioral sophistication, it may finally not be amiss to ask whether they have not become so like us that we have no further right to command them for our own purposes, and so should quietly emancipate them.

Entropy, SRS, and biology. Nature has provided on Earth an example of the primary generation of self-replicating biological systems from energy and matter alone. The second law of thermodynamics states that the entropy of energy continually increases. At the moment of the Big Bang it may have been zero; today it spreads between a lower boundary set by neutron stars and black holes and an upper boundary indicated by the 3 K background radiation. At the same time, matter has decreased in entropy from practical infinity at the moment of the Big Bang toward a lower boundary, evolving from hydrogen atoms to light elements, heavy elements, life, and the human brain - toward ever more complex structures and generally more intelligent matter, limited by the upper boundary of elemental particles. Matter tends to evolve toward greater complexity at the expense of energy, which in turn acquires increasing entropy. (See fig. 5.25.)

Figure 5.25 -
Natural evolution of complexity of matter in the cosmos.

The generation of a desired material order which may represent an SRS and its self-description would recapitulate biology-like evolution in engineering terms. However, there may be one fundamental difference between the two. Living organisms have two separate information systems that help determine their behavior: DNA and the brain. Between these two there is no direct information transfer, perhaps instead only indirect sociobiological influences. DNA information is initially provided to the organism, whereas brain information is gradually acquired through diverse environmental interactions. In an SRS there is no such differentiation - initially provided information is the principal driver of actions and is accessible to the SRS intelligence (fig. 5.26).

Figure 5.26 -
Accessibility of biological and machine stored information.

Man-machine co-evolution. In the very long term, there are two possibilities for the future of mankind: Either we are a biological "waystation" in the evolutionary scheme of the universe, or else we are an evolutionary dead end. If we continue to be limited to our exceedingly fragile existence on spaceship Earth, a natural disaster or our own jingoistic or ecological foolhardiness is almost certain to terminate our existence perhaps centuries or millennia from today. Barring these unpleasant circumstances, our civilization, without the challenge of a frontier, may stagnate while other beings flourish and populate the universe.

Replicating systems technology gives humanity other options for continued and fruitful evolution. We can create autonomous (unmanned) SRS - in a very real intellectual and material sense our offspring - and send them out into the cosmos. Alternatively, we could create a symbiotic human-machine system in which people would inhabit a vast self-reproducing habitat. This is analogous to creating an artificial Earth which replicates itself whenever its population of humans fills the available space or saturates the energy supply or waste disposal facilities. In the process of working to achieve the second goal, mankind could use SRS to attempt terraforming other worlds. Experiments could be performed on planetary-scale weather modification with relevance to maintaining or changing the Earth's climate.

At present, machines already "reproduce" themselves but only with human help. Such mutualism is commonplace in biology - many flowering plants require cross-pollination by insects to survive. The most successful organism is one which can enlist the most reproductive assistance from other creatures. It has often been suggested that an extraterrestrial biologist who chose Los Angeles as the site for his field study of Earth might well conclude that the automobile was the dominant lifeform on this planet and that humans represented its detachable brains and reproductive organs. Indeed, further observation might suggest that many people are redundant: although the human population of Los Angeles has remained relatively constant during the past decade, the car population has continued to increase.

This issue has tremendous importance to the question of human survival and long-term evolution. Asks Burhoe (1971): "Will we become the 'contented cows' or the 'household pets' of the new computer kingdom of life? Or will Homo sapiens be exterminated as Homo sapiens has apparently exterminated all the other species of Homo?" Perhaps machine-wrecking New Luddites of the future will band together to form secret organizations devoted to "carbon power" and the destruction of all silicon microelectronic chips and robotic devices.

Are we creating a new "kingdom of life," as significant as the emergence and separation of plant and animal kingdoms billions of years ago on Earth? Or perhaps such an event has even greater import, since "machine life" is of a totally different material substance than either animal or plant life, and because "machine life" very possibly is a form which cannot evolve by direct natural routes but instead requires a naturally evolved biological creator. In addition, while human brains process data at a rate of about 10^10 bits/sec/kg, silicon computer microprocessors operate at 10^16 to 10^20 bits/sec/kg. This enormous disparity in potential intelligence has given some people great cause for alarm. For example, according to Werley (1974):

In terms of the 4.5 billion years of carbon-based life on Earth, the advent of machinery has been amazingly abrupt. Yet the evolution of machines is subject to the same laws as the evolution of ordinary carbon-based life. Machines have also evolved toward an increased biomass, increased ecological efficiency, maximal reproduction rate, proliferation of species, mobility, and a longer lifespan. Machines, being a form of life, are in competition with carbon-based life. Machines will make carbon-based life extinct.

Not everyone is so pessimistic. Of course, if we create SRS then we will find ourselves co-inhabiting the universe with an alien race of beings. But the ultimate outcome is unknown: we could dominate them, they could dominate us, we could co-exist as separate species, or we could form a symbiotic relationship. This last is the most exciting possibility. Humankind could achieve the simultaneous perpetuation and development of its being and the expansion of its niche in the Universe. At the price of being a part of a larger system, mankind could achieve immortality for itself. The Earth was a gift of creation, but someday people may have the opportunity to make many more such systems of their own choosing.

Automated space habitats could serve as extraterrestrial refuges for humanity and other terrestrial lifeforms that man might choose to bring along as insurance against global terrestrial catastrophes. The dispersal of humankind to many spatially separated ecosystems would ensure that no planetary-scale disaster, and, as people travel to other stars, no stellar-scale or (ultimately) galactic-scale event could threaten the destruction of the entire species and its accomplishments.

5.5.3 Philosophical, Ethical and Religious Questions

New developments in science and technology frequently have profound religious and philosophical consequences. The observation that, rather than being the center of a rather small universe, the Earth is but a small frail speck of a spacecraft in an unimaginably enormous universe is only just now beginning to be appreciated and woven into the fabric of human religion, philosophy, and culture. The existence of an alien race of beings, as alive as we are, would similarly challenge our old beliefs. We may encounter this alien race either through SETI or through our own technological creation.

According to British Agriculture Minister Peter Walker, "Uniquely in history, we have the circumstances in which we can create Athens without the slaves." However, if robots gain intelligence, sensitivity, and the ability to replicate, might not they be considered legal persons, hence slaves? Is mankind creating a new form of life to enthrall to its bidding? Is it immoral to subjugate an entity of one's own creation? (Modern jurisprudence, in contrast to ancient Roman law, does not permit a parent to enslave his child.) Questions of "machine rights" or "robot liberation" undoubtedly will arise in the future. And if the intelligence or sensitivity of robots ever exceeds that of humankind, ought we grant them "civil rights" superior to our own? Many ethical philosophers, particularly those who support the contemporary "animal liberation" movement, might answer in the affirmative.

Could a self-reproducing, evolving machine have a concept of God? It must understand the concept of creation, since it itself creates other machines during the processes of self-replication and growth. Thus, it should recognize the role of creator. If it was aware that mankind had created it, would it view its creator as a transcendent active moral entity simply because of our role as creator? Or would it tend to view humanity much as we view lemurs and chimpanzees - ancient species that served as an important link in an evolutionary chain, but which is now merely another "lower order" of life? Would humankind be seen as nothing more than an evolutionary precursor?

Perhaps not. Homo sapiens evolved from more primitive mammals, not by conscious design but rather by evolution acting through differential reproduction in response to arbitrary environments. It would be silly for people to revere mammals as their gods - these animals did nothing to actively cause the emergence of the human race. On the other hand, humans may purposely engender the creation of intelligent reproducing machines whose emergent philosophy we are considering. Our role is clearly much more than that of passive precursor; rather, it is one of active creator - conceiving, planning, designing, developing, building, programming, and deploying the SRS. It seems plausible that, for this reason, mankind might also expect to play a more active role in any "machine theology" that might ultimately develop.

Related theological issues include: Could conscious, intelligent machines have a soul? Or, what is for many purposes equivalent, will they think they have a soul? How will human religions respond to the prospect of an intelligent machine capable of self-replication? Are there any Scriptural prohibitions or pronouncements applicable in this matter? Is it possible to view the machine as possessing a "soul"?

What of man's view of himself? He now takes pride in his uniqueness. How will he adjust to being just an example of the generic class "intelligent creature"? On the other hand, the concept of "God" may take as much a beating as the notion of "man." After all, He is special now because He created us. If we create another race of beings, then are we not ourselves, in some similar sense, gods?

Is ethics as a concept of moral behavior a purely human or purely biological construct, or is the notion tied to evolutionary universals and environmental/developmental imperatives which will prove equally applicable to advanced intelligent machines? If machines are capable of developing their own systems of "ethics," it would probably appear as alien to human eyes as does the behavior of other animal species (e.g., the apparent "cruelty" of many insect species).

Will advanced machines have any artistic urges, a sense of humor, curiosity, or a sense of irony, or are these kinds of responses confined exclusively to biological creatures capable of displaying emotion? It is unknown whether machines even need emotionality - we are only beginning to understand the functions of these responses in mammals and humans.

Will a vast industrialized lunar complex of interacting systems be vulnerable to catastrophic accidents and breakdowns, or to attack, subversion, or disruption, either by unexpected machine responses generated out of the complexity of their interactions, or by the interference of one or more unfriendly powers on Earth? Are there subtle ways in which the lunar complex could be subverted? SRS, to the extent they are highly sophisticated and autonomous machines, may be subject to forms of attack and subversion not hitherto realized. Spurious signals may be injected, or foreign machines may enter the works, for example. How might subversive signals and invading software "viruses" be detected and resisted? What identification of friendly and unfriendly machines should be employed? Which is most reliable? What means of information and control message security should be adopted? These questions will take on greater urgency as SRS come to represent ever-increasing shares of the global industrial economy.

Finally, might replicated robot warriors, war machines, or other SRS-derived combat systems make war "too horrible to contemplate"? Perhaps machine wars will still be fought, but will be exported into space to preserve the Earth. Maybe all conflicts will be fought only in computer simulations as "war games"? Or, the availability of sophisticated autonomous fighting machines might lead instead to an increase at least in small-scale wars, because of the low cost of such devices, the unlikelihood of human injury in autonomously waged conflicts, and because of possible increasing human boredom in a society of extreme physical safety and material hyperabundance due to superautomation.

5.5.4 Cosmological Implications

According to Valdes and Freitas (1980), any sentient extraterrestrial civilization desiring to explore the Galaxy beyond 100 light-years from its home star should find it more efficient and economical to use self-replicating star probes because of the benefits of exponentiation. This will secure the largest quantity of data about extrasolar systems by the end of an exploration program of some fixed duration. The entire Galaxy can be explored in times on the order of 10^6 years assuming interstellar cruising speeds on the order of 0.1c, now considered feasible using foreseeable human technology (Martin, 1978). Many who have written on the subject of theoretical galactic demographics have suggested that most extraterrestrial races probably will be found 100 to 1000 light-years from Earth and beyond. Hence it may be concluded that the most likely interstellar messenger probe we may expect to receive will be of the reproducing variety.
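The exponentiation argument can be made concrete with a back-of-the-envelope sketch. Every parameter below is an illustrative assumption chosen for this example, not a figure taken from Valdes and Freitas (1980):

```python
# Hedged sketch of why replicating probes explore exponentially fast.
# All parameters are illustrative assumptions, not figures from the text.
STARS_TO_VISIT = 10**11     # rough number of star systems in the Galaxy
COPIES_PER_STOP = 10        # probes manufactured at each system visited
HOP_LY = 10                 # assumed distance between neighboring stars, ly
SPEED_C = 0.1               # cruise speed as a fraction of lightspeed
REPLICATION_YEARS = 500     # assumed time to build copies at each stop

# Count the generations of ten-fold branching needed before the probe
# population is large enough to have reached every system:
generations, covered = 0, 1
while covered < STARS_TO_VISIT:
    covered *= COPIES_PER_STOP
    generations += 1

# Each generation costs one interstellar hop plus one replication period:
branching_years = generations * (HOP_LY / SPEED_C + REPLICATION_YEARS)
print(f"{generations} generations, ~{branching_years:.0f} years of branching")
```

With these numbers only 11 generations of branching are needed, costing a few thousand years; the 10^6-year figure in the text is instead set by the travel time across the ~10^5 light-year galactic disk at 0.1c. Replication depth is never the bottleneck, which is the point of the argument.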

One of the tremendous advantages of interstellar probes over interstellar beacons in the Search for Extraterrestrial Intelligence (SETI) is that probes may serve as cosmic "safety deposit boxes" for the cultural treasures of a long-perished civilization (Freitas, 1980d). The gold-anodized Voyager records are a primitive attempt to achieve just this sort of cultural immortality (Sagan, 1978). Starfaring self-replicating machines should be especially capable of maintaining themselves against the disordering effects of long periods of time, hence SRS will be preferentially selected for survival over nonreproducing systems. This fact, together with the aforementioned preference for using SRS for very long-term, large-distance galactic exploration implies that any alien machine we might find in our own solar system (as part of a dedicated SETI effort; see Freitas and Valdes, 1980) still in adequate working order will most probably be a replicating system.

A number of fundamental but far-reaching ethical issues are raised by the possible existence of replicating machines in the Galaxy. For instance, is it morally right, or equitable, for a self-reproducing machine to enter a foreign solar system and convert part of that system's mass and energy to its own purposes? Does an intelligent race legally "own" its home sun, planets, asteroidal materials, moons, solar wind, and comets? Does it make a difference if the planets are inhabited by intelligent beings, and if so, is there some lower threshold of intellect below which a system may ethically be "invaded" or expropriated? If the sentient inhabitants lack advanced technology, or if they have it, should this make any difference in our ethical judgment of the situation?

Oliver (1975) has pointed out that the number of intelligent races that have existed in the past may be significantly greater than those presently in existence. Specifically, at this time there may exist perhaps only 10% of the alien civilizations that have ever lived in the Galaxy - the remaining 90% having become extinct. If this is true, then 9 of every 10 replicating machines we might find in the Solar System could be emissaries from long-dead cultures (fig. 5.27).

Figure 5.27 -
Population of extraterrestrial civilizations as a function of galactic time.
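Oliver's estimate above follows from a simple steady-state model: if civilizations arise at a roughly constant rate over a history of length T and each survives a mean lifetime L, the fraction alive at any moment is roughly L/T. The two timescales below are assumed values chosen only to reproduce the 10%/90% split quoted in the text:

```python
# Minimal sketch of a steady-state survivorship estimate; both
# timescales are assumptions, not figures from Oliver (1975).
GALACTIC_HISTORY_YEARS = 10**10   # span over which civilizations have arisen
MEAN_LIFETIME_YEARS = 10**9       # assumed mean civilization lifetime

# Constant birth rate + finite lifetime L within history T implies
# that the fraction of all civilizations alive "now" is roughly L/T:
fraction_alive = MEAN_LIFETIME_YEARS / GALACTIC_HISTORY_YEARS
fraction_extinct = 1 - fraction_alive

print(f"alive: {fraction_alive:.0%}, extinct: {fraction_extinct:.0%}")
```

Under these assumptions, 9 of every 10 probes encountered would indeed hail from senders long since extinct.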

If we do in fact find such machines and are able to interrogate them successfully, we may become privy to the doings of incredibly old alien societies long since perished. These societies may lead to many others, so we may be treated, not just to a marvelous description of the entire biology and history of a single intelligent race, but also to an encyclopedic travelogue describing thousands or millions of other extraterrestrial civilizations known to the creators of the probe we are examining. Probes will likely contain at least an edited version of the sending race's proverbial "Encyclopedia Galactica," because this information is essential if the probe is to make the most informed and intelligent autonomous decisions during its explorations.

Further, if the probe we find has been waiting near our Sun for long enough, it may have observed such Solar System phenomena as the capture of Phobos, the upthrusting of the Rocky Mountains or the breakup of Pangaea, the formation of the Saturnian rings, the possible ejection of Pluto from Neptunian orbit, the possible destruction of a planet in what is now the Asteroid Belt, the origin of the Moon, or even the formation of our own planetary system. Perhaps it could provide actual visual images of Earth during the Jurassic or Carboniferous eras, or data on the genomes of long extinct reptiles (e.g., dinosaurs) or mammals, possibly based on actual samples taken at the time. There are countless uses we could make of an "intelligent eye" that has been watching our planet for thousands or millions of years, meticulously recording everything it sees.

SRS probes can be sent to other star systems to reproduce their own kind and spread. Each machine thus created may be immortal (limitlessly self-repairing) or mortal. If mortal, then the machines may be further used as follows. As a replicating system degrades below the point where it is capable of reproducing itself, it can sink to a simpler processing mode. In this mode (useful perhaps as a prelude to human colonization) the system merely processes materials, maybe also parts and subassemblies of machines, as best it can and stockpiles them for the day when human beings or new machines will arrive to take charge and make use of the processed matter which will then be available. As the original machine system falls below even this level of automation competence, its function might then be redirected to serve merely as a link in an expanding interstellar repeater network useful for navigation or communications. Thus, at every point in its lifespan, the SRS probe can serve its creators in some profitable capacity. A machine which degrades to below the ability to self-reproduce need not simply "die."
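The graceful degradation just described (replicator, then materials processor, then passive repeater) amounts to a one-way state machine. The sketch below illustrates that lifecycle; the mode names and the `degrade` transition are hypothetical, chosen only to mirror the three stages in the text:

```python
from enum import Enum, auto

class ProbeMode(Enum):
    # Hypothetical lifecycle stages for a mortal SRS probe, mirroring
    # the graceful-degradation scheme described in the text.
    REPLICATOR = auto()   # full capability: builds copies of itself
    PROCESSOR = auto()    # too worn to replicate; stockpiles refined materials
    REPEATER = auto()     # minimal: relays navigation/communication signals

def degrade(mode: ProbeMode) -> ProbeMode:
    """Step down one capability level; a REPEATER has nowhere left to fall."""
    order = [ProbeMode.REPLICATOR, ProbeMode.PROCESSOR, ProbeMode.REPEATER]
    i = order.index(mode)
    return order[min(i + 1, len(order) - 1)]

mode = ProbeMode.REPLICATOR
mode = degrade(mode)   # replication hardware fails -> PROCESSOR
mode = degrade(mode)   # processing hardware fails -> REPEATER
mode = degrade(mode)   # remains REPEATER: still useful, never simply "dead"
```

The one-way ordering captures the report's point: each stage is strictly less capable than the last, yet every stage still returns some value to the probe's creators.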

The SRS is so powerful a tool that it could have implications on a cosmological scale. With the SRS humanity could set in motion a chain reaction of organization sweeping across the Universe at near the speed of light. This organized part of the Universe could itself be viewed as a higher level "organism." Instead of merely following the laws of mechanics and thermodynamics, something unique in our knowledge would occur. The degree of cosmic organization would increase. Life could become commonplace, whereas now it seems quite rare. New rules, the rules of life, would spread far and wide.
