Appendix 5K: Issues And Concepts For Further Consideration

During the present study the Replicating Systems Concepts Team considered numerous concepts relating to the problems of self-replicating systems (SRS). The following is a partial list of various notions, ideas, suggestions, and research directions which came to the team's attention but which could not be adequately explored in the time available.

5K.1 Definitions

  1. Reproduction - What is a good, precise definition of "self-reproduction" or "self-replication"? What exactly is a "self-replicating system"? Does replication include any assembly of a physical copy of self? A copy of patterns? Is full assembly from molecular or atomic primitives required? Shall minimal reproduction be defined in terms of basic functions, bits of information processed, or some other measure? Is there some irreducible minimum necessary for "reproduction"? Most regard simple autocatalysis or Ashby's falling dominoes as not representative of "true" replication. However, perhaps a New Guinea islander, with seemingly equal justification, would regard the cafeteria tray line as "not real" if told it was the source of human reproduction - viewing our environment as "too well-ordered to be believable."
  2. Growth - Exactly what is the distinction between growth and reproduction? What is the difference between these concepts and the notion of "self-organization"? What about "self-assembly"? These are common terms in regular use, and need to be more precisely characterized.
  3. Repair - What is the difference between self-repair and self-reproduction? Ordinarily replication involves duplication of the whole system, whereas repair involves replacement of only some subset of it. But at what point does "repair" become "reproduction"? Is machine self-repair or self-reproduction more difficult from a technical standpoint, and why? (Self-repair may require an analytical intelligence, whereas much of reproduction can be accomplished by "rote.")
  4. Telefactor, teleoperator, intelligent tools, autonomous, etc. - Precise definitions are needed. Is there a clear dividing line between biological reproductive systems and advanced self-replicating robot systems?

5K.2 Evolutionary Development

  1. Which theoretical models would be easiest to cast into physical engineering form: the von Neumann kinematic model, the Laing self-inspection approach, the Thatcher methodology, or some other alternative? Under what conditions would each be desirable from a pragmatic engineering standpoint? The Laing approach, for instance, may prove superior to the von Neumann kinematic model in the case of extremely large, complex self-reproducing systems where the universe of components is so vast that self-inspection becomes essential to maintain order or where rapid evolution is desired.
  2. Specific "unit growth" and "unit replication" models of SRS were considered in detail during the present study. Under what conditions is one or the other optimum? Are there any fundamental differences between the two in terms of performance, stability, reliability, or other relevant factors? What might SRS emphasizing "unit evolution" or "unit repair" be like?
  3. Can SRS be designed to have few or no precision parts? Can milling and turning operations be eliminated? What substitutes might be found for the usual precision components such as ball bearings, tool bits, metering instruments, micron-feature computer chips, etc.? It is possible to imagine Stirling engines, solar mirrors, electromagnets, and mechanical gear-trains using only native lunar basalt, iron, and gases with no chemical processing - but are complete (but simple) SRS possible using just two or three nonchemically recovered elements/minerals? Could SRS be patterned after terrestrial biological protein synthesis, in which the factory is made up of perhaps two dozen fundamental "building blocks" (similar in function to amino acids) assembled in virtually limitless combinations?
  4. To what extent is intelligence a prerequisite for reproduction? (Amoebas appear to replicate well with almost no intelligence at all.) Does increasing intelligence make the processes of biological, and potentially machine, replication more efficient? Is there a law of diminishing returns, or does more intelligence always produce a superior reproductive entity?
  5. What forms of machine intelligence might possibly be required for a fully autonomous SRS, that are not now being adequately pursued by artificial intelligence researchers? A few possibilities include learning, memory structure, advanced task planning, adaptivity, association, creativity, intuition and "hunch" formation, hypothesis generation, self-awareness, survival motives, sophisticated database reasoning, symbolic meaning of knowledge, autonomous problem solving, and insight. Similarly, the state-of-the-art in robotics and automation from the viewpoint of SRS development needs to be examined.
  6. What is the least complex biological self-replicating system? How does it work? Can similar processes and analogies be drawn upon for use in the development of self-replicating machine technology? What is the minimum critical mass for a stable ecosystem? For a machine economy with closure?
  7. What is the possibility of semisentient workpieces? This concept is sometimes referred to as "distributed robotics." Perhaps each workpiece in an assembly process could be imbued with some small measure of machine intelligence using advanced microelectronic circuitry. Parts could then assist in their own assembly and subsequent installation and maintenance.
  8. Can computers be programmed to write their own self-assembly software? Perhaps an "artificial intelligence expert system" is required?
  9. What can be said about the possibility of machine "natural" or "participatory" evolution? How fast might machines "evolve" under intelligent direction? Is there any role for the concept of "sex" in machine replicating systems?
  10. Competing machines of different types, loyalties, or functions may interact destructively. For example, machines could disassemble others and cannibalize the parts. This might be viewed as adaptive or aggressive if the disabled machine is willing, or as ecological if the stricken device is already dysfunctional and of no further use, etc. Competing machines could also inject neighbors with "senility software" to accelerate deterioration as a prelude to subsequent cannibalism; transmit "Frankenstein programs," in which the infected machine returns to its point of origin and adversely affects its creators; plant "hidden defect programs," which cause output of defective product so that the affected machine is retired early; or spread "virus programs," which cause the host machine to produce output as directed by the invader to the exclusion of all else.

5K.3 Cost Effectiveness

  1. What are the proper tradeoffs among production, growth, and reproduction? Should these proceed serially or simultaneously? Should the LMF be permitted to grow indefinitely, or should useful production be siphoned off from the start? How big is big enough? What are the tradeoffs between "litter size" and number of generations in terms of efficiency and cost effectiveness? How long a replication time or doubling time is economically acceptable and feasible? Are there "diseconomies of scale" that make a small seed factory difficult to achieve? Should whole systems, or just their components, be replicated? At what point should factory components specialize in particular functions? Should these components be permitted to replicate at different rates within the expanding factory complex? What is the optimum mix of special-purpose and general-purpose robots? What are the other relevant factors involved?
  2. How and under exactly what conditions can a replicating system "exponentiate"? What should be exponentiated - economic value, number of items, quality, or complexity of product? What are the fundamental limitations and most significant factors? What are the important considerations of reliability, mean lifespan, replication time, and unit and system costs? How does component reliability relate to replicating system lifespan? Multiple redundancy increases the mean time to failure but concurrently increases system complexity, which might lead to higher costs and added difficulty in overall design and coordination. How can error propagation in SRS be quantified and analyzed mathematically? Should evolutionary biological notions such as "mutation" and "survival of the fittest" be made a part of SRS designs? (Illustrative sketches of the growth, reliability, and closure tradeoffs raised in items 1-3 follow this list.)
  3. How can closure be defined, studied, and achieved? What are the different aspects of closure? How can closure be demonstrated? Is less than full closure acceptable in some applications? What might be the principles of "closure engineering"? To what extent should/can/must reproducing machines be energetically and materially self-sufficient? How many "vitamin parts" can be imported from Earth and still retain economic viability? Can artificial deposits of special materials be created on other worlds for the convenience of SRS machines (e.g., crash a comet into the Moon)?
  4. What sorts of useful output might self-replicating robot systems produce? Would there be an emphasis on services or products? Would terrestrial or extraterrestrial consumption dominate?
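
As a first cut at the growth-versus-production tradeoff raised in item 1 above, the following sketch (in Python) compares a seed factory that replicates for some number of years before switching its entire capacity to useful output. The doubling time, per-factory production rate, and mission horizon are purely hypothetical placeholders, not figures from the study:

    # Growth-then-production schedule for a hypothetical seed factory.
    def cumulative_output(switch_year, horizon=20.0, t_double=1.0, output_rate=1.0):
        """Replicate until switch_year, then produce until the horizon.

        Returns (fleet size at the switch, cumulative units of useful output).
        Continuous doubling is assumed; every parameter is a placeholder.
        """
        fleet = 2.0 ** (switch_year / t_double)        # exponential growth phase
        return fleet, fleet * output_rate * (horizon - switch_year)

    for switch in (0, 5, 10, 15, 19):
        fleet, total = cumulative_output(switch)
        print(f"grow {switch:2d} yr -> fleet {fleet:8.0f}, total output {total:10.0f}")

With a one-year doubling time the fleet, and hence the total output, is dominated by the last few doublings before the switch, which is why "How big is big enough?" and "How long a doubling time is acceptable?" are so tightly coupled.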
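
The reliability questions in item 2 admit simple first-order estimates. The sketch below assumes independent, exponentially distributed component failures (so a redundant group survives until all of its copies have failed) and an independent chance of passing an uncorrected error to each replica; both are simplifying assumptions rather than results from the study:

    def mttf_parallel(mttf_single, copies):
        """Mean time to failure of a redundant group that works until every copy fails."""
        # With exponential failures, the group MTTF grows only as the harmonic
        # number of the copy count - a strongly diminishing return.
        return mttf_single * sum(1.0 / k for k in range(1, copies + 1))

    def error_free_fraction(p_error, generation):
        """Expected fraction of generation-g machines carrying no inherited errors."""
        return (1.0 - p_error) ** generation

    for copies in (1, 2, 3, 5):
        print(f"{copies} redundant copies -> MTTF x {mttf_parallel(1.0, copies):.2f}")
    for g in (1, 10, 100):
        print(f"generation {g:3d}: {error_free_fraction(0.01, g):.3f} error-free")

Even a 1-percent per-replication error rate leaves only about a third of hundredth-generation machines error-free, which is one way of making the error-propagation question quantitative.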
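
The "vitamin parts" question in item 3 can likewise be framed as a back-of-the-envelope viability test: a replica is worth building only if the value of its lifetime output exceeds the cost of the parts that must still be shipped from Earth. All figures below (replica mass, closure fraction, launch cost, output value) are hypothetical placeholders:

    LAUNCH_COST_PER_KG = 10_000.0      # assumed Earth-to-Moon transport cost, $/kg
    REPLICA_MASS_KG = 100_000.0        # assumed total mass of one replica
    LIFETIME_OUTPUT = 50_000_000.0     # assumed value of one replica's useful output, $

    def vitamin_import_cost(closure_fraction):
        """Cost of the parts that must still be imported from Earth for one replica."""
        return REPLICA_MASS_KG * (1.0 - closure_fraction) * LAUNCH_COST_PER_KG

    for closure in (0.90, 0.99, 0.999):
        cost = vitamin_import_cost(closure)
        verdict = "viable" if LIFETIME_OUTPUT > cost else "not viable"
        print(f"closure {closure:.3f}: import bill ${cost:>13,.0f} -> {verdict}")

Under these placeholder numbers the last percent or two of materials closure is decisive, which suggests why "closure engineering" might merit study in its own right.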

5K.4 Man and Machine

  1. What is the most appropriate mix of manned and automated functions in complex, self-replicating machine systems? Does this optimum mix vary from mission to mission, or can certain general categories be established? For manned functions, what is the most efficient mix of physical and mental labors?
  2. What is the cost tradeoff between man and machine? Is, say, a fully automated lunar factory cheaper to design, deploy, and operate than one which is fully manned, or remotely teleoperated? Is a lunar base populated by humans cheaper than a "colony" of replicating machines? Is the oft-heard assertion that "in a factory with automation, productivity is inversely proportional to the number of human workers involved" true? What should be the ratio of biomass/machine mass in SRS factories?
  3. Is it possible that very highly advanced machines could evolve to the point where humans could no longer understand what their machines were doing? Would "their" interests begin to diverge from ours? Would they replace us in the biosphere, or create their own and not displace us? Would they keep us happy, feeding us the information we request while spending most of their time on higher-order operations "beyond our understanding"?
