_____________________________________________________________________
Newsletter of the Molecular Manufacturing Shortcut Group of the National Space Society _____________________________________________________________________
Volume 4, Number 4 Fourth Quarter, 1996 _____________________________________________________________________
Contents

The Space Studies Institute View on Self-Replication
Superoxides, Dismutase, and Aging
Self-Replication and Nanotechnology

The Space Studies Institute View on Self-Replication


George Friedman <gfriedma@mizar.usc.edu>

Research Director, SSI


The Assembler reported in the second quarter of 1996 that SSI has approved Quest for Self-Replicating Systems as a worthwhile project, but that it had not found any specific research that meets the criteria for financial support. Von Neumann's seminal work distinguished between two models of self-replication. The first is a mathematically unwieldy but real-world kinematic model, while the second is a computationally tractable but very abstract cellular automata model. Almost all the research in the Artificial Life field has been done on the latter, and seems to indicate that even at the purely logical level, full self-replication is more difficult than originally thought.


The first of my "easy/hard" examples strongly influenced our underlying strategy: Most technological forecasters of the 1960's were predicting that reducing the cost of putting a pound of payload into earth orbit by a factor of ten would be easy, while increasing the density and switching speed of electronic circuits by a factor of a million would be hard. (As a graduate student in engineering at UCLA, I was accused of being "irresponsibly optimistic" when I suggested that a factor of a thousand was likely before the end of the century.) What actually happened was, despite the investment of hundreds of billions of dollars by governments, the cost to orbit hasn't decreased at all, while the speed-density product of electronics has increased by over a factor of a billion, mostly due to commercial investment.


Therefore SSI felt that a greater emphasis on systems which rely on advanced electronics would place us on a more robust path to the fulfillment of O'Neill's High Frontier. This means critically examining previous assumptions regarding the balance of man/machine tasking, and recognizing that the Cold War motivations for pushing humans into space in spite of the danger and cost have substantially diminished. Where tele-operated machines can do the job, we should let them be humanity's precursors for science, exploitation and colonization.


Recent studies at JPL concluded that -- if automation and robotics can perform the mission satisfactorily -- replacing manned missions with unmanned missions would reduce the cost by an order of magnitude. We feel that this can be dramatically improved. A challenging yet realistic goal would be to miniaturize today's typical meter-size, 1000 kg space payloads down to under 1 kg. This is the objective of a Ph.D. research project at the University of Southern California (USC) which I helped define and which SSI plans to fund. This Sub-Kilogram Intelligent Tele-robot (SKIT) will perform scientific, exploration and exploitation missions; it must be tele-operable despite long time lags and able to navigate the low-gravity environment of an asteroid. SKIT is judged to be a challenging but essentially doable project. Thus, it's easy.


However, even this additional factor of 1000 may be insufficient. Another Ph.D. candidate at USC (MMSG President Tom McKendree) -- who will not require SSI funding -- will examine the incredible potential of molecular nanotechnology (MNT) for space transportation and manufacturing. Although fulfilling Eric Drexler's visions for MNT should be considered hard, the Ph.D. effort is an analysis only; thus it's deemed easy.


I am on the Ph.D. committees of both these candidates and have been delegated the responsibility of being the primary focal point for the detailed definition of the research plans.


The Board also agreed that certain SSI members -- such as John Lewis and myself -- be encouraged to represent SSI at conferences and workshops devoted to accelerating the discovery of near-earth objects (NEOs), since this will enormously increase our knowledge of the resources available to support human habitats in earth's immediate vicinity. John and I both spoke at the Planetary Defense Workshop at the Livermore Labs last year, and I chair the AIAA's planetary defense subcommittee. We do not plan financial investment in this area.


I feel relatively comfortable with the three projects summarized above; the fourth, the Quest for Self-Replicating Systems (QSRS), is the one which troubles me.


The High Frontier motivation for self-replicating systems (SRS) is obvious: even beyond the 1000-fold leverage of SKIT and the far greater leverage of MNT lies the astounding leverage of SRS -- filling the entire universe, limited only by available matter and energy.


I can't count the many times I've been reassured that self-replication is easy. After all, John Von Neumann, way back in the 40's, clearly defined the logic of self-replication, and all we have to do is implement his "blueprint". His automata theory anticipated the later research on biological reproduction. In the early 1980's, there was a flurry of activity -- especially the Robert Freitas papers and the NASA summer study of 1980 (Advanced Automation for Space Missions, held at the University of Santa Clara, NASA CP-2255) -- which strongly advocated that NASA should embark on a new technological strategy embracing automation, robotics and self-replication. In 1985, after Tihamer Toth-Fejel showed him the 1980 NASA summer study, Gregg Maryniak wrote an excellent article on SRSs in the SSI Update. Recently two rather optimistic books on the emerging field of Artificial Life were published: Steven Levy's Artificial Life and Claus Emmeche's The Garden in the Machine. Within this past year, Lackner and Wendt, obviously inspired by chapter 18 of Freeman Dyson's Disturbing the Universe, published the paper with the exciting title, "Exponential Growth of Large Self-Reproducing Machine Systems".


Furthermore, it seems apparent to me that of all the processes that we observe in the biological world, self-replication is relatively easy. Despite the "inevitability reassurances" of Stuart Kauffman and Christian de Duve, I view the origin (or origins if you wish) of life as incredibly hard. If autocatalysis and other spontaneous appearances of great complexity are so inevitable, why can we not observe them in nature or in the laboratory? Biological self-replication on the other hand can be observed any time we choose to look. Other processes such as homeostasis, morphogenesis, epigenesis, endosymbiosis, evolution, cognition, consciousness and conscience are, in my opinion, also far harder than self-replication.


And finally, engineering designs can learn from biology, but certain practical simplifications are possible (and desirable to reduce cost and complexity) as we try to apply SRS to space colonization. For example, we need not go to the trouble of incorporating the blueprint for the entire system into every subsystem. We need not assemble the system in a tortuous series of incremental developments which recapitulate earlier design generations. We need not design for inheritable mutations. Indeed, as we concentrate on the colonization of the solar system, we can practically maintain all genetic control of all space-borne self-replicating systems by human scientists and engineers on Earth. Essentially, as Ralph Merkle puts it, we can "broadcast" the genetic information from Earth rather than encode it within the generations of SRS.
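
To make the broadcast idea concrete, here is a toy Python sketch (entirely hypothetical; the class and method names are mine, not Merkle's). The architectural point is that the machine in space stores no blueprint at all, so there is nothing onboard to mutate or to corrupt across generations.

    # Toy sketch of a "broadcast architecture" replicator: it holds no genome;
    # every build step arrives from Earth. Names are illustrative only.

    class EarthUplink:
        """Stands in for the Earth-side broadcaster; the master copy stays home."""
        def __init__(self, blueprint):
            self.blueprint = list(blueprint)
        def next_instruction(self):
            return self.blueprint.pop(0) if self.blueprint else None

    class BroadcastReplicator:
        def __init__(self, uplink):
            self.uplink = uplink
            self.offspring = []                 # parts assembled so far
        def run(self):
            while (step := self.uplink.next_instruction()) is not None:
                self.offspring.append(step)     # execute blindly; store nothing
            return self.offspring

    if __name__ == "__main__":
        child = BroadcastReplicator(EarthUplink(["frame", "arm", "radio", "power"])).run()
        print(child)    # a copy gets built, yet no machine in space holds the plan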


So if self-replication is so easy, where are all the SRSs?


Thinking back to the 1940's again, Von Neumann's digital computer architecture logic was originally viewed with alarm by many computer scientists because allowing the arithmetic unit to manipulate programs as if they were data appeared superficially illogical and could be the source of great computational instability. However, this architecture has been the world standard for many decades and millions of programmers have not only mastered this logic but yearn for the greater complexity of asynchronous, parallel processing.


On the surface, Von Neumann's self-replication logic does not appear to be more difficult than his computer architecture logic, but I'm unable to find a single researcher who is willing to undertake a mechanization of his kinematic model. The recently formed field of Artificial Life seems to have absorbed the earlier research in self-replication, but has decided to concentrate on Von Neumann's Cellular Automata model -- which presently has little utility to advance the High Frontier -- rather than the Kinematic model. Indeed, Chris Langton's Artificial Life: An Overview claims to provide a comprehensive definition of this new field and yet does not even acknowledge reproduction or self-replication as the focus of any of its 17 chapters. The A-Life proceedings published by the Santa Fe Institute -- also edited by Langton -- do have several references to self-replication, but they are invariably of the Cellular Automata (CA) type. Many "A-Lifers" are sharpening their philosophical and semantic arguments by claiming that a CA or a computer virus is really alive.
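
As a toy illustration of the CA-type self-replication the A-Life community favors (this example is mine, not Langton's): under Fredkin's parity rule, in which each cell becomes the exclusive-OR of its four von Neumann neighbors, any finite pattern reappears as four displaced copies after a power-of-two number of steps. Replication here follows from the linearity of the rule and involves none of the constructive machinery a kinematic replicator would need, which is exactly why it does so little to advance the High Frontier.

    # Fredkin's parity-rule replicator in a few lines of Python/NumPy.
    import numpy as np

    def parity_step(grid):
        # each cell becomes the XOR (sum mod 2) of its four von Neumann neighbors
        return (np.roll(grid, 1, 0) ^ np.roll(grid, -1, 0) ^
                np.roll(grid, 1, 1) ^ np.roll(grid, -1, 1))

    grid = np.zeros((32, 32), dtype=np.uint8)
    grid[15:17, 15:17] = 1          # a 2x2 seed pattern, 4 live cells
    for _ in range(8):              # 8 = 2**3 steps
        grid = parity_step(grid)
    print(int(grid.sum()))          # 16: four displaced copies of the seed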


While still at JPL, Jim Burke (who, after Von Neumann's death, completed his mentor's famous book on self-replication) tried repeatedly and unsuccessfully to interest the scientific community at Cal Tech and JPL in establishing a prize for the first autonomous self-replicating system. At meetings I attended with the head of JPL's Automation and Robotics activity and the director of USC's robotics laboratory, I was told that -- for the scientific and exploration missions they were addressing -- self-replication was too difficult, required too long a research schedule, and was not necessary anyway.


The 1980 NASA summer study (Robert A. Freitas Jr., William P. Gilbreath, eds., Advanced Automation for Space Missions, Proceedings of the 1980 NASA/ASEE Summer Study, NASA CP-2255, 1982) involved about a million man-hours of effort by NASA, the academic world and consultant engineers, and recommended a very aggressive research and development program to build an SRS on the lunar surface, requiring only a 100-ton seed (the Apollo programs landed about 150 tons). This program was not funded, and there is little in the record to explain why so enormous a study and its findings received such insignificant attention. I have interviewed two of the study's participants, who made derogatory remarks about the SRS chapter.


The Lackner and Wendt paper, Exponential Growth of Large Self-Reproducing Machine Systems, had in my opinion a misleading title. It was a very detailed and apparently competent analysis of the chemistry and thermodynamics of the mineralogical, atmospheric, water and sunlight resources (earth, air, water and fire?) in the Arizona desert required to construct large solar arrays. The logic of self-replication was touched on only lightly and superficially -- 1% of the paper at most. A more accurate subtitle of the paper would be: If SRSs Could Be Built, the Desert Has All the Resources Necessary to Make Large Solar Arrays.

A problem I have with many SRS papers is that self-replication is assumed (Von Neumann "proved it") and the author then immediately launches into mathematical excursions of exponential (or Fibonaccian) population expansion and fantasies of interstellar and intergalactic colonization. At the SSI Board meeting, we joked about the famous Harris cartoon: A Miracle Happens.

Another key issue that clouds SRS thinking is the level of complexity of the soup of nutrient parts in which the SRS is immersed. If the SRS is merely a loosely federated assembly of complex subassemblies which can be found by searching a warehouse shelf, then the problem of self-assembly and self-replication becomes relatively easy. However, this formulation is not useful to SSI; what we need is complete autotrophy -- the ability to self-replicate employing raw materials down to the elemental, or at least mineralogical, level. This requirement increases the challenge enormously. SRSs reproducing in an essentially pre-artifact world would face a task as difficult as humans living and reproducing in a pre-biotic world. In both cases, cannibalism could be a useful tactic for sustaining a population, but certainly not for growth.


Let me summarize by saying that this quest for SRS has been very exciting personally and has been so interesting that for the first time in five decades I have not read a single science fiction story all year! I believe we are faced with a real operational need coupled with an unusual opportunity: it appears that most or all of the world's "A-Lifers" are concentrating on the CA approach and few if any on actually building a "kinematic model" useful for advancing the High Frontier. We may have an opportunity to do original and useful research by combining the rapidly advancing field of computer modeling with CA, augmented by thermochemistry studies such as Lackner's. This research strategy would encourage a domain within Artificial Life that concentrates on how CAs could evolve into a fully autotrophic, mechanizable, autonomous self-replicating system employing broadcast genetic control, all outside the computer. Rather than going through the expense and risk of actually building such a system, a practical intermediate step would be to develop a highly detailed computer simulation of the kinematic SRS hardware. Design trades and operations analyses could be performed on this math model. Then, finally, we would have the confidence to build an SRS useful for space colonization.


I would deeply appreciate any comments on any aspect of the above. Most importantly, is there any relevant work that I have missed? If we accomplish the plan I have outlined, would it really be original work? Is the scope of the research in accordance with the traditions of SSI? Where should I look next for ongoing research and resources appropriate to start implementing such a plan?


Readers with good ideas are urged to contact George Friedman directly at:

gfriedma@mizar.usc.edu or hprimate@aol.com

5084 Gloria Ave

Encino, CA 91436

818 981-0225



______________________________________________

Superoxides, Dismutase, and Aging


by Brian Drozdowski <bdroz@umich.edu>

University of Michigan Biochemistry Department


Molecular Nanotechnology has the potential for impressive technical accomplishments, but it can do nothing about the speed of light. And Space is Big. Mind-bogglingly Big. If the human race is to explore the stars, we must choose among multi-generational arks, the deep freeze, and increased life spans. The third option is the most exciting because it applies to everyone on Earth, whether or not they leave their home planet. The second quarter issue of The Assembler covered the function of telomeres; here we cover another process of cellular aging.


Many investigators now view aging not as a consequence of time but as an actual disease. Like other diseases, aging may prove possible to halt -- or even reverse. For years, scientists have been searching for what causes organisms to age, and it has been shown that the secrets of aging lie in the molecular biology of individual cells.


One of the most strongly supported and compelling theories of aging is free radical damage to individual cells. Free radicals are molecules made dangerous by an unpaired electron; they bounce around the cell and steal electrons from other molecules to complete the pair. If enough free radicals exist within a cell, the cumulative damage can cause the normal cellular machinery to slow down or stop. The inevitable consequence is cellular aging and death.


The most common free radicals are molecules that include oxygen. The paradox of aerobic life, called the 'Oxygen Paradox', is that higher organisms cannot exist without oxygen, but oxygen is inherently dangerous to their existence. Common cellular products of life in an aerobic environment include the dangerous superoxide anion free radical and the extremely reactive hydroxyl radical. These agents are responsible for oxygen toxicity.


What do these organisms do to protect themselves from these dangerous molecules? To survive in the unfriendly oxygen environment, organisms generate -- or garner from their surroundings -- a variety of "antioxidant" compounds ("oxidant" here is an umbrella term that includes free radicals). These compounds give an electron back to a free radical, thereby disarming it. In addition, organisms produce a series of enzymes that help deactivate the reactive oxygen free radicals. Two enzymes central to this antioxidant machinery are superoxide dismutase (SOD) and catalase; both are now sold in health food stores. Some of the enzymes produced deal with damage that has already occurred, repairing proteins and DNA.
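
For readers who want the chemistry, the standard textbook reactions (not spelled out in the article) are, writing O2- for the superoxide radical anion:

    superoxide dismutase:   2 O2- + 2 H+  -->  H2O2 + O2
    catalase:               2 H2O2        -->  2 H2O + O2

SOD converts the superoxide radical into hydrogen peroxide (itself an oxidant), and catalase then finishes the job by breaking the peroxide down into water and ordinary oxygen.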

Biology itself is imprecise, and cellular damage occurs even with these free-radical removal mechanisms in place. In recent years, free radical stress has been implicated in a wide variety of degenerative processes, including cancer, arteriosclerosis, heart attacks, strokes, arthritis, cataracts, Parkinson's disease, Alzheimer's dementia, and the aging process itself. Some experts estimate that 80 to 90 percent of all degenerative diseases involve free radical activity.


As an organism ages, it produces less and less of the protective antioxidant compounds. Results of a 1995 Southern Methodist University study indicate first that the level of molecular oxidative damage to DNA and proteins increases with age, and second, that the increased oxidative damage is due to both an elevation in the rates of oxidant generation and an increase in the susceptibility of tissues to oxidative damage.


Compounds found in certain foods may be able to bolster biological resistance against oxidants. Currently, great interest centers on the protective value of a wide variety of plant-derived antioxidant compounds, particularly those from fruits and vegetables. Many doctors themselves take mega-doses of vitamins E and C and beta carotene, which are naturally occurring antioxidants. It makes sense to have more of these compounds around in the cells. "The more that are present to protect cells' fragile membranes, proteins and genetic DNA, within toxic limits, the less able the free radicals are to strike and impose their damage," says Jean Carper, author of Stop Aging Now.


Many experts assert that we cannot outwit the aging process entirely, or even outlive the built-in life span limit for our species, which they put at about 120 years. The goal thus far has been to increase our "functional" life span, in which our minds are alert and our bodies healthy. With recent advances in our knowledge of aging, science has made dramatic leaps toward making that goal a reality. When it does, the stars will be that much closer. In addition, a longer expected life span may prod people into making longer-term plans for their future, plans that will undoubtedly include the possibility of the High Frontier. With such a cultural change, the stars will again be that much closer.

______________________________________________



Self-Replication and Nanotechnology


A discussion between Robert Freitas, Tom McKendree, Anthony G. Francis, Jr., and Tihamer Toth-Fejel


Tom McKendree: (quoting the argument Tihamer made in "LEGOs to the Stars"): There is an argument that self-replicating machines can only be built with nanotechnology, since components have to worry about quality control on their components, which have to worry about quality control on their subcomponents, and so on, leading to an otherwise infinite regress that only comes to an end when you get to atoms, which for identical isotopes can be treated as perfectly interchangeable.


Anthony G. Francis: Interesting argument, but I don't think it is valid. In the first place, it confuses the difficult problem of building a self-replicating machine with building a self-replicating machine that is completely resistant to error in the duplication process. Note that the second problem is (a) impossible and (b) requires solving the first problem first.

Tihamer Toth-Fejel: Good point. Except that self-replication is solved. Or maybe it's not solved at all, not even by animals. Please bear with me; I'm not as confused as I sound. :-) Imagine a line of identical robots, of which only the first is powered on. At your command, it reaches over to the 2nd robot, turns it on, and instructs it with the command that you gave it (move arm to point xyz in its reference frame). As long as the robots are set up so that "moving to xyz along the default path" will turn on the next robot, you are replicating "turned-on robots" in a very rich environment. It's a silly example, but it points out that we haven't really defined self-replication, because we haven't differentiated between self and environment. Von Neumann's kinematic automaton "feeds" on component parts that just happen to have been manufactured to exact specifications and organized in bins. In some respects, no advanced life-form on Earth self-replicates, because we depend on the rest of the ecosystem to survive. The exceptions are those organisms -- bacteria and plants mostly -- that don't depend on other organisms for their subcomponents (proteins, carbohydrates, etc.), but instead can self-replicate from raw energy and simple chemicals. Our goal is in-situ extra-terrestrial resource utilization, so that means that the subcomponents need to be simple molecules -- i.e. nanotech.
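
[That robot-line example can be made concrete in a few lines of throwaway Python (hypothetical code, purely to sharpen the self-versus-environment point): the only thing being "replicated" is the state "powered on and instructed"; everything material is supplied by the environment. -Ed.]

    # Toy model of the robot line: a powered-on robot's sole action is to power
    # on its neighbor and pass along the same instruction.
    robots = [{"on": False, "instruction": None} for _ in range(10)]
    robots[0].update(on=True, instruction="move arm to xyz")

    for i, robot in enumerate(robots[:-1]):
        if robot["on"]:                              # executing its instruction...
            robots[i + 1]["on"] = True               # ...switches on the next robot
            robots[i + 1]["instruction"] = robot["instruction"]

    print(sum(r["on"] for r in robots))              # all 10 are now "replicated"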


Francis: I agree with you up to your last word, but nanotech is not the only system we have that takes raw materials from the environment and produces simple compounds as starting points ... chemistry has been doing the same thing in bulk for a while now.


If you want a self-replicating factory complex that can start from molecules up, the first thing in the pipeline would have to be a mining and chemical extraction subcomplex to produce the raw materials for the rest of the factory.


Hogan's Code of the Lifemaker talks about a factory-type complex of this scale that eventually begins to have replication failures and starts to evolve.


On a side note, I'm sure that any system that could extract resources from the environment (nano or otherwise) could be placed in an environment that didn't have the stuff it needed to survive. We must be fair here; no-one is going to complain if your nanoreplicators or my chemofactory fail to replicate and extract resources when dropped into a gas giant made of pure argon. :-)


Perfect self-replication is, as far as we can understand our limits, impossible (in this universe). Imperfect self-replication in specific environments with some possibility of error has already been achieved by animals, and to some extent by some manufactured systems (if you count computer viruses, and certain simple mechanical self-replicators).


Toth-Fejel: But chemistry has been taking raw materials from the environment and producing simple compounds only while under direction of entities that replicate by processes that operate at a molecular level.


Quality Control


Francis: On the topic of quality control, macroreplicators should be able to avoid this without difficulty. If you imagine a self-replicating machine on the level of a large factory, with assembly lines for robots, computers, construction machinery, and other necessary construction and maintenance subcomponents, then all you need is a good controlling AI or set of AIs that implement good quality control on the products of the assembly lines before shipping the parts out to build new factories.


The potential for quality control failure exists, but as long as the blueprints survive, the factories can keep testing and repairing themselves indefinitely.

Toth-Fejel: Not indefinitely. If the MTBF of the components is shorter than the mean time to repair, especially in critical systems, it's dead, Jim. Or soon will be. What do you think about this, Robert?


Robert Freitas: More precisely, the device is dead if MTBF of components is shorter than the mean time to repair OR REPLACE components. Replacement usually takes less time than repair. In biological self-replicating systems, subsystems are usually replaced, not repaired (except for DNA). In many cases, biological parts are changed out (and thrown away) periodically regardless of their functional status. On a per nanogram (of failed part) basis, diagnosis and repair of corrupted DNA consumes a LOT of cellular metabolic resources and time.


In a mechanical replicator, a critical-parts inventory could be tapped to change out a failed part very rapidly, without having to diagnose and repair the failed part. This may be a much simpler and more reliable strategy than attempting to make real-time repairs. Using this strategy, a significant fraction of total onboard manufacturing capacity must be employed to continuously manufacture failure-prone critical parts at a throughput rate high enough to maintain the critical-parts inventory above a minimum "buffer" size. Critical parts may be manufactured using either fresh input materials or cannibalized failed-part subparts. The latter requires disassembled sub-part testing to ensure proper specifications compliance, thus consumes additional onboard resources. If input energy and materials are plentiful and new parts are quickly and easily built, the most efficient approach is probably to just throw away failed parts.
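
A back-of-the-envelope simulation of this replace-rather-than-repair strategy is easy to write (the numbers below are invented for illustration and are not from the NASA study):

    # Crude sketch of a critical-parts buffer maintained by onboard manufacturing.
    import random

    random.seed(1)
    FAIL_PROB = 0.02        # chance an installed part fails in one time step
    INSTALLED = 500         # failure-prone critical parts in the machine
    MAKE_RATE = 12          # spares the onboard shop builds per time step
    buffer = 50             # starting spares inventory

    for t in range(1000):
        failures = sum(random.random() < FAIL_PROB for _ in range(INSTALLED))
        buffer += MAKE_RATE - failures      # build spares, swap out failed parts
        if buffer < 0:
            print(f"halted at t={t}: ran out of spares")
            break
    else:
        print(f"still running after 1000 steps, {buffer} spares on the shelf")

As long as the shop's throughput exceeds the expected failure rate (here 12 spares per step against about 10 expected failures), the buffer drifts upward and the machine never pauses to diagnose anything; set MAKE_RATE below the mean failure rate and the buffer eventually empties.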


Francis: The MTBF argument is true. But the same thing could be true of a nanotechnological system ... large complex molecules like replicators will break down, of course, as will some of the larger subcomponents (or the smaller ones, if the components are metastable). If the rate of breakdown of replicators exceeds the manufacturing rate, then the system will shut down.


But, again looking in the mirror and out the window at all the large structures and green growing things (I lie; I have no window in this room), I have confidence that with a sufficiently large energy input, sufficient amounts of resources, and a well designed / engineered / evolved system, that either a nanotech system or a chemofactory would have the capability to rebuild itself faster than it degraded.


The argument that nanotech will solve this problem seems even more shaky --- quality control includes not only the quality of components, but of assembly, potential breakage, and so on --- and with cosmic rays flitting around at speeds that could knock an atom out of the interior of a structure after inspection, you're not going to gain much.


Toth-Fejel: I agree that nanobots will need self-test and self-repair algorithms. Macro robots have the same problem, but at some point, you have to decide when to stop testing sub-sub-components -- unless you go ahead and test at the atomic level, as nanobots and biologicals do. I ran into this same problem when doing my master's thesis on self test of macro systems, and I complained to Eric Drexler that an Electrical Engineer like myself shouldn't be doing chemistry. But he countered by observing that "God has pretty good quality control on atoms." Unfortunately, I didn't listen to him at the time -- the concept of nanotechnology just boggled my mind too much.


Francis: A nanobot might not necessarily be able to inspect the interior of some complex structures nondestructively --- it might have to resort to assembly/disassembly.


Toth-Fejel: But if you don't do self-test at the atomic level, then wear, micro-cracks, and dirt from sources below the level of testing will force you to take MTBF into account.


Francis: Again, so will cosmic rays. And there will be potential for other kinds of failure --- state transitions to other forms, occasional bonds breaking, and so on. MTBF will follow us to the ends of time, I'm afraid...


Toth-Fejel: But I think you're right -- this aspect of nanotech self-rep does not gain us much at this level. On the other hand, there is only one level of self-test and self-repair for nanotech, as opposed to many levels for macro robots.


Francis: Not necessarily. If your nanosystems have subcomponents (which might have internal failures) you could have several levels, all still at the nano level: is it the right atom? is it the right bond? is it the right molecule? is the assembly structurally correct (no square pegs in round holes)? is the assembly semantically correct (all the inputs and outputs wired correctly)? and so on...


Francis: In fact, you'd need a whole set of interlocking AI's and general-purpose robots, almost a community (or, dare I say, an artificial ecosystem :-) which is what I suspect you'd need for any significant self-replicating nanotechnological system that produced complex results.


And on the topic of science v. engineering, I would have to say nanotechnology is definitely at the stage of new science, at least until the problem of general molecular assembly is solved.


What Next?


Toth-Fejel: The easiest next step is to do research on assembling virtual components into virtual machines that self-replicate, with components that always behave as modeled. But that is only the easy project (for when you have no funding and need to keep your day job). Someone also has to do work in the real world to make sure the virtual components are realistic.


Francis: I agree. Of course.


Toth-Fejel: Maybe that is what NASA should do.


Francis: I've always been fond of NASA Ames... they do good work in AI, why not nanotech too?


Freitas: As it turns out, NASA/Ames IS doing good work in nanotechnology right now, under the able leadership of Al Globus. They have an $11 million/year budget to do computational nanotechnology -- "virtual nanoparts" simulations -- within the Numerical Aerodynamic Simulation (NAS) Systems Division at Ames, NASA's lead center on supercomputers. As I recall there are at least a dozen researchers involved in this effort, and they've been up and running for a year or two.


Toth-Fejel: We need more complex real-world examples than the switch-turning robot that I mentioned earlier. The next step, of a robot assembling components of itself, supposedly has been taking place in Japan for years, but I have never seen a reference which specifies which factory that is, or what they learned.


Francis: I'd like to see it if you find it... I've heard about it too, but never saw the reference...


Freitas: The original robotic robot factory, built by Fujitsu FANUC Ltd., went on-line in 1980-81 in Yamanashi Prefecture near Lake Yamanaka. There's a lengthy paragraph on this in the 1980 NASA Study report (NASA CP-2255, Appendix 5F, p. 302).


Toth-Fejel: After that comes the Von Neumann kinematic model, followed by the self-replicating machine shop (which still requires ICs) mentioned in the 1980 NASA Summer Study.


Freitas: Correction about those integrated circuit (IC) "vitamin parts": Unlike previous studies which assumed only 90-96% closure, our theoretical design goal for the self-replicating lunar factory was 100% parts and energy (but not necessarily information) closure. This included on-site chip manufacturing, as discussed in Section 4.4.3 of the NASA report and in Zachary (1981), cited in the report. Of the original 100-ton seed, we estimated the chip-making facility would mass 7 tons and would draw about 20 kW of power (NASA CP-2255, Appendix 5F, p.293).


100% materials closure was achieved "by eliminating the need for many...exotic elements in the SRS design... [resulting in] the minimum requirements for qualitative materials closure....This list includes reagents necessary for the production of microelectronic circuitry." (NASA CP-2255, pp. 282-283)


Toth-Fejel: I stand corrected. My point is that the self-replicating machine shop, plus mineral identification, mining, and refining systems, plus the lights-out semiconductor plant results in a Lunar Factory. Note that in the micro-gravity of the asteroid belt, Bernal's Santa Claus machine would simplify the mining and refining tremendously (by just ionizing all incoming material, and separating the atoms by their charge/mass ratio in what is essentially a large mass spectrometer). Navigation and transportation become a bit more problematic, but computers and solar sails are good at that.
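
[The physics behind that "large mass spectrometer" is just the standard cyclotron-radius relation (a textbook formula, not something derived in this discussion): an ion of mass m and charge q moving at speed v across a magnetic field B follows a circle of radius

    r = m v / (q B)

so ions with different mass-to-charge ratios bend onto different paths and can be collected at different points. -Ed.]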


The kinematic model is simpler than the self-replicating machine shop, and it is much simpler than the self-replicating factory. But imagine a KA in which, instead of being organized in bins, the components are scattered all over the floor, and some of them are bolted down. This is all the extra complexity that an assembler needs to handle.


Francis: That depends on the assembler. Just as I can't guarantee that I'll be able to build an AI that will be able to solve any problem, you can't guarantee that you can construct an assembler that doesn't require significant support (which, hopefully, will all be nanotech). I think we'll have assemblers in vats and assembler gels far before we'll have self-replicating assemblers that we can cut loose in the asteroid belt.



Artificial Intelligence


Toth-Fejel: A self-replicating factory would require a significant amount of process control, and I would venture to say that it might require new mathematics, not just advances of current engineering.


One important question is, "Where do you get the raw materials from?" Are you going to use smelters to turn the lunar rocks into pure elements? If so, how are you going to identify the hundreds of different ores?


Francis: Returning to my friendly HAL-10000 powered factory from above, I'm sure that the factory could use standard Chemical Engineering techniques to do that kind of work, as a preprocessing step.


Toth-Fejel: Let's say you can build an AI pattern recognition/classification system to do that.


Francis: Or a general-purpose robot with a chemical engineering degree downloaded, plus a battery of specialized equipment. Much harder to build than a simple AI...


Toth-Fejel: The AI needs hardware on which to run, so that means it needs to build semiconductors, which means manufacturing pure silicon wafers, purifying sources of dopant and etchant, making IC masks, etc. None of these manufacturing processes run even close to automatically now, so you'd need another AI for each of these processes.


Francis: And if you had an AI for each process, you would effectively have an automatically running system.


Toth-Fejel: But what kind of AI? Do you mean rule- or frame-based systems, or are you rejecting the physical symbol paradigm entirely and relying on neural networks (which can't be debugged at the knowledge-base level)?


Francis: Just any old generic off-the-shelf twenty-first or twenty-second century AI will do, thank you --- I'm an applications man, and don't have a religious commitment on the whole PSSH / PDP / behaviorist / situated / dynamicist debate. :-)


I'm confident (based on looking at my own behavior in the mirror, my understanding of cognitive science and its underlying physics, and my understanding of computer science and its underlying physics) that future AI systems will have at least the command, control and discretionary capabilities of a human being. Since I would hope that humans could set up self-replicating factories (colonies) on other worlds -- colonies that could replicate not only us humans but also our technology, even without nanotech -- I similarly hope that an artificially intelligent factory (with appropriate roving robots) could replicate itself if it were smart enough (and well designed enough to get in motion).


Toth-Fejel: Until the frame problem is solved, AI is new science, not new engineering.


Francis: Ah! We return to an area where I actually have some expertise! :-) The original frame problem has been solved (check out The Frame Problem in Artificial Intelligence, ed. Frank Brown, Morgan Kaufmann, 1987, for several approaches).


Of course, the frame problem has bloated to cover a wide range of topics on what should change in a knowledge representation when an action is performed, and this problem is unfortunately unsolvable in any realistic domain.


Humans fail at the frame problem all the time -- above and beyond our cognitive frailties. Of course, we have imperfect information about a dynamic universe, so it's impossible to always compute correctly the appropriate changes to a knowledge base based on an action.


AI in general is a split-brained field; half science (my favorite end), half engineering and applications. That's why the major conference here in the US is split into two components: AAAI (the science end) and IAAI (Innovative Applications of Artificial Intelligence).


Self-Modification and Self-Replication


Francis: Personally, I don't think we have a chance of building self-modifying and self-replicating nanomachines until we can do it with macromachines (or at least macrosystems), since we are far less experienced with nanotech.


Freitas: (A) Regarding "self-replication", Mr. Francis is probably correct. After all, macroscale (~1-10 cm) self-replicating machines have already been designed, built, and successfully operated (e.g. Jacobson, 1958; Morowitz, 1959; Penrose, 1959; references in NASA CP-2255, 1982). (True, these were all very SIMPLE replicators. But they DID replicate.) Additionally, (nanoscale) replicating molecular systems have been designed, constructed, and successfully operated by chemists. Only the microscale realm remains "unreplicated" by human technology. Since self-replication can now be done at the macro scale, we are now conceptually prepared to implement this function at the micro scale via nanotech, once our nanoscale tools become more refined.


(B) Regarding "self-modification," Mr. Francis is almost certainly incorrect. Of course, any machine can be programmed or designed to randomly modify itself. (e.g., "once every minute, unscrew a bolt and throw it away.") However, since self-modifications that degrade performance or shorten life are useless, I presume Mr. Francis is referring to an evolutionary process in which a device responds positively to challenges arising from its operating environment, modifying its hardware or software to maintain or to enhance its fitness or survivability in its changing environment.


To the extent such evolution is an undirected or blind process, many trials involving minor changes are required to find, by pseudo-random search, a single design change that will prove helpful. Basic physical scaling laws mandate that, all else being equal, physical replication time is a linear function of size. Thus the smaller the replicator, the greater the number of trial offspring it can sire, and test for fitness, per unit time interval. (This is because manipulator velocity is scale invariant, while travel distance decreases linearly with size.) Macroscale replicators (using blind trials) are far less likely to be able to generate enough trial offspring to stumble upon beneficial modifications in any reasonable amount of time. Microscale replicators, on the other hand, should be able to generate offspring a million times faster (or more), and thus are far more likely to randomly turn up productive modifications, hence to "evolve."
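
[Spelling out the scaling estimate behind that factor of a million (standard order-of-magnitude reasoning, not a quotation from Freitas): if manipulator tip speed v is roughly the same at every scale, and the distance a part must travel during assembly is proportional to the device size L, then

    replication time  t ~ L / v,   so   offspring per unit time ~ v / L

Shrinking a replicator from L ~ 1 meter down to L ~ 1 micron therefore multiplies the rate at which it can generate and test trial offspring by roughly 10^6. -Ed.]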


To the extent that such evolution is an intelligently directed process, microscale replicators still enjoy the same tremendous scaling advantage in computational speed. Microscale replicators built using nanotech will use nanocomputers (e.g. diamondoid rods with nanometer-scale features) to design, build, and analyze their directed-modification offspring machines. Per unit mass or per unit volume, these nanocomputers will operate at least a million times faster than computers built using macroscale tools (e.g. silicon chips with ~micron-scale features) that will direct the macroscale replicators.


Of course, macroscale replicators can be evolved slowly using macroscale computers. (Indeed, this is called "the history of human technology.") Or, nanocomputers could be used to direct the evolution of macroscale replicators, which will go a bit faster. But clearly the theoretically fastest evolutionary speed will come from nanoscale computers directing nanoscale replicators.


Since self-modifying replicators should actually be easier to implement at the nanoscale than at the macroscale, macroscale experience with self-evolving mechanical systems is probably unnecessary.


Infinite Regress and the Fallacy of the Substrate


Freitas: This argument that self-replicating machines can only be built with nanotechnology, since components have to worry about quality control on their components, and their subcomponents, etc., has at least two fundamental flaws.


First, since self-replicating machines have already been built using macrotechnology (see above references), it is therefore already a fact that nanotechnology is NOT required to build replicators. QED. End of discussion.


Second, Mr. Toth-Fejel assumes that "components have to worry about quality control on their components." He has fallen prey to what I call the "Fallacy of the Substrate." I shall explain.


Many commentators, whether implicitly or explicitly, assume that replication -- in order to qualify as "genuine self-replication" -- must take place in a sea of highly-disordered (if not maximally disordered) inputs. This assumption is unwarranted, theoretically unjustifiable, and incorrect.


Toth-Fejel: In the required application of space development, that is the case. Though for theoretical understanding, you're right.


Freitas: The most general theoretical conception of replication views replication as akin to a manufacturing process. In this process, a stream of inputs enters the manufacturing device. A different stream of outputs exits the manufacturing device. When the stream of outputs is specified to be identical to the physical structure of the manufacturing device, the manufacturing device is said to be "self-replicating."


Note that in this definition, there are no restrictions of any kind placed upon the nature of the inputs. On the one hand, these inputs could consist of a 7,000 K plasma containing equal numbers of atoms of all the 92 natural elements -- by some measures, a "perfectly random" or maximally chaotic input stream. On the other hand, the input stream could consist of cubic-centimeter blocks of pure elements. Or it could consist of pre-rolled bars, sheets, and wires. Or it could consist of pre-formed gears, ratchets, levers and clips. Or it could consist of more highly refined components, such as premanufactured motors, switches, gearboxes, and computer chips. A manufacturing device that accepts any of these input streams, and outputs precise physical copies of itself, is clearly self-replicating.
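
[That definition can be formalized in a few lines of hypothetical Python (the names are mine, not Freitas's): a device is "self-replicating" over a given input stream exactly when its manufacturing process, run on that stream, yields at least one output identical to the device itself. Nothing in the test constrains how ordered or disordered the inputs are. -Ed.]

    # Toy formalization of replication-as-manufacturing.
    from collections import Counter

    def is_self_replicating(device_parts, build, input_stream):
        outputs = build(input_stream)           # run the manufacturing process
        return any(Counter(o) == Counter(device_parts) for o in outputs)

    # Example: a trivial two-part device fed nearly complete bodies plus fuses.
    device = ["body", "fuse"]
    def build(stream):
        bodies = [p for p in stream if p == "body"]
        fuses  = [p for p in stream if p == "fuse"]
        return [["body", "fuse"] for _ in range(min(len(bodies), len(fuses)))]

    print(is_self_replicating(device, build, ["body", "fuse", "body"]))   # True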


During our 1980 NASA study on replicating systems, one amusing illustration of the Fallacy of the Substrate that we invented was the self-reproducing PUMA robot. This robot is conceptualized as a complete mechanical device, plus a fuse that must be inserted into the robot to make it functional. In this case, the input substrate consists of two distinct parts: (1) a stream of 99.99%-complete robots, arriving on one conveyor belt, and (2) a stream of fuses arriving on a second conveyor belt. The robot combines these two streams, and the result of this manufacturing process is a physical duplicate of itself. Hence, the robot has in fact "reproduced." You may argue that the replicative act somehow seems trivial and uninspiring, but the act is a reproductive act, nonetheless.


Therein lies the core of the Fallacy of the Substrate: "Replication" can occur on any of an infinite number of input substrates. Depending on its design, a particular device may be restricted to replication from only a very limited range of input substrates. Or, it may have sufficient generality to be able to replicate itself from a very broad range of substrates. In some sense this generality is a measure of the device's survivability in diverse environments. But it is clearly fallacious to suggest that "replication" occurs only when duplication of the original manufacturing device takes place from a highly disordered substrate.


From a replicating systems design perspective, two primary questions must always be addressed: (1) What is the anticipated input substrate? (2) Does the device contain sufficient knowledge, energy, and physical manipulatory skills to convert the anticipated substrate into copies of itself? Macroscale self-replicating devices that operate on a simple, well-ordered input substrate of ~2-5 distinct parts (and up to ~10 parts per device, I believe) were demonstrated in the laboratory nearly 40 years ago. Japanese robot-making factories use the same robots as they produce, inside the factory, hence these factories may be regarded as at least partially self-replicating. I have no doubt that a specialized replicating machine using an input substrate of up to ~100 distinct (modularized) components (and ~1000 total parts) could easily be built using current technology. Future advances may gradually extend the generality of this input substrate to 10^4, 10^6, perhaps even to 10^8 distinct parts.


Of course, the number of distinct parts is not the sole measure of replicative complexity. After all, a nanodevice which, when deposited on another planet, can replicate itself using only the 92 natural elements is using only 92 different "parts" (atoms).


If you remain frustrated by the above definition of replication, try to imagine a multidimensional volumetric "substrate space", with the number of different kinds of parts along one axis, the average descriptive complexity per part along another axis, the relative concentration of useful parts as a percentage of all parts presented on still another axis, the number of parts per subsystem and the number of subsystems per device on two additional axes, and the relative randomness of parts orientation in space (jumbleness) along yet another axis. A given manufacturing device capable of outputting copies of itself, a device which we shall call a "replicator," has some degree of replicative functionality that maps some irregular volume in this substrate space. The fuse-sticking PUMA robot occupies a mere point in this space; a human being or an amoeba occupies a somewhat larger volume. A nanoreplicator able to make copies of itself out of raw dirt would occupy a still larger volume of substrate space. We would say that a replicator which occupies a smaller volume of substrate space has lesser replicative complexity than one which occupies a greater volume. But it is STILL a "replicator."
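
[For the programmatically inclined, that "substrate space" amounts to a simple record with one field per axis (a hypothetical sketch, nothing more). -Ed.]

    # Hypothetical record type for the substrate-space axes named above.
    from dataclasses import dataclass

    @dataclass
    class SubstratePoint:
        distinct_part_kinds: int        # number of different kinds of parts
        complexity_per_part: float      # average descriptive complexity per part
        useful_part_fraction: float     # useful parts as a fraction of all parts presented
        parts_per_subsystem: int
        subsystems_per_device: int
        jumbleness: float               # randomness of part orientation; 0 = binned, 1 = scattered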


So if someone wishes to hypothesize that replicators "cannot be built" on some scale or another, they must be careful (1) to specify the input substrate they are assuming, and (2) to prove that the device in question is theoretically incapable of self-replication from that particular substrate. Using pre-made parts is not "cheating." Remember: Virtually all known replicators -- including human beings -- rely heavily on input streams consisting of "premanufactured parts" most of which cannot be synthesized "in house."


Because of their superior speed of operation in both thought and action, there is little question that microscale replicators constructed from nanoscale components are theoretically capable of far greater replicative complexity than macroscale replicators constructed of macroscale parts. But replicators can be built at EITHER scale.


Linear vs. Exponential Processes


Toth-Fejel: The real world is full of dust, and friction wears down surfaces. The only way in which these manifestations of Murphy's Law can be handled is at their smallest pieces -- otherwise, smaller bits of dust get wedged in the gears of the repair tools, and the process grinds to a halt.


Freitas: There are two arguments advanced here: (A) that dust particles can insert themselves between moving surfaces, immobilizing these moving surfaces and causing the machine to halt; and (B) that frictional abrasion from environmental dust particles degrades parts until these parts eventually fail.


(A) A correct system design will take full account of all particles likely to be encountered in the normal operating environment. Proper component design should ensure that dust particle diameter is much less than the typical part diameter, and that moving parts have sufficient compliance such that dust particles of the maximum anticipated size can pass through the mechanism without incident. It may be necessary to enclose critical component systems within a controlled environment to preclude entry of particles large enough to jam the mechanism. But this is a design specification -- an engineering choice -- and not a fundamental limitation of replicative systems engineering.


(B) A process such as frictional abrasion is a LINEAR degenerative process. Assuming proper design, in which dust particle diameter is much less than typical part diameter, component-surface error caused by abrasion slowly accumulates until some critical threshold is surpassed, after which the device malfunctions and ceases to replicate.


However, the generative (replicative) process will be EXPONENTIAL (or at least polynomial, once you reach large populations where physical crowding becomes an important factor) if the replicators (1) produce fertile offspring and (2) have full access to all necessary inputs.


Now, an exponential generative process can ALWAYS outcompete a linear degenerative process, given two assumptions: (1) the degenerative time constant (e.g. mean time to abrasive failure of the replicator) ~> the generative time constant (e.g. the replicator's gestation + maturation period), AND (2) the number of fertile offspring per generation (e.g., size of the litter) ~>1. Assumption (1) should always be true, because a "replicator" that breaks down before it has given birth to even a single fertile offspring is a poor design hardly worthy of the name. Assumption (2) should usually be true as well. Most device designs I've seen involve one machine producing one or more offspring. (Fertile offspring per generation can in theory be <1 if members of many generations cooperate in the construction of a single next-generation offspring -- the "it takes a village" scenario.) Thus, even in the absence of simple strategies such as component redundancy or a component "repair by replacement" capability which would further enhance reliability, component degeneration may slow -- but should not halt -- the replicative cascade.
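
[Making the comparison explicit (a standard growth estimate, not taken from the dialogue): with generative time constant T_gen and k fertile offspring per generation, the population after time t is roughly

    N(t) ~ k^(t / T_gen)

which grows exponentially, while the abrasive wear on any individual machine accumulates only linearly until that machine fails at about time T_fail. Provided T_fail > T_gen and k >= 1, each machine replaces itself at least once before wear retires it, so the population keeps growing even as individual machines are steadily lost. -Ed.]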


Interchangeable Parts and Large Numbers


Toth-Fejel: In addition, all manufacturing depends on interchangeable parts.... When you start making relatively small things, such as in microtech, an extra layer of atoms can make a big difference, and these differences keep the parts from being interchangeable. The opposite tack, of making things really big, sounds nice, but you're going to need about a billion subcomponents, if I correctly remember the best estimates of Von Neumann's work.


Freitas: Once you have clearly specified the input substrate you will be working with, then part size and component compliance become design decisions that are completely under the control of the engineer. If your input substrate contains parts that are likely to have a few extra layers of atoms, then your design must accommodate that level of positional imprecision during normal operations.


Almost certainly, a replicating machine may someday be built that has a billion parts. However, a replicating machine can also be built with only three parts. (I have pictures of one such device in operation, in my files.) The assumption that vast numbers of parts are required to build a replicator -- even a nanoreplicator -- is simply unwarranted. Indeed, I fully expect that the first 8-bit programmable nanoscale assembler (e.g. of Feynman Prize fame) that is capable of self-replication will employ an input substrate of no more than a few dozen different types of parts, and will be constructed of fewer than 1000 of these parts -- possibly MUCH fewer. These pre-manufactured parts may be supplied to the assembler as outputs of some other nanoscale process.


The Efficient Replicator Scaling Conjecture


Toth-Fejel: So it might actually be possible to build a macro-based self-rep system. But I suspect that it would be a lot more complicated than a nanoscale system.


Freitas: I have formulated a conjecture ("the proof of which the margin is too narrow to contain") that the most efficient replicator will operate on a substrate consisting of parts that are approximately of the same scale as the parts with which it is itself constructed. Hence a robot made of ~1 cm parts will operate most efficiently in an environment in which 1-cm parts (of appropriate types) are presented to it for assembly. Such a robot would be less efficient if it was forced to build itself out of millimeter or micron-scale parts, since the robot would have to pre-assemble these smaller parts into the 1-cm parts it needed for the final assembly process. Similarly, input parts much larger than 1 cm would have to be disassembled or milled down to the proper size before they could be used, also consuming additional time, knowledge, and physical resources -- thus reducing replicative efficiency.


If this conjecture is correct, then it follows that to most efficiently replicate from an atomic or molecular substrate, you would want to use atomic or molecular-scale parts -- that is, nanotechnology.


Robert A. Freitas Jr.

Member, Replicating Systems Concepts Team

1980 NASA Summer Study

Editor, Advanced Automation for Space Missions

rfreitas@calweb.com


Anthony G. Francis, Jr.

AI/Cognitive Science Group

Georgia Institute of Technology

Atlanta, Georgia 30332-0280

Net: centaur@cc.gatech.edu http://www.cc.gatech.edu/ai/students/centaur/

______________________________________________



MMSG Officers


President: Tom McKendree (714) 374-2081

Email: tmckendree@msmail3.hac.com


Secretary: Tihamer (Tee) Toth-Fejel (313) 662-4741

Email: ttf@rc.net


MMSG Home Page

http://www.islandone.org/MMSG/


Editorial Office

899 Starwick Drive

Ann Arbor, MI 48105


Subscription and Membership

8381 Castilian Drive

Huntington Beach, CA 92646

______________________________________________