MACHINE INTELLIGENCE IN SPACE

Tihamer Toth-Fejel

Presented at the Seventh International Space Development Conference, Denver, Colorado, May 27-30, 1988. SDC 88-049. Proceedings edited by Jill Steele Mayer, Univelt, Inc., ISBN 0-912183-06-3. pp. 295-300.

The human settlement of Space requires good tools, and the best tools are intelligent machines. This paper first covers some of the basic Artificial Intelligence technologies and their advantages and disadvantages. It then surveys some current space-related projects. Finally, a few projections, both realistic and unrealistic, are discussed.


INTRODUCTION

Artificial Intelligence (AI) is the subfield of Computer Science that tries to make machines more intelligent, either by imitating humans or by experimenting with new forms of logic and reasoning. The human settlement of Space requires good tools, and the best tools are intelligent machines. Just as the use of computers for information processing has become ubiquitous in space programs around the world, advanced software will be applied in every aspect of the settlement of Space for which increased intelligence is an asset.


ARTIFICIAL INTELLIGENCE: BASIC TECHNOLOGIES

Artificial Intelligence has been hyped and oversold recently, but it is essentially advanced software that attempts to automate intelligence. Unfortunately, nobody knows what intelligence really is. Therefore, by definition, anyone working in AI does not know what he or she is doing. To make matters worse, once a researcher discovers how to program a computer to do some cognitive task, his or her research area is no longer AI but something else, such as pattern recognition, relational data bases, or automated theorem proving. Nevertheless, AI researchers have made a few significant discoveries -- the most important of which is that intelligence does not consist of raw processing power, but of access to well-organized data, or knowledge. Consequently, researchers have embedded as much knowledge as possible in AI programs, hence the term Knowledge Based Systems. This knowledge can be expressed in a wide variety of symbolic arrangements, and it is these representations of knowledge that ultimately determine the capability of the software. The term Expert System has become meaningless through overuse and misuse. Ostensibly, such systems are programs that demonstrate expertise in a particular area. More specifically, 1) they contain explicit declarative knowledge separate from the logical inference procedures which process it, 2) they are not algorithmic in finding solutions, 3) they can explain how they arrived at a particular solution, 4) they aid in codifying knowledge, and 5) they learn from their mistakes. No existing system satisfies the last criterion, so there is no such thing as a true expert system. Fortunately, AI researchers are recognizing this fact and sometimes use the term Assistant System to describe programs that satisfy (at least partially) the first four criteria.


Rule-based Systems

Propositional Logic was one of the first ways developed to represent knowledge, and most representations can (theoretically at least) be reduced to that form. Basically, a rule-based system contains an inference engine, a set of data facts, and a set of if-then rules. The inference engine embodies one of the most basic forms of logical deduction, modus ponens, which states that if fact1 is true and the rule "If fact1 then fact2" is true, then fact2 is also true. The inference engine "fires rules" by matching facts to the premises (the "IF" parts, conditions, antecedents, or LHSs for left hand sides) in order to add new data (the "THEN" parts, consequents, actions, or RHSs) to the facts, until no more rules can be fired. In forward chaining, the consequent of one rule links with the premise of another rule, making it possible to deduce facts far removed from the original set. In backward chaining, the reasoning works in the opposite direction, from hypotheses to facts.
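To make the firing cycle concrete, here is a minimal forward-chaining sketch in Python (the facts and rule names are hypothetical, and no particular expert-system shell is implied):

# Minimal forward-chaining sketch. Each rule pairs a set of premises
# with a single consequent; the engine fires rules until no new facts appear.
facts = {"engine_ignition", "fuel_flow_normal"}
rules = [
    ({"engine_ignition", "fuel_flow_normal"}, "thrust_nominal"),
    ({"thrust_nominal"}, "ascent_phase"),
]

fired = True
while fired:
    fired = False
    for premises, consequent in rules:
        if premises <= facts and consequent not in facts:
            # Modus ponens: all premises hold, so assert the consequent.
            facts.add(consequent)
            print("Fired:", premises, "->", consequent)
            fired = True

Each pass applies modus ponens wherever a rule's premises are satisfied, and the loop halts when a pass adds no new facts. Backward chaining would instead start from the hypothesis "ascent_phase" and search for rules and facts that support it.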

Rule-based systems can be considered a very flexible way to write computer software, especially if you like "IF-THEN" statements. In addition, a trace of fired rules can explain why a particular solution was reached. Unfortunately, long chains of rules in large systems often contradict each other. Another difficulty with rule-based systems is that it is very difficult to control the order in which rules fire. IF-THEN paradigms are very useful, but also very limited -- humans don't use them as a significant mode of thinking, and according to Julian Jaynes, haven't since Babylonian times (1). Rule-based systems, like most software, are very difficult to test. Finally, much of the knowledge in them has been "flattened" -- consequently, any "understanding" a rule-based system may exhibit derives from simple pattern matching.


Frames

Frames are another way to represent knowledge. Frames usually contain slots with associated values. They may have "children" which inherit some or all of the parents' slots and values. For example, a ROCKET frame can have slots Purpose, Payload Weight, and Number of Engines, and have "child" SPACE SHUTTLE:

Frame: ROCKET
    Purpose: Carry Payloads to Orbit
    Payload Weight: Over 50 lbs
    Number of Engines: One or more

Frame: SPACE SHUTTLE
    Purpose: Carry Payloads to Orbit
    Payload Weight: 65,000 lbs
    Number of Engines: Five
    Child of: ROCKET

In this case, the slot names are inherited, but not necessarily their exact values -- many variations of inheritance and nested structures are possible, and there is no way to determine which combination is best for any particular problem domain. There is no methodology for determining how frames can best be used in expert systems, though representing rules in frames is a common tactic. Frames fit in very well with the object-based paradigm, and exhibit most of the corresponding advantages and disadvantages.
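A minimal sketch of this inheritance mechanism in Python may make it concrete (the dictionary representation and the fallback-to-parent lookup are illustrative assumptions, not a standard frame language):

# Frames as dictionaries of slots; a slot lookup falls back to the
# parent frame when the child supplies no local value.
ROCKET = {
    "Purpose": "Carry Payloads to Orbit",
    "Payload Weight": "Over 50 lbs",
    "Number of Engines": "One or more",
}
SPACE_SHUTTLE = {
    "parent": ROCKET,                    # the "child of" link
    "Payload Weight": "65,000 lbs",      # overrides the inherited value
    "Number of Engines": "Five",
}

def get_slot(frame, slot):
    # Return the local value if present, otherwise inherit from the parent.
    if slot in frame:
        return frame[slot]
    parent = frame.get("parent")
    return get_slot(parent, slot) if parent else None

print(get_slot(SPACE_SHUTTLE, "Purpose"))         # inherited: Carry Payloads to Orbit
print(get_slot(SPACE_SHUTTLE, "Payload Weight"))  # local: 65,000 lbs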


Semantic Networks

A semantic network can easily represent the fact that Polly is a bird and that it owns a nest. It does this by having a data structure with nodes like Polly, bird, and nest, which are connected by links such as "is-a" and "owns-a":

POLLY --------is-a--------> BIRD
  |
  +-------owns-a----------> NEST


While originally invented to handle machine translation, semantic networks have since moved into other areas of knowledge representation, especially functional modeling. Frames are a special case of semantic networks, and share the disadvantage of an ad-hoc methodology. The relationship between a node or link, its name, and its intended meaning is often very unclear.
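A semantic network reduces naturally to a set of labeled, directed links. The following Python sketch stores the Polly example above as triples and follows links by label (the triple representation is an illustrative assumption):

# Links are (source, label, destination) triples.
links = [
    ("POLLY", "is-a", "BIRD"),
    ("POLLY", "owns-a", "NEST"),
]

def related(node, label):
    # Follow every link with the given label leaving the node.
    return [dst for src, lbl, dst in links if src == node and lbl == label]

print(related("POLLY", "is-a"))    # ['BIRD']
print(related("POLLY", "owns-a"))  # ['NEST']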

Model-based Systems

After building a few rule-based systems, researchers discovered that many of their rules contained static structural knowledge that never changed (2). For example, the statement "part1 is connected to part2" logically subsumes the rule "if part1 is moved in the y direction by x inches, then part2 is moved in the y direction by x inches" and many others. It turned out to be more efficient to pull this knowledge out of the rule base, where it was implicit and well hidden, and put it in an explicit, declarative form, such as frames or semantic networks. Lockheed uses this paradigm as a way to organize case-based diagnostic information (3), while Ford Aerospace actually uses reasoning modules to compare incoming telemetry with given values and then follow causal paths to the root cause of problems. The advantage of this second approach is that even though the causal reasoning modules are much more complex than the inference engine in a rule-based system, the computer can diagnose problems never encountered before (and therefore never entered in the knowledge base). In addition, a simulation can be generated from the knowledge base, allowing domain experts to verify the accuracy of their description (4)(5). Unfortunately, many domains are not understood well enough to be modeled. In addition, many concepts in a domain may be fuzzy by definition.
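The payoff of explicit structure can be sketched in a few lines of Python (the parts and connections are hypothetical, and this is a sketch of the general idea, not Ford Aerospace's Paragon): one declarative "connected" relation replaces the whole family of implicit movement rules mentioned above.

# Declarative structural knowledge: which parts are rigidly connected.
connected = {"part1": ["part2"], "part2": ["part3"]}

def move(part, inches, moved=None):
    # Move a part and propagate the motion through its connections.
    moved = set() if moved is None else moved
    if part in moved:
        return moved
    moved.add(part)
    print(part, "moves", inches, "inches in the y direction")
    for neighbor in connected.get(part, []):
        move(neighbor, inches, moved)
    return moved

move("part1", 3)   # part2 and part3 follow automatically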


Neural Nets

A neural net (or network) imitates biological information processing, and is a computing system consisting of many simple, highly interconnected processing elements. It processes information in parallel by dynamically responding to external inputs. A neural net is completely different from any other AI system because it is NOT a physical symbol system. In other words, it is impossible to point to a particular portion of memory as the address of a particular piece of information. The fact that knowledge is intrinsically buried in the link distribution and nodal weights means that a neural net can still operate successfully even when damaged or given incomplete information. Unfortunately, it also means that a neural net cannot explain its reasoning by doing a backtrace, as most of the other knowledge representations can, and is consequently very difficult to debug. Neural nets are best suited for pattern recognition, and are usually "trainable". Current wisdom indicates that they will complement conventional computers and AI programs, not replace them.
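As a taste of how knowledge ends up buried in weights rather than at addressable symbols, here is a minimal trainable unit in Python (a single 1950s-style perceptron learning a hypothetical two-sensor alarm function; real neural nets interconnect many such elements):

# Training data: two sensor readings -> alarm (1) or no alarm (0).
samples = [((0.0, 0.0), 0), ((0.0, 1.0), 0), ((1.0, 0.0), 0), ((1.0, 1.0), 1)]

w1, w2, bias, rate = 0.0, 0.0, 0.0, 0.1

for _ in range(50):                 # repeated presentations "train" the unit
    for (x1, x2), target in samples:
        out = 1 if w1 * x1 + w2 * x2 + bias > 0 else 0
        err = target - out
        w1 += rate * err * x1       # nudge the weights to reduce the error
        w2 += rate * err * x2
        bias += rate * err

for (x1, x2), _ in samples:
    print((x1, x2), "->", 1 if w1 * x1 + w2 * x2 + bias > 0 else 0)

After training, nothing in w1, w2, or bias can be pointed to as "the" alarm rule, yet the unit computes it correctly; that is the sense in which a neural net is not a physical symbol system.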

SPACE APPLICATIONS

The first serious investigation of how AI could be applied to Space applications occurred at the 1980 NASA Ames/University of Santa Clara Summer Study (6). The first thing that NASA discovered was that it was woefully behind in advanced software techniques. The study looked at four main AI technology drivers:

  1. Intelligent Earth Sensing Information System (IESIS): A satellite that would gather data in a goal-directed manner, based on specific requests for information and on prior knowledge contained in a detailed self-correcting world model. IESIS would permit natural language requests to be satisfied without human intervention and optimize sensor utilization. In many ways, IESIS would fulfill the objectives of the Ride Report's "Mission to Planet Earth".
  2. Automated Interstellar Space Exploration: Practicing on Titan, a robot craft would autonomously conduct reconnaissance, exploration, and intensive study of extrasolar planets. Such a robot is far beyond the present state of the art.
  3. Automated Space Manufacturing Facility (SMF): A collection of autonomous machine systems would demonstrate the feasibility of using non-terrestrial resources for logistical support. NASA has not done very much in this area, but companies from Martin Marietta (7) and Amdahl to Ford and General Motors are working to increase productivity and decrease waste by putting machine intelligence on the factory floor. Most of this technology will be directly applicable to the SMF, with the exception of vacuum and zero gravity effects.
  4. Self-Replicating Lunar Factory: The logical extension of the SMF is a system that processes lunar soil using solar energy to form components which it can assemble into replicas of itself. This was one of the most exciting prospects generated by the 1980 Study because it means that we could drop a few hundred tons of material on the moon, come back 30 years later, and find the surface covered with manufactured devices such as solar cells and computer memory. The only difference between factory automation and self-replication is the concept of closure -- how many of the factory's components can be manufactured within the factory itself. The study indicated that 90% closure seems relatively easy to achieve, but that the last few percentage points would be very difficult. Eric Drexler had not yet publicized nanotechnology at the time of this study, but the only difference between nanotechnology and factory self-replication is the size of the components. At any rate, there is fruitful ground for cross-fertilization between these closely-related fields.


In 1983, NASA started to do something about its lack of expertise in AI by issuing a report of current AI technologies (8), but none of the applications investigated had anything to do with Space. This report, along with the general explosion in the availability of commercial AI hardware and software, signaled the beginning of serious work in applying AI toward space applications. The same year, NASA awarded Honeywell a contract to define, develop, and demonstrate an approach to automate and control all of the Space Station subsystems. Honeywell concentrated on the Environmental Control and Life Support System (ECLSS) and reported that "expert" systems should really be "advisory" systems embedded in the existing architecture of conventional controllers (9).


More recently, NASA and the University of Alabama in Huntsville have begun to sponsor an annual conference on AI for Space Applications, which has focused on a wide variety of government and private projects. Automating Space Station Operations is seen by NASA as a major technology driver, because astronauts and mission specialists are too valuable to do housekeeping. Space Shuttle Operations Automation is another area that NASA is interested in, primarily to increase human productivity and reduce costs (7). One of these expert systems, Rocketdyne's Space Shuttle Main Engine (SSME) Anomaly Knowledge Based System, is even named after a noted 23rd century rocket propulsion expert, SCOTTY (10). It is used to analyze data from SSME test firings, and is especially important because it attempts to capture the experience of many senior-level rocket engineers before they retire. Typical expert systems concentrate on a single subsystem, such as Ford Aerospace's model-based expert system for fault-handling the Global Positioning Satellite's electrical power subsystem (11). This narrowing of focus happens for a number of reasons, including limited time, limited computer memory, and limited access to domain experts. But a more serious problem is revealed by the fact that increased knowledge in humans makes them better and faster, while larger knowledge bases in computers usually make them slower and dumber. Many companies are attempting to solve this problem by building networks of individual expert systems to achieve a common task.

Because AI technology is so new, basic research is also necessary in knowledge representation, knowledge acquisition, and tools. One such tool is CLIPS, the C Language Integrated Production System, which was developed by NASA Johnson Space Center as an extremely portable and inexpensive delivery and training tool. It is written in C and runs on top of MS-DOS, UNIX, or VMS. CLIPS is quite primitive, using only a single reasoning paradigm of forward chaining (12). However, it is still powerful enough as a knowledge-based language for Honeywell to seriously consider it for expert systems embedded within Space Station equipment controllers (13).

The simplicity and power of rule-based systems can be rather impressive, as demonstrated by Texas Instruments' Personal Consultant, even though it may be considered "ancient technology" by members of the AI community. Students at the University of Central Florida interviewed domain experts at Kennedy Space Center to build "expert" systems for engineering applications, such as a 150-rule Hazardous Gas Expert System which helps monitor the atmosphere around the space shuttle and its launch pad. Other students developed a 175-rule system that would help a designer of liquid and gaseous oxygen transport systems determine whether a given design contains potential ignition hazards such as heat of compression, mechanical impact, resonance, and materials (14).

REALISTIC AND UNREALISTIC PROJECTIONS

It is always dangerous to play the prophet, but it is so much fun. Certainly, the NCOS (National Commission on Space) ignored the impact of advanced machine intelligence in its 50 year plan. On the other hand, Eric Drexler blithely talks about design automation software a million times faster than an engineer (15). His calculations are fine for brute-force cognitive processes, but it seems that most of the breakthroughs in engineering are finessed by cognitive processes that we do not yet understand, and there is no indication that we soon will.

As a historical example, let's look at the lunar landing. For many years, the expression "flying to the moon" was considered synonymous with "flagrantly impossible". Despite this belief, sometime before WWII, the U.S. government authorized a group of scientists to calculate what it would take to land a man on the moon. The calculations showed that it was possible but that it would require the entire gross national product. On the other hand, rocket engineers thought that President Kennedy was asking the impossible when he wanted the United States to land on the Moon by the end of the decade. They were too close to all the problems and saw only too clearly the obstacles in the way. I may be subject to the same shortsightedness. Unfortunately, building a truly intelligent machine is not just an engineering problem but a scientific one, because we do not have the slightest idea of what intelligence is, much less how to imitate and improve it. Let's take another example -- we can accurately predict that we will soon be able to build a diamond the size of Mount Everest because we know the structure of diamond, even though we may not have the self-replicating molecular assemblers to do it -- yet. On the other hand, repairing human cells from the inside would be very difficult even if we had the nanotools with which to do it, because we do not understand the dynamic interior structure of human cells. Of course, the nanotools can help us determine that structure, but currently we can't even measure our ignorance.

The question that is usually raised at this point is: "How do we recognize a truly intelligent machine?" Until now, the only answer has been the Turing test, an admittedly weak and ill-defined test in which a human interrogator tries to distinguish between a human and a machine which is doing its best to imitate humanity. When the interrogator can no longer distinguish between the two, Alan Turing maintains, then the machine has reached personhood. In my opinion, there is a much better test: Turn the machine off for a few minutes. Then turn the machine back on. If it reports a Near Death Experience (NDE), then that machine has reached personhood. This event does not necessarily mean that machines and humans have souls, but only indicates our ignorance of a particular phenomenon. At any rate, I suspect that the machines will argue about consciousness, free will, and NDE as much as we do.

It has been pointed out that as machines imitate human intelligence and behavior more perfectly, the Turing test will become a moot point as the question changes to, "How do I feel about this intelligent entity?" (16). With the advance of nanotechnology and direct brain interfaces, the question changes again to "Who are we becoming?" At that point, I think that even the Far Edge Committee is going to be in for some major surprises.


CONCLUSION

Machines with increased cognitive abilities will be used in the design, manufacturing, control, and diagnosis of every piece of Space hardware. They will increase reliability and capability, and will lower costs. As they become more autonomous, intelligent machines may replace humans in tasks which are too monotonous, dangerous or costly. In that role, Artificial Intelligence will be one of the most important tools in the human settlement of Space.

REFERENCES

  1. Julian Jaynes, The Origin of Consciousness in the Breakdown of the Bicameral Mind, Houghton Mifflin Company, 1976.
  2. Walton Perkins, "Diagnosis and Maintenance Systems", AIAA Invited Lecture Series, Aerospace Applications of Artificial Intelligence, November 13 and 20, 1986.
  3. Ralph Barletta and William Mark, "Explanation-Based Indexing of Cases", AAAI Proceedings of the Spring Symposium Series, Explanation-Based Learning, Stanford University, March 22-24, 1988, pp. 127-134.
  4. Tihamer Toth-Fejel, "Temporal and Contextual Knowledge in Model-Based Expert Systems: Ford Aerospace's Paragon Project", Third Conference on Artificial Intelligence for Space Applications, Huntsville, Alabama, November 2-3, 1987, NASA CP-2492, pp. 15-19.
  5. Jay Ferguson, "Beyond Rules: The Next Generation of Expert Systems", Proceedings of the Air Force Workshop on AI Applications for Integrated Diagnostics, May 1987.
  6. Advanced Automation for Space Missions, NASA CP-2255, June 23 - August 29, 1980, edited by Robert Freitas, Jr., and William Gilbreath, GPO, 1982.
  7. S. Bharwani, J. Walls, and M. Jackson, "Intelligent Process Development of Foam Molding for the Thermal Protection System of the Space Shuttle External Tank", Third Conference on Artificial Intelligence for Space Applications, NASA CP-2492, November 2-3, 1987, pp. 195-202.
  8. William Gevarter, An Overview of Artificial Intelligence and Robotics, NASA TM-85836, TM-85838, and TM-85839.
  9. Roger Block, "Prototype Space Station Automation System Delivered and Demonstrated at NASA", Third Conference on Artificial Intelligence for Space Applications, NASA CP-2492, November 2-3, 1987, pp. 447-451.
  10. Kenneth Modesitt, "Space Shuttle Main Engine Anomaly Data and Inductive Knowledge Based Systems: Automated Corporate Expertise", Third Conference on Artificial Intelligence for Space Applications, NASA CP-2492, November 2-3, 1987, pp. 203-212.
  11. Arthur Blasdel, Jr., "Automated Fault Handling of a Satellite Electrical Power Subsystem Using a Model-Based Expert System", 1987 Intersociety Energy Conversion Engineering Conference, Philadelphia, August 1987, published by AIAA.
  12. Gary Riley, et al., "CLIPS: An Expert System Tool for Delivery and Training", Third Conference on Artificial Intelligence for Space Applications, NASA CP-2492, November 2-3, 1987, pp. 53-57.
  13. James Harrington, "CLIPS As a Knowledge Based Language", Third Conference on Artificial Intelligence for Space Applications, NASA CP-2492, November 2-3, 1987, pp. 33-40.
  14. "Florida Students Develop Expert Systems For Space Center", AI Interactions, Vol. 3, No. 7, April 1988, Texas Instruments Data Systems Group, Austin, Texas.
  15. Eric Drexler, Engines of Creation, Anchor Press/Doubleday, 1986.
  16. Dr. Sherry Turkle of MIT, Lecture at University of Santa Clara Technology and Society Series, 1986.