Summary of UDASSA

This is a description of my current best effort at a model of the multiverse and of conscious entities, considered as informational structures. It is based on ideas from Wei Dai, Jürgen Schmidhuber, and Max Tegmark, among others.

Basically it can be summed up very simply as: the Universal Distribution (UD) plus the Absolute Self Selection Assumption (ASSA).

Definitions

Let me first define these terms. The UD is a probability distribution over information patterns. It is based on work by Solomonoff, Kolmogorov, and Chaitin. Under the UD, the measure of a bit string is the probability that a given Universal Turing Machine (an abstract computer) will output that bit string, when given a random program as input (probability taken over all possible input programs). It is related to the concept of Kolmogorov Complexity (KC), defined as the length of the shortest program that outputs the given bit string. Roughly, the measure of a bit string under the UD is 1/2 to the power of its KC.
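
In symbols (this is the standard formulation from algorithmic information theory; the paragraph above is just its informal reading), for a prefix-free UTM U:

```latex
% Measure of a bit string x under the UD: sum over all programs p that
% make U output x, each weighted by 2 to the minus its length |p|.
m(x) = \sum_{p \,:\, U(p) = x} 2^{-|p|}

% Relation to Kolmogorov complexity K(x), the length of the shortest such p:
m(x) \approx 2^{-K(x)}
```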

The ASSA is the Absolute Self Selection Assumption. It is a variant of the Self Selection Assumption (SSA) of Nick Bostrom. The SSA says that you should think of yourself as a randomly selected conscious entity (aka "observer") in the universe. The Absolute SSA extends this concept to "observer moments" (OMs). An observer moment is one moment of existence of an observer's consciousness. If we think of conscious experience as a process, OMs are created by dividing this process into units of time small enough that no perceptible change occurs within a unit. The ASSA then says that you should think of the OM you are presently experiencing as being randomly selected from among all OMs in the universe.
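
As a toy illustration only (the observer-moment names and measure values below are made up, and real OM measures would come from the UD rather than a hand-written table), the ASSA's "random selection" can be pictured as a measure-weighted draw:

```python
import random

# Hypothetical observer moments with made-up, unnormalized measures.
observer_moments = {
    "om_alice_t0": 0.008,
    "om_alice_t1": 0.007,
    "om_bob_t0": 0.004,
}

def sample_observer_moment(oms):
    """Draw one observer moment with probability proportional to its measure,
    mirroring how the ASSA treats 'which OM am I experiencing right now?'."""
    labels = list(oms)
    weights = [oms[k] for k in labels]
    return random.choices(labels, weights=weights, k=1)[0]

print(sample_observer_moment(observer_moments))
```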


Traditional philosophy distinguishes between ontology, the study of the nature of reality, and epistemology, which examines our relation to and understanding of the world. Adopting this distinction, I would say that the UD is the ontology, and the ASSA, roughly, is the epistemology.

Ontology

For the ontology, the UD is a probability distribution over information objects (i.e. information patterns), which I assume is the fundamental system of measure in the multiverse. It is defined with respect to an arbitrary Universal Turing Machine (UTM): the measure of an information pattern is, roughly, the fraction of all possible input program strings that cause the UTM to produce that pattern as output.

I am therefore implicitly assuming that only information objects exist. Among the information objects are integers, universes, computer programs, program traces (records of executions), observers, and observer-moments.

The UD is an attractive choice because it is "dominant": it assigns every object at least a fixed constant fraction of the measure assigned by any other enumerable distribution, and UDs defined by different UTMs agree with one another up to constant factors. This is why it is called "universal". It is often considered the default probability distribution when no other information is available. This makes it a natural choice, perhaps the only natural choice, for a distribution over information objects.
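
Stated a little more precisely (this is the standard dominance property; the constants are machine-dependent):

```latex
% For every enumerable semimeasure \mu there is a constant c_\mu > 0,
% independent of x, such that
m(x) \;\ge\; c_\mu \, \mu(x) \quad \text{for all } x.

% In particular, the UDs m_U and m_V defined by two different UTMs U and V
% dominate each other, so they agree up to machine-dependent constant factors.
```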

The UD defines a probability or "measure" for every information object. This is the basic ontology which I assume exists. It is the beginning and ending of my ontology.

A few additional points are worth making. Time does not play a significant role in this model. An information object may or may not include a time element. Time is merely one type of relationship that can exist among the parts of an information object, just as space is another. In relativity theory, time differs from space only in the sign (positive versus negative) with which it enters the metric.
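
For concreteness, the sign difference referred to here is visible in the flat-spacetime (Minkowski) line element, written in one common sign convention:

```latex
% Time enters with the opposite sign from the three spatial coordinates.
ds^2 = -c^2\,dt^2 + dx^2 + dy^2 + dz^2
```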

Among universes, some may have a time dimension, some may not, and some may have more than one dimension of time. Similarly, they could have different numbers of spatial dimensions, or perhaps fractal dimensions.

Observers are by definition information systems that are similar to us, and since time is intimately bound up in our perception of the world, observers will be information objects which do include a time element.

It is also worth noting that the UD measure is non-computable. However, it can be approximated in practice (from below, by enumerating programs), and that seems good enough for my purposes.
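
A minimal sketch of what such an approximation looks like, assuming a toy stand-in machine rather than a true UTM (the function names and the machine's instruction format are invented for illustration): enumerate all programs up to some length and accumulate the weight of those that produce the target output. With a real UTM one would also impose a time bound on each run, since some programs never halt.

```python
from itertools import product

def toy_machine(program):
    """Stand-in for a UTM (not actually universal): read the leading 1s as a
    repeat count, skip the first 0, and output the remaining bits repeated
    that many times.  Returns None for malformed programs."""
    if "0" not in program:
        return None
    count = program.index("0")      # number of leading 1s
    body = program[count + 1:]
    return body * count if body else None

def approximate_measure(target, max_len=16):
    """Lower bound on the machine's measure of `target`: enumerate every
    program up to max_len bits and add 2**-len(p) for each that outputs it."""
    total = 0.0
    for length in range(1, max_len + 1):
        for bits in product("01", repeat=length):
            program = "".join(bits)
            if toy_machine(program) == target:
                total += 2.0 ** -length
    return total

# Regular strings pick up measure from many short programs,
# e.g. "11001" outputs "0101" and contributes 2**-5.
print(approximate_measure("0101"))
```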

Another point relates to the question of copies. One way to interpret the UD is to imagine an infinite ensemble of UTMs, each running one of the possible programs. The measure of an object is then the fraction of the UTMs that output that object. This inherently means that "copies count", even exact copies: the more copies of an information object are created, the more measure it has.
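
A tiny worked example with made-up program lengths: if two distinct programs of 10 and 12 bits each produce the same object x, their weights simply add, so additional instantiations increase x's measure:

```latex
m(x) \;\ge\; 2^{-10} + 2^{-12} \;=\; 0.00097656\ldots + 0.00024414\ldots \;\approx\; 0.00122
```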

A final point: I strongly suspect that the biggest contribution to the measure of observers (and observer moments) like ourselves will come from programs that conceptually have two parts. The first part creates a universe similar to the one we see, in which the observers evolve; the second part selects the observer for output. I have argued elsewhere that each part can be relatively small compared to a program that was hard-wired to produce a specific observer and had to contain all the information necessary to do so. Small programs have greater measure (they occupy a greater fraction of possible input strings), hence this would be the main source of measure for observers like us.
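
To make the size comparison concrete with purely illustrative numbers: suppose the universe-generating part takes about 10^5 bits and the observer-locating part about 10^4 bits, while a hard-wired description of a specific observer takes about 10^10 bits. Then

```latex
2^{-(10^{5} + 10^{4})} \;=\; 2^{-110000} \;\gg\; 2^{-10^{10}}
```

so the two-part construction contributes vastly more measure, even though both weights are astronomically small.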

Epistemology

For the epistemology, we need some way to relate this definition of measure to our experience of the world. This is necessary to give the theory grounding and enable it to make predictions and explanations. What we want is to be able to explain things by arguing that they correspond to high-measure information patterns. We also want to be able to make predictions by saying that higher measure outcomes are more likely than lower measure ones. To achieve this I want to adopt a relatively vague statement like:

You are more likely to be a high measure information object.

Obviously this statement raises many questions. It seems to suggest that you might be a table, or the number 3. It also has problems with the passage of time. "When" are you a given information object? Are you first one and then another? If you start off as one, do you stay the same?

I am not aiming to fully explain and answer all of these questions in this document. At this point I am trying to keep to the big picture. Objects have measure, and for that to be meaningful, objects with higher measure have to be considered more prominent. We should expect the universe we observe to have relatively high measure. We should expect ourselves as observers, and as observer moments, to have relatively high measure. If we face alternatives of either a low measure or a high measure future, we should expect to experience the high measure one.
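
One natural way to read this quantitatively (the text above deliberately stays informal, so treat this as my gloss rather than part of the theory's statement): if A and B are the two candidate futures, interpreted as sets of observer moments with measures m(A) and m(B), then

```latex
\frac{P(\text{experiencing } A)}{P(\text{experiencing } B)} \;=\; \frac{m(A)}{m(B)}
```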

As for the problem of "being" an unconscious object, I don't necessarily see that as contradictory. We all know what it is like to be unconscious; we become unconscious every day when we sleep. We also know from experience that there are many degrees and kinds of consciousness.

In practice, being a table or the number 3 is so different from what we think of as consciousness that we cannot relate to it as human beings. We need to restrict our attention to information objects that have a similar nature and complexity to our own. Among those objects, we can distinguish between ones with low and high measure. The theory predicts that we should find ourselves as entities with a relatively high measure, and explains those aspects of our existence which have a high measure.

The ASSA is well suited to this interpretation because it relates the measure of observer moments to subjective probability. The older SSA, which is observer-based where the ASSA is observer-moment-based, can also work reasonably well in this model for the same reason.

But the details of ASSA vs SSA vs other interpretations are not of fundamental importance in my view. The most important part is the UD. We then connect its definition of measure to subjective experience using the concept that higher measure states are more likely to be experienced. This is the basic principle from which we attempt to make our predictions and explanations.