Hal Finney wrote:

Jesse Mazer writes:
> In your definition of the ASSA, why do you define it in terms of your next
> observer moment?


The ASSA and the RSSA were historically defined as competing views.
I am not 100% sure that I have the ASSA right, in that it doesn't seem
too different from the SSSA.  (BTW I have kept the definitions at the end
of this email.)  But I am pretty sure about
the RSSA being in terms of the "next" moment, so I defined the ASSA the
same way, to better illustrate its complementary relationship to the RSSA.

The real difference between these views, which I did not address in my
glossary, is that the RSSA is supposed to justify the QTI, the quantum
theory of immortality, while the ASSA is supposed to refute it.
That is, if you only experience universes where your identity continues,
as the RSSA implies, then it would seem that you will never die.  But if
your life-moments are ruled by statistics based on physical law as the
ASSA says, then the chance that you will ever experience being extremely
old is infinitesimal.

Personally I think the ASSA as I have it is somewhat incoherent, speaking
of a "next" observer moment in a framework where there really isn't any
such notion.  But as I said it has been considered as the alternative
to the RSSA.  I invite suggestions for improved wording.

I think that proponents of the type of ASSA you’re talking about would say that the experience of consciousness passing through multiple observer-moments is simply an illusion, and that I am nothing more than my current observer-moment. Therefore they would not believe in quantum immortality, and they also would not define the ASSA in terms of the "next" observer-moment, only the current observer-moment. I think you’d be hard-pressed to find any supporters of the ASSA who would define it in the way you have.


But as I say below, I think it is possible to have a different interpretation of the ASSA in which consciousness-over-time is not an illusion, and in which it can be compatible with the RSSA, not opposed to it.


> Wouldn't it be possible to have a version of the SSA where
> you consider your *current* observer moment to be randomly sampled from the
> set of all observer-moments, but you use something like the RSSA to guess
> what your next observer moment is likely to be like?


That seems contradictory.  You have one distribution for the current
observer-moment (sampled from all of them), and another distribution for
the next observer-moment (sampled from those that are continuous with
the same identity).  But the current observer-moment is also a "next"
observer-moment (relative to the previous observer-moment).  So you can't
use the ASSA for current OM's and the RSSA for next OM's, because every
next is a current, and vice versa.  (By OM I mean observer-moment.)

Well, any theory involving splitting/merging consciousness is naturally going to privilege the current observer-moment, because it’s the only thing you can be really sure of a la "I think therefore I am"…when talking about the past or the future, there will be multiple pasts and multiple futures compatible with your present OM, so you can only talk about a sort of probabilistic spread.


That said, although some might argue there’s a sort of philosophical contradiction there, I think it is possible to conceive of a mathematical theory of consciousness which incorporates both the ASSA and the RSSA without leading to any formal/mathematical contradictions. There could even be a sort of "complementarity" between the two aspects of the theory, so that OM’s with the highest absolute probability-of-being would also be the ones that have the most other high-absolute-probability OM’s that see them as a likely "successor" in terms of relative probability-of-becoming. In fact, an elegant solution for determining a given OM’s absolute probability-of-being might be to simply do a sum over the probability of becoming that OM relative to all the other OM’s in the multiverse, weighted by their own probability-of-being.
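To put that last idea in symbols (this is just my own ad hoc notation, a sketch of the intended relationship rather than anything standard): if B(i) is the absolute probability-of-being of OM i, and P(j -> i) is the relative probability-of-becoming OM i for someone whose current OM is j, then the self-consistency condition I have in mind is roughly

B(i) = \sum_j P(j \to i) \, B(j)

with the sum running over all the OM's j in the multiverse.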

Here’s a simple model for how this could work. Say you have some large set of all the OM’s in the multiverse, possibly finite if there is some upper limit on the complexity of an OM, but probably infinite. You have some theory of consciousness that quantifies the "similarity" S between any two given OM’s, which deals with how well they fit as the same mind at different moments, how many of the same memories they share in common, how similar their causal patterns are, and so on. You also have some absolute measure on all the OM’s, a "probability-of-being" B assigned to each one. This is basically just my idea that the self-sampling assumption could be weighted somehow, so that the ideal way to use the ASSA is to assume that your current OM is randomly sampled from the set of all possible observer-moments, weighted by their own probability-of-being B.
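(Just to pin down what I mean by "weighted" sampling, here is a minimal sketch in Python with made-up numbers; none of this is meant to be the actual measure, only an illustration of drawing a current OM from a B-weighted distribution.)

  import numpy as np

  # Hypothetical toy multiverse: four observer-moments with invented
  # probability-of-being weights B.  The names and numbers are purely
  # illustrative, not part of any real theory.
  oms = ["OM_a", "OM_b", "OM_c", "OM_d"]
  B = np.array([0.5, 0.3, 0.15, 0.05])
  B = B / B.sum()  # normalize so the weights form a probability distribution

  # Weighted ASSA, toy version: treat "my current OM" as a random draw
  # from the set of all OM's, weighted by B.
  current_om = np.random.choice(oms, p=B)
  print(current_om)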

Then, to determine the relative probability-of-becoming various possible OM’s, I could just multiply their similarity S to my own current OM by their absolute measure B representing each one’s probability-of-being. This would ensure that even though a version of me observing a dragon popping out of my computer screen may have just as much similarity S to my current mental state, in terms of memories and the like, as a version of me who’s watching the computer screen behaving normally, if one OM is objectively less probable (lower B) due to the laws of nature, I will have a higher relative probability of becoming the OM who sees business-as-usual. This would also ensure that if I step into a teleportation machine and the machine reconstructs two people, one whose brain is close to identical to mine and one who has a very different personality and memories, then even if the OM’s of both these people have about the same absolute probability-of-being B, I am far more likely to become the one who’s more similar to me because his similarity S to my current OM would be much higher.
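In code, the rule I'm describing might look something like the following sketch (the S and B numbers are invented to match the dragon and teleporter examples; in reality they would have to come from the theory of consciousness and the absolute measure, neither of which I actually have):

  import numpy as np

  # Candidate "next" OM's, their similarity S to my current OM, and their
  # absolute probability-of-being B.  All numbers are invented for the sake
  # of the example.
  candidates = ["screen behaves normally",
                "dragon pops out of screen",
                "very different person"]
  S = np.array([0.95, 0.95, 0.05])   # similarity to my current OM
  B = np.array([0.50, 1e-12, 0.50])  # absolute measure on each candidate

  # Relative probability-of-becoming each candidate: similarity times
  # absolute measure, normalized so the probabilities sum to 1.
  rel = S * B
  rel = rel / rel.sum()

  for name, p in zip(candidates, rel):
      print(name, p)

As you'd hope, the dragon OM gets an essentially negligible relative probability because of its tiny B, and the dissimilar person gets a small one because of his low S, even though his B is the same as mine.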

And as I suggested earlier, it would be neat if the probability-of-being B could itself be derived by something like a sum over the S’s between me and all the other OM’s, each one weighted by their own B-rating. This idea could be summed up by the slogan "the most probable present experiences are the ones that are high-probability successors to other experiences that are themselves highly probable present experiences". In this way it might even be possible to bootstrap a unique B-rating for all OM’s, starting with only a knowledge of the similarity ratings between them. Consider the following simple universe with only three observers X, Y, and Z, and a known matrix of similarity ratings S between each pair:

        X      Y      Z
   X  1.00   0.60   0.35
   Y  0.60   1.00   0.25
   Z  0.35   0.25   1.00

In this case, if the B-ratings for each one were determined by a sum over the S-ratings for the others weighted by their own B-ratings, and you represent X’s B-rating by the variable x, Y’s B-rating by the variable y, etc., then you’d have some simultaneous equations that’d actually allow you to find a unique self-consistent solution for x, y, and z:

x = (0.60)y + (0.35)z
y = (0.60)x + (0.25)z
z = (0.35)x + (0.25)y

I haven’t actually planned these numbers out, and for these particular values the only strict solution may turn out to be the trivial one x = y = z = 0, which doesn’t really make sense if the B’s are supposed to be probabilities, but the basic idea here is that you can bootstrap the B’s just by knowing the S’s.
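Just as an illustration, here's how a computer could do the bootstrapping for a toy S-matrix like the one above. To get around the trivial-solution problem I only require the B's to be *proportional* to the weighted sums and then normalize them to sum to 1, which amounts to finding the leading eigenvector of the S-matrix (this is my own patch on the toy model, not anything forced by the idea itself):

  import numpy as np

  # Off-diagonal similarities from the three-observer example above.
  # (Putting the 1.00 diagonal back in just shifts every eigenvalue by 1
  # and leaves the resulting B-ratings unchanged.)
  S = np.array([[0.00, 0.60, 0.35],
                [0.60, 0.00, 0.25],
                [0.35, 0.25, 0.00]])

  # Bootstrap the B-ratings: start from a uniform guess, repeatedly replace
  # each B with the S-weighted sum of the others' B's, and renormalize.
  # This power iteration converges to the leading eigenvector of S.
  B = np.ones(3) / 3
  for _ in range(200):
      B = S @ B
      B = B / B.sum()

  for name, b in zip("XYZ", B):
      print(name, b)

For these particular numbers X ends up with the largest B-rating and Z with the smallest, which at least matches the intuition that the OM with the strongest similarity ties to the others should get the largest share of the measure.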

Now keep in mind, this is all a very cartoonish sketch; I don’t really think whatever theory of consciousness is used to determine relative probabilities would be as simple as multiplying a "similarity rating" by an absolute probability. Among other things, "similarity" fails to capture the crucial issue of the directionality of subjective time: my current OM might be just as similar to an OM 2 seconds ago as it is to one 2 seconds from now, but I expect a higher probability that I’ll become the one 2 seconds in the future. Also, I suggested earlier that the complexity of an OM’s consciousness might play a part in both the absolute probability (so my present experience is more likely to be that of a human than an insect) and the relative probability (so I am more likely to experience becoming a copy with an intact brain than one with brain damage), but the model I presented doesn’t take that into account. Still, it’s sort of a pet theory of mine that the real TOE will turn out to be analogous to this model in the following ways:

1. It will include a theory of consciousness that can take my present OM along with various possible future OM’s, and determine the relative probability of my experiencing each one in my future based on a combination of features that are inherent to each OM (analogous to the ‘similarity’ rating in my model) and an external measure which assigns each one an absolute probability. The relative probabilities of different future observer-moments would be used as weights in the RSSA, and the absolute probabilities of different present observer-moments would be used in a weighted ASSA.

2. Even if you don’t know the correct absolute probability of any of the OM’s to start with, there will turn out to be a unique self-consistent solution to what this absolute measure on OM’s has to look like, given only the theory of consciousness and the assumption that all possible OM’s exist (the ‘everything’ part of the theory). This would be analogous to the unique solution to the simultaneous equations in the cartoon model above.

This would be neat because the laws of physics we observe could hopefully be derived (in principle anyway) from the absolute and relative measures on all OM’s, so you’d basically be deriving all the laws of the universe from just a theory of consciousness and the platonic assumption that every conscious pattern that can exist, does exist. The problem with any TOE that incorporates a "theory of consciousness" is that it runs the risk of being a dualist theory if any aspect of first-person probabilities derives from something other than that theory (like an objective measure on universes rather than OM’s to explain why I don’t experience Harry Potter worlds), but this idea is nicely monist and simple.

It might seem that a theory centered on consciousness and observer-moments would suggest that any part of the universe that isn’t observed by a sentient being doesn’t really exist, but I imagine identifying distinct "observer-moments" with something like "patterns of causal relationships" (or finite computations, perhaps), so that all such patterns, even the random jostling of molecules in a cloud of gas, would qualify as observer-moments with very low-grade levels of consciousness. That way the absolute probability of each such pattern, along with the probabilistic relationships between different patterns, might be used to derive what we ordinarily think of as the laws of physics, especially if the laws of physics can ultimately be stated in terms of nothing but relationships between elementary events, as physicists like Lee Smolin have suggested. This is similar to the "naturalistic panpsychism" idea I found described on the same website that hosts the many-worlds FAQ (although I disagree with them on a few points):

http://www.hedweb.com/lockwood.htm

Apologies for the long post, but I haven’t really outlined my own pet TOE on this list before, so I wanted to get all the major details in there.

Jesse Mazer
