Rolf Nelson wrote:
> On Oct 25, 7:59 am, "Wei Dai" <[EMAIL PROTECTED]> wrote:
>> I don't care about (1) and (3) because those universes are too arbitrary
>> or random, and I can defend that by pointing to their high algorithmic
>> complexities.
>
> In (3) the universe doesn't have a high algorithmic complexity.

I should have said that in (3) our decisions don't have any consequences, so 
we can disregard those universes even if we do care about what happens in 
them. The end result is the same: I'll act as if I only live in (2).

From your post yesterday:
> True. So how would an alternative scheme work, formally? Perhaps
> utility can be formally based on the "Measure" of "Qualia" (observer
> moments).

This is one of the possibilities I had considered and rejected, because it 
also leads to counterintuitive consequences. For example, suppose someone 
gives you the following offer:

I will flip a fair coin. If it lands heads up, you will be instantaneously 
vaporized. If it lands tails up, I will exactly double your measure (say, by 
creating a copy of your brain and keeping it continuously synchronized).

Given your "measure of qualia"-based formalization of utility, and assuming 
that you're selfish, so that you're only interested in the measure of the 
qualia of your own future selves, you'd have to be indifferent between 
accepting and declining this offer: your expected measure is 1 either way 
(0.5 * 0 + 0.5 * 2 = 1 if you accept, versus 1 if you decline).
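
To spell the arithmetic out (a minimal sketch in Python; normalizing the 
status-quo measure to 1 is my own convention):

    # Expected measure of your future selves under the coin-flip offer.
    # Heads (prob 0.5): vaporized, measure 0.  Tails (prob 0.5): doubled.
    p_heads, p_tails = 0.5, 0.5
    measure_if_decline = 1.0                        # status quo, normalized
    measure_if_accept = p_heads * 0.0 + p_tails * 2.0
    assert measure_if_accept == measure_if_decline  # both 1.0: indifference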

Instead, here's my current approach to formalizing decision theory. Let S 
be a set describing an agent's knowledge of the multiverse. For example, in 
a Tegmarkian version of the multiverse, the elements of S have the form 
(s, t), where s is a statement of second-order logic and t is either "true" 
or "false". For simplicity, assume that the decision-making agent is 
logically omniscient, meaning that he knows the truth value of every 
statement of second-order logic except those that depend on his own 
decisions. We'll say that he prefers choice A to choice B if and only if he 
prefers S ∪ C(E,A) to S ∪ C(E,B), where ∪ denotes set union, C(x,y) is the 
set of logical consequences of everyone having qualia x deciding to do y, 
and E consists of all of his own memories and observations.
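
A toy rendering of this definition in Python (purely illustrative; the 
types, the tiny consequence table, and the preference predicate below are 
stand-ins of mine, not part of the proposal):

    from typing import Callable, FrozenSet, Tuple

    Fact = Tuple[str, bool]          # a (statement, truth-value) pair
    Knowledge = FrozenSet[Fact]      # a description like S

    def prefers_choice(S: Knowledge, E: str, A: str, B: str,
                       C: Callable[[str, str], Knowledge],
                       prefers: Callable[[Knowledge, Knowledge], bool]) -> bool:
        # Prefer A over B iff the agent prefers S U C(E,A) to S U C(E,B).
        return prefers(S | C(E, A), S | C(E, B))

    # Toy usage: C maps (qualia, decision) to consequence facts.
    table = {("E", "A"): frozenset({("outcome is good", True)}),
             ("E", "B"): frozenset({("outcome is good", False)})}
    C = lambda x, y: table[(x, y)]
    better = lambda s1, s2: ("outcome is good", True) in s1 - s2
    print(prefers_choice(frozenset(), "E", "A", "B", C, better))  # True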

In this most basic version, there is not even a notion of "how much one 
cares about a universe". I'm relatively confident that it doesn't lead to 
any counterintuitive implications, but that's mainly because it is too weak 
to lead to any kind of implications at all. So how do we explain what 
probability is, and why the concept has been so useful?

Well, let's consider an agent who happens to have preferences of a special 
form. For him, the multiverse can be divided into several "regions", whose 
descriptions will be denoted S_1, S_2, S_3, etc., such that 
S_1 ∪ S_2 ∪ S_3 ∪ ... = S, and his preferences over the whole multiverse 
can be expressed as a linear combination of his preferences over those 
"regions". That is, there exist functions P(.) and U(.) such that he 
prefers the multiverse S to the multiverse T if and only if

P(S_1)*U(S_1) + P(S_2)*U(S_2) + P(S_3)*U(S_3) + ...
    > P(T_1)*U(T_1) + P(T_2)*U(T_2) + P(T_3)*U(T_3) + ...
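
In code, that comparison is just a weighted sum (another illustrative 
sketch; the particular P and U are placeholders, since the claim is only 
that some such pair of functions exists):

    # Compare two multiverses region by region via sum_i P(S_i) * U(S_i).
    def weighted_value(regions, P, U):
        return sum(P(r) * U(r) for r in regions)

    def prefers_multiverse(S_regions, T_regions, P, U):
        return weighted_value(S_regions, P, U) > weighted_value(T_regions, P, U)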

I haven't worked out all of the details of this formalism, but I hope you 
can see where I'm going with this... 


