On 6/23/08, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> --- On Sun, 6/22/08, Kaj Sotala <[EMAIL PROTECTED]> wrote:
>  > On 6/21/08, Matt Mahoney <[EMAIL PROTECTED]> wrote:
>  > >
>  > > Eliezer asked a similar question on SL4. If an agent
>  > > flips a fair quantum coin and is copied 10 times if it
>  > > comes up heads, what should be the agent's subjective
>  > > probability that the coin will come up heads? By the
>  > > anthropic principle, it should be 0.9. That is because if
>  > > you repeat the experiment many times and you randomly
>  > > sample one of the resulting agents, it is highly likely
>  > > that it will have seen heads about 90% of the time.
>  >
>  > That's the wrong answer, though (as I believe I pointed out when the
>  > question was asked over on SL4). The copying is just a red
>  > herring, it doesn't affect the probability at all.
>  >
>  > Since this question seems to confuse many people, I wrote a
>  > short Python program simulating it:
>  > http://www.saunalahti.fi/~tspro1/Random/copies.py
>
> The question was about subjective anticipation, not the actual outcome. It 
> depends on how the agent is programmed. If you extend your experiment so that 
> agents perform repeated, independent trials and remember the results, you 
> will find that on average agents will remember the coin coming up heads 99% 
> of the time. The agents have to reconcile this evidence with their knowledge 
> that the coin is fair.


If the agent is rational, then its subjective anticipation should
match the actual outcome frequencies, no?

Define "perform repeated, independent trials". That's a vague wording
- I can come up with at least two different interpretations:

a) Perform the experiment several times. If, on any of the trials,
copies are created, then have all of them partake in the next trial as
well, flipping a new coin and possibly being duplicated again (and
quickly leading to an exponentially increasing number of copies).
Carry out enough trials to eliminate the effect of random chance.
Since every agent is flipping a fair coin each time, by the time you
finish running the trials, all of them will remember seeing roughly
equal numbers of heads and tails. Knowing this, a rational agent should
anticipate this result, and not a 99% ratio.
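
Here is a rough sketch of interpretation (a) - separate from copies.py,
with arbitrary copy and round counts - that tracks the whole growing
population for a few rounds and simply counts the flips:

import random

COPIES = 10   # agents an agent becomes if its flip comes up heads
ROUNDS = 6    # keep this small: the population grows ~5.5x per round

population = 1
heads_flips = 0
total_flips = 0

for _ in range(ROUNDS):
    next_population = 0
    for _ in range(population):
        total_flips += 1
        if random.random() < 0.5:   # every flip is a fair coin
            heads_flips += 1
            next_population += COPIES
        else:
            next_population += 1
    population = next_population

print("agents at the end:", population)
print("fraction of all flips that came up heads:", heads_flips / total_flips)  # ~0.5

Every flip, whoever happens to make it, is a fair one, and that is all
any of the agents ever experiences.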

b) Perform the experiment several times. If, on any of the trials,
copies are created, leave most of them be and only have one of them
partake in the repeat trials. This will eventually result in a large
number of copies who've most recently seen heads and at most one copy
at a time who's most recently seen tails. But this doesn't tell us
anything about the original question! The original situation was, "if
you flip a coin and get copied on seeing heads, what result should you
anticipate seeing", not "if you flip a coin several times, and on each
time that heads turn up, copies of you get made and most are set aside
while one keeps flipping the coin, should you anticipate eventually
ending up in a group that has most recently seen heads". Yes, there is
a high chance of ending up in such a group, but we again have a
situation where the copying doesn't really affect things - this kind
of wording is effectively the same as asking, "if you flip a coin and
stop flipping once you see heads, should you, over enough trials,
anticipate that the outcome you most recently saw was heads" - the
copying only gives you a small chance to keep flipping anyway. The
agent should still anticipate seeing an equal ratio of tails and heads
beforehand, since that's what it will see, up to the point that it
ends up in a position where it stops flipping the coin altogether.
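
A similar rough sketch of interpretation (b), again with arbitrary
parameters: one agent keeps flipping, and the copies made whenever it
sees heads are set aside and never flip again.

import random

COPIES = 10       # agents in existence after a heads flip; one keeps flipping
TRIALS = 10000

active_heads = 0  # heads seen by the agent that keeps flipping
set_aside = 0     # copies set aside; each one's last observation is heads
last_flip = None

for _ in range(TRIALS):
    if random.random() < 0.5:
        active_heads += 1
        set_aside += COPIES - 1
        last_flip = "heads"
    else:
        last_flip = "tails"

agents = set_aside + 1
last_saw_heads = set_aside + (1 if last_flip == "heads" else 0)

print("heads ratio in the continuing agent's memory:", active_heads / TRIALS)   # ~0.5
print("fraction of agents whose last flip was heads:", last_saw_heads / agents) # ~1.0

The second number is just the "ending up in the heads group" effect;
the flips any of these agents actually lived through are still about
half heads and half tails, apart from the set-aside copies' final one.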

>  It is a trickier question without multiple trials. The agent then needs to 
> model its own thought process (which is impossible for any Turing computable 
> agent to do with 100% accuracy). If the agent knows that it is programmed so 
> that if it observes an outcome R times out of N that it would expect the 
> probability to be R/N, then it would conclude "I know that I would observe 
> heads 99% of the time and therefore I would expect heads with probability 
> 0.99". But this programming would not make sense in a scenario with 
> conditional copying.

That's right, it doesn't.

>  Here is an equivalent question. If you flip a fair quantum coin, and you are 
> killed with 99% probability conditional on the coin coming up tails, then, 
> when you look at the coin, what is your subjective anticipation of seeing 
> "heads"?

What sense of "equivalent" do you mean? It isn't directly equivalent,
since it will produce a somewhat different outcome in the single-trial
(or repeated single-trial) case. Previously, all the possible outcomes
would have been in either the "seen heads" or the "seen tails"
category; this question adds the "hasn't seen anything, is dead"
category.

In the original experiment my expectation would have been 50:50 - here
I have a 50% subjective anticipation of seeing "heads", a 0.5%
anticipation of seeing "tails", and 49,5% anticipation of not seeing
anything at all.
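
Spelling out the arithmetic (the 99% kill probability conditional on
tails is the one from your question):

p_kill_given_tails = 0.99

p_see_heads   = 0.5                              # coin comes up heads
p_see_tails   = 0.5 * (1 - p_kill_given_tails)   # tails, and you survive
p_see_nothing = 0.5 * p_kill_given_tails         # tails, and you are killed

print(f"heads: {p_see_heads:.3f}  tails: {p_see_tails:.3f}  nothing: {p_see_nothing:.3f}")
# heads: 0.500  tails: 0.005  nothing: 0.495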




-- 
http://www.saunalahti.fi/~tspro1/ | http://xuenay.livejournal.com/

Organizations worth your time:
http://www.singinst.org/ | http://www.crnano.org/ | http://www.mfoundation.org/

