On 12 December 2014 at 12:22, Jason Resch <jasonre...@gmail.com> wrote:
>
>
> On Thu, Dec 11, 2014 at 3:10 PM, LizR <lizj...@gmail.com> wrote:
>>
>> On 11 December 2014 at 18:59, Stathis Papaioannou <stath...@gmail.com>
>> wrote:
>>>
>>>
>>> On Thursday, December 11, 2014, LizR <lizj...@gmail.com> wrote:
>>>>
>>>> Maybe it's a delayed choice experiment and retroactively collapses the
>>>> wave function, so your choice actually does determine the contents of the
>>>> boxes.
>>>>
>>>> (Just a thought...maybe the second box has a cat in it...)
>>>>
>>> No such trickery is required. Consider the experiment where the subject
>>> is a computer program and the clairvoyant is you, with the program's source
>>> code and inputs. You will always know exactly what the program will do by
>>> running it, including all its deliberations. If it is the sort of program
>>> that decides to choose both boxes, it will lose the million dollars. The
>>> question of whether it *ought to* choose both boxes or one is meaningless if
>>> it is a deterministic program, and the paradox arises from failing to
>>> understand this.
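>>>
>>> A minimal sketch of that setup in Python (the subject() function below is
>>> just a made-up stand-in for whatever program is actually being run):
>>>
>>>     # Newcomb's problem with a deterministic program as the subject.
>>>     # The "clairvoyant" simply does a dry run of the same program.
>>>     def subject():
>>>         # hypothetical decision procedure; returns "one" or "both"
>>>         return "both"
>>>
>>>     def predictor(program):
>>>         return program()                # predict by running it in advance
>>>
>>>     def play(program):
>>>         prediction = predictor(program)
>>>         opaque_box = 1_000_000 if prediction == "one" else 0
>>>         clear_box = 1_000
>>>         choice = program()              # deterministic, so the same answer again
>>>         return opaque_box + clear_box if choice == "both" else opaque_box
>>>
>>>     print(play(subject))                # a two-boxing program ends up with 1000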
>>
>> It wasn't trickery, how dare you?! It was an attempt to give a meaningful
>> answer, to make something worthwhile out of what appears to be a trivial
>> "paradox" without any real teeth.
>>
>> But OK, since you are determined to belittle my efforts, let's try your
>> approach.
>>
>> 1 wait 10 seconds
>> 2 print "after careful consideration, I have decided to open both boxes"
>> 3 stop
>>
>> This is what ANY deterministic computer programme (with no added random
>> inputs) would boil down to. Millions of lines of code might take a while to
>> analyse, and the simplest way to find out the answer in practice might be to
>> run it; but each run would give the same result, so once it has been run
>> once we can replace it with my simpler version.
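>>
>> In code terms it is just this (a rough sketch only; the function name is
>> made up):
>>
>>     def run_subject():
>>         # stands in for the millions-of-lines deterministic programme
>>         return "after careful consideration, I have decided to open both boxes"
>>
>>     first = run_subject()
>>     second = run_subject()
>>     assert first == second   # no random inputs, so every run agrees
>>     # ...which is why the whole programme can be replaced by the constant above.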
>>
>> I have to admit I can't see where the paradox is, or why there is any
>> interest in discussing it.
>>
>
> It's probably not a true paradox, but it seems like one because, depending on
> which version of decision theory you use, you can be led to two opposite
> conclusions. About half of people think one-boxing is best, the other half
> think two-boxing is best, and more often than not, people on either side
> think people on the other side are idiots. However, for whatever reason,
> everyone on this list seems to agree that one-boxing is best, so you are
> missing out on the interesting discussions that can arise from seeing people
> justify the alternative decision.
>
> Often two-boxers will say: the predictor has already made his decision, and
> what you decide now can't change the past or alter what's already been done,
> so you're just leaving money on the table by not taking both boxes. An
> interesting twist one two-boxer put to me was: what would you do if both
> boxes were transparent, and how does that additional information change
> which choice is best?
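>
> To make the disagreement concrete, here is a rough sketch (the 99% accuracy
> figure is an assumption of mine, not part of the original puzzle):
>
>     p = 0.99                    # assumed accuracy of the predictor
>
>     # Evidential reading: your choice is evidence of what the predictor did.
>     ev_one  = p * 1_000_000
>     ev_both = p * 1_000 + (1 - p) * (1_000_000 + 1_000)
>     print(ev_one, ev_both)      # roughly 990000 vs 11000: one-box on this reading
>
>     # Causal/dominance reading: the boxes are already filled, so whatever the
>     # opaque box holds, two-boxing adds a guaranteed 1,000 on top of it.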

If both boxes were transparent, that would screw up the oracle's
ability to make the prediction, since there would be feedback from
the oracle's attempt at prediction to the subject. The oracle can
predict whether I'm going to pick heads or tails, but the oracle
*can't* predict whether I'm going to pick heads or tails if he tells
me his prediction and then waits for me to make a decision.
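
A sketch of the feedback problem (the contrarian strategy below is my
own illustration, not part of the puzzle):

    def contrarian(announced_prediction):
        # whatever the oracle announces, do the opposite
        return "tails" if announced_prediction == "heads" else "heads"

    def consistent_announcement():
        # is there any prediction the oracle could announce and still be right?
        for prediction in ("heads", "tails"):
            if contrarian(prediction) == prediction:
                return prediction
        return None

    print(consistent_announcement())   # None: no announced prediction survives the feedback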


-- 
Stathis Papaioannou
