On 4/21/2022 3:03 PM, George Kahrimanis wrote:
In my current way of thinking, the disagreement between Alan Grayson and John K. Clark is about two subtly different concepts under the same name, "probability". For example, when I read "80% chance of rain today", I may think that in some possible futures it will not rain (so probability is meaningless), yet I feel an instinctive urge for protection from bad weather, so I take my umbrella. We are programmed to act in this way, due to Darwinian selection -- but it is a different matter to claim that QM (without collapse) issues a probability for each possible outcome, so that we are then rationally obliged to apply Maximisation of Expected Utility. I grant the former but not the latter.
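
(For concreteness, the textbook prescription I have in mind, with notation introduced only for illustration: given acts a, outcomes o with probabilities p(o), and a utility function U, MEU says choose

    a^* = \mathrm{argmax}_a \sum_o p(o) U(a, o).

My reservation concerns whether QM without collapse legitimately supplies the p(o) this formula needs, not the formula itself.)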

Part of the trouble is that serious philosophical issues about probability are still debated, so that there are traps for anyone who deals with these things. Here is an example.

> [...] until Alan Grayson sees the end of the race, or somebody tells Alan Grayson about it, Alan Grayson can't be certain what world Alan Grayson is in. Alan Grayson could be in a world where horse X won or Alan Grayson could be in a world where horse Y won, until Alan Grayson receives more information Alan Grayson would have to say the odds are 50-50.

If you mean that on sheer ignorance the odds are 50-50, some clarification is needed. Strictly speaking, zero information implies "undefined probability", or "imprecise probability between 0 and 1". The reason it is commonly mistaken for 50-50 is an implied strategy -- flipping a coin in case of ignorance -- but then the odds belong to the coin rather than to the object of the bet. (This strategy works only if the agent is free to choose which side of the bet she underwrites.)
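
(A minimal sketch of that point, in the simplest betting setup: an even-money bet of one unit is offered on horse X, and I pick which side to take by flipping a fair coin. Conditional on X winning, my expected gain over the coin is (1/2)(+1) + (1/2)(-1) = 0; conditional on X losing, it is likewise 0. So the "even odds" are a fact about the coin, and they hold whether or not any probability is defined for the race itself.)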

If the odds 50/50 can apply to the coin...because you don't know which way it will come down...then the same concept applies to the horse race.


For the instrumentalists among us (glad to have you, BTW): the question of interest to me is not about which way is best to derive probability from QM -- that would be a pointless discussion, I agree! The question is whether all such derivations beg the question, so that we have to think of a rational decision theory without probability.

Rational decision theory only exists because of uncertainty.  If there were no uncertainty you wouldn't need a theory to inform your choice; you would choose directly by value.

Brent


Although Everett's argument (of which I have proposed an improvement) grants that in the long run (that is, for large samples) the Born Rule is practically certain to apply, this is not technically the same as a probability for each single outcome -- though I admit that it works the same way, triggering the instinctive impulse. But for a RATIONAL decision theory this probability is not granted, IMO.
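
(Stated roughly, the long-run claim I mean is this: for a state with amplitudes c_k and N independent trials, the total squared amplitude of all outcome sequences in which the relative frequency of result k deviates from |c_k|^2 by more than some fixed epsilon goes to zero as N grows without bound. That is a statement about the norm of the "maverick" branches in the large-N limit, not a probability attached to any single outcome.)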

I can give examples of a decision theory w/o probability, but they would dilute the focus of this message.

George K.

