Re: [UAI] A perplexing problem - Version 2

2009-02-25 Thread Konrad Scheffler
On Mon, 23 Feb 2009, Francisco Javier Diez wrote:

 Konrad Scheffler wrote:
  I agree this is problematic - the notion of calibration (i.e. that you can
  say P(S|70%) = .7) does not really make sense in the subjective Bayesian
  framework where different individuals are working with different priors,
  because different individuals will have different posteriors and they can't
  all be equal to 0.7. 
 
 I apologize if I have missed your point, but I think it does make sense. If
 different people have different posteriors, it means that some people will
 agree that the TWC reports are calibrated, while others will disagree.

I think this is another way of saying the same thing - if you define the 
concept of calibration such that people will, depending on their priors, 
disagree over whether the reports are calibrated, then it is still 
problematic to prescribe calibration in the problem formulation, because 
it will mean different things to different people - unless you take 
"TWC is calibrated" to mean that everyone has the same prior as TWC, which I 
don't think was the intention in the original question.

In my opinion the source of confusion here is the use of a subjective 
Bayesian framework (i.e. one where the prior is not explicitly stated and 
is assumed to be different for different people). If instead we use an 
objective Bayesian framework where all priors are stated explicitly, the 
difficulties disappear.

 Who is right? In the case of unrepeatable events, this question would not make
 sense, because it is not possible to determine the true probability, and
 therefore whether a person or a model is calibrated or not is a subjective
 opinion (of an external observer).
 
 However, in the case of repeatable events--and I acknowledge that
 repeatability is a fuzzy concept--, it does make sense to speak of an
 objective probability, which can be identified with the relative frequency.
 Subjective probabilities that agree with the objective probability (frequency)
 can be said to be correct and models that give the correct probability for
 each scenario will be considered to be calibrated.
 
 If we accept that snow is a repeatable event, then all the individuals should
 agree on the same probability. If it is not repeatable, P(S|70%) may be
 different for each individual, because they have different priors and perhaps
 different likelihoods, or even different structures in their models.

I strongly disagree with this. The (true) relative frequency is not the 
same thing as the correct posterior. One can imagine a situation where the 
correct posterior (calculated from the available information) is very far 
from the relative frequency which one would obtain given the opportunity 
to perform exhaustive experiments.

Probabilities (in any variant of the Bayesian framework) do not describe 
reality directly, they describe what we know about reality (typically in 
the absence of complete information).

 Coming back to the main problem, I agree again with Peter Szolovits in making
 the distinction between likelihood and posterior probability.
 
 a) If I take the TWC forecast as the posterior probability returned by a
 calibrated model (the TWC's model), then I accept that the probability of snow
 is 70%.
 
 b) However, if I take 70% probability of snow as a finding to be introduced
 in my model, then I should combine my prior with the likelihood ratio
 associated with this finding, and after some computation I will arrive at
 P(S|70%) = 0.70. [Otherwise, I would be incoherent with my assumption that
 the model used by the TWC is calibrated.]
 
 Of course, if I think that the TWC's model is calibrated, I do not need to
 build a model of TWC's reports that will return as an output the same
 probability estimate that I introduce as an input.
 
 Therefore I see no contradiction in the Bayesian framework.

But this argument only considers the case where your prior is identical 
to TWC's prior. If your prior were _different_ from theirs (the more 
interesting case) then you would not agree that they are calibrated.


Re: [UAI] A perplexing problem - Version 2

2009-02-23 Thread Francisco Javier Diez

Konrad Scheffler wrote:
I agree this is problematic - the notion of calibration (i.e. that you can 
say P(S|70%) = .7) does not really make sense in the subjective Bayesian 
framework where different individuals are working with different priors, 
because different individuals will have different posteriors and they 
can't all be equal to 0.7. 


I apologize if I have missed your point, but I think it does make sense. 
If different people have different posteriors, it means that some people 
will agree that the TWC reports are calibrated, while others will disagree.


Who is right? In the case of unrepeatable events, this question would 
not make sense, because it is not possible to determine the true 
probability, and therefore whether a person or a model is calibrated or 
not is a subjective opinion (of an external observer).


However, in the case of repeatable events--and I acknowledge that 
repeatability is a fuzzy concept--, it does make sense to speak of an 
objective probability, which can be identified with the relative 
frequency. Subjective probabilities that agree with the objective 
probability (frequency) can be said to be correct and models that give 
the correct probability for each scenario will be considered to be 
calibrated.


If we accept that snow is a repeatable event, then all the individuals 
should agree on the same probability. If it is not repeatable, P(S|70%) may 
be different for each individual, because they have different priors and 
perhaps different likelihoods, or even different structures in their models.


---

Coming back to the main problem, I agree again with Peter Szolovits in 
making the distinction between likelihood and posterior probability.


a) If I take the TWC forecast as the posterior probability returned by a 
calibrated model (the TWC's model), then I accept that the probability 
of snow is 70%.


b) However, if I take 70% probability of snow as a finding to be 
introduced in my model, then I should combine my prior with the 
likelihood ratio associated with this finding, and after some 
computation I will arrive at P(S|70%) = 0.70. [Otherwise, I would be 
incoherent with my assumption that the model used by the TWC is calibrated.]


Of course, if I think that the TWC's model is calibrated, I do not need 
to build a model of TWC's reports that will return as an output the same 
probability estimate that I introduce as an input.


Therefore I see no contradiction in the Bayesian framework.

Best regards,
  Javier

-
Francisco Javier Diez          Phone: (+34) 91.398.71.61
Dpto. Inteligencia Artificial  Fax:   (+34) 91.398.88.95
UNED. c/Juan del Rosal, 16     http://www.ia.uned.es/~fjdiez
28040 Madrid. Spain            http://www.cisiad.uned.es


Re: [UAI] A perplexing problem - Last Version

2009-02-21 Thread Jean-Louis GOLMARD


Dear Paul,


Since my last response was part of the consensus, I will again give you my  
response to this new problem.


The principle of my solution is always the same: to try to build a  
probabilistic model.



a) I first reformulate the problem in notation more familiar to me,  
with a diagnosis D and two signs S1 and S2.


The first report is S1, the second one is S2, and the diagnosis is  
"location Y for X".


Your data are: P(D|S1) = p1 and P(D|S2) = p2.
Your question is: P(D|S1,S2) = p12 = ?

b) The problem is clearly underdetermined, so I would make 2 assumptions:

A1: S1 and S2 are independent conditionally on D and on not-D (it is  
possible not to make this assumption, but you then have to give a  
value for the dependence).


A2: P(D) = P(not D) = 0.5 (this simplifies the computations, but  
the solution is also easy to compute if you give another value for  
P(D)).



c) Now the solution is straightforward.

Let's denote OR(1,2) = P(D|S1,S2)/P(not D|S1,S2),
 and OR(i) = P(D|Si)/P(not D|Si), i = 1,2.

By Bayes' formula, with assumptions A1 and A2, we have:

OR(1,2) = OR(1) OR(2)

and p12 = OR(1) OR(2) / (1 + OR(1) OR(2))


I don't know if this is the answer you wanted, since it is a very simple one...
I think it is the solution based on the simplest probabilistic computations.
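
For concreteness, here is a minimal Python sketch of this combination rule. It
is only a sketch under assumptions A1 and A2; the function name and the input
values 0.9 and 0.6 are my own illustrations, not part of Paul's problem.

  def combine(p1, p2):
      # Combine P(D|S1)=p1 and P(D|S2)=p2 into P(D|S1,S2), assuming
      # S1 and S2 are conditionally independent given D and given not-D (A1)
      # and the prior is uniform, P(D) = 0.5 (A2).
      odds1 = p1 / (1 - p1)          # OR(1) = P(D|S1) / P(not D|S1)
      odds2 = p2 / (1 - p2)          # OR(2) = P(D|S2) / P(not D|S2)
      odds12 = odds1 * odds2         # OR(1,2) = OR(1) * OR(2)
      return odds12 / (1 + odds12)   # convert posterior odds back to p12

  print(combine(0.9, 0.6))           # e.g. a "high" and a "moderate" report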

Sincerely yours,


Jean-Louis





Quoting Lehner, Paul E. pleh...@mitre.org:


Austin, Jean-Louis, Konrad, & Peter

Thank you for your responses.  They are very helpful.

Your consensus view seems to be that when receiving evidence in the   
form of a single calibrated judgment, one should not update personal  
 judgments by using Bayes rule.  This seems incoherent (from a  
strict  Bayesian perspective) unless perhaps one explicitly  
represents the  overlap of knowledge with the source of the  
calibrated judgment  (which may not be practical.)


Unfortunately this is the conclusion I was afraid we would reach,   
because it leads me to be concerned that I have been giving some bad  
 advice about applying Bayesian reasoning to some very practical   
problems.


Here is a simple example.

Analyst A is trying to determine whether X is at location Y.  She   
has two principal evidence items.  The first is a report from a   
spectral analyst who concludes, based on the match to the expected   
spectral signature, "I conclude with high confidence that X is at   
location Y."  The second is a report from a chemical   
analyst who asserts, based on the expected chemical composition   
that is typically associated with X, "I conclude with moderate   
confidence that X is at location Y."  How should analyst A approach   
her analysis?


Previously I would have suggested something like this.  Consider   
each evidence item in turn.  Assume that X is at location Y.  What   
are the chances that you would receive a 'high confidence' report   
from the spectral analyst, ... a report of 'moderate confidence'   
from the chemical analyst?  Now assume X is not at location Y, ...   
In other words I would have led the analyst toward some simple   
instantiation of Bayes inference.


But clearly the spectral and chemical analysts are using more than   
just the sensor data to make their confidence assessments.  In part   
they are using the same background knowledge that Analyst A has.
Furthermore, both the spectral and chemical analysts are good at   
their jobs; their confidence judgments are reasonably calibrated.
This is just like the TWC problem, only more complex.


So if Bayesian inference is inappropriate for the TWC problem, is it  
 also inappropriate here?  Is my advice bad?


Paul


From: uai-boun...@engr.orst.edu [mailto:uai-boun...@engr.orst.edu]   
On Behalf Of Lehner, Paul E.

Sent: Monday, February 16, 2009 11:40 AM
To: uai@ENGR.ORST.EDU
Subject: Re: [UAI] A perplexing problem - Version 2

UAI members

Thank you for your many responses.  You've provided at least 5   
distinct answers which I summarize below.

(Answer 5 below is clearly correct, but leads me to a new quandary.)



Answer 1:  "70% chance of snow" is just a label and conceptually   
should be treated as "XYZ".  In other words don't be fooled by the   
semantics inside the quotes.




My response: Technically correct, but intuitively unappealing.
Although I often counsel people on how often intuition is   
misleading, I just couldn't ignore my intuition on this one.






Answer 2: The forecast "70% chance of snow" is ill-defined.



My response:  I agree, but in this case I was more concerned about   
the conflict between math and intuition.  I would be willing to   
accept any well-defined forecasting statement.






Answer 3: The reference set "winter days" is the wrong reference set.



My response: I was just trying to give some justification to my   
subjective prior.  But this answer does point out a distinction   
between base rates and subjective priors.  This distinction relates   
to my new quandary below so please read on.

Re: [UAI] A perplexing problem

2009-02-21 Thread Alexandre Saidi




Dear All,
We cannot directly compare the "70%" of the TWC prediction with Paul's 34%,
simply because, as Paul assumed, TWC issues its "70%" prediction on only 1
day in 10 when it snows (a 10% true-positive rate, compared with a 1%
false-positive rate)!

Given the uncertainty of TWC's "70%" predictions, Paul's 34% would
have its own uncertainty (belief) that will be observable from the
feedback.

One is often surprised for the same reasons as Paul. That was my case
with the "Alarm/Earthquake" problem found in many sources, e.g. P. Norvig
& S. Russell.

regards.

Alex
On 16/02/09 20:49, Agosta, John M wrote:

  All -

The "Bayes ratio" (or odds ratio) interpretation of Bayes rule is enlightening, since it reveals the strength of evidence in a way not clear from just looking at the probabilities. 

A 5% prior chance becomes odds of 1:19 against snow. 

With Paul's assigned sensitivity (probability of snow forecast given it will snow) of 10%, the evidence of a positive forecast has an odds ratio of 10:1 in favor of snow. Expressed, for instance, on a scale suggested by Kass & Raftery, this counts as not particularly strong positive evidence. 

Not surprisingly the combination of 1:19 prior against and a 10:1 odds for results in less than even odds for snow. 

___
John Mark Agosta, Intel Research
 
 

-Original Message-
From: uai-boun...@engr.orst.edu [mailto:uai-boun...@engr.orst.edu] On Behalf Of Paul Snow
Sent: Monday, February 16, 2009 3:24 AM
To: uai@engr.orst.edu
Subject: Re: [UAI] A perplexing problem

Dear Paul,

If the Weather Channel is Bayesian, then say they used the same empirical
prior that you did (5%), and they observed evidence E to arrive at
their 70% for the snow S given E.

Their Bayes' ratio is 44.3. Yours, effectively, is 10 (assuming that
the event "They say 70%" coincides with "They observe evidence with a
Bayes ratio in the forties" - that is, they agree with you about the
empirical prior and are Bayesian).

So, having effectively disagreed with them about the import of what
they observed, you also disagreed with them about the conclusion.

Hope that helps,

Paul

2009/2/13 Lehner, Paul E. pleh...@mitre.org:
  
  
I was working on a set of instructions to teach simple
two-hypothesis/one-evidence Bayesian updating.  I came across a problem that
perplexed me.  This can't be a new problem so I'm hoping someone will clear
things up for me.



The problem

1.  Question: What is the chance that it will snow next Monday?

2.  My prior: 5% (because it typically snows about 5% of the days during
the winter)

3.  Evidence: The Weather Channel (TWC) says there is a "70% chance of
snow" on Monday.

4.  TWC forecasts of snow are calibrated.



My initial answer is to claim that this problem is underspecified.  So I add



5.  On winter days that it snows, TWC forecasts "70% chance of snow"
about 10% of the time

6.  On winter days that it does not snow, TWC forecasts "70% chance of
snow" about 1% of the time.



So now from P(S)=.05; P("70%"|S)=.10; and P("70%"|S)=.01 I apply Bayes rule
and deduce my posterior probability to be P(S|"70%") = .3448.
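
The update itself is a one-line computation; a quick Python check of the
.3448 figure, using only the numbers in 2, 5 and 6 above:

  p_s, p_f_s, p_f_ns = 0.05, 0.10, 0.01   # P(S), P("70%"|S), P("70%"|~S)
  posterior = p_s * p_f_s / (p_s * p_f_s + (1 - p_s) * p_f_ns)
  print(posterior)                        # 0.3448...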



Now it seems particularly odd that I would conclude there is only a 34%
chance of snow when TWC says there is a 70% chance.  TWC knows so much more
about weather forecasting than I do.



What am I doing wrong?







Paul E. Lehner, Ph.D.

Consulting Scientist

The MITRE Corporation

(703) 983-7968

pleh...@mitre.org




  

  


-- 
Alexandre Saidi
Maître de Conférences
École Centrale de Lyon - Dép. MI
Tél : 0472186530, Fax : 0472186443






Re: [UAI] A perplexing problem - Version 2

2009-02-21 Thread Konrad Scheffler
I agree this is problematic - the notion of calibration (i.e. that you can 
say P(S|70%) = .7) does not really make sense in the subjective Bayesian 
framework where different individuals are working with different priors, 
because different individuals will have different posteriors and they 
can't all be equal to 0.7. Instead, you need a notion of calibration with 
respect to a particular prior.

Hopefully the TWC forecasts are calibrated with respect to their own prior 
(otherwise they are reporting something other than what they believe). In 
this case your subjective posterior P(S|70%) will only be equal to .7 if 
your prior happens to be identical to theirs.

Hope this helps,
Konrad


 Consider the following revised version.
 
 
 The TWC problem
 
 1.  Question: What is the chance that it will snow next Monday?
 
 2.  My subjective prior: 5%
 
 3.  Evidence: The Weather Channel (TWC) says there is a 70% chance of 
 snow on Monday.
 
 4.  TWC forecasts of snow are calibrated.
 
 
 Notice that I did not justify my subjective prior with a base rate.
 
 From P(S)=.05 and P(S|"70%") = .7 I can deduce that P("70%"|S)/P("70%"|~S) = 
 44.33.  So now I can "deduce" from my prior and evidence odds that 
 P(S|"70%") = .7.  But this seems silly.  Suppose my subjective prior was 
 20%.  Then P("70%"|S)/P("70%"|~S) = 9.3 and again I can "deduce" 
 P(S|"70%")=.7.
 
 My latest quandary is that it seems odd that my subjective conditional 
 probability of the evidence should depend on my subjective prior.  This may 
 be coherent, but is too counter intuitive for me to easily accept.  It would 
 also suggest that when receiving a single evidence item in the form of a 
 judgment from a calibrated source, my posterior belief does not depend on my 
 prior belief.   In effect, when forecasting snow, one should ignore priors 
 and listen to The Weather Channel.
 
 Is this correct?  If so, does this bother anyone else?


Re: [UAI] A perplexing problem - Version 2

2009-02-21 Thread Peter Szolovits
Paul, your restated problem reminds me of one I encountered in  
medicine in the 1980's.  When an internist sends a patient's sample to  
a pathologist and the pathologist says 90% chance of cancer, how is  
the internist supposed to interpret that answer in light of his own  
priors?  Empirically, what we discovered is that pathologists don't  
(or at least didn't) have a clear methodology for addressing such  
problems.  Some tried to be scrupulously untainted by any evidence  
about the patient other than the submitted sample, whereas others  
would read the entire chart to understand the context in which they  
were interpreting the sample.  My assumption from this is that the  
first group were trying to judge something like a likelihood,  
conditional probability, or conditional odds, whereas the second were  
giving posteriors.


If TWC is giving posteriors, integrating everything known about  
weather in your area based on their extensive professional knowledge  
(which presumably includes all the almanac information that goes into  
your prior judgments), then you should simply accept their answer.  
This is like the second group of pathologists.  If, however, they are  
giving something like conditional odds (how much more likely would  
this weather pattern be if it turns out to snow Monday than if it does  
not), then it's most appropriate to do your Bayesian combination.
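
The two readings lead to very different updates. A small Python sketch of the
contrast (the numbers are illustrative only, and interpreting the report as
conditional odds of report/(1-report) is an assumption of mine, not something
the pathologists or TWC specified):

  my_prior = 0.05
  report = 0.70

  # Reading 1: the report is a posterior from a source whose information
  # subsumes mine, so I simply adopt it.
  posterior_as_posterior = report

  # Reading 2: the report encodes conditional odds, i.e. the evidence is
  # report/(1-report) times more likely under snow than under no snow.
  lr = report / (1 - report)
  prior_odds = my_prior / (1 - my_prior)
  post_odds = prior_odds * lr
  posterior_as_likelihood = post_odds / (1 + post_odds)

  print(posterior_as_posterior)    # 0.70
  print(posterior_as_likelihood)   # about 0.11 -- very different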


--Pete Szolovits

On Feb 16, 2009, at 11:39 AM, Lehner, Paul E. wrote:

...

Consider the following revised version.

The TWC problem
1.  Question: What is the chance that it will snow next Monday?
2.  My subjective prior: 5%
3.  Evidence: The Weather Channel (TWC) says there is a “70%  
chance of snow” on Monday.

4.  TWC forecasts of snow are calibrated.

Notice that I did not justify my subjective prior with a base rate.

From P(S)=.05 and P(S|”70%”) = .7 I can deduce that P(“70%”|S)/ 
P(“70%”|~S) = 44.33.  So now I can “deduce” from my prior and  
evidence odds that P(S|”70%”) = .7.  But this seems silly.  Suppose  
my subjective prior was 20%.  Then P(“70%”|S)/P(“70%”|~S) = 9.3  
and again I can “deduce” P(S|”70%”)=.7.


My latest quandary is that it seems odd that my subjective  
conditional probability of the evidence should depend on my  
subjective prior.  This may be coherent, but is too counter  
intuitive for me to easily accept.  It would also suggest that when  
receiving a single evidence item in the form of a judgment from a  
calibrated source, my posterior belief does not depend on my prior  
belief.   In effect, when forecasting snow, one should ignore priors  
and listen to The Weather Channel.


Is this correct?  If so, does this bother anyone else?

paull


From: uai-boun...@engr.orst.edu [mailto:uai-boun...@engr.orst.edu]  
On Behalf Of Lehner, Paul E.

Sent: Friday, February 13, 2009 4:29 PM
To: uai@ENGR.ORST.EDU
Subject: [UAI] A perplexing problem

I was working on a set of instructions to teach simple two- 
hypothesis/one-evidence Bayesian updating.  I came across a problem  
that perplexed me.  This can’t be a new problem so I’m hoping  
someone will clear things up for me.


The problem
5.  Question: What is the chance that it will snow next Monday?
6.  My prior: 5% (because it typically snows about 5% of the  
days during the winter)
7.  Evidence: The Weather Channel (TWC) says there is a “70%  
chance of snow” on Monday.

8.  TWC forecasts of snow are calibrated.

My initial answer is to claim that this problem is underspecified.   
So I add


9.  On winter days that it snows, TWC forecasts “70% chance of  
snow” about 10% of the time
10.   On winter days that it does not snow, TWC forecasts “70%  
chance of snow” about 1% of the time.


So now from P(S)=.05; P(“70%”|S)=.10; and P(“70%”|S)=.01 I apply  
Bayes rule and deduce my posterior probability to be P(S|”70%”) = .3448.


Now it seems particularly odd that I would conclude there is only a  
34% chance of snow when TWC says there is a 70% chance.  TWC knows  
so much more about weather forecasting than I do.


What am I doing wrong?



Paul E. Lehner, Ph.D.
Consulting Scientist
The MITRE Corporation
(703) 983-7968
pleh...@mitre.org


Re: [UAI] A perplexing problem - Last Version

2009-02-21 Thread Lehner, Paul E.
Austin, Jean-Louis, Konrad, & Peter

Thank you for your responses.  They are very helpful.

Your consensus view seems to be that when receiving evidence in the form of a 
single calibrated judgment, one should not update personal judgments by using 
Bayes rule.  This seems incoherent (from a strict Bayesian perspective) unless 
perhaps one explicitly represents the overlap of knowledge with the source of 
the calibrated judgment (which may not be practical.)

Unfortunately this is the conclusion I was afraid we would reach, because it 
leads me to be concerned that I have been giving some bad advice about applying 
Bayesian reasoning to some very practical problems.

Here is a simple example.

Analyst A is trying to determine whether X is at location Y.  She has two 
principal evidence items.  The first is a report from a spectral analyst who 
concludes, based on the match to the expected spectral signature, "I conclude 
with high confidence that X is at location Y."  The second is a report 
from a chemical analyst who asserts, based on the expected chemical 
composition that is typically associated with X, "I conclude with moderate 
confidence that X is at location Y."  How should analyst A approach her 
analysis?

Previously I would have suggested something like this.  Consider each evidence 
item in turn.  Assume that X is at location Y.  What are the chances that you 
would receive a 'high confidence' report from the spectral analyst, ... a 
report of 'moderate confidence' from the chemical analyst?  Now assume X is not 
at location Y, ...  In other words I would have led the analyst toward some 
simple instantiation of Bayes inference.

But clearly the spectral and chemical analysts are using more than just the 
sensor data to make their confidence assessments.  In part they are using the 
same background knowledge that Analyst A has.  Furthermore, both the spectral 
and chemical analysts are good at their jobs; their confidence judgments are 
reasonably calibrated.  This is just like the TWC problem, only more complex.

So if Bayesian inference is inappropriate for the TWC problem, is it also 
inappropriate here?  Is my advice bad?

Paul


From: uai-boun...@engr.orst.edu [mailto:uai-boun...@engr.orst.edu] On Behalf Of 
Lehner, Paul E.
Sent: Monday, February 16, 2009 11:40 AM
To: uai@ENGR.ORST.EDU
Subject: Re: [UAI] A perplexing problem - Version 2

UAI members

Thank you for your many responses.  You've provided at least 5 distinct answers 
which I summarize below.
(Answer 5 below is clearly correct, but leads me to a new quandary.)



Answer 1:  "70% chance of snow" is just a label and conceptually should be 
treated as "XYZ".  In other words don't be fooled by the semantics inside the 
quotes.



My response: Technically correct, but intuitively unappealing.  Although I 
often counsel people on how often intuition is misleading, I just couldn't 
ignore my intuition on this one.





Answer 2: The forecast "70% chance of snow" is ill-defined.



My response:  I agree, but in this case I was more concerned about the conflict 
between math and intuition.  I would be willing to accept any well-defined 
forecasting statement.





Answer 3: The reference set "winter days" is the wrong reference set.



My response: I was just trying to give some justification to my subjective 
prior.  But this answer does point out a distinction between base rates and 
subjective priors.  This distinction relates to my new quandary below so please 
read on.





Answer 4: The problem inherently requires more variables and cannot be treated 
as a simple single-evidence, two-hypothesis problem.



My response: Actually I was concerned that this was the answer, as it may have 
implied that using Bayes to evaluate a single evidence item was impractical for 
the community of analysts I'm working with.   Fortunately ...





Answer 5:  The problem statement was inherently incoherent.  Many of you 
pointed out that if TWC predicts "70% snow" on 10% of the days that it snows 
and on 1% of the days that it does not snow, with a 5% base rate for snow, then 
P("70% snow" & snow) = .005 and P("70% snow" & ~snow) = .0095.  So for the 
days that TWC says "70% snow" it actually snows on a little over 34% of the 
days.  Clearly my assertion that TWC is calibrated is incoherent relative to 
the rest of the problem statement.  The problem was not underspecified, it was 
overspecified.  (I hope I did the math correctly.)



My response: Thanks for pointing this out.  I'm embarrassed that I didn't 
notice this myself.  Though this clearly solves my initial concern it leads me 
to an entirely new quandary.





Consider the following revised version.


The TWC problem

1.  Question: What is the chance that it will snow next Monday?

2.  My subjective prior: 5%

3.  Evidence: The Weather Channel (TWC) says there is a 70% chance of 
snow on Monday.

4.  TWC forecasts of snow are calibrated.


Notice that I did not justify my subjective prior with a base rate.

Re: [UAI] A perplexing problem - Version 2

2009-02-21 Thread Jean-Louis GOLMARD


This time the probabilistic model is underspecified, since it gives only 2  
probabilities,
but this does not matter for answering the question, since the answer to  
question 1 is already contained in propositions 3 and 4:
if TWC forecasts are calibrated, then P(S|"70%") = 70%, and prior 2 plays  
no role.


You found this yourself: with 2 different prior values, you always find 0.7.

I think that you should not try to make computations with the prior,  
since you already have the answer in the problem formulation.


Sincerely yours

Jean-Louis







Quoting Lehner, Paul E. pleh...@mitre.org:


UAI members

Thank you for your many responses.  You've provided at least 5   
distinct answers which I summarize below.

(Answer 5 below is clearly correct, but leads me to a new quandary.)



Answer 1:  "70% chance of snow" is just a label and conceptually   
should be treated as "XYZ".  In other words don't be fooled by the   
semantics inside the quotes.




My response: Technically correct, but intuitively unappealing.
Although I often counsel people on how often intuition is   
misleading, I just couldn't ignore my intuition on this one.






Answer 2: The forecast "70% chance of snow" is ill-defined.



My response:  I agree, but in this case I was more concerned about   
the conflict between math and intuition.  I would be willing to   
accept any well-defined forecasting statement.






Answer 3: The reference set "winter days" is the wrong reference set.



My response: I was just trying to give some justification to my   
subjective prior.  But this answer does point out a distinction   
between base rates and subjective priors.  This distinction relates   
to my new quandary below so please read on.






Answer 4: The problem inherently requires more variables and cannot   
be treated as a simple single-evidence, two-hypothesis problem.




My response: Actually I was concerned that this was the answer, as   
it may have implied that using Bayes to evaluate a single evidence   
item was impractical for the community of analysts I'm working with.   
Fortunately ...






Answer 5:  The problem statement was inherently incoherent.  Many of  
you pointed out that if TWC predicts "70% snow" on 10% of the days   
that it snows and on 1% of the days that it does not snow, with a 5%  
base rate for snow, then P("70% snow" & snow) = .005 and P("70%   
snow" & ~snow) = .0095.  So for the days that TWC says "70% snow" it  
actually snows on a little over 34% of the days.  Clearly my   
assertion that TWC is calibrated is incoherent relative to the rest   
of the problem statement.  The problem was not underspecified, it   
was overspecified.  (I hope I did the math correctly.)




My response: Thanks for pointing this out.  I'm embarrassed that I   
didn't notice this myself.  Though this clearly solves my initial   
concern it leads me to an entirely new quandary.






Consider the following revised version.


The TWC problem

1.  Question: What is the chance that it will snow next Monday?

2.  My subjective prior: 5%

3.  Evidence: The Weather Channel (TWC) says there is a 70%   
chance of snow on Monday.


4.  TWC forecasts of snow are calibrated.


Notice that I did not justify my subjective prior with a base rate.

From P(S)=.05 and P(S|"70%") = .7 I can deduce that   
P("70%"|S)/P("70%"|~S) = 44.33.  So now I can "deduce" from my   
prior and evidence odds that P(S|"70%") = .7.  But this seems   
silly.  Suppose my subjective prior was 20%.  Then   
P("70%"|S)/P("70%"|~S) = 9.3 and again I can "deduce"   
P(S|"70%")=.7.


My latest quandary is that it seems odd that my subjective   
conditional probability of the evidence should depend on my   
subjective prior.  This may be coherent, but is too counter   
intuitive for me to easily accept.  It would also suggest that when   
receiving a single evidence item in the form of a judgment from a   
calibrated source, my posterior belief does not depend on my prior   
belief.   In effect, when forecasting snow, one should ignore priors  
 and listen to The Weather Channel.


Is this correct?  If so, does this bother anyone else?

paull


From: uai-boun...@engr.orst.edu [mailto:uai-boun...@engr.orst.edu]   
On Behalf Of Lehner, Paul E.

Sent: Friday, February 13, 2009 4:29 PM
To: uai@ENGR.ORST.EDU
Subject: [UAI] A perplexing problem

I was working on a set of instructions to teach simple   
two-hypothesis/one-evidence Bayesian updating.  I came across a   
problem that perplexed me.  This can't be a new problem so I'm   
hoping someone will clear things up for me.


The problem

5.  Question: What is the chance that it will snow next Monday?

6.  My prior: 5% (because it typically snows about 5% of the   
days during the winter)


7.  Evidence: The Weather Channel (TWC) says there is a 70%   
chance of snow on Monday.


8.  TWC forecasts of snow are calibrated.

My initial answer is to claim that this problem is underspecified.  So I add


9.  On winter days that it snows, TWC forecasts "70% chance of snow" about 10% of the time

Re: [UAI] A perplexing problem

2009-02-18 Thread Jean-Louis GOLMARD

Dear Paul,


If you consider the TWC prediction as part of the probabilistic model,  
you give 4 probabilities for a model which needs only 3  
probabilities to be specified.
(The model is given by the 2-way table Snow/not snow versus  
prediction of "70% snow"/no prediction of "70% snow".)


The problem is that in this model the 4 numbers you give are  
inconsistent, so, when you accept
probabilities 2, 5, and 6, you find that P(S | prediction of snow is  
70%) = 0.34, which is not consistent with propositions 3 and 4.



If you accept probabilities 2, 3 and 5 instead, for example, you find that  
P(prediction of snow = 70% | not snow) = 0.002, and not 0.01 as given  
in the problem.
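
A quick Python check of both numbers (my restatement of the computation
above):

  p_s, p_f_s, p_f_ns = 0.05, 0.10, 0.01   # probabilities 2, 5 and 6

  # Accepting 2, 5 and 6: the posterior is 0.34, not the 0.70 of 3 and 4.
  post = p_s * p_f_s / (p_s * p_f_s + (1 - p_s) * p_f_ns)
  print(post)                             # 0.3448...

  # Accepting 2, 3 and 5 instead: solve for P(prediction | not snow) such
  # that the posterior equals 0.70.
  joint_s = p_s * p_f_s                   # P(prediction & snow) = 0.005
  joint_ns = joint_s * (0.30 / 0.70)      # forced by posterior odds of 7:3
  print(joint_ns / (1 - p_s))             # 0.00226 -- about 0.002, not 0.01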


Hope that helps also,

Jean-Louis















2009/2/13 Lehner, Paul E. pleh...@mitre.org:

I was working on a set of instructions to teach simple
two-hypothesis/one-evidence Bayesian updating.  I came across a problem that
perplexed me.  This can't be a new problem so I'm hoping someone will clear
things up for me.



The problem

1.  Question: What is the chance that it will snow next Monday?

2.  My prior: 5% (because it typically snows about 5% of the days during
the winter)

3.  Evidence: The Weather Channel (TWC) says there is a 70% chance of
snow on Monday.

4.  TWC forecasts of snow are calibrated.



My initial answer is to claim that this problem is underspecified.  So I add



5.  On winter days that it snows, TWC forecasts 70% chance of snow
about 10% of the time

6.  On winter days that it does not snow, TWC forecasts 70% chance of
snow about 1% of the time.



So now from P(S)=.05; P(70%|S)=.10; and P(70%|S)=.01 I apply Bayes rule
and deduce my posterior probability to be P(S|70%) = .3448.



Now it seems particularly odd that I would conclude there is only a 34%
chance of snow when TWC says there is a 70% chance.  TWC knows so much more
about weather forecasting than I do.



What am I doing wrong?







Paul E. Lehner, Ph.D.

Consulting Scientist

The MITRE Corporation

(703) 983-7968

pleh...@mitre.org



Re: [UAI] A perplexing problem - Version 2

2009-02-18 Thread Lehner, Paul E.
UAI members

Thank you for your many responses.  You've provided at least 5 distinct answers 
which I summarize below.
(Answer 5 below is clearly correct, but leads me to a new quandary.)



Answer 1:  "70% chance of snow" is just a label and conceptually should be 
treated as "XYZ".  In other words don't be fooled by the semantics inside the 
quotes.



My response: Technically correct, but intuitively unappealing.  Although I 
often counsel people on how often intuition is misleading, I just couldn't 
ignore my intuition on this one.





Answer 2: The forecast "70% chance of snow" is ill-defined.



My response:  I agree, but in this case I was more concerned about the conflict 
between math and intuition.  I would be willing to accept any well-defined 
forecasting statement.





Answer 3: The reference set "winter days" is the wrong reference set.



My response: I was just trying to give some justification to my subjective 
prior.  But this answer does point out a distinction between base rates and 
subjective priors.  This distinction relates to my new quandary below so please 
read on.





Answer 4: The problem inherently requires more variables and cannot be treated 
as a simple single-evidence, two-hypothesis problem.



My response: Actually I was concerned that this was the answer, as it may have 
implied that using Bayes to evaluate a single evidence item was impractical for 
the community of analysts I'm working with.   Fortunately ...





Answer 5:  The problem statement was inherently incoherent.  Many of you 
pointed out that if TWC predicts "70% snow" on 10% of the days that it snows 
and on 1% of the days that it does not snow, with a 5% base rate for snow, then 
P("70% snow" & snow) = .005 and P("70% snow" & ~snow) = .0095.  So for the 
days that TWC says "70% snow" it actually snows on a little over 34% of the 
days.  Clearly my assertion that TWC is calibrated is incoherent relative to 
the rest of the problem statement.  The problem was not underspecified, it was 
overspecified.  (I hope I did the math correctly.)
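
A quick Python check of the arithmetic in Answer 5:

  p_s = 0.05
  joint_snow   = p_s * 0.10          # P("70% snow" & snow)  = 0.005
  joint_nosnow = (1 - p_s) * 0.01    # P("70% snow" & ~snow) = 0.0095
  print(joint_snow / (joint_snow + joint_nosnow))   # 0.3448, "a little over 34%"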



My response: Thanks for pointing this out.  I'm embarrassed that I didn't 
notice this myself.  Though this clearly solves my initial concern it leads me 
to an entirely new quandary.





Consider the following revised version.


The TWC problem

1.  Question: What is the chance that it will snow next Monday?

2.  My subjective prior: 5%

3.  Evidence: The Weather Channel (TWC) says there is a 70% chance of 
snow on Monday.

4.  TWC forecasts of snow are calibrated.


Notice that I did not justify my subjective prior with a base rate.

From P(S)=.05 and P(S|"70%") = .7 I can deduce that P("70%"|S)/P("70%"|~S) = 
44.33.  So now I can "deduce" from my prior and evidence odds that P(S|"70%") 
= .7.  But this seems silly.  Suppose my subjective prior was 20%.  Then 
P("70%"|S)/P("70%"|~S) = 9.3 and again I can "deduce" P(S|"70%")=.7.
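
The deduction above amounts to solving for the likelihood ratio that
calibration forces, given the prior. A minimal Python sketch (the function
name is mine):

  def implied_lr(prior, reported=0.70):
      # Likelihood ratio P("70%"|S)/P("70%"|~S) required for the posterior,
      # starting from `prior`, to land exactly on the reported 70%.
      posterior_odds = reported / (1 - reported)
      prior_odds = prior / (1 - prior)
      return posterior_odds / prior_odds

  print(implied_lr(0.05))   # 44.33...
  print(implied_lr(0.20))   # 9.33...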

My latest quandary is that it seems odd that my subjective conditional 
probability of the evidence should depend on my subjective prior.  This may be 
coherent, but is too counter intuitive for me to easily accept.  It would also 
suggest that when receiving a single evidence item in the form of a judgment 
from a calibrated source, my posterior belief does not depend on my prior 
belief.   In effect, when forecasting snow, one should ignore priors and listen 
to The Weather Channel.

Is this correct?  If so, does this bother anyone else?

paull


From: uai-boun...@engr.orst.edu [mailto:uai-boun...@engr.orst.edu] On Behalf Of 
Lehner, Paul E.
Sent: Friday, February 13, 2009 4:29 PM
To: uai@ENGR.ORST.EDU
Subject: [UAI] A perplexing problem

I was working on a set of instructions to teach simple 
two-hypothesis/one-evidence Bayesian updating.  I came across a problem that 
perplexed me.  This can't be a new problem so I'm hoping someone will clear 
things up for me.

The problem

5.  Question: What is the chance that it will snow next Monday?

6.  My prior: 5% (because it typically snows about 5% of the days during 
the winter)

7.  Evidence: The Weather Channel (TWC) says there is a 70% chance of 
snow on Monday.

8.  TWC forecasts of snow are calibrated.

My initial answer is to claim that this problem is underspecified.  So I add


9.  On winter days that it snows, TWC forecasts 70% chance of snow about 
10% of the time

10.   On winter days that it does not snow, TWC forecasts 70% chance of snow 
about 1% of the time.

So now from P(S)=.05; P(70%|S)=.10; and P(70%|S)=.01 I apply Bayes rule and 
deduce my posterior probability to be P(S|70%) = .3448.

Now it seems particularly odd that I would conclude there is only a 34% chance 
of snow when TWC says there is a 70% chance.  TWC knows so much more about 
weather forecasting than I do.

What am I doing wrong?



Paul E. Lehner, Ph.D.
Consulting Scientist
The MITRE Corporation
(703) 983-7968
pleh...@mitre.org

Re: [UAI] A perplexing problem

2009-02-18 Thread Agosta, John M
All -

The Bayes ratio (or odds ratio) interpretation of Bayes rule is enlightening, 
since it reveals the strength of evidence in a way not clear from just looking 
at the probabilities. 

A 5% prior chance becomes odds of 1:19 against snow. 

With Paul's assigned sensitivity (probability of snow forecast given it will 
snow) of 10%, the evidence of a positive forecast has an odds ratio of 10:1 in 
favor of snow. Expressed, for instance, on a scale suggested by Kass & Raftery, 
this counts as not particularly strong positive evidence. 

Not surprisingly the combination of 1:19 prior against and a 10:1 odds for 
results in less than even odds for snow. 
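
A quick Python check of the same calculation in odds form:

  prior_odds = 0.05 / 0.95                      # 1:19 against snow
  bayes_ratio = 0.10 / 0.01                     # 10:1 in favor of snow
  posterior_odds = prior_odds * bayes_ratio     # 10:19, still less than even
  print(posterior_odds / (1 + posterior_odds))  # 0.3448...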

___
John Mark Agosta, Intel Research
 
 

-Original Message-
From: uai-boun...@engr.orst.edu [mailto:uai-boun...@engr.orst.edu] On Behalf Of 
Paul Snow
Sent: Monday, February 16, 2009 3:24 AM
To: uai@engr.orst.edu
Subject: Re: [UAI] A perplexing problem

Dear Paul,

If the Weather Channel is Bayesian, then say they used the same empirical
prior that you did (5%), and they observed evidence E to arrive at
their 70% for the snow S given E.

Their Bayes' ratio is 44.3. Yours, effectively, is 10 (assuming that
the event "They say 70%" coincides with "They observe evidence with a
Bayes ratio in the forties" - that is, they agree with you about the
empirical prior and are Bayesian).

So, having effectively disagreed with them about the import of what
they observed, you also disagreed with them about the conclusion.

Hope that helps,

Paul

2009/2/13 Lehner, Paul E. pleh...@mitre.org:
 I was working on a set of instructions to teach simple
 two-hypothesis/one-evidence Bayesian updating.  I came across a problem that
 perplexed me.  This can't be a new problem so I'm hoping someone will clear
 things up for me.



 The problem

 1.  Question: What is the chance that it will snow next Monday?

 2.  My prior: 5% (because it typically snows about 5% of the days during
 the winter)

 3.  Evidence: The Weather Channel (TWC) says there is a 70% chance of
 snow on Monday.

 4.  TWC forecasts of snow are calibrated.



 My initial answer is to claim that this problem is underspecified.  So I add



 5.  On winter days that it snows, TWC forecasts 70% chance of snow
 about 10% of the time

 6.  On winter days that it does not snow, TWC forecasts 70% chance of
 snow about 1% of the time.



 So now from P(S)=.05; P(70%|S)=.10; and P(70%|S)=.01 I apply Bayes rule
 and deduce my posterior probability to be P(S|70%) = .3448.



 Now it seems particularly odd that I would conclude there is only a 34%
 chance of snow when TWC says there is a 70% chance.  TWC knows so much more
 about weather forecasting than I do.



 What am I doing wrong?







 Paul E. Lehner, Ph.D.

 Consulting Scientist

 The MITRE Corporation

 (703) 983-7968

 pleh...@mitre.org



Re: [UAI] A perplexing problem

2009-02-18 Thread Francisco Javier Diez

Peter Szolovits wrote:

If TWC is really calibrated, then your conditions 5 and 6 are false, no?


I agree with Peter's solution. If I build a model for this problem, it 
must contain at least two variables: Snow and TWC_report. According to 
my model, the TWC forecasts are calibrated if and only if 
P(Snow=yes|TWC_report=x) = x, by definition of calibration.
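
Put operationally, that definition can be checked against observed
frequencies. A minimal Python sketch (the function and the data it expects
are hypothetical, not part of the problem):

  def is_calibrated(forecasts, outcomes, x, tol=0.01):
      # Check P(Snow=yes | TWC_report=x) ~= x on a (forecast, outcome)
      # history, where outcomes are 1 for snow and 0 for no snow.
      days = [snow for f, snow in zip(forecasts, outcomes) if f == x]
      if not days:
          return False   # no reports of x observed, nothing to check
      return abs(sum(days) / len(days) - x) <= tol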


Regards,
  Javier

-
Francisco Javier Diez          Phone: (+34) 91.398.71.61
Dpto. Inteligencia Artificial  Fax:   (+34) 91.398.88.95
UNED. c/Juan del Rosal, 16     http://www.ia.uned.es/~fjdiez
28040 Madrid. Spain            http://www.cisiad.uned.es


Re: [UAI] A perplexing problem

2009-02-16 Thread rif

1.  Note that you haven't really used the 70% at all.  You could
restate the problem with any other statement you liked in there.

2.  Your basic reasoning is correct.  However, your modelling choice
seems poor.  I would try replacing "TWC forecasts 70% chance of
snow" with "TWC forecasts 70% OR MORE chance of snow".  With this
replacement, the math is correct, but if TWC only forecasts 70% or
more chance of snow 10% of the time when it's going to snow, TWC isn't
actually good at weather forecasting.

Cheers,

rif

 I was working on a set of instructions to teach simple 
 two-hypothesis/one-evidence Bayesian updating.  I came across a problem that 
 perplexed me.  This can't be a new problem so I'm hoping someone will clear 
 things up for me.
 
 The problem
 
 1.  Question: What is the chance that it will snow next Monday?
 
 2.  My prior: 5% (because it typically snows about 5% of the days during 
 the winter)
 
 3.  Evidence: The Weather Channel (TWC) says there is a 70% chance of 
 snow on Monday.
 
 4.  TWC forecasts of snow are calibrated.
 
 My initial answer is to claim that this problem is underspecified.  So I add
 
 
 5.  On winter days that it snows, TWC forecasts 70% chance of snow 
 about 10% of the time
 
 6.  On winter days that it does not snow, TWC forecasts 70% chance of 
 snow about 1% of the time.
 
 So now from P(S)=.05; P(70%|S)=.10; and P(70%|S)=.01 I apply Bayes rule 
 and deduce my posterior probability to be P(S|70%) = .3448.
 
 Now it seems particularly odd that I would conclude there is only a 34% 
 chance of snow when TWC says there is a 70% chance.  TWC knows so much more 
 about weather forecasting than I do.
 
 What am I doing wrong?
 
 
 
 Paul E. Lehner, Ph.D.
 Consulting Scientist
 The MITRE Corporation
 (703) 983-7968
 pleh...@mitre.org


Re: [UAI] A perplexing problem

2009-02-16 Thread Marek J. Druzdzel

Paul,

I'm not aware of this being discussed anywhere, but my observation is 
that the information given makes TWC quite lousy -- the probability of 
the forecast "70% chance of snow" is much too high when there is no 
snow.  It is a very specific forecast, and I would expect this 
probability to be very small given that there is actually going to be no 
snow.  When you reduce this conditional probability, the forecast is 
going to be more along the lines that you would expect.
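
For instance, keeping P(S)=.05 and P("70%"|S)=.10 but shrinking the false
positive rate (the 0.001 value is my own illustration, not Marek's):

  p_s, p_f_s = 0.05, 0.10
  for p_f_ns in (0.01, 0.001):
      post = p_s * p_f_s / (p_s * p_f_s + (1 - p_s) * p_f_ns)
      print(p_f_ns, round(post, 3))   # 0.01 -> 0.345,  0.001 -> 0.84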


I'm attaching a GeNIe model capturing your problem.  To open it, 
download GeNIe from http://genie.sis.pitt.edu/.

Cheers,

Marek
--
Marek J. Druzdzelhttp://www.pitt.edu/~druzdzel

Lehner, Paul E. wrote:

I was working on a set of instructions to teach simple 
two-hypothesis/one-evidence Bayesian updating.  I came across a problem 
that perplexed me.  This can’t be a new problem so I’m hoping someone 
will clear things up for me.


The problem

1.  Question: What is the chance that it will snow next Monday?
2.  My prior: 5% (because it typically snows about 5% of the days 
during the winter)
3.  Evidence: The Weather Channel (TWC) says there is a “70% chance 
of snow” on Monday.

4.  TWC forecasts of snow are calibrated.

My initial answer is to claim that this problem is underspecified.  So I add

5.  On winter days that it snows, TWC forecasts “70% chance of snow” 
about 10% of the time
6.  On winter days that it does not snow, TWC forecasts “70% chance 
of snow” about 1% of the time.


So now from P(S)=.05; P(“70%”|S)=.10; and P(“70%”|S)=.01 I apply Bayes 
rule and deduce my posterior probability to be P(S|”70%”) = .3448.


Now it seems particularly odd that I would conclude there is only a 34% 
chance of snow when TWC says there is a 70% chance.  TWC knows so much 
more about weather forecasting than I do.


What am I doing wrong?  


Paul E. Lehner, Ph.D.
Consulting Scientist
The MITRE Corporation
(703) 983-7968
pleh...@mitre.org mailto:pleh...@mitre.org

<?xml version="1.0" encoding="ISO-8859-1"?>
<smile version="1.0" id="SnowForecast" numsamples="1000" discsamples="1">
	<nodes>
		<cpt id="It_Snows_on_Monday">
			<state id="Snows" />
			<state id="DoesNotSnow" />
			<probabilities>0.05 0.95</probabilities>
		</cpt>
		<cpt id="Forecast70">
			<state id="Snow70" />
			<state id="Other" />
			<parents>It_Snows_on_Monday</parents>
			<probabilities>0.1 0.9 0.01 0.99</probabilities>
		</cpt>
	</nodes>
	<extensions>
		<genie version="1.0" app="GeNIe 2.0.3306.0" name="Paul Lehner&apos;s problem" faultnameformat="nodestate">
			<node id="It_Snows_on_Monday">
				<name>It Snows on Monday</name>
				<interior color="e5f6f7" />
				<outline color="80" />
				<font color="00" name="Arial" size="10" bold="true" />
				<position>142 14 250 81</position>
				<barchart active="true" width="368" height="64" />
			</node>
			<node id="Forecast70">
				<name>The Weather Channel Forecasts 70% Chance of Snow</name>
				<interior color="e5f6f7" />
				<outline color="80" />
				<font color="00" name="Arial" size="10" bold="true" />
				<position>147 213 249 276</position>
				<barchart active="true" width="368" height="64" />
			</node>
			<textbox>
				<caption>Paul Lehner&apos;s problem &lt;pleh...@mitre.org&gt;\n\nThe problem:\n\n1. Question: What is the chance that it will snow next Monday?\n2. My prior: 5% (because it typically snows about 5% of the days during the winter)\n3. Evidence: The Weather Channel (TWC) says there is a “70% chance of snow” on Monday.\n4. TWC forecasts of snow are calibrated.\n\nMy initial answer is to claim that this problem is underspecified.  So I add\n\n5. On winter days that it snows, TWC forecasts “70% chance of snow” about 10% of the time\n6. On winter days that it does not snow, TWC forecasts “70% chance of snow” about 1% of the time.\n\nSo now from P(S)=.05; P(“70%”|S)=.10; and P(“70%”|S)=.01 I apply Bayes rule and deduce my posterior probability to be P(S|”70%”) = .3448.\n\nNow it seems particularly odd that I would conclude there is only a 34% chance of snow when TWC says there is a 70% chance.  TWC knows so much more about weather forecasting than I do.</caption>
				<font color="00" name="Arial" size="10" bold="true" />
				<position>406 16 1033 320</position>
			</textbox>
		</genie>
	</extensions>
</smile>


Re: [UAI] A perplexing problem

2009-02-16 Thread Konrad Scheffler
Hi Paul,

Your calculation is correct, but the numbers in the example are odd. If 
TWC really only manages to predict snow 10% of the time (a 90% false negative 
rate), you would be right not to assign much value to their predictions 
(you do assign _some_, hence the seven-fold increase from your prior to 
your posterior, but with prediction performance like that TWC cannot 
possibly think there is really a 70% chance of snow).

Change the 10% true positives to 90%, and your posterior goes up to 82.6% 
- much more believable.
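
The 82.6% is easy to verify in Python:

  p_s = 0.05
  post = p_s * 0.90 / (p_s * 0.90 + (1 - p_s) * 0.01)
  print(post)   # 0.8257...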

Also, it's important not to think the figure of 70% has any bearing on the 
problem. I appreciate that you put it in as a red herring to challenge the 
students, but be aware that it may also lead to confusion.

Konrad


On Fri, 13 Feb 2009, Lehner, Paul E. wrote:

 I was working on a set of instructions to teach simple 
 two-hypothesis/one-evidence Bayesian updating.  I came across a problem that 
 perplexed me.  This can't be a new problem so I'm hoping someone will clear 
 things up for me.
 
 The problem
 
 1.  Question: What is the chance that it will snow next Monday?
 
 2.  My prior: 5% (because it typically snows about 5% of the days during 
 the winter)
 
 3.  Evidence: The Weather Channel (TWC) says there is a 70% chance of 
 snow on Monday.
 
 4.  TWC forecasts of snow are calibrated.
 
 My initial answer is to claim that this problem is underspecified.  So I add
 
 
 5.  On winter days that it snows, TWC forecasts 70% chance of snow 
 about 10% of the time
 
 6.  On winter days that it does not snow, TWC forecasts 70% chance of 
 snow about 1% of the time.
 
 So now from P(S)=.05; P(70%|S)=.10; and P(70%|S)=.01 I apply Bayes rule 
 and deduce my posterior probability to be P(S|70%) = .3448.
 
 Now it seems particularly odd that I would conclude there is only a 34% 
 chance of snow when TWC says there is a 70% chance.  TWC knows so much more 
 about weather forecasting than I do.
 
 What am I doing wrong?
 
 
 
 Paul E. Lehner, Ph.D.
 Consulting Scientist
 The MITRE Corporation
 (703) 983-7968
 pleh...@mitre.orgmailto:pleh...@mitre.org
 


Re: [UAI] A perplexing problem

2009-02-16 Thread Ann Nicholson

Hi Paul,

Your calculations are correct (although I note you really mean 
P("70%"|not S) = 0.01 in the calc below).

Sometimes it helps to think about what the numbers actually
mean. First 0.05 prob of snow is quite a low prior.
You need to have quite certain evidence to move that up higher.
A posterior of 0.35 means that snow is now *7 times* more likely
given the evidence than it was before you knew anything, which
is still quite a large shift up.

It *sounds* like you have strong evidence with TWC's "70% chance of snow".
However, you also have a conditional probability that even when there
is snow, TWC only says "70% chance of snow" one time in ten. That means that
nine times in ten it doesn't say that. So when you enter such evidence
it gets discounted (because it is so often wrong!). 

Another side point about the way you have modelled this problem: 
your second variable is TWC70%ChanceOfSnow, a true/false variable.
So TWC's confidence isn't really being modelled in the Bayesian
updating, only in the way you've structured your variables.
It might be better instead to have the second variable be
TWCPredictsSnow (True/False) and then incorporate their 70% confidence
as virtual (uncertain) evidence on that variable. But then you'd
need to know P(TWCPredictsSnow|Snow) and P(TWCPredictsSnow|notSnow)...
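
A hedged Python sketch of that alternative -- the CPT entries a and b below
are invented placeholders, precisely the two quantities noted above as still
needed:

  p_s = 0.05
  a, b = 0.9, 0.1           # hypothetical P(TWCPredictsSnow|Snow), P(...|notSnow)
  lam_t, lam_f = 0.7, 0.3   # virtual-evidence weights from TWC's stated 70%

  # Pearl-style virtual evidence: weight each state of TWCPredictsSnow by
  # its lambda and sum the variable out before normalizing.
  unnorm_s  = p_s * (a * lam_t + (1 - a) * lam_f)
  unnorm_ns = (1 - p_s) * (b * lam_t + (1 - b) * lam_f)
  print(unnorm_s / (unnorm_s + unnorm_ns))   # posterior P(Snow | virtual evidence)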

Hope this helps.

regards,
Ann 

On Fri, Feb 13, 2009 at 04:28:41PM -0500, Lehner, Paul E. wrote:
 I was working on a set of instructions to teach simple 
 two-hypothesis/one-evidence Bayesian updating.  I came across a problem that 
 perplexed me.  This can't be a new problem so I'm hoping someone will clear 
 things up for me.
 
 The problem
 
 1.  Question: What is the chance that it will snow next Monday?
 2.  My prior: 5% (because it typically snows about 5% of the days during 
 the winter)
 3.  Evidence: The Weather Channel (TWC) says there is a 70% chance of 
 snow on Monday.
 4.  TWC forecasts of snow are calibrated.
 
 My initial answer is to claim that this problem is underspecified.  So I add
 5.  On winter days that it snows, TWC forecasts 70% chance of snow 
 about 10% of the time
 6.  On winter days that it does not snow, TWC forecasts 70% chance of 
 snow about 1% of the time.
 So now from P(S)=.05; P(70%|S)=.10; and P(70%|S)=.01 I apply Bayes rule 
 and deduce my posterior probability to be P(S|70%) = .3448.
 
 Now it seems particularly odd that I would conclude there is only a 34% 
 chance of snow when TWC says there is a 70% chance.  TWC knows so much more 
 about weather forecasting than I do.
 
 What am I doing wrong?
 
 
 
 Paul E. Lehner, Ph.D.
 Consulting Scientist
 The MITRE Corporation
 (703) 983-7968
 pleh...@mitre.org



-- 
A/Prof. Ann Nicholson   
Clayton School_--_|\  www.csse.monash.edu.au/~annn/
of Information Technology,   /  \ phone: +61 3 9905 5211
Monash University, VIC 3800  \_.--.*/ fax:   +61 3 9905 5146
Australia  v  ann.nichol...@infotech.monash.edu.au
