Dear Paul,

Bayesian inference is still appropriate for both problems. There are two 
issues here: 

1) the subjectivist Bayesian viewpoint is confusing because it does not 
   make explicit which information you are conditioning on when setting 
   up your prior - it becomes much clearer if you use an objective 
   Bayesian framework (see below).
2) You are describing a situation where your two sources of information 
   are dependent, but you are not quantifying the dependency. As 
   Jean-Louis points out, the problem becomes simple if you are prepared 
   to make an independence assumption (but I think this avoids the 
   difficulty you are asking about: "In part they are using the same 
   background knowledge that Analyst A has"). Below I give the full 
   solution (which unfortunately is only useful if you can quantify the 
   dependencies - for this you need a model of how the analysts go about 
   calculating their reported probabilities).


I'll use "X" as a shorthand for the statement "X is at location Y".
Let's assume all analysts give their statements as a numerical value which 
quantifies their confidence that X is true. Let's call the value provided 
by the spectral analyst B and that provided by the chemical analyst C. 

Let I denote the information available to analyst A before reading the 
reports, so that her prior for X is P(X|I).

We want to know P(X|BCI), which can be written as follows:

P(X|BCI) = P(B|XCI)P(X|CI)/P(B|CI)
where
P(X|CI) = P(C|XI)P(X|I)/[P(C|XI)P(X|I)+P(C|(not X)I)P((not X)|I)]
and
P(B|CI) = P(B|XCI)P(X|CI) + P(B|(not X)CI)P((not X)|CI).

The quantities P(X|I), P((not X)|I), P(C|XI) and P(C|(not X)I) are 
straightforward, but instead of P(B|XI) and P(B|(not X)I) we need to know 
P(B|XCI) and P(B|(not X)CI). Once these are known the answer follows.
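As a numerical sketch of how these quantities combine: all the probability values below are invented purely for illustration, since the hard part in practice is eliciting P(B|XCI) and P(B|(not X)CI) from a model of how the analysts work.

```python
# Illustrative sketch of the update above; all numbers are invented.
p_x       = 0.30  # P(X|I): analyst A's prior for X
p_c_x     = 0.60  # P(C|XI): chance of the chemical report if X is true
p_c_notx  = 0.20  # P(C|(not X)I)
p_b_xc    = 0.70  # P(B|XCI): spectral report given X true and chemical report seen
p_b_notxc = 0.10  # P(B|(not X)CI)

# P(X|CI) by Bayes' rule
p_x_c = p_c_x * p_x / (p_c_x * p_x + p_c_notx * (1 - p_x))

# P(B|CI) by total probability
p_b_c = p_b_xc * p_x_c + p_b_notxc * (1 - p_x_c)

# P(X|BCI)
p_x_bc = p_b_xc * p_x_c / p_b_c
print(p_x_c, p_x_bc)
```

With these particular numbers P(X|CI) comes out to 0.5625 and P(X|BCI) to 0.9; the point is only the order of the computation, not the values.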


Hope this is useful,
Konrad




On Thu, 19 Feb 2009, Lehner, Paul E. wrote:

> Austin, Jean-Louis, Konrad, Peter
> 
> Thank you for your responses.  They are very helpful.
> 
> Your consensus view seems to be that when receiving evidence in the form 
> of a single calibrated judgment, one should not update personal 
> judgments by using Bayes rule.  This seems incoherent (from a strict 
> Bayesian perspective) unless perhaps one explicitly represents the 
> overlap of knowledge with the source of the calibrated judgment (which 
> may not be practical).
> 
> Unfortunately this is the conclusion I was afraid we would reach, 
> because it leads me to be concerned that I have been giving some bad 
> advice about applying Bayesian reasoning to some very practical 
> problems.
> 
> Here is a simple example.
> 
> Analyst A is trying to determine whether X is at location Y.  She has 
> two principal evidence items.  The first is a report from a spectral 
> analyst that concludes "based on the match to the expected spectral 
> signature I conclude with high confidence that X is at location Y".  
> The second evidence is a report from a chemical analyst who asserts, 
> "based on the expected chemical composition that is typically associated 
> with X, I conclude with moderate confidence that X is at location Y."  
> How should analyst A approach her analysis?
> 
> Previously I would have suggested something like this.  "Consider each 
> evidence item in turn.  Assume that X is at location Y.  What are the 
> chances that you would receive a 'high confidence' report from the 
> spectral analyst, ... a report of 'moderate confidence' from the 
> chemical analyst.  Now assume X is not at location Y, ...."  In other 
> words I would have led the analyst toward some simple instantiation of 
> Bayes inference.
> 
> But clearly the spectral and chemical analysts are using more than just 
> the sensor data to make their confidence assessments.  In part they are 
> using the same background knowledge that Analyst A has.  Furthermore 
> both the spectral and chemical analysts are good at their jobs: their 
> confidence judgments are reasonably calibrated.  This is just like the 
> TWC problem only more complex.
> 
> So if Bayesian inference is inappropriate for the TWC problem, is it also 
> inappropriate here?  Is my advice bad?
> 
> Paul
> 
> 
> From: uai-boun...@engr.orst.edu [mailto:uai-boun...@engr.orst.edu] On Behalf 
> Of Lehner, Paul E.
> Sent: Monday, February 16, 2009 11:40 AM
> To: uai@ENGR.ORST.EDU
> Subject: Re: [UAI] A perplexing problem - Version 2
> 
> UAI members
> 
> Thank you for your many responses.  You've provided at least 5 distinct 
> answers which I summarize below.
> (Answer 5 below is clearly correct, but leads me to a new quandary.)
> 
> 
> 
> Answer 1:  "70% chance of snow" is just a label and conceptually should be 
> treated as "XYZ".  In other words don't be fooled by the semantics inside the 
> quotes.
> 
> My response: Technically correct, but intuitively unappealing.  Although I 
> often counsel people on how often intuition is misleading, I just couldn't 
> ignore my intuition on this one.
> 
> Answer 2: The forecast "70% chance of snow" is ill-defined.
> 
> My response:  I agree, but in this case I was more concerned about the 
> conflict between math and intuition.  I would be willing to accept any 
> well-defined forecasting statement.
> 
> Answer 3: The reference set "winter days" is the wrong reference set.
> 
> My response: I was just trying to give some justification to my subjective 
> prior.  But this answer does point out a distinction between base rates and 
> subjective priors.  This distinction relates to my new quandary below so 
> please read on.
> 
> Answer 4: The problem inherently requires more variables and cannot be 
> treated as a simple single evidence with two hypotheses problem.
> 
> My response: Actually I was concerned that this was the answer, as it may 
> have implied that using Bayes to evaluate a single evidence item was 
> impractical for the community of analysts I'm working with.  Fortunately ...
> 
> Answer 5:  The problem statement was inherently incoherent.  Many of you 
> pointed out that if TWC predicts "70% snow" on 10% of the days that it snows, 
> on 1% of the days that it does not snow, and the base rate for snow is 5%, 
> then P("70% snow" & snow) = .005 and P("70% snow" & ~snow) = .0095.  So on 
> the days that TWC says "70% snow" it actually snows a little over 34% of 
> the time.  Clearly my assertion that TWC is calibrated is incoherent relative 
> to the rest of the problem statement.  The problem was not underspecified, it 
> was overspecified.  (I hope I did the math correctly.)
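A few lines of Python confirm the arithmetic in Answer 5, using the numbers given in the quoted problem statement:

```python
# Check of the Answer 5 arithmetic: 5% base rate, and TWC says "70% snow"
# on 10% of snow days and on 1% of no-snow days.
p_snow = 0.05
p_f_given_snow = 0.10    # P("70%"|S)
p_f_given_nosnow = 0.01  # P("70%"|~S)

joint_snow = p_f_given_snow * p_snow            # P("70%" & S)
joint_nosnow = p_f_given_nosnow * (1 - p_snow)  # P("70%" & ~S)
posterior = joint_snow / (joint_snow + joint_nosnow)

print(round(joint_snow, 4), round(joint_nosnow, 4), round(posterior, 4))
```

This reproduces .005, .0095, and a posterior of about 34.5%, confirming the clash with the claimed calibration.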
> 
> My response: Thanks for pointing this out.  I'm embarrassed that I didn't 
> notice this myself.  Though this clearly solves my initial concern it leads 
> me to an entirely new quandary.
> 
> Consider the following revised version.
> 
> 
> The TWC problem
> 
> 1.      Question: What is the chance that it will snow next Monday?
> 
> 2.      My subjective prior: 5%
> 
> 3.      Evidence: The Weather Channel (TWC) says there is a "70% chance of 
> snow" on Monday.
> 
> 4.      TWC forecasts of snow are calibrated.
> 
> 
> Notice that I did not justify my subjective prior with a base rate.
> 
> From P(S)=.05 and P(S|"70%") = .7 I can deduce that P("70%"|S)/P("70%"|~S) = 
> 44.33.  So now I can "deduce" from my prior and evidence odds that P(S|"70%") 
> = .7.  But this seems silly.  Suppose my subjective prior was 20%.  Then 
> P("70%"|S)/P("70%"|~S) = 9.33333 and again I can "deduce" P(S|"70%")=.7.
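The likelihood ratios quoted above follow mechanically from Bayes' rule in odds form; the sketch below (function name is mine) makes the dependence on the prior explicit:

```python
# Likelihood ratio P("70%"|S)/P("70%"|~S) forced by a fixed posterior of 0.7.
# Bayes' rule in odds form: posterior_odds = likelihood_ratio * prior_odds.
def implied_likelihood_ratio(prior, posterior=0.7):
    posterior_odds = posterior / (1 - posterior)
    prior_odds = prior / (1 - prior)
    return posterior_odds / prior_odds

print(round(implied_likelihood_ratio(0.05), 2))  # 44.33
print(round(implied_likelihood_ratio(0.20), 2))  # 9.33
```

Whatever prior goes in, the implied likelihood ratio adjusts so that the posterior lands back on 0.7, which is exactly the oddity described here.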
> 
> My latest quandary is that it seems odd that my subjective conditional 
> probability of the evidence should depend on my subjective prior.  This may 
> be coherent, but is too counter intuitive for me to easily accept.  It would 
> also suggest that when receiving a single evidence item in the form of a 
> judgment from a calibrated source, my posterior belief does not depend on my 
> prior belief.   In effect, when forecasting snow, one should ignore priors 
> and listen to The Weather Channel.
> 
> Is this correct?  If so, does this bother anyone else?
> 
> Paul
> 
> 
> From: uai-boun...@engr.orst.edu [mailto:uai-boun...@engr.orst.edu] On Behalf 
> Of Lehner, Paul E.
> Sent: Friday, February 13, 2009 4:29 PM
> To: uai@ENGR.ORST.EDU
> Subject: [UAI] A perplexing problem
> 
> I was working on a set of instructions to teach simple 
> two-hypothesis/one-evidence Bayesian updating.  I came across a problem that 
> perplexed me.  This can't be a new problem so I'm hoping someone will clear 
> things up for me.
> 
> The problem
> 
> 5.      Question: What is the chance that it will snow next Monday?
> 
> 6.      My prior: 5% (because it typically snows about 5% of the days during 
> the winter)
> 
> 7.      Evidence: The Weather Channel (TWC) says there is a "70% chance of 
> snow" on Monday.
> 
> 8.      TWC forecasts of snow are calibrated.
> 
> My initial answer is to claim that this problem is underspecified.  So I add
> 
> 
> 9.      On winter days that it snows, TWC forecasts "70% chance of snow" 
> about 10% of the time
> 
> 10.   On winter days that it does not snow, TWC forecasts "70% chance of 
> snow" about 1% of the time.
> 
> So now from P(S)=.05; P("70%"|S)=.10; and P("70%"|~S)=.01 I apply Bayes rule 
> and deduce my posterior probability to be P(S|"70%") = .3448.
> 
> Now it seems particularly odd that I would conclude there is only a 34% 
> chance of snow when TWC says there is a 70% chance.  TWC knows so much more 
> about weather forecasting than I do.
> 
> What am I doing wrong?
> 
> 
> 
> Paul E. Lehner, Ph.D.
> Consulting Scientist
> The MITRE Corporation
> (703) 983-7968
> pleh...@mitre.org<mailto:pleh...@mitre.org>
> 
_______________________________________________
uai mailing list
uai@ENGR.ORST.EDU
https://secure.engr.oregonstate.edu/mailman/listinfo/uai
