Paul,

 

I gather the spectral analyst has a spectral signal as input evidence. Does
the chemical analyst have a physical sample, or is the chemical analyst
analyzing the spectral signal as well? 

 

If the former, then an appropriate elicitation of background knowledge
should identify the first-order background reasoning variables used in
common by the two analysts, and enable a consistent probabilistic model to
be built to fuse the two pieces of evidence and update the belief that X is
present. 

 

If the latter, then we have a problem of multiple expert opinions about the
same evidence. If the experts have elicitable analytic knowledge behind
their opinions, then I would attempt to approach the solution the same way. 

 

If it is a matter of the "intuition of experience," as often happens in
observational-knowledge disciplines like radiology, then I think the
evidence fusion is more problematic to formulate rigorously. One would like
to know something like base rates, experience, etc., but these numbers are
often impossible to obtain in practice.

 

The location, Y, seems to be spurious to the inference, as the source of
evidence is at Y in both cases. If the location evidence were ambiguous in
some way, then a separate inference would need to be done on location alone,
based on reasoning about the evidence with respect to locational
information such as the diffusion of materials seen in non-laboratory
spectral signatures.

Tod

 

 

  _____  

From: uai-boun...@engr.orst.edu [mailto:uai-boun...@engr.orst.edu] On Behalf
Of Lehner, Paul E.
Sent: Thursday, February 19, 2009 4:06 PM
To: Jean-Louis GOLMARD; Austin Parker; Konrad Scheffler; PeterSzolovits
Cc: uai@ENGR.ORST.EDU
Subject: Re: [UAI] A perplexing problem - Last Version

 

Austin, Jean-Louis, Konrad, Peter

 

Thank you for your responses.  They are very helpful.

 

Your consensus view seems to be that when receiving evidence in the form of
a single calibrated judgment, one should not update personal judgments by
using Bayes rule.  This seems incoherent (from a strict Bayesian
perspective) unless perhaps one explicitly represents the overlap of
knowledge with the source of the calibrated judgment (which may not be
practical).

 

Unfortunately this is the conclusion I was afraid we would reach, because it
leads me to be concerned that I have been giving some bad advice about
applying Bayesian reasoning to some very practical problems.

 

Here is a simple example.

 

Analyst A is trying to determine whether X is at location Y.   She has two
principal evidence items.  The first is a report from a spectral analyst
that concludes "based on the match to the expected spectral signature I
conclude with high confidence that X is at location Y".  The second evidence
is a report from a chemical analyst who asserts, "based on the expected
chemical composition that is typically associated with X, I conclude with
moderate confidence that X is at location Y."   How should analyst A
approach her analysis?

 

Previously I would have suggested something like this.  "Consider each
evidence item in turn.  Assume that X is at location Y.  What are the
chances that you would receive a 'high confidence' report from the spectral
analyst, ... a report of 'moderate confidence' from the chemical analyst?
Now assume X is not at location Y, ..."  In other words I would have led the
analyst toward some simple instantiation of Bayesian inference.
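The suggested procedure amounts to naive-Bayes fusion of the two reports. A minimal sketch, with the caveat that all the likelihood numbers below are hypothetical placeholders (none are given in the thread):

```python
# Sketch of the suggested procedure: fuse the two analysts' reports
# as conditionally independent evidence (naive Bayes). All numbers
# here are hypothetical placeholders, not values from the discussion.
prior = 0.30                   # P(X is at location Y) before any report

# (P(report | X at Y), P(report | X not at Y)) for each analyst
spectral_high = (0.60, 0.05)   # "high confidence" spectral report
chemical_mod = (0.40, 0.10)    # "moderate confidence" chemical report

odds = prior / (1 - prior)
for p_given_h, p_given_not_h in (spectral_high, chemical_mod):
    odds *= p_given_h / p_given_not_h   # multiply likelihood ratios

posterior = odds / (1 + odds)
print(f"P(X at Y | both reports) = {posterior:.4f}")
```

The concern in the paragraph that follows is exactly that this conditional-independence assumption fails when both analysts draw on the same background knowledge.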

 

But clearly the spectral and chemical analysts are using more than just the
sensor data to make their confidence assessments.  In part they are using
the same background knowledge that Analyst A has.  Furthermore, both the
spectral and chemical analysts are good at their jobs; their confidence
judgments are reasonably calibrated.  This is just like the TWC problem,
only more complex.

 

So if Bayesian inference is inappropriate for the TWC problem, is it also
inappropriate here?  Is my advice bad?

 

Paul

 

 

From: uai-boun...@engr.orst.edu [mailto:uai-boun...@engr.orst.edu] On Behalf
Of Lehner, Paul E.
Sent: Monday, February 16, 2009 11:40 AM
To: uai@ENGR.ORST.EDU
Subject: Re: [UAI] A perplexing problem - Version 2

 

UAI members

 

Thank you for your many responses.  You've provided at least 5 distinct
answers which I summarize below.

(Answer 5 below is clearly correct, but leads me to a new quandary.)

 

 

Answer 1:  "70% chance of snow" is just a label and conceptually should be
treated as "XYZ".  In other words don't be fooled by the semantics inside
the quotes.

 

My response: Technically correct, but intuitively unappealing.  Although I
often counsel people on how often intuition is misleading, I just couldn't
ignore my intuition on this one.

 

 

Answer 2: The forecast "70% chance of snow" is ill-defined.

 

My response:  I agree, but in this case I was more concerned about the
conflict between math and intuition.  I would be willing to accept any
well-defined forecasting statement.

 

 

Answer 3: The reference set "winter days" is the wrong reference set.

 

My response: I was just trying to give some justification to my subjective
prior.  But this answer does point out a distinction between base rates and
subjective priors.  This distinction relates to my new quandary below so
please read on.  

 

 

Answer 4: The problem inherently requires more variables and cannot be
treated as a simple single evidence with two hypotheses problem.

 

My response: Actually I was concerned that this was the answer, as it may
have implied that using Bayes to evaluate a single evidence item was
impractical for the community of analysts I'm working with.   Fortunately ... 

 

 

Answer 5:  The problem statement was inherently incoherent.  Many of you
pointed out that if TWC predicts "70% snow" on 10% of the days that it snows
and on 1% of the days that it does not snow, and the base rate for snow is
5%, then P("70% snow" & snow) = .005 and P("70% snow" & ~snow) = .0095.  So
for the days that TWC says "70% snow" it actually snows on a little over 34%
of the days.  Clearly my assertion that TWC is calibrated is incoherent
relative to the rest of the problem statement.  The problem was not
underspecified; it was overspecified.  (I hope I did the math correctly.)
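The math does check out; a quick sketch in Python with the numbers above:

```python
# Answer 5's arithmetic: the stated numbers contradict calibration.
base_rate = 0.05                   # P(snow)
p_fc_snow, p_fc_dry = 0.10, 0.01   # P("70%" | snow), P("70%" | ~snow)

joint_snow = base_rate * p_fc_snow                  # P("70%" & snow)  = .005
joint_dry = (1 - base_rate) * p_fc_dry              # P("70%" & ~snow) = .0095
posterior = joint_snow / (joint_snow + joint_dry)   # P(snow | "70%")

# Calibration would require this posterior to equal 0.70; it is ~0.3448.
print(round(posterior, 4))
```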

 

My response: Thanks for pointing this out.  I'm embarrassed that I didn't
notice this myself.  Though this clearly solves my initial concern it leads
me to an entirely new quandary.

 

 

Consider the following revised version.

 

The TWC problem

1.       Question: What is the chance that it will snow next Monday?

2.       My subjective prior: 5% 

3.       Evidence: The Weather Channel (TWC) says there is a "70% chance of
snow" on Monday.

4.       TWC forecasts of snow are calibrated.

 

Notice that I did not justify my subjective prior with a base rate.

 

From P(S)=.05 and P(S|"70%") = .7 I can deduce that P("70%"|S)/P("70%"|~S) =
44.33.  So now I can "deduce" from my prior and evidence odds that
P(S|"70%") = .7.  But this seems silly.  Suppose my subjective prior were
20%.  Then P("70%"|S)/P("70%"|~S) = 9.33 and again I can "deduce"
P(S|"70%") = .7. 
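The implied likelihood ratio is forced by requiring the posterior to equal the calibrated forecast value; a short sketch of that deduction:

```python
def implied_lr(prior, calibrated_forecast):
    """Likelihood ratio P("70%"|S) / P("70%"|~S) forced by requiring
    that the posterior equal the calibrated forecast value.
    From Bayes rule in odds form:
        posterior_odds = LR * prior_odds  =>  LR = posterior_odds / prior_odds
    """
    post = calibrated_forecast
    return (post / (1 - post)) * ((1 - prior) / prior)

print(round(implied_lr(0.05, 0.7), 2))   # 44.33
print(round(implied_lr(0.20, 0.7), 2))   # 9.33
```

Whatever prior goes in, the likelihood ratio adjusts so that the posterior comes out at .7, which is exactly the quandary described above.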

 

My latest quandary is that it seems odd that my subjective conditional
probability of the evidence should depend on my subjective prior.  This may
be coherent, but it is too counterintuitive for me to accept easily.  It
would also suggest that when receiving a single evidence item in the form of
a judgment from a calibrated source, my posterior belief does not depend on
my prior belief.   In effect, when forecasting snow, one should ignore
priors and listen to The Weather Channel.

 

Is this correct?  If so, does this bother anyone else?

 

paull

 

 

From: uai-boun...@engr.orst.edu [mailto:uai-boun...@engr.orst.edu] On Behalf
Of Lehner, Paul E.
Sent: Friday, February 13, 2009 4:29 PM
To: uai@ENGR.ORST.EDU
Subject: [UAI] A perplexing problem

 

I was working on a set of instructions to teach simple
two-hypothesis/one-evidence Bayesian updating.  I came across a problem that
perplexed me.  This can't be a new problem so I'm hoping someone will clear
things up for me.

 

The problem

1.       Question: What is the chance that it will snow next Monday?

2.       My prior: 5% (because it typically snows about 5% of the days
during the winter)

3.       Evidence: The Weather Channel (TWC) says there is a "70% chance of
snow" on Monday.

4.       TWC forecasts of snow are calibrated.

 

My initial answer is to claim that this problem is underspecified.  So I add

 

5.       On winter days that it snows, TWC forecasts "70% chance of snow"
about 10% of the time.

6.       On winter days that it does not snow, TWC forecasts "70% chance of
snow" about 1% of the time. 

 

So now from P(S)=.05; P("70%"|S)=.10; and P("70%"|~S)=.01 I apply Bayes rule
and deduce my posterior probability to be P(S|"70%") = .3448.
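Written out, the update is a one-line application of Bayes rule; a quick sketch with the numbers above:

```python
def bayes_posterior(prior, p_e_given_h, p_e_given_not_h):
    """P(H | E) for a binary hypothesis via Bayes rule:
    P(H|E) = P(E|H) P(H) / [P(E|H) P(H) + P(E|~H) P(~H)]"""
    num = prior * p_e_given_h
    return num / (num + (1 - prior) * p_e_given_not_h)

# P(S)=.05, P("70%"|S)=.10, P("70%"|~S)=.01
p = bayes_posterior(prior=0.05, p_e_given_h=0.10, p_e_given_not_h=0.01)
print(round(p, 4))   # 0.3448
```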

 

Now it seems particularly odd that I would conclude there is only a 34%
chance of snow when TWC says there is a 70% chance.  TWC knows so much more
about weather forecasting than I do.

 

What am I doing wrong?   

 

 

 

Paul E. Lehner, Ph.D.

Consulting Scientist

The MITRE Corporation

(703) 983-7968

pleh...@mitre.org

_______________________________________________
uai mailing list
uai@ENGR.ORST.EDU
https://secure.engr.oregonstate.edu/mailman/listinfo/uai
