Judea,
>You say:
> No matter what prior probability we put on Q, the marginal probability
> of R in any probability model would lie between 0.8 and 0.9.
> One can "explain" the phenomenon (i.e. Bel(R)=0.72)
> by saying that there is only a 0.72 chance that "the
> evidence would prove R," but I was never able to come up
> with a way to argue this convincingly to a subject matter expert.
>
>This was one of my problems too, but watch how sensible it
>sounds when translated into the scheduling example I gave:
>
> "No matter what prior probability we put on Q, the probability
> that I will be assigned to teach class R would lie
> between 0.8 and 0.9.
> Still, the probability that I WILL BE FORCED to teach
> class R, for lack of an alternative consistent
> assignment, is 0.72."
>
>Do you find any difficulty explaining this to
>a subject matter expert?
No, I don't.
So let me rephrase.
If I were working on a scheduling system or an argumentation system or any
other problem in which "probability of provability" made semantic sense,
then I could explain the 0.72 to 0.98 bounds to a subject matter expert.
Whether those bounds are useful for anything would be a different story.
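For concreteness, the quoted figures can be reproduced under one simple reading of the scheduling example. The two-rule structure and the reliabilities 0.9 and 0.8 below are assumptions, chosen only because they yield the numbers quoted above; they are not taken from the original message:

```python
# One possible model behind the scheduling example (assumed, not from the
# original message): two independent constraints, each in force with some
# probability, and R follows from Q under one and from not-Q under the other.

P_A = 0.9  # chance the constraint "if Q, I am assigned to teach R" is in force
P_B = 0.8  # chance the constraint "if not Q, I am assigned to teach R" is in force

def prob_R(p_Q):
    """Marginal probability of teaching R, given a prior p_Q on Q."""
    return p_Q * P_A + (1 - p_Q) * P_B

# No matter what prior we put on Q, P(R) lies between 0.8 and 0.9:
lo = min(prob_R(p / 100) for p in range(101))  # attained at p_Q = 0
hi = max(prob_R(p / 100) for p in range(101))  # attained at p_Q = 1

# Bel(R), read as "the probability that the evidence forces R": the
# assignment is forced regardless of Q only when BOTH constraints are in
# force, and the constraints are independent, so
bel_R = P_A * P_B  # 0.9 * 0.8 = 0.72
```

Under this reading, the belief value 0.72 is simply the probability that R is provable from the constraints whatever Q turns out to be, while the marginal probability of R itself stays in [0.8, 0.9] for every prior on Q.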
Can you construct a story, within your class assignment example, of why the
probability that you are forced to teach class R is a quantity we would want
to query the system about? Such a story is easier to construct in legal
reasoning, where we are interested not in the probability that a suspect is
guilty, but in the probability that the evidence proves (s)he is guilty.
One place I tried to apply belief functions was to a decision aid that
reasoned about whether aircraft were friendly or hostile. The Air Force
was interested in applying belief functions because they'd been told that
probabilities couldn't handle ignorance. But if an aircraft is flying at
you and you have to decide whether to shoot at it, you don't really care
about the probability that the evidence proves it's hostile. You care
about the probability it's hostile.
I'll grant you that my experience is limited, and I only stuck with it for
three or four years, but on the problems to which I tried to apply belief
functions I eventually concluded that I could have done a better job with
probabilities. This is not to say that belief functions can't be usefully
applied to some problems, or even that a smarter analyst couldn't have
applied them usefully to the problems I worked on. But I'm still waiting
for an example of a problem in which the added complexity of belief
functions really makes a difference in our ability to model the world in a
useful way.
Kathy