Marco Zaffalon wrote:
> A major focus of UAI seems to be building computer systems that assist us 
> in taking decisions. I am happy with systems that can recognize the limits 
> of their knowledge and suspend the judgment when these limits are reached, 
> in the same way that I prefer to be told "I do not know" when I ask for 
> road information rather than being recommended a wrong route. Also good 
> human experts know when they should suspend the judgment.
> Having to occasionally suspend the judgment is logical consequence of 
> working with probability intervals, or with more general frameworks (e.g., 
> lower probabilities and previsions). So intervals are likely to be needed 
> by real rather than abstract problems.

I agree that advising systems need to be able to say "I don't know", but 
when to say "I don't know" depends on the relative costs of 
false positives, false negatives, and not making a decision. So what do 
we do in these cases? An advising system does not just do probabilistic 
reasoning; it needs to make decisions about what actions it should 
take. Saying "I don't know" is an action (of the advising system) that 
has an expected value, just like any other action. We need to 
combine these utilities with the probabilities to determine the best 
action of the advising system.
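As a toy sketch of this point (all probabilities and utilities below are 
made-up illustrative numbers, not from any real system), "I don't know" 
is just one more action, chosen when its expected utility beats the 
alternatives:

```python
# Toy decision sketch: "I don't know" is an action with an expected
# utility, compared against the other actions. Numbers are illustrative.

p_disease = 0.4  # assumed P(disease | current evidence)

# utilities[action][true_state]: payoff for taking `action` in `true_state`
utilities = {
    "treat":    {"disease": 50,   "no_disease": -60},
    "no_treat": {"disease": -100, "no_disease": 50},
    "say_idk":  {"disease": 0,    "no_disease": 0},  # defer; gather more info
}

def expected_utility(action):
    u = utilities[action]
    return p_disease * u["disease"] + (1 - p_disease) * u["no_disease"]

# The best action is simply the one with highest expected utility;
# with these numbers, deferring ("say_idk") wins.
best = max(utilities, key=expected_utility)
```

Change the utilities (say, make delay very costly) and the same 
probabilistic setup yields a different best action, which is exactly the 
point.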

There are many cases where the utilities are the deciding factor (even 
given the same probabilistic setup). If you need road 
directions right now, it may be better for the system to say "turn 
right", even if this can't be proved to be the optimal response, to 
avoid an accident. Whereas if the utilities change and there is less 
cost associated with delaying a decision, it may be more prudent to 
suggest a careful check of all available information.

> What about having to make a decision?
> Consider a prospective expert system to diagnose a disease, which, given 
> information on a specific patient, tells the doctor: given my current 
> knowledge, I cannot decide between "disease" and "no disease".
> This is likely to motivate the doctor to look for further sources of 
> information externally to the system, for example, by examining recent 
> medical literature, by asking more experienced colleagues, by doing medical 
> tests that are not considered by the system, etc..., in the direction of 
> reliable diagnosis.

Again, it depends on the utilities. And we can determine the cost and 
value of information.
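One standard way to make "the value of information" concrete is the 
expected value of perfect information (EVPI): how much a perfectly 
informative test would be worth before paying for it. A minimal sketch, 
again with made-up numbers:

```python
# Toy sketch of expected value of perfect information (EVPI).
# Numbers are illustrative only.

p = 0.4  # assumed P(disease | current evidence)

utilities = {
    "treat":    {"disease": 50,   "no_disease": -60},
    "no_treat": {"disease": -100, "no_disease": 50},
}

def eu(action, p_disease):
    u = utilities[action]
    return p_disease * u["disease"] + (1 - p_disease) * u["no_disease"]

# Best we can do now, acting under uncertainty:
eu_now = max(eu(a, p) for a in utilities)

# If a test revealed the true state, we would pick the best action
# separately in each state, then average over states:
eu_informed = (p * max(utilities[a]["disease"] for a in utilities)
               + (1 - p) * max(utilities[a]["no_disease"] for a in utilities))

# Gathering the information is worthwhile only if it costs less than this:
evpi = eu_informed - eu_now
```

So whether the doctor should consult the literature, colleagues, or extra 
tests is itself a decision the same machinery can evaluate.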

I have seen no evidence that "... intervals are likely to be needed by 
real rather than abstract problems."   I have seen good arguments as to 
why we need to have probabilities + utilities. The main problems I see are 
in knowledge representation: how can we actually represent real problems 
so that we can acquire the information necessary (from people and data) 
and effectively compute what we need to make appropriate decisions? 
But I can't see how intervals help us here.

In these foundational arguments (which are very important), there are 
many of us who think that we should let a thousand flowers bloom. It is 
quite possible that the Bayesian manifesto is wrong (I give it a low 
prior of being right, but a high posterior based on its success). 
However, I don't think that the resulting "winner" will include all of 
these formalisms; I think it will include very few. Most of these 
flowers will wither and die. Do I think that intervals will be part of 
the winning formalism? No. Do I think that research should continue on 
these formalisms? Certainly! It is quite likely that I am wrong. Each of 
us needs to make decisions about our research time; we are all 
(implicitly or explicitly) betting as to what will win and form the 
foundation of future understanding. We need to reasonably exhaustively 
explore the search space before we declare a winner, and discussions 
like this are important!

David
