Konrad Scheffler wrote:
I agree this is problematic - the notion of calibration (i.e. that you can say P(S|"70%") = .7) does not really make sense in the subjective Bayesian framework where different individuals are working with different priors, because different individuals will have different posteriors and they can't all be equal to 0.7.

I apologize if I have missed your point, but I think it does make sense. If different people have different posteriors, it means that some people will agree that the TWC reports are calibrated, while others will disagree.

Who is right? In the case of unrepeatable events, this question would not make sense, because it is not possible to determine the "true" probability, and therefore whether a person or a model is calibrated or not is a subjective opinion (of an external observer).

However, in the case of repeatable events (and I acknowledge that repeatability is a fuzzy concept), it does make sense to speak of an objective probability, which can be identified with the relative frequency. Subjective probabilities that agree with the objective probability (frequency) can be said to be correct, and models that give the correct probability for each scenario will be considered to be calibrated.
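To make the frequency reading of calibration concrete, here is a small sketch (the numbers are hypothetical): if a calibrated forecaster issues many "70%" reports, snow should occur on roughly 70% of those occasions.

```python
import random

random.seed(0)

# Hypothetical simulation of a calibrated forecaster: on each day it
# reports "70%", snow actually occurs with probability 0.7.
reports = 10_000
snow_days = sum(random.random() < 0.7 for _ in range(reports))

# Relative frequency of snow among the "70%" forecasts; for a
# calibrated forecaster this converges to 0.70 as reports grow.
freq = snow_days / reports
print(round(freq, 2))
```

An external observer who disagreed with this long-run frequency would, in this repeatable-event setting, simply be wrong.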

If we accept that "snow" is a repeatable event, then all the individuals should agree on the same probability. If it is not, P(S|"70%") may differ for each individual, because they have different priors, and perhaps different likelihoods or even different structures in their models.

---

Coming back to the main problem, I agree again with Peter Szolovits in making the distinction between likelihood and posterior probability.

a) If I take the TWC forecast as the posterior probability returned by a calibrated model (the TWC's model), then I accept that the probability of snow is 70%.

b) However, if I take "70% probability of snow" as a finding to be introduced in my model, then I should combine my prior with the likelihood ratio associated with this finding, and after some computation I will arrive at P(S|"70%") = 0.70. [Otherwise, I would be incoherent with my assumption that the model used by the TWC is calibrated.]
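This computation can be checked numerically. The sketch below uses a hypothetical prior of 0.3; the point is that if I assume the TWC's model is calibrated, the likelihood ratio for the finding "70%" is pinned down by Bayes' rule, so combining it with my prior must return exactly 0.70, whatever my prior was.

```python
# Hypothetical prior P(S); any value in (0, 1) gives the same conclusion.
prior = 0.3
prior_odds = prior / (1 - prior)

# Assuming the TWC model is calibrated, P(S|"70%") must be 0.7, so the
# implied likelihood ratio P("70%"|S) / P("70%"|not S) is forced to be:
lr = (0.7 / 0.3) / prior_odds

# Combining my prior odds with that likelihood ratio (posterior odds =
# prior odds x likelihood ratio) recovers the posterior probability.
post_odds = prior_odds * lr
posterior = post_odds / (1 + post_odds)
print(round(posterior, 2))  # 0.7
```

This is just the coherence requirement in the bracketed remark above: the likelihood ratio I assign to the finding cannot be arbitrary once I have accepted that the reporting model is calibrated.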

Of course, if I think that the TWC's model is calibrated, I do not need to build a model of TWC's reports that will return as an output the same probability estimate that I introduce as an input.

Therefore I see no contradiction in the Bayesian framework.

Best regards,
  Javier

-----------------------------------------------------------------
Francisco Javier Diez              Phone: (+34) 91.398.71.61
Dpto. Inteligencia Artificial      Fax:   (+34) 91.398.88.95
UNED. c/Juan del Rosal, 16         http://www.ia.uned.es/~fjdiez
28040 Madrid. Spain                http://www.cisiad.uned.es
_______________________________________________
uai mailing list
uai@ENGR.ORST.EDU
https://secure.engr.oregonstate.edu/mailman/listinfo/uai