I agree that enforcing probabilities doesn't always make sense.  You don't
have to invoke 0-1 loss for this.

Much of my own experience is with fraud models that have grotesquely
uncalibrated score outputs due to strange sampling, cascaded models, and so
on.  Their outputs may look like probabilities on the surface, but it would be
silly to take them too seriously.

Even so, I find it very useful to rely on evaluation metrics that are
relatively independent of any particular threshold, because that lets me
separate the threshold question from the model-quality question.  Somewhat,
anyway.

On Fri, May 20, 2011 at 6:49 PM, Hector Yee <[email protected]> wrote:

> Hope that helps. By the way, it seems I'm approaching machine learning from
> a rather different point of view (empirical loss minimization that
> indirectly tries to minimize 0-1 loss, rather than the probabilistic
> approach),
> which is why enforcing probabilities on them doesn't make much sense.
>
