Obviously, you also need scores for many other items to compare against, not just the viewed ones.

One handy stat is AUC, which you can compute by averaging over pairs: it is the
probability that a relevant (viewed) item receives a higher recommendation score
than a non-relevant (not viewed) item.
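A minimal sketch of that pairwise estimate (function name and scores are illustrative, not from any particular library):

```python
def pairwise_auc(relevant_scores, nonrelevant_scores):
    """Estimate AUC as the fraction of (relevant, non-relevant) pairs
    where the relevant item scores higher; ties count as half a win."""
    wins = 0.0
    total = 0
    for r in relevant_scores:
        for n in nonrelevant_scores:
            if r > n:
                wins += 1.0
            elif r == n:
                wins += 0.5
            total += 1
    return wins / total

# 8 of the 9 pairs have the relevant item scoring higher
print(pairwise_auc([0.9, 0.8, 0.4], [0.7, 0.3, 0.2]))  # -> 0.888...
```

For large item sets you would sample non-relevant items rather than enumerate every pair, but the averaged quantity is the same.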

On Sun, Aug 26, 2012 at 5:55 PM, Sean Owen <sro...@gmail.com> wrote:

> There's another approach I've been playing with, which works when the
> recommender produces some score for each rec, not just a ranked list.
> You can train on data up to a certain point in time, then have the
> recommender score the observations that really happened after that
> point. Ideally it should produce a high score for things that really
> were observed next.
>
>
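The temporal-split idea quoted above can be sketched roughly like this (all names here are illustrative, and the popularity scorer is just a stand-in for a real recommender):

```python
def temporal_split_eval(events, cutoff, train_fn):
    """events: list of (user, item, timestamp) observations.
    Train on events at or before `cutoff`, then average the model's
    score on the (user, item) pairs actually observed afterward.
    A good recommender should give those future observations high scores."""
    train = [e for e in events if e[2] <= cutoff]
    test = [(u, i) for (u, i, t) in events if t > cutoff]
    model = train_fn(train)  # returns a score(user, item) callable
    return sum(model(u, i) for (u, i) in test) / len(test)

# Toy scorer: score an item by how often it appeared in training data
def popularity_model(train):
    counts = {}
    for _, item, _ in train:
        counts[item] = counts.get(item, 0) + 1
    return lambda user, item: counts.get(item, 0)

events = [("a", "x", 1), ("b", "x", 2), ("a", "y", 2),
          ("b", "y", 3), ("a", "x", 4)]
print(temporal_split_eval(events, cutoff=2, train_fn=popularity_model))  # -> 1.5
```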
