Any time you are making an estimate, you have a loss function that expresses
how much you like or dislike different estimates. In this memory-based
approach, you have several ratings that you would somehow like to combine
into the best predicted rating.
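To put some notation on this (the symbols are chosen here just for this
note): if the neighbors' ratings are r_i and their weights are w_i, the
quadratic loss of a candidate estimate x is

    L(x) = \sum_i w_i (x - r_i)^2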

It is common to combine these ratings using a weighted average. Some
approaches, however, come up with negative weights. To understand how to
handle negatively weighted examples, you have to go back to the
mathematics underlying the weighted average, which is expressed in terms
of the quadratic loss function above. That gives you three possibilities:
one where the positive weights dominate, one where the negative weights
dominate, and a third where they balance out exactly.
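To spell that out: setting the derivative of L to zero gives the weighted
average sum_i w_i r_i / sum_i w_i as the stationary point, but the second
derivative is 2 sum_i w_i, so the sign of the total weight decides whether
that point is a minimum, a maximum, or whether the loss degenerates to a
straight line. A minimal Python sketch of the resulting decision (the 1-5
rating bounds and the function name are illustrative assumptions):

    def best_estimate(weights, ratings, lo=1.0, hi=5.0):
        """Minimize L(x) = sum_i w_i * (x - r_i)**2 over the allowed
        rating range [lo, hi]."""
        W = sum(weights)

        def loss(x):
            return sum(w * (x - r) ** 2 for w, r in zip(weights, ratings))

        if W > 0:
            # Positive weights dominate: the loss is an upward parabola
            # and its minimum is the ordinary weighted average (clamped
            # to the allowed rating range).
            x = sum(w * r for w, r in zip(weights, ratings)) / W
            return min(max(x, lo), hi)
        # Negative weights dominate (downward parabola, so the weighted
        # average is the *worst* estimate) or the weights balance out
        # exactly (the loss is linear in x): either way the minimum sits
        # at an endpoint of the rating range.
        return min((lo, hi), key=loss)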

The examples I gave were phrased in terms of the results of querying
similar users for their ratings of a movie.
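Roughly, such a query might look like the sketch below; the dict-of-dicts
data layout and the use of Pearson correlation for the weights are
illustrative assumptions (Pearson is one standard choice, and it is
exactly where the negative weights come from in the first place). The
resulting (weight, rating) pairs are what best_estimate above consumes.

    from math import sqrt

    def pearson(u, v):
        """Pearson correlation between two users' rating dicts, computed
        over the movies they have both rated.  Can be negative."""
        common = set(u) & set(v)
        if len(common) < 2:
            return 0.0
        mu = sum(u[m] for m in common) / len(common)
        mv = sum(v[m] for m in common) / len(common)
        num = sum((u[m] - mu) * (v[m] - mv) for m in common)
        den = sqrt(sum((u[m] - mu) ** 2 for m in common) *
                   sum((v[m] - mv) ** 2 for m in common))
        return num / den if den else 0.0

    def query_neighbors(target, others, movie):
        """Return (weight, rating) pairs from users who rated the movie,
        weighted by their similarity to the target user."""
        return [(pearson(target, u), u[movie])
                for u in others if movie in u]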


On Tue, Feb 23, 2010 at 12:54 PM, Tamas Jambor <[email protected]> wrote:

> not sure if I understand your examples. I thought there is not really a
> loss function here, since these are memory-based approaches, so there is
> no training in the classical machine learning sense.
>
