Yes, I think I understand what you're getting at, and the examples
help. The loss function here is just the 'penalty' for predicting a
rating close to the ratings of dissimilar users and far from those of
similar users?
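
Spelling out my own reading of that (so correct me if this is not what
you meant): with signed similarity weights w_uv, the estimate

    p = sum_v w_uv * r_vi / sum_v w_uv

is the stationary point of

    J(p) = sum_v w_uv * (p - r_vi)^2,

so a positive weight penalizes predictions far from that user's rating,
and a negative weight penalizes predictions near it.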

If I read you correctly, you think that a 'weighted average' (keeping
the negative weights signed in both the numerator and the denominator
-- yeah, conceptually I don't like taking the absolute value in the
denominator either) plus capping is an intellectually sound way of
handling this situation. I find that way of rationalizing it
convincing, and so am no longer scared of negative weights.
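
To make sure we mean the same computation, here is a rough sketch in
Java of the signed weighted average plus capping (class and method
names are made up for illustration; this is not actual Mahout code):

    public final class SignedWeightedAverage {

      /**
       * Estimates a rating as a weighted average over neighbors,
       * keeping similarity weights signed in both the numerator and
       * the denominator, then clamps the result to the rating scale.
       */
      static double estimate(double[] neighborRatings,
                             double[] similarities,
                             double minRating, double maxRating) {
        double numerator = 0.0;
        double denominator = 0.0;
        for (int i = 0; i < neighborRatings.length; i++) {
          numerator += similarities[i] * neighborRatings[i];
          denominator += similarities[i]; // signed sum, no Math.abs
        }
        if (denominator == 0.0) {
          // No usable signal; a real patch would fall back to some
          // default (e.g. the item's mean rating) instead of throwing.
          throw new IllegalStateException("weights sum to zero");
        }
        double estimate = numerator / denominator;
        // Capping: when the signed weights nearly cancel, the ratio
        // can land far outside the rating scale, so clamp it back.
        return Math.max(minRating, Math.min(maxRating, estimate));
      }
    }

The clamp is doing the real work when the positive and negative
weights nearly cancel; without it the estimate can be arbitrarily
large in either direction.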

Let me create a patch for this, then.

On Tue, Feb 23, 2010 at 7:21 PM, Ted Dunning <[email protected]> wrote:
> Weights can't be negative and still be weights.   You can have large
> (positive) weights on negative training examples (aka "not like this"), but
> you can't really have a negative weight.
>
