On Tue, Jun 18, 2013 at 3:48 AM, Ted Dunning <ted.dunn...@gmail.com> wrote:

> I have found that in practice, don't-like is very close to like.  That is,
> things that somebody doesn't like are very closely related to the things
> that they do like.


I guess it makes sense for cancellations. It should become pretty
obvious from an extensive cross-validation search.


>  Things that are quite distant wind up as don't-care,
> not don't-like.
>
> This makes most simple approaches to modeling polar preferences very
> dangerous.  What I have usually done under the pressure of time is to
> consider like and don't-like to be equivalent synonyms and then maintain a
> kill list of items to not show.  Works well pragmatically, but gives people
> hives when they hear of the details, especially if they actually believe
> humans act according to consistent philosophy.
>

Or we just don't know the exact prevailing reason for the returns. :)
"Did not fit" means "almost fit, give me something similar", and "found
it in another sale event" means "I still like it, just not at your
price". However, if there's a consistent quality issue, it may turn bad
enough to consider p=0. Bottom line: it should become fairly obvious
which interpretation prevails, through validation.

"Kill list" should probably be maintained for a whole lot of reasons, not
just returns. E.g. something that was recently bought, may be
one-a-lifetime purchase, or it may be replenishable with a certain period
of repeatability (which could also be modelled). Does it makes sense?
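
For illustration, a minimal sketch of such a filter (in Python; the
per-category replenishment periods and the Item shape are invented
placeholders, not anything from a real system):

from collections import namedtuple
from datetime import timedelta

Item = namedtuple("Item", ["id", "category"])

# Hypothetical per-category replenishment periods; None marks a
# one-time purchase that stays on the kill list indefinitely.
REPLENISH_DAYS = {"coffee": 14, "razor": 30, "wedding_ring": None}

def apply_kill_list(candidates, last_purchase, now):
    """Drop candidates the user bought recently, unless the category's
    replenishment window has already elapsed."""
    kept = []
    for item in candidates:
        bought_at = last_purchase.get(item.id)
        if bought_at is None:
            kept.append(item)            # never bought: keep
            continue
        period = REPLENISH_DAYS.get(item.category)
        if period is not None and now - bought_at > timedelta(days=period):
            kept.append(item)            # due for replenishment: keep
        # otherwise still on the kill list: drop
    return kept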


>
> On Tue, Jun 18, 2013 at 9:13 AM, Sean Owen <sro...@gmail.com> wrote:
>
> > Yes, the model has no room for literally negative input. I think
> > that conceptually people do want negative input, and in this model,
> > negative numbers really are the natural way to express that.
> >
> > You could give negative input a small positive weight. Or extend the
> > definition of c so that it is merely small, not negative, when r is
> > negative. But this was generally unsatisfactory. It has a certain
> > logic -- that even negative input is really a slightly positive
> > association in the scheme of things -- but the results were viewed
> > as unintuitive.
> >
> > I ended up extending it to handle negative input more directly, such
> > that negative input is read as evidence that p=0, instead of evidence
> > that p=1. This works fine, and is tidier than an ensemble (although
> > that's a sound idea too). The change is quite small.
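
(If I read that right, in the Koren-Volinsky weighting both variants
would look roughly like this -- a sketch, where alpha is a placeholder
scaling constant to be tuned, not a value from your implementation:)

ALPHA = 40.0  # placeholder confidence scaling

def encode(r):
    """Map raw input r to (p, c) for weighted ALS: positive r is
    evidence that p=1, negative r is evidence that p=0, and
    confidence grows with |r| either way."""
    p = 1.0 if r > 0 else 0.0
    c = 1.0 + ALPHA * abs(r)
    return p, c

def encode_floored(r, floor=0.01):
    """The rejected alternative: keep p=1 and merely floor c so it
    stays small but positive when r is negative."""
    return 1.0, max(floor, 1.0 + ALPHA * r)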
> >
> > Agree with the second point that learning weights is manual and
> > difficult; that's unavoidable, I think, when you want to start
> > adding different data types anyway.
> >
> > I also don't use M/R for searching the parameter space, since you
> > may try a thousand combinations and each one is a model built from
> > scratch. I use a sample of the data and run in-core.
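
(Roughly what I'd picture for that -- a sketch; train_als and
score_holdout are hypothetical stand-ins for whatever in-core trainer
and holdout metric one actually uses:)

import itertools
import random

def train_als(train, alpha, lam, k):
    raise NotImplementedError  # stand-in for an in-core ALS trainer

def score_holdout(model, holdout):
    raise NotImplementedError  # stand-in for a holdout metric (e.g. MAP)

def grid_search(interactions, alphas, lams, ks, sample_rate=0.05, seed=13):
    """Try parameter combinations in-core, on a small sample."""
    rng = random.Random(seed)
    sample = [x for x in interactions if rng.random() < sample_rate]
    rng.shuffle(sample)
    cut = int(0.9 * len(sample))
    train, holdout = sample[:cut], sample[cut:]
    best_score, best_params = None, None
    for alpha, lam, k in itertools.product(alphas, lams, ks):
        model = train_als(train, alpha, lam, k)
        score = score_holdout(model, holdout)
        if best_score is None or score > best_score:
            best_score, best_params = score, (alpha, lam, k)
    return best_params, best_score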
> >
> > On Tue, Jun 18, 2013 at 2:30 AM, Dmitriy Lyubimov <dlie...@gmail.com>
> > wrote:
> > > (Kinda doing something very close.)
> > >
> > > The Koren-Volinsky paper on implicit feedback can be generalized
> > > to decompose all input into a preference matrix (0 or 1) and a
> > > confidence matrix (which is essentially an observation weight
> > > matrix).
> > >
> > > If you did not get any observations, you encode that as (p=0,
> > > c=1); but if you know that the user did not like the item, you can
> > > encode that observation with a much higher confidence weight,
> > > something like (p=0, c=30) -- actually with as high a confidence
> > > as a conversion, in your case it seems.
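
(To make it concrete -- a sketch; the per-action (p, c) values here
are invented for illustration, and the c's are exactly the weights
that would have to come out of the search:)

import numpy as np

# Invented per-action encodings; unobserved cells keep the (p=0, c=1)
# prior.
ACTION_ENCODING = {
    "purchase": (1.0, 30.0),
    "view":     (1.0, 2.0),
    "return":   (0.0, 30.0),   # known dislike: p=0 at high confidence
}

def build_p_c(events, n_users, n_items):
    """events: iterable of (user_index, item_index, action) tuples."""
    P = np.zeros((n_users, n_items))
    C = np.ones((n_users, n_items))    # c=1 everywhere by default
    for u, i, action in events:
        p, c = ACTION_ENCODING[action]
        if c >= C[u, i]:               # keep the most confident signal
            P[u, i], C[u, i] = p, c
    return P, C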
> > >
> > > The problem with this is that you end up with quite a bunch of
> > > additional parameters in your model to figure out, i.e. confidence
> > > weights for each type of action in the system. You can establish
> > > those through an extensive cross-validation search, which is
> > > initially quite expensive (even with distributed cluster tech),
> > > but which can incrementally bail out much sooner once a previous
> > > good guess is already known.
> > >
> > > MR doesn't work well for this, though, since it requires A LOT of
> > > iterations.
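
(By "bail out sooner" I mean something like a local refinement around
the previous best guess instead of a fresh full sweep -- a sketch,
where evaluate is again a hypothetical holdout-metric callback:)

def refine_weights(prev_best, evaluate, step=0.25, max_rounds=3):
    """Greedy coordinate search over per-action confidence weights,
    warm-started from the previously known good guess."""
    best = dict(prev_best)
    best_score = evaluate(best)
    for _ in range(max_rounds):
        improved = False
        for action in list(best):
            for factor in (1.0 - step, 1.0 + step):
                trial = dict(best)
                trial[action] = best[action] * factor
                score = evaluate(trial)
                if score > best_score:
                    best, best_score, improved = trial, score, True
        if not improved:
            break    # converged near the previous guess: stop early
    return best, best_score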
> > >
> > >
> > >
> > > On Mon, Jun 17, 2013 at 5:51 PM, Pat Ferrel <pat.fer...@gmail.com>
> > wrote:
> > >
> > >> In the case where you know a user did not like an item, how
> > >> should that information be treated in a recommender? Normally for
> > >> retail recommendations you have an implicit 1 for a purchase and
> > >> no value otherwise. But what if you knew the user did not like an
> > >> item? Maybe you have records of "I want my money back for this
> > >> junk" reactions.
> > >>
> > >> You could make a scale of 0 and 1, where 0 means a bad rating, 1
> > >> a good one, and no value, as usual, means no preference? Some of
> > >> the math here won't work, though, since no value usually
> > >> implicitly = 0; so maybe -1 = bad, 1 = good, and no preference
> > >> implicitly = 0?
> > >>
> > >> Would it be better to treat the bad rating as a 1 and the good as
> > >> a 2? This would be more like the old star-rating method, only we
> > >> would know where the cutoff between a good review and a bad one
> > >> should be (1.5).
> > >>
> > >> I suppose this could also be treated as another recommender in an
> > >> ensemble, where r = r_p - r_h, and r_h = predictions from "I hate
> > >> this product" preferences?
> > >>
> > >> Has anyone found a good method?
> >
>
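
Re the ensemble at the bottom: for completeness, that would amount to
something like the following (a sketch with two invented score
matrices, one recommender trained on "like" and one on "hate"
preferences):

import numpy as np

r_p = np.array([[4.0, 1.0],    # hypothetical scores from the "like"
                [2.5, 3.0]])   # recommender (users x items)
r_h = np.array([[0.5, 3.0],    # hypothetical scores from the "hate"
                [0.0, 0.2]])   # recommender

r = r_p - r_h   # net preference: disliked items get pushed down
print(r)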
