This may not be a great idea.

Remember that a real recommender exists in a closed loop.  As such,
incorporating iffy recommendations is often a good thing, since it helps the
system explore the space.  Often I find that the most important thing I can
do is help a weak recommender explore more by dithering the results and
adding anti-flood provisions.  The moral is that bigger/broader data wins.
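A minimal sketch of that dithering idea (plain Java, no Mahout dependency; the class name, the log-rank noise model, and the epsilon parameter are illustrative assumptions, not Mahout's API):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.Random;

public class Dither {

  // Re-rank a top-N list by log(rank) + Gaussian noise.  Larger epsilon
  // means more exploration; epsilon = 0 leaves the order unchanged.
  // The log scale means nearby ranks swap often while items far down the
  // list only occasionally jump to the top.
  public static List<String> dither(List<String> ranked, double epsilon, long seed) {
    Random rng = new Random(seed);
    int n = ranked.size();
    Integer[] idx = new Integer[n];
    double[] score = new double[n];
    for (int i = 0; i < n; i++) {
      idx[i] = i;
      score[i] = Math.log(i + 1) + epsilon * rng.nextGaussian();
    }
    // Sort positions by noisy score; ascending score = better rank.
    Arrays.sort(idx, Comparator.comparingDouble(i -> score[i]));
    List<String> out = new ArrayList<>(n);
    for (int i : idx) {
      out.add(ranked.get(i));
    }
    return out;
  }

  public static void main(String[] args) {
    List<String> top = Arrays.asList("a", "b", "c", "d", "e");
    System.out.println(Dither.dither(top, 1.0, 42L));
  }
}
```

An anti-flood provision would then cap how many items from any one source (e.g. one author or category) survive into the final list, after dithering.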

On Wed, Jan 4, 2012 at 6:52 AM, Nick Jordan <n...@influen.se> wrote:

> Thanks for the feedback.  In my particular scenario, I'd rather that the
> Recommender only return recommendations for items where the expected margin
> of error were smaller, even if that meant no recommendations were made for
> a specific set of users, or that a specific set of items could never be
> recommended.  Maybe what I'm describing is my own home-grown Recommender,
> which is fine, but I just want to confirm.
>
> It also appears that the evaluator uses estimatePreference in the
> Recommender to produce its output, and estimatePreference doesn't take a
> Rescorer parameter, so even if I handled this in a Rescorer the Evaluator
> would not pick it up as part of its output.  Is that also correct?
>
> Nick
>
> On Wed, Jan 4, 2012 at 8:53 AM, Sean Owen <sro...@gmail.com> wrote:
>
> > After thinking about it more, I think your theory is right.
> >
> > You really should use more like 90% of your data to train, and 10% to
> > test, rather than the other way around. Here it seems fairly clear that
> > the 10% training set is returning a result that isn't representative of
> > the real performance. That's how I'd really "fix" this, plain and simple.
> >
> > Sean
> >
> > On Wed, Jan 4, 2012 at 11:42 AM, Nick Jordan <n...@influen.se> wrote:
> >
> > > Yeah, I'm a little perplexed.  By low-rank items I mean items that
> > > have a low number of preferences, not a low average preference.
> > > Basically, if we don't have some level of confidence in our
> > > ItemSimilarity, based on the fact that not many people have given a
> > > preference, good or bad, don't recommend them.  To your point, though,
> > > LogLikelihood may already account for that, making these results even
> > > more surprising.
> > >
> > >
> >
>