Actually one more thought here: reduce your 'at' parameter from 10 to
something like 3.

Looks like your users have about 6 preferences each on average. To
compute these metrics, the evaluator has to hold out some of each
user's preferences and see whether they are recommended back. But if
your "at" is 10, a user needs well more than 10 preferences for the
test to be meaningful. The framework actually won't bother with users
who have fewer than 2 * at = 20 preferences in this case.
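
Concretely, a call would look something like this -- a minimal
sketch, where the file name, similarity choice, and relevance
threshold are placeholders for whatever you actually use (argument
order per the current evaluate() signature):

import java.io.File;
import org.apache.mahout.cf.taste.common.TasteException;
import org.apache.mahout.cf.taste.eval.IRStatistics;
import org.apache.mahout.cf.taste.eval.RecommenderBuilder;
import org.apache.mahout.cf.taste.eval.RecommenderIRStatsEvaluator;
import org.apache.mahout.cf.taste.impl.eval.GenericRecommenderIRStatsEvaluator;
import org.apache.mahout.cf.taste.impl.model.file.FileDataModel;
import org.apache.mahout.cf.taste.impl.recommender.GenericItemBasedRecommender;
import org.apache.mahout.cf.taste.impl.similarity.LogLikelihoodSimilarity;
import org.apache.mahout.cf.taste.model.DataModel;
import org.apache.mahout.cf.taste.recommender.Recommender;

public class IRStatsExample {
  public static void main(String[] args) throws Exception {
    DataModel model = new FileDataModel(new File("prefs.csv")); // placeholder path

    RecommenderBuilder builder = new RecommenderBuilder() {
      public Recommender buildRecommender(DataModel model) throws TasteException {
        // item-based recommender; swap in whatever similarity you use
        return new GenericItemBasedRecommender(model, new LogLikelihoodSimilarity(model));
      }
    };

    RecommenderIRStatsEvaluator evaluator = new GenericRecommenderIRStatsEvaluator();
    // at = 3: users now only need 2 * 3 = 6 preferences to be evaluated,
    // instead of 20 with at = 10
    IRStatistics stats = evaluator.evaluate(
        builder,
        null,  // no special DataModelBuilder
        model,
        null,  // no IDRescorer
        3,     // 'at'
        3.0,   // relevance threshold -- a guess; adjust to your rating scale
        1.0);  // evaluate using all users
    System.out.println(stats.getPrecision() + " / " + stats.getRecall());
  }
}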

That could be biasing the test, but it still doesn't really explain
why it would come out 0 -- could be chance, but maybe not.

I think precision/recall tests are a little ill-defined for
recommenders. (We just had an interesting thread about this on
mahout-dev.) Really, RecommenderEvaluator is what you want to use
in general.
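
For the record, it's about the same amount of code. A sketch, reusing
the builder and model from the snippet above:

RecommenderEvaluator evaluator =
    new AverageAbsoluteDifferenceRecommenderEvaluator();
// train on 70% of each user's preferences, test on the rest,
// using 100% of users
double score = evaluator.evaluate(builder, null, model, 0.7, 1.0);
// score = average absolute difference between estimated and actual
// preference values; lower is better, 0.0 would be perfect

(The classes are org.apache.mahout.cf.taste.eval.RecommenderEvaluator
and org.apache.mahout.cf.taste.impl.eval.AverageAbsoluteDifferenceRecommenderEvaluator.)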

GenericRecommenderIRStatsEvaluator exists mostly for the case where
your data set doesn't contain any ratings at all -- just boolean
"likes". Then this is the only kind of test you can run.
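
If that were your situation, the setup would look roughly like this.
A sketch, assuming your version has GenericBooleanPrefDataModel; the
IDs are made up, and Tanimoto is just one sensible similarity for
ratingless data:

import org.apache.mahout.cf.taste.impl.common.FastByIDMap;
import org.apache.mahout.cf.taste.impl.common.FastIDSet;
import org.apache.mahout.cf.taste.impl.model.GenericBooleanPrefDataModel;
import org.apache.mahout.cf.taste.impl.similarity.TanimotoCoefficientSimilarity;

// each user maps to the set of item IDs they "like" -- no rating values
FastByIDMap<FastIDSet> data = new FastByIDMap<FastIDSet>();
FastIDSet user1Items = new FastIDSet();
user1Items.add(101L);
user1Items.add(102L);
data.put(1L, user1Items);
DataModel boolModel = new GenericBooleanPrefDataModel(data);

// Tanimoto ignores preference values entirely, so it suits boolean data;
// the IR stats evaluation then works the same way as above
Recommender recommender = new GenericItemBasedRecommender(
    boolModel, new TanimotoCoefficientSimilarity(boolModel));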


The next step, honestly, is to attach a debugger and see what's
happening inside the evaluation. I'm having trouble guessing what the
issue is, or what strange interaction with your data might be causing it.



On Wed, Feb 24, 2010 at 7:16 PM, Sean Owen <[email protected]> wrote:
> What is your DataModel like? What is your range of ratings?
>
> I don't think it's sparsity, no. These are results from just a handful
> of users; it's possible that the test results in 0 for just those
> users. But it's not a great sign; I do think something's wrong here
> but don't yet see an issue with what you are doing.
>
>
> On Wed, Feb 24, 2010 at 7:07 PM, Mirko
> <[email protected]> wrote:
>> Hi all,
>> I would like to evaluate the IR statistics of my item-based recommender with 
>> the GenericRecommenderIRStatsEvaluator. However, precision and recall are 
>> 0.0 for each user, as I can see from my logs.
>>
>> Is it possible that my data is too sparse?
>
