There's no good guideline here -- it's not actually a great test. It
measures how many of the recommendations overlap with what the user
already knew about. But by definition users don't know about all, or
even most, of the "good" recommendations, so a low score doesn't
necessarily mean a bad recommender. This test exists for what it's
worth, for lack of anything better.
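
For what it's worth, here is a minimal sketch of how this test is
typically run over boolean data with GenericRecommenderIRStatsEvaluator,
computing precision and recall at 20. The input file "purchases.csv" and
the choice of log-likelihood similarity are just assumptions for
illustration, not the only way to do it:

import java.io.File;

import org.apache.mahout.cf.taste.common.TasteException;
import org.apache.mahout.cf.taste.eval.DataModelBuilder;
import org.apache.mahout.cf.taste.eval.IRStatistics;
import org.apache.mahout.cf.taste.eval.RecommenderBuilder;
import org.apache.mahout.cf.taste.eval.RecommenderIRStatsEvaluator;
import org.apache.mahout.cf.taste.impl.common.FastByIDMap;
import org.apache.mahout.cf.taste.impl.eval.GenericRecommenderIRStatsEvaluator;
import org.apache.mahout.cf.taste.impl.model.GenericBooleanPrefDataModel;
import org.apache.mahout.cf.taste.impl.model.file.FileDataModel;
import org.apache.mahout.cf.taste.impl.recommender.GenericBooleanPrefItemBasedRecommender;
import org.apache.mahout.cf.taste.impl.similarity.LogLikelihoodSimilarity;
import org.apache.mahout.cf.taste.model.DataModel;
import org.apache.mahout.cf.taste.model.PreferenceArray;
import org.apache.mahout.cf.taste.recommender.Recommender;

public class BooleanPrecisionRecall {
  public static void main(String[] args) throws Exception {
    // "purchases.csv" is hypothetical: one "userID,itemID" line per purchase
    DataModel model = new GenericBooleanPrefDataModel(
        GenericBooleanPrefDataModel.toDataMap(
            new FileDataModel(new File("purchases.csv"))));

    RecommenderBuilder recommenderBuilder = new RecommenderBuilder() {
      public Recommender buildRecommender(DataModel model) throws TasteException {
        // Log-likelihood similarity ignores preference values, so it
        // suits all-1.0 boolean data
        return new GenericBooleanPrefItemBasedRecommender(
            model, new LogLikelihoodSimilarity(model));
      }
    };

    // Rebuild a boolean-preference model from each training split
    DataModelBuilder modelBuilder = new DataModelBuilder() {
      public DataModel buildDataModel(FastByIDMap<PreferenceArray> trainingData) {
        return new GenericBooleanPrefDataModel(
            GenericBooleanPrefDataModel.toDataMap(trainingData));
      }
    };

    RecommenderIRStatsEvaluator evaluator = new GenericRecommenderIRStatsEvaluator();
    // Precision/recall at 20, evaluating 100% of users; CHOOSE_THRESHOLD
    // lets the framework pick each user's relevance cutoff
    IRStatistics stats = evaluator.evaluate(
        recommenderBuilder, modelBuilder, model, null, 20,
        GenericRecommenderIRStatsEvaluator.CHOOSE_THRESHOLD, 1.0);

    System.out.println("Precision@20: " + stats.getPrecision());
    System.out.println("Recall@20: " + stats.getRecall());
  }
}

Note that the evaluator holds out each user's "relevant" items itself
rather than using your own 80/20 split, so its numbers won't match a
hand-rolled test exactly.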

On Sat, Jun 26, 2010 at 7:22 PM, pranay venkata <svpra...@gmail.com> wrote:
> Hi,
> Thanks to all for immediate responses.
>
> I have tested my binary recommender on the one-million dataset by dividing it
> into an 80% train / 20% test split, and I observe an average precision
> of 0.22 (i.e. out of the 20 recommendations produced by the recommender,
> around 4-5 match items in the test data-set) and an average
> recall of 0.0135 for my recommendations.
> I would like to know the quality of these recommendations given these
> precision and recall values. How does one estimate the quality of a
> recommender from its precision and recall, and how much can it practically
> be improved?
>
> Thanks,
> svpranay.
>
> On Fri, Jun 11, 2010 at 6:00 PM, pranay venkata <svpra...@gmail.com> wrote:
>
>>  Hi,
>>
>> I'm a newbie to Mahout. My aim is to produce recommendations on binary user
>> purchase data, so I applied an item-item similarity model to compute top-N
>> recommendations for the MovieLens data, treating ratings of 1-3 as a 0 and
>> ratings of 4-5 as a 1. Then I tried evaluating my recommendations against the
>> ratings in the test data, but there were hardly two or three matches between
>> my top 20 recommendations and the top-rated items in the test data, and no
>> match at all for most users.
>>
>> So are my recommendations totally bad by nature, or do I need to use a
>> different measure to evaluate them?
>>
>> Please help me! Thanks in advance.
>>
>> Pranay, 2nd yr, UG student.
>>
>>
>
>
> --
> regards
> svpranay
>
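
PS: on converting the MovieLens ratings to boolean preferences as
described above -- "treating 1-3 as a 0" should really mean dropping
those pairs entirely, since a boolean model only records presence or
absence. A rough sketch, assuming the tab-separated
"user item rating timestamp" format of the MovieLens files; the
file names are placeholders:

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;

public class ThresholdToBoolean {
  public static void main(String[] args) throws IOException {
    BufferedReader in = new BufferedReader(new FileReader("ua.base"));
    PrintWriter out = new PrintWriter(new FileWriter("purchases.csv"));
    String line;
    while ((line = in.readLine()) != null) {
      String[] tokens = line.split("\t");
      // Keep only ratings of 4-5 as "purchases"; omit 1-3 entirely
      // rather than writing them out as 0
      if (Double.parseDouble(tokens[2]) >= 4.0) {
        out.println(tokens[0] + ',' + tokens[1]);
      }
    }
    out.close();
    in.close();
  }
}

FileDataModel treats "userID,itemID" lines with no preference value as
boolean data, so the output file feeds straight into the evaluation
sketch above.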
