Hello,

When testing the Mahout example BookCrossingRecommender with default settings
(GenericUserBasedRecommender, PearsonCorrelationSimilarity,
NearestNUserNeighborhood), I noticed that the results of the evaluation
(AverageAbsoluteDifferenceRecommenderEvaluator) vary randomly from one
run to the next: I get scores between 2.1 and 4.8.
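
For reference, here is a minimal sketch of the evaluation setup I mean
(the ratings file name, the neighborhood size of 10, and the 90/10
training split are my assumptions, not necessarily the example's exact
values):

import java.io.File;

import org.apache.mahout.cf.taste.common.TasteException;
import org.apache.mahout.cf.taste.eval.RecommenderBuilder;
import org.apache.mahout.cf.taste.eval.RecommenderEvaluator;
import org.apache.mahout.cf.taste.impl.eval.AverageAbsoluteDifferenceRecommenderEvaluator;
import org.apache.mahout.cf.taste.impl.model.file.FileDataModel;
import org.apache.mahout.cf.taste.impl.neighborhood.NearestNUserNeighborhood;
import org.apache.mahout.cf.taste.impl.recommender.GenericUserBasedRecommender;
import org.apache.mahout.cf.taste.impl.similarity.PearsonCorrelationSimilarity;
import org.apache.mahout.cf.taste.model.DataModel;
import org.apache.mahout.cf.taste.neighborhood.UserNeighborhood;
import org.apache.mahout.cf.taste.recommender.Recommender;
import org.apache.mahout.cf.taste.similarity.UserSimilarity;

public class BookCrossingEval {
  public static void main(String[] args) throws Exception {
    // Book-Crossing ratings file; the path is an assumption
    DataModel model = new FileDataModel(new File("BX-Book-Ratings.csv"));

    RecommenderBuilder builder = new RecommenderBuilder() {
      @Override
      public Recommender buildRecommender(DataModel model) throws TasteException {
        UserSimilarity similarity = new PearsonCorrelationSimilarity(model);
        // Neighborhood size 10 is an assumed value
        UserNeighborhood neighborhood =
            new NearestNUserNeighborhood(10, similarity, model);
        return new GenericUserBasedRecommender(model, neighborhood, similarity);
      }
    };

    RecommenderEvaluator evaluator =
        new AverageAbsoluteDifferenceRecommenderEvaluator();
    // Train on 90% of each user's prefs, evaluate on all users
    double score = evaluator.evaluate(builder, null, model, 0.9, 1.0);
    System.out.println("AAD score: " + score);
  }
}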

Considering the size of the input (about 100,000 users and 100,000 books),
I can't imagine that the randomness in the algorithm alone could lead to
evaluation differences that large.
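
To check whether the random training/test split alone accounts for the
spread, I suppose one could pin Mahout's RNG before evaluating and see
whether the score becomes repeatable (this assumes the evaluator draws
its randomness from RandomUtils.getRandom()):

import org.apache.mahout.common.RandomUtils;

public class SeededEval {
  public static void main(String[] args) throws Exception {
    // Fix the seed before any Taste code runs, so every run draws the
    // same random training/test split (assumption: the evaluator uses
    // RandomUtils.getRandom() internally)
    RandomUtils.useTestSeed();
    // ... same evaluation code as in the sketch above ...
  }
}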

What do you think?
