I don't think that AAD is a good way to compare the recommendations. A better way is to think about the application. In that application, you are likely to show about a page of recommendations. The only question that matters is whether or not users find that page useful.
One useful surrogate to answer that question is AUC. Another is precision @20. The key to both of these is that they get at the heart of the question of whether or not the recommendations are ordered correctly. There's a rough sketch of measuring precision @20 after the quote below.

On Tue, Oct 25, 2011 at 4:12 PM, lee carroll <[email protected]> wrote:

> > No, you're welcome to make comparisons in these tables. It's valid.
>
> Okay, I think I'm back at square one.
> So we have an AAD of 1.2 using a Euclidean similarity measure. This is
> calculated for ratings in the range of 1 through to 10.
> For the same data we also have a Tanimoto AAD of 1.3.
>
> Now imagine the ratings are in the range of 1 through to 20, but
> all the users rate in exactly the same way: (rating value) * 2.
> We would now have an AAD of 2.4 for the Euclidean-driven recommender,
> but the Tanimoto AAD would still be 1.3.
>
> How can we use AAD to compare the two recommenders?
>
> A bit of background, just to explain why I'm labouring this point (and
> I'm well aware that I'm labouring it).
> Being able to describe AAD to a business stakeholder as "the amount a
> prediction would differ from the actual rating (lower is better)"
> makes the evaluation of the recommender vivid and concrete. The
> confidence this creates is not to be underestimated. However, how do I
> describe to a business stakeholder the meaning of a Tanimoto-produced
> AAD? I can't at the moment :-)
>
> cheers Lee C
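For what it's worth, here is a rough, untested sketch of how you might measure precision @20 with Mahout's Taste IR evaluator. The file name ratings.csv, the neighborhood size of 50, and evaluating on 100% of users are all placeholder choices, not recommendations:

import java.io.File;

import org.apache.mahout.cf.taste.common.TasteException;
import org.apache.mahout.cf.taste.eval.IRStatistics;
import org.apache.mahout.cf.taste.eval.RecommenderBuilder;
import org.apache.mahout.cf.taste.eval.RecommenderIRStatsEvaluator;
import org.apache.mahout.cf.taste.impl.eval.GenericRecommenderIRStatsEvaluator;
import org.apache.mahout.cf.taste.impl.model.file.FileDataModel;
import org.apache.mahout.cf.taste.impl.neighborhood.NearestNUserNeighborhood;
import org.apache.mahout.cf.taste.impl.recommender.GenericUserBasedRecommender;
import org.apache.mahout.cf.taste.impl.similarity.EuclideanDistanceSimilarity;
import org.apache.mahout.cf.taste.model.DataModel;
import org.apache.mahout.cf.taste.neighborhood.UserNeighborhood;
import org.apache.mahout.cf.taste.recommender.Recommender;
import org.apache.mahout.cf.taste.similarity.UserSimilarity;

public class PrecisionAt20 {
  public static void main(String[] args) throws Exception {
    // ratings.csv is hypothetical: one "userID,itemID,rating" line per preference
    DataModel model = new FileDataModel(new File("ratings.csv"));

    RecommenderBuilder builder = new RecommenderBuilder() {
      public Recommender buildRecommender(DataModel model) throws TasteException {
        UserSimilarity similarity = new EuclideanDistanceSimilarity(model);
        // neighborhood size of 50 is an arbitrary placeholder
        UserNeighborhood neighborhood =
            new NearestNUserNeighborhood(50, similarity, model);
        return new GenericUserBasedRecommender(model, neighborhood, similarity);
      }
    };

    RecommenderIRStatsEvaluator evaluator = new GenericRecommenderIRStatsEvaluator();
    // at = 20: judge only the "page" of 20 recommendations the user would actually see
    IRStatistics stats = evaluator.evaluate(
        builder, null, model, null, 20,
        GenericRecommenderIRStatsEvaluator.CHOOSE_THRESHOLD, // let Mahout pick the relevance cutoff
        1.0); // evaluate using 100% of the users

    System.out.println("precision@20 = " + stats.getPrecision());
    System.out.println("recall@20    = " + stats.getRecall());
  }
}

Because precision @20 looks only at rank order, it gives the same answer whether the ratings run 1 through 10 or 1 through 20, so the Euclidean and Tanimoto recommenders become directly comparable. And "X% of the page shown was relevant" is something a business stakeholder can picture.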
