Hi Young,

There are several tweaks available that can help you reduce response time.

* You can precompute the item similarities offline with
ItemSimilarityJob and load the results into memory in the online
recommender, so it can look them up directly from RAM.
* In ItemSimilarityJob you can impose a limit on the number of similar
items kept per item in the results; this bounds the number of
similarities that have to be considered when computing recommendations.
* At the start of the recommendation process, a CandidateItemsStrategy
implementation identifies the initial set of items that might be worth
recommending. There are several implementations available that are
worth trying, and you can also create your own version optimized for
your use case.
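To make the first two points concrete, here is a minimal sketch of what the online side could look like: an in-memory lookup table that holds the precomputed similarities and keeps only the top-k neighbors per item. The class and method names are illustrative only, not Mahout's API; ItemSimilarityJob itself writes its capped results to HDFS, and you would load them into a structure like this at startup.

```java
import java.util.*;

/**
 * Sketch of an in-memory lookup table for precomputed item similarities
 * (names are illustrative, not Mahout's API). Only the top maxPerItem
 * neighbors are kept per item, mirroring the cap that ItemSimilarityJob
 * can impose on its output.
 */
class SimilarityStore {

  static final class Neighbor {
    final long itemId;
    final double similarity;
    Neighbor(long itemId, double similarity) {
      this.itemId = itemId;
      this.similarity = similarity;
    }
  }

  private final int maxPerItem;
  private final Map<Long, List<Neighbor>> neighbors = new HashMap<>();

  SimilarityStore(int maxPerItem) {
    this.maxPerItem = maxPerItem;
  }

  /** Add one precomputed similarity pair (symmetric), truncating to top-k. */
  void add(long itemA, long itemB, double similarity) {
    insert(itemA, itemB, similarity);
    insert(itemB, itemA, similarity);
  }

  private void insert(long item, long other, double similarity) {
    List<Neighbor> list = neighbors.computeIfAbsent(item, k -> new ArrayList<>());
    list.add(new Neighbor(other, similarity));
    // keep the list sorted by descending similarity and capped at maxPerItem
    list.sort((a, b) -> Double.compare(b.similarity, a.similarity));
    if (list.size() > maxPerItem) {
      list.remove(list.size() - 1);
    }
  }

  /** Pure RAM lookup at request time; no disk or Hadoop access. */
  List<Neighbor> mostSimilar(long item) {
    return neighbors.getOrDefault(item, Collections.emptyList());
  }
}
```

At request time the recommender only iterates over these small, pre-truncated neighbor lists, which is where most of the latency win comes from.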

Furthermore, one of the key benefits of item-based recommendation is
that the item relations tend to be very static. Thus it might be
sufficient to precompute the item similarities at intervals (once a
day, for example, depending on your use case). If you only update your
data once a day, you can consider it read-only between those updates,
which makes it an ideal caching candidate. If you manage to cache your
computed recommendations, answering a request from an in-memory cache
should take less than 1 ms.
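A caching layer along these lines could be as simple as the following sketch (again, the names are illustrative, not Mahout's API): each user's recommendations are computed once, served from RAM afterwards, and the whole cache is dropped after each batch update.

```java
import java.util.*;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

/**
 * Sketch of a read-only recommendation cache (names are illustrative).
 * Between batch data refreshes the results are immutable, so each user's
 * recommendation list is computed once and then served from memory.
 */
class RecommendationCache {

  private final Map<Long, List<Long>> cache = new ConcurrentHashMap<>();
  private final Function<Long, List<Long>> compute; // the actual recommender call

  RecommendationCache(Function<Long, List<Long>> compute) {
    this.compute = compute;
  }

  /** Serve from the cache, computing lazily on the first request per user. */
  List<Long> recommend(long userId) {
    return cache.computeIfAbsent(userId, compute);
  }

  /** Call this after each batch update; the cache is rebuilt lazily. */
  void invalidateAll() {
    cache.clear();
  }
}
```

Using ConcurrentHashMap.computeIfAbsent keeps the cache thread-safe for concurrent requests without any explicit locking.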

--sebastian

Am 29.08.2010 21:47, schrieb Young:
> Hi all,
>  
> Based on the 1M dataset, about how many requests could be expected to be 
> handled at a time when using an item-based recommender, if the engine runs 
> on a Core2 2.4 GHz CPU with 4 GB of memory?
>  
> Thank you very much.
>  
> -- Young
>  
>  
