>> Thanks for confirming this. The only kind of update would be adding new 
>> key-value pairs. Having confirmed that this sort of update will be 
>> applied consistently to the cache, I am now a little worried about the 
>> performance of doing so, given the mention of the cache's mutex-lock design.
>>
>> If I understand this correctly, each thread (from a request) will lock 
>> the cache so that all other threads (requests) will have to wait. I intend 
>> to store multiple dictionaries (say 10) in the cache, and each dictionary 
>> will handle the data from a fixed set of users (say 30 of them) for a given 
>> period of time. If the cache truly behaves as above, then when one thread 
>> is updating the cache, all the other 10 * 30 - 1 = 299 threads will be 
>> blocked and will have to wait. This might drag down the efficiency of the 
>> server side.
>>
>
> As far as I can tell, the ram cache is not locked for the entire duration 
> of the request -- it is only locked very briefly to delete keys, update 
> access statistics, etc. So, I don't necessarily think this will pose a 
> performance problem.
>
> Anthony
>

That's great news! May I ask where you found these details in the source 
code? I just want to double-check that this is the case, as it is important 
to my design and implementation (I've sketched my understanding below). Many 
thanks again!
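For reference, here is my rough mental model of the brief-lock pattern you 
describe -- a minimal sketch only, with made-up class and method names, not 
the actual cache implementation:

import threading
import time

class RamCache:
    """Toy in-memory cache illustrating the brief-lock pattern: the mutex is
    held only around individual dictionary operations, never for the whole
    request."""

    def __init__(self):
        self._lock = threading.Lock()
        self._storage = {}  # key -> (timestamp, value)

    def __call__(self, key, compute, time_expire=300):
        now = time.time()
        with self._lock:                    # brief lock: a single dict lookup
            item = self._storage.get(key)
        if item is not None and now - item[0] < time_expire:
            return item[1]                  # cache hit, no further locking
        value = compute()                   # expensive work done outside the lock
        with self._lock:                    # brief lock: a single dict assignment
            self._storage[key] = (now, value)
        return value

cache = RamCache()

# Each of my ~10 dictionaries would live under its own key; adding a
# key-value pair to one of them re-stores that dictionary, so other threads
# wait only for the duration of the assignment above, not the whole request.
group_1 = cache('group_1', dict, time_expire=3600)
group_1['user_42'] = {'last_seen': time.time()}
cache('group_1', lambda: group_1, time_expire=0)  # store the updated dictionary

If that is roughly what happens, then when one of my 300 threads adds a 
key-value pair, the others would only wait for a single dictionary 
assignment rather than for the whole request, which would be perfectly 
acceptable for my use case.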
