> In general, the approach David suggests (
> https://groups.google.com/d/msg/web2py/4cBbis_B1i0/KkKnNwUw8lcJ) is 
> probably preferable, but below are answers to your questions...
>
> user_data = cache.ram('user_data', lambda:dict(), time_expire=None)
>>
>> # add the data from this user, this should also update the cached dict?
>> user_data[this_user_id] = submitted_data
>>
>
> The above would not update the dict in the cache -- you'd have to do that 
> explicitly:
>
> cache.ram('user_data', lambda: user_data, time_expire=[some positive value])
>

Thank you for your insights, Anthony.
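Just to check my understanding of the explicit update pattern, here is how I read it, sketched against a tiny stand-in for cache.ram (SimpleCache is my own mock, not web2py code, and the "time_expire=0 forces a refresh" convention is my assumption based on the docs):

```python
# SimpleCache is a minimal stand-in mimicking the relevant bits of
# web2py's cache.ram, just to illustrate the explicit re-store pattern.
class SimpleCache:
    def __init__(self):
        self._store = {}

    def ram(self, key, f, time_expire=None):
        # Assumed convention: time_expire=0 forces a refresh (recompute
        # and re-store); otherwise return the cached value if present.
        if time_expire == 0 or key not in self._store:
            self._store[key] = f()
        return self._store[key]

cache = SimpleCache()

# First access creates the dict in the cache.
user_data = cache.ram('user_data', lambda: dict(), time_expire=None)

# Mutate the local reference...
user_data['user_1'] = 'submitted_data'

# ...then, per the advice above, explicitly re-store it so the cache
# is guaranteed to hold the updated dict.
cache.ram('user_data', lambda: user_data, time_expire=0)
```

If that reading is right, every write would need this read-modify-restore cycle, which leads to my questions below.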

Regarding the pros and cons of this approach (compared to David's 
database approach), I wonder what the potential pitfalls/risks of the 
cache approach are. For example (but not limited to):

1. Is there any consistency issue for the data stored in web2py cache?

2. Is there any size limit on the data stored in web2py cache?

3. Is it thread-safe? For instance, if two threads A and B (requests from 
different users) try to access the same object (e.g. the 'user_data' dict) 
stored in the cache at the same time, would that cause any problems? I'm 
especially concerned about the corner case where A and B carry the very 
last two pieces of data needed to reach 'some_pre_defined_number'.
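To make that corner case concrete, here is a minimal sketch in plain Python threading, independent of web2py (the names like 'user_data' and SOME_PRE_DEFINED_NUMBER just mirror this thread): guarding the add-then-check step with a lock so exactly one request observes the threshold being met.

```python
import threading

# Two threads each submit the final piece of data and then check whether
# SOME_PRE_DEFINED_NUMBER has been reached. The lock makes the
# add-then-check step atomic, so exactly one thread fires the action.
SOME_PRE_DEFINED_NUMBER = 2
user_data = {}
lock = threading.Lock()
triggered = []  # records which thread saw the threshold met

def submit(user_id, data):
    with lock:
        user_data[user_id] = data
        if len(user_data) == SOME_PRE_DEFINED_NUMBER:
            triggered.append(user_id)

a = threading.Thread(target=submit, args=('user_a', 'data_a'))
b = threading.Thread(target=submit, args=('user_b', 'data_b'))
a.start(); b.start()
a.join(); b.join()

# Because additions are serialized by the lock, exactly one of the two
# threads observes the dict reaching the threshold.
```

Without the lock, I imagine both threads could add their data and then both (or neither) see the count equal to the threshold, which is exactly the scenario I'm worried about with a shared cached dict.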

Thanks!
