> web2py uses the query "db.log.id==X" as the key for the cache (where X
> will be replaced by the actual value).

Clever. Very simple, yet powerful.

Thanks for the clarification.
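
Just so I'm sure I picture the keying right, here is a rough sketch of how I
understand it (a simplified toy model only, not web2py's actual cache code):
the stringified query acts as the key, so two values of X produce two
separate cache entries.

    # Toy model of query-keyed caching (NOT web2py's real implementation).
    ram_cache = {}  # stand-in for cache.ram's backing store

    def cached_select(query_key, fetch, expire=3600):
        # 'expire' is ignored in this sketch; only the keying matters here.
        if query_key not in ram_cache:
            ram_cache[query_key] = fetch()
        return ram_cache[query_key]

    # Pretend these are the stringified queries built for X=1 and X=2:
    cached_select("log.id = 1", lambda: [{"id": 1, "msg": "first"}])
    cached_select("log.id = 2", lambda: [{"id": 2, "msg": "second"}])

    print(ram_cache.keys())  # two distinct entries, one per value of X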

> The problem is that if you have a lot of records this may cause a
> memory leak since, in theory, you may be caching every individual
> record.

Well, maybe not a memory leak per se, in the C++ sense of the term
where memory is held up yet no longer accessible because pointers to
it no longer exist.

However, I get what you are saying: there is a danger of unbounded
memory usage, and I would theorize that the danger is greater than
just every DB record being held in memory.

If you have several result sets with some rows overlapping between
them (i.e., in set terminology, the sets are not disjoint), couldn't
you end up with several times the number of records in the DB held in
memory?

Obviously, you have to exert care and make sure that the key sample
space (the set of possible cache keys) stays manageable, and possibly
use the disk cache (assuming that the disk cache is on a separate hard
disk from the DB), but that responsibility is incumbent on the end
user rather than the framework.

Obviously, the example I gave is not practical, but illustrates my
question well.
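
For what it's worth, the kind of mitigation I had in mind was just swapping
cache.ram for cache.disk in the helper (assuming cache.disk accepts the same
(cache model, expiration) tuple as cache.ram):

    # someModule.py -- same helper, but caching to disk instead of RAM
    # (assumes cache.disk takes the same (cache_model, time_expire) tuple).
    def ICacheLogs(cache, X):
        return db(db.log.id==X).select(db.log.ALL, cache=(cache.disk, 3600))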

>
> On Jan 28, 3:52 pm, Magnitus <fbunny2...@hotmail.com> wrote:
>
> > I'm trying to determine whether the built-in web2py cache will fulfill
> > my needs or not and thus need a clarification on the following...
>
> > Based on this completely artificial example to illustrate my
> > questions:
>
> > someModule.py:
>
> > def ICacheLogs(cache, X):
> >     return db(db.log.id==X).select(db.log.ALL, cache=(cache.ram, 3600))
>
> > default.py (controller):
>
> > from someModule import ICacheLogs
>
> > def f1():
> >      #Verify request.vars.X and whatnot
> >      logs = ICacheLogs(cache, int(request.vars.X))
> >      #Some more logic
>
> > def f2():
> >      #Verify request.vars.X and whatnot
> >      logs = ICacheLogs(cache, int(request.vars.X))
> >      #Some more logic
>
> > 1) If a user quickly accesses the f1 function with X set to 1 and then
> > again with X set to 2, will the logs result of second request be
> > cached on top of the logs result for first request (or worse, use the
> > cached result of the first request even though the parameters differ)
> > or will it know to cache them separately because they have different
> > parameters?
>
> > 2) If a user quickly accesses the f1 function with X set to 1 and then
> > accesses the f2 function with X set to 1 also, will the logs result in
> > f2 be taken from the cached result from the f1 call or will it be
> > cached again separately because the caching is performed from a
> > different function in the controller?
>
>
