> One real-world example is a full table rereading (rescanning) if a table
> occasionally has a size from cache_size + 1 to maybe 1.5 * cache_size. For
> the default SQLite cache size that means rereading tables of 2M to 3M.
> Not so great a disadvantage to change the algorithm for.
Yes, whenever one finds not-so-rare queries in their application that
require a full scan of a table larger than half of cache_size, I think they
should seriously consider changing their schema or increasing cache_size...

Pavel

On Wed, Sep 22, 2010 at 2:04 PM, Max Vlasov <max.vla...@gmail.com> wrote:
> On Wed, Sep 22, 2010 at 7:12 PM, Pavel Ivanov <paiva...@gmail.com> wrote:
>
>> > Is it OK for the cache to behave like this, or is some optimization
>> > possible to fix this?
>>
>> For this particular case I believe you can do some optimization by
>> making your own implementation of the cache.
>> Also I believe such "strange" behavior of the cache is pretty much
>> explainable. Remember that the standard implementation of the cache
>> replaces pages on an LRU basis, i.e. if the cache is full then a new
>> page replaces the oldest page, the one whose access time is smallest.
>
> Pavel, thanks, it does make sense. I don't think it's a huge problem; I
> just wondered after your reply whether such a reading pattern is a highly
> probable scenario or not. One real-world example is a full table rereading
> (rescanning) if a table occasionally has a size from cache_size + 1 to
> maybe 1.5 * cache_size. For the default SQLite cache size that means
> rereading tables of 2M to 3M. Not so great a disadvantage to change the
> algorithm for.
>
> Max

_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
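[Editor's note: the effect Max describes can be seen in a small simulation. This is a sketch, not code from the thread: it models a strict-LRU page cache (as in the standard behavior Pavel refers to) and scans a hypothetical table that is exactly one page larger than the cache, twice. On the second scan every lookup misses, because each page is evicted just before it is needed again.]

```python
from collections import OrderedDict

class LRUCache:
    """Minimal model of a page cache with strict LRU eviction."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()
        self.hits = 0
        self.misses = 0

    def access(self, page):
        if page in self.pages:
            self.pages.move_to_end(page)  # mark as most recently used
            self.hits += 1
        else:
            self.misses += 1
            if len(self.pages) >= self.capacity:
                self.pages.popitem(last=False)  # evict least recently used
            self.pages[page] = True

# Hypothetical numbers for illustration: a 2000-page cache and a
# table occupying cache_size + 1 = 2001 pages, scanned twice.
cache = LRUCache(capacity=2000)
table_pages = range(2001)
for _ in range(2):
    for p in table_pages:
        cache.access(p)

# First scan: 2001 cold misses. Second scan: page 0 was evicted when
# page 2000 came in, page 1 is evicted to readmit page 0, and so on --
# zero hits despite the table being only one page over the cache size.
print(cache.hits, cache.misses)  # prints "0 4002"
```

A cache with a scan-resistant policy (or simply a cache_size larger than the table) avoids this worst case, which is exactly the trade-off being weighed above.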