Something like that is already there:

expiration=3600 # seconds
rows=db(...).select(...,cache=(cache.ram,expiration))

reset with

rows=db(...).select(...,cache=(cache.ram,0))
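
For the per-record lookups (db.MyTable[i].somefield), one cached select per
small table can replace most of the individual Gets. A rough, untested sketch,
assuming db and cache come from the usual model file; the table
db.topic_example and the helper names are just placeholders:

def cached_examples():
    # one cached select per hour instead of one datastore RPC per record
    rows = db(db.topic_example.id > 0).select(cache=(cache.ram, 3600))
    # index the cached rows by id to mimic db.topic_example[i] access
    return dict((r.id, r) for r in rows)

def get_example(i):
    # in-memory lookup; fall back to the datastore only if the id is
    # not in the cached copy (e.g. a record inserted after caching)
    return cached_examples().get(i) or db.topic_example[i]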


On Sep 18, 5:27 pm, Jurgis Pralgauskis <jurgis.pralgaus...@gmail.com>
wrote:
> Hello,
>
> I think I got a bit trapped by DAL in GAE :)
> maybe because of the frequent use of db.MyTable[i].somefield
>
> /default/topic_examples/27001/view 200 2721ms 5145cpu_ms
> 2906api_cpu_ms
>
> service.call           #RPCs  real time  api time
> datastore_v3.Get       62     712ms      516ms
> datastore_v3.RunQuery  35     706ms      2390ms
> memcache.Get           3      27ms       0ms
> memcache.Set           2      12ms       0ms
>
> Though my tables are still quite small: ~ 5 main tables with ~ 100
> records in each.
>
> in order to decrease db requests, I am thinking of making some singleton-like
> interface, so I could ask
> data.MyTable[i].somefield
>
> instead of
> db.MyTable[i].somefield
>
> data would hold storage copies of the tables which I need to access often
> by index
>
> and if data doesn't have a particular table or its record, it could
> call db.MyTable.. then (and optionally cache the result the Singleton
> way)
>
> maybe this could be called select_cache.
> by the way, would find() perhaps be more appropriate than a separate
> select in such cases?
>
> this is just a plan for now; am I on the right track?
> how big (approximately) would the tables have to grow for my mechanism to
> become useless?
