Dear all, I want to share some results with you. I implemented a query in
two different ways. Given the following common code:
start = db.record.with_alias('start_point')
end = db.record.with_alias('end_point')
elapsed_time = end.gathered_on.epoch() - start.gathered_on.epoch()

The first query is (the constraint is in the query):
rows = db(query & (elapsed_time < 86400)).select(
    start.ALL,
    end.ALL,
    start.gathered_on.epoch(),
    end.gathered_on.epoch(),
    elapsed_time,
    orderby=start.gathered_on.epoch(),
    left=start.on((start.mac == end.mac) & (start.gathered_on < end.gathered_on)),
    cache=(cache.memcache, 3600),
    cacheable=True,
)
The second one is (the constraint is explicitly tested later, in Python):
rows = db(query).select(
    start.ALL,
    end.ALL,
    start.gathered_on.epoch(),
    end.gathered_on.epoch(),
    elapsed_time,
    orderby=start.gathered_on.epoch(),
    left=start.on((start.mac == end.mac) & (start.gathered_on < end.gathered_on)),
    cache=(cache.memcache, 3600),
    cacheable=True,
)
rows2 = [r for r in rows
         if r.end_point.gathered_on - r.start_point.gathered_on < datetime.timedelta(days=1)]
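
For completeness, the SQL constraint and the Python filter should select the
same rows, since 86400 seconds is exactly one day (a quick sanity check,
assuming gathered_on is a datetime field so that epoch() is in seconds):

import datetime
# 86400 s == 1 day, so "elapsed_time < 86400" in SQL and the timedelta filter
# in Python express the same constraint
assert datetime.timedelta(days=1).total_seconds() == 86400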

From the timing results I found that the second query is always faster, with
or without the cache:
Q_1: 0.273243904114 
Q_1 with cache: 0.0182011127472
Q_2: 0.250607967377
Q_2 with cache: 0.0158171653748

Besides the fact that the difference is only a few milliseconds, and that all
the rows satisfy the constraint anyway, what is not clear to me is why the
first query takes longer even when the cache is enabled. The question that
came to my mind is about computed columns: are they cached?
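
One way I thought of checking this, though I am not sure it is the right way
(I believe the DAL puts selected expressions that are not plain fields under
_extra), is to inspect what comes back from the cached Rows:

# list the selected column names, including the epoch()/elapsed_time expressions
print(rows.colnames)
# expressions selected alongside table fields should show up under _extra
print(rows.first()._extra)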

Paolo
