The only "non-essential" processing that has been mentioned is parsing and 
normalization of the records returned by the DB.

One can already skip this by specifying an alternate processor function

    db(...).select(processor=lambda rows,fields,colnames,blob_decode: ....)

where processor acts like the parse function defined in the DAL BaseAdapter.
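As a minimal sketch of the idea (simulated here without a real DB connection, and assuming the processor receives the raw rows plus fields, colnames and blob_decode, like BaseAdapter.parse), a pass-through processor simply returns the driver's tuples untouched:

```python
# Hedged sketch: a pass-through processor that skips parsing/normalization.
# A processor with this signature can be passed to select(processor=...);
# returning rows unchanged means no Rows object is built.

def raw_processor(rows, fields, colnames, blob_decode=True):
    # Return the raw DB tuples as-is: no parsing, no normalization.
    return rows

# Simulated usage (stand-in for db(...).select(processor=raw_processor)):
raw = [(1, 'Alice'), (2, 'Bob')]
result = raw_processor(raw, fields=None, colnames=['id', 'name'])
# result is the same list of raw tuples, untouched
```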


Anyway, delaying the parsing and making it lazy has no performance 
advantage. If you retrieve N records it is because you plan to do something 
with all of them. If you parse them at retrieve time you are guaranteed to 
do it only once. If you do it lazily, you make the select faster but you 
risk parsing the records more than once later, thus incurring an overall 
performance penalty.
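The cost argument can be illustrated with a toy example (not DAL code): with eager parsing each record is parsed exactly once at retrieve time, while a naive lazy scheme re-parses on every pass over the results.

```python
# Illustrative only: count parse calls under eager vs. naive lazy parsing.
parse_calls = 0

def parse(record):
    """Stand-in for the DAL's row parsing/normalization work."""
    global parse_calls
    parse_calls += 1
    return {'id': record[0], 'name': record[1]}

raw = [(1, 'a'), (2, 'b')]

# Eager: parse once, at retrieve time.
eager = [parse(r) for r in raw]
eager_cost = parse_calls          # 2 records -> 2 parse calls, guaranteed

# Naive lazy: parse on access; two passes over the results double the work.
parse_calls = 0
for _ in range(2):
    lazy_view = [parse(r) for r in raw]
lazy_cost = parse_calls           # 2 records x 2 passes -> 4 parse calls
```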

The only thing we can do to speed up the DAL is to avoid re-executing 
define_table on every request. We are working on this. One option would be 
to have the table definitions in a module (not a model) which is executed 
only once, and then use the new recorrect method to recycle an existing db 
object with all its table definitions. This would break migrations and it 
is still very experimental.
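The "define once in a module" idea amounts to a process-level cache around the table-definition step. A rough sketch of the pattern (the names define_all_tables and _db_cache are illustrative, not web2py API, and the real mechanism for recycling the db object is the experimental part mentioned above):

```python
# Hedged sketch: run the expensive table-definition step once per process
# and recycle the resulting db object on subsequent requests.

_db_cache = {}
define_calls = 0   # instrumentation: how many times definitions actually ran

def define_all_tables(uri):
    global define_calls
    if uri in _db_cache:
        # Already defined in this process: recycle the existing object.
        return _db_cache[uri]
    define_calls += 1                      # stands in for db.define_table(...)
    db = {'uri': uri, 'tables': ['person', 'thing']}
    _db_cache[uri] = db
    return db

db1 = define_all_tables('sqlite://storage.db')   # first request: defines tables
db2 = define_all_tables('sqlite://storage.db')   # later request: cache hit
```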

Is there something else you think we can improve?

Massimo

On Tuesday, 26 June 2012 12:05:00 UTC-5, pbreit wrote:
>
> Massimo, any ideas on DAL performance? I wonder if it would make sense to 
> have some flags to skip some of the non-essential processing?
