On Friday, June 22, 2012 11:37:10 AM UTC-4, Anthony wrote:
Maybe it would be possible to convert the results of executesql() to a
pseudo-Rows object (without all the processing of each individual field
value) so it could be used with the grid, etc.
The original ticket that prompted adding [...]
On 27 June 2012 03:34, Anthony abasta...@gmail.com wrote:
I wish it were possible to use other time-saving tools like SQLFORM.grid
with db.executesql().
At the moment, most of the representation tools, like SQLTABLE and
plugin_wiki's jqgrid, presuppose DAL usage.
Massimo, any ideas on DAL performance? I wonder if it would make sense to
have some flags to skip some of the non-essential processing?
The only non-essential processing that has been mentioned is the parsing and
normalization of the records returned by the DB.
One can already skip this by specifying an alternate processor function:
db(...).select(processor=lambda rows, fields, colnames, blob_decode: rows)
where the processor acts [...]
We should probably document that. Note that it doesn't appear to pass a
blob_decode argument to the processor (though the default parse() method [...]
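For readers outside web2py, the processor contract being discussed can be sketched in plain Python. The select() function and the exact signature below are illustrative stand-ins (not web2py source), using sqlite3 in place of a real DAL adapter: when a processor is supplied, the raw rows are handed over untouched instead of being parsed.

```python
import sqlite3

def select(db, sql, processor=None):
    # Sketch of the web2py contract: fetch raw rows and, if a processor
    # is given, pass them through instead of building a full Rows object.
    cursor = db.execute(sql)
    colnames = [d[0] for d in cursor.description]
    rows = cursor.fetchall()
    if processor is not None:
        # web2py calls processor(rows, fields, colnames, blob_decode);
        # fields are omitted here for brevity.
        return processor(rows, None, colnames, False)
    # Default path: a crude stand-in for the DAL's full parse()/Rows building.
    return [dict(zip(colnames, r)) for r in rows]

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE item (id INTEGER, name TEXT)")
db.execute("INSERT INTO item VALUES (1, 'spam'), (2, 'eggs')")

# A processor that just returns the raw tuples, skipping all parsing:
raw = select(db, "SELECT id, name FROM item",
             processor=lambda rows, fields, colnames, blob_decode: rows)
print(raw)  # → [(1, 'spam'), (2, 'eggs')]
```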
This wouldn't help with SQLFORM.grid, which does its own query, but for
other cases [...]
I have an app where I pull down 5,000 records. I started with the DAL and
db.item.ALL, and it was taking around 4 seconds. Specifying just 5 fields in
the select() got it down to around 2 seconds. Then I implemented the same
thing with executesql and got it down to 500 ms.
So, selecting only the [...]
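The kind of overhead being measured can be sketched with the standard library. Here sqlite3 stands in for the database, and the dict-wrapping loop is only a rough proxy for the DAL's per-row, per-field processing, not its actual code:

```python
import sqlite3
import time

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE item (id INTEGER, price REAL, name TEXT)")
db.executemany("INSERT INTO item VALUES (?, ?, ?)",
               [(i, i * 1.5, "item%d" % i) for i in range(5000)])

# Raw path (what executesql gives you): plain tuples, no per-field work.
t0 = time.perf_counter()
raw = db.execute("SELECT id, price, name FROM item").fetchall()
t_raw = time.perf_counter() - t0

# DAL-like path: wrap every record in a dict keyed by column name,
# a rough stand-in for the parsing select() performs on each field.
t0 = time.perf_counter()
cols = ("id", "price", "name")
wrapped = [dict(zip(cols, r))
           for r in db.execute("SELECT id, price, name FROM item")]
t_wrap = time.perf_counter() - t0

print("raw: %.4fs  wrapped: %.4fs" % (t_raw, t_wrap))
```

The absolute numbers depend on the machine, but the point is that the per-row work scales with the row count, which is why it dominates at 5,000 records.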
From what I understand, people use DAL or ORM layers for abstraction purposes.
A struct with pointers to functions in C will have less overhead
than a class with member functions in C++, but working at this
higher level of abstraction allows for some fancy time-saving
mechanisms such as [...]
The DAL is a pure Python implementation, so it is slower than a C implementation.
With large datasets the overhead is high, so it is better to use executesql.
Often there's no other way than pulling all the records at once, but
sometimes a lazy approach to data extraction, with the use of limitby,
can [...]
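The lazy limitby idea can be sketched as plain SQL paging. This is a sketch with sqlite3 and a hypothetical iter_pages helper; in web2py the per-page query would instead be something like db(...).select(limitby=(offset, offset + page_size)):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE item (id INTEGER)")
db.executemany("INSERT INTO item VALUES (?)", [(i,) for i in range(25)])

def iter_pages(db, page_size=10):
    # Lazy extraction: fetch one page at a time instead of everything at once.
    offset = 0
    while True:
        page = db.execute(
            "SELECT id FROM item ORDER BY id LIMIT ? OFFSET ?",
            (page_size, offset)).fetchall()
        if not page:
            break
        yield page
        offset += page_size

pages = list(iter_pages(db))
print([len(p) for p in pages])  # → [10, 10, 5]
```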
On 22 June 2012 10:21, Michele Comitini michele.comit...@gmail.com wrote:
The DAL is a pure python implementation so it is slower than a C implementation. [...]
I'd be interested to understand what processing goes on. It's
definitely very pleasant working with DAL rows, but in this case it's not viable.
@pbreit: if I'm not mistaken, the parse() function in the DAL handles a lot
... basically it casts every value returned from the database to the correct
type (blob, integer, double, datetime, list:*, to name a few),
handles virtual fields, and prepares the Rows object, wrapping records in Storage().
BTW, 1) is quite achievable in a breeze, if not selecting from multiple tables
(i.e. no db.dogs.name, db.person.name):
from gluon.storage import Storage
raw_rowset = db.executesql(thequery, as_dict=True)
myrowset = [Storage(row) for row in raw_rowset]
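The snippet above needs web2py's gluon.storage. For readers without web2py, Storage is essentially a dict whose keys are also readable as attributes; a minimal stand-in (not the gluon implementation) behaves like this:

```python
class Storage(dict):
    # Minimal stand-in for gluon.storage.Storage: a dict readable through
    # attributes. Missing keys return None in web2py; mirrored here.
    def __getattr__(self, key):
        return self.get(key)

    def __setattr__(self, key, value):
        self[key] = value

raw_rowset = [{"id": 1, "name": "spam"}, {"id": 2, "name": "eggs"}]
myrowset = [Storage(row) for row in raw_rowset]
print(myrowset[0].name)  # → spam
```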
No time to patch DAL's executesql and [...]
Interesting, I'll try that out.
I can see a million rows being slow, but it seems 5,000 should move pretty
well. With the increasing usage of JavaScript frameworks, it seems like
more data is being retrieved for manipulation on the front-end. But in that
case you just need JSON, I think, with [...]
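Serving raw rows straight to a front-end as JSON can be sketched with the standard library; here sqlite3 stands in for db.executesql(..., as_dict=True), and json.dumps does the rest:

```python
import json
import sqlite3

db = sqlite3.connect(":memory:")
db.row_factory = sqlite3.Row  # rows become addressable by column name
db.execute("CREATE TABLE item (id INTEGER, name TEXT)")
db.execute("INSERT INTO item VALUES (1, 'spam'), (2, 'eggs')")

# No DAL parsing, no Rows object: fetch, convert to dicts, serialize.
rows = db.execute("SELECT id, name FROM item").fetchall()
payload = json.dumps([dict(r) for r in rows])
print(payload)  # → [{"id": 1, "name": "spam"}, {"id": 2, "name": "eggs"}]
```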