> On Nov 12, 2014, at 4:46 AM, Ladislav Lenart <lenart...@volny.cz> wrote:
> 
> Hello.
> 
> I am in the process of eliminating bottlenecks in our application.
> 
> I have measured relative speed of the following querying methods:
> limit = 10000
> q_orm = session.query(Foo).limit(limit)
> q_id = session.query(Foo.id).limit(limit) # primary key
> stmt = q_id.statement
> sql = "SELECT foo.id FROM foo LIMIT {}".format(limit)
> 
> Execution:
> r = q_orm.all()
> r = q_id.all()
> r = session.execute(stmt).fetchall()
> r = session.execute(sql).fetchall()
> 
> I have found the following relative speeds of the above methods:
> q_orm   1x  (slowest, expected)
> q_id   13x  (relatively slow, NOT expected)
> stmt   70x  (fastest, expected)
> sql    70x  (fastest, expected)
> 
> I am curious about the relative slowness of the second method (q_id). What is it
> doing? I thought there should be NO ORM overhead given that the query does NOT
> return ORM instances.

There is overhead in the named tuple that is returned for each row.  This overhead is 
improved in 1.0; see 
http://docs.sqlalchemy.org/en/latest/changelog/migration_10.html#new-keyedtuple-implementation-dramatically-faster
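
For reference, a minimal sketch of the faster path that the stmt/sql timings 
above already point at, assuming the same mapped Foo class and Session as in 
the question; executing the Core statement returns plain rows and skips the 
per-row named tuple construction that Query.all() performs:

# build the Core SELECT from the ORM query, then execute it directly
stmt = session.query(Foo.id).limit(10000).statement
ids = [row[0] for row in session.execute(stmt).fetchall()]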

