Hello.

I am in the process of eliminating bottlenecks in our application.

I have measured the relative speed of the following querying methods:
limit = 10000
q_orm = session.query(Foo).limit(limit)
q_id = session.query(Foo.id).limit(limit) # primary key
stmt = q_id.statement
sql = "SELECT foo.id FROM foo LIMIT {}".format(limit)

Execution:
r = q_orm.all()
r = q_id.all()
r = session.execute(stmt).fetchall()
r = session.execute(sql).fetchall()
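
For context, a minimal sketch of the kind of timing harness that produces such
a comparison, reusing the variables defined above (the repeat/number counts are
illustrative assumptions, not our exact setup):

import timeit

def bench(label, fn, number=10):
    # Run each variant a few times and report the best wall-clock time.
    best = min(timeit.repeat(fn, number=number, repeat=3))
    print("{:6s} {:.3f}s".format(label, best))

bench("q_orm", lambda: q_orm.all())
bench("q_id",  lambda: q_id.all())
bench("stmt",  lambda: session.execute(stmt).fetchall())
bench("sql",   lambda: session.execute(sql).fetchall())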

I found the following relative speeds for the above methods:
q_orm   1x  (slowest, expected)
q_id   13x  (relatively slow, NOT expected)
stmt   70x  (fastest, expected)
sql    70x  (fastest, expected)

I am curious about the relative slowness of the second method (q_id). What is it
doing? I thought there would be NO ORM overhead, given that the query does NOT
return ORM instances.
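
In case it helps to see what each path actually returns, a quick check (same
session and Foo as above; the exact row types depend on the SQLAlchemy version):

row_query = q_id.first()                             # row produced by the Query layer
row_core = session.execute(q_id.statement).first()   # plain Core result row
print(type(row_query), type(row_core))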

Given the above results, is the following style reliable?

# Imagine a rather complex query with several joins and
# filter conditions. Specifically, the query HAS various
# literal arguments (e.g. Foo.bar == bar).
q = session.query(Foo.id).distinct().join(...).filter(...)
r = session.execute(q.statement).fetchall()
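
One way to double-check that the literal arguments survive the q.statement
round-trip is to inspect the compiled parameters. A sketch with a simplified
query (the bar value and the single filter are just illustrative assumptions,
standing in for the real joins and conditions):

bar = 42  # hypothetical filter value, only for illustration
q = session.query(Foo.id).distinct().filter(Foo.bar == bar)
compiled = q.statement.compile()
print(str(compiled))     # e.g. SELECT DISTINCT foo.id FROM foo WHERE foo.bar = :bar_1
print(compiled.params)   # e.g. {'bar_1': 42} -- bound values carried with the statement
r = session.execute(q.statement).fetchall()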


Thank you in advance,

Ladislav Lenart
