This is going to depend wildly on how many things are being sorted, and
what those things are. This topic is usually either a premature
optimization or a case of "you're doing it wrong".
Imagine this query in Postgres:
SELECT * FROM records ORDER BY timestamp DESC;
If there are 1,000 items in the
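The sorting question above can be sketched like this (SQLAlchemy 1.4+ Core against SQLite as a stand-in for Postgres; the `records` table and its columns are illustrative assumptions, not from the thread). With an index on the sort column, the database can serve `ORDER BY ... DESC` from the index rather than sorting every matching row at query time:

```python
# Illustrative sketch: index the sort column so ORDER BY can walk the
# index instead of sorting all rows. Table/column names are assumed.
from sqlalchemy import Column, DateTime, Integer, MetaData, Table, create_engine, select

metadata = MetaData()
records = Table(
    "records", metadata,
    Column("id", Integer, primary_key=True),
    Column("timestamp", DateTime, index=True),  # supports the ORDER BY
)

engine = create_engine("sqlite://")  # stand-in for the Postgres database
metadata.create_all(engine)

with engine.connect() as conn:
    newest_first = conn.execute(
        select(records).order_by(records.c.timestamp.desc())
    ).fetchall()
```

Whether the index is actually used (and whether it matters at 1,000 rows versus millions) is exactly the "it depends" being made above; `EXPLAIN` on the real database is the way to check.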
On Fri, Jul 24, 2015 at 1:25 PM, Jonathan Vanasco jvana...@gmail.com
wrote:
Are you comparing the speed of SQLAlchemy Core operations or SQLAlchemy
ORM operations?
The ORM is considerably slower. The core engine is much faster.
Core.
--
Jon Nelson
Dyn / Senior Software Engineer
There is also this:
http://docs.sqlalchemy.org/en/latest/faq/performance.html
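The Core-vs-ORM gap being discussed can be sketched as follows (modern SQLAlchemy 1.4+ spelling; the `Customer` model is illustrative, not from the thread). Both paths emit a single SELECT, but the ORM additionally constructs instrumented objects per row, which is where the extra time goes:

```python
# Hedged sketch: same rows loaded via Core (plain tuples) and via the
# ORM (instrumented objects). Model and names are assumptions.
from sqlalchemy import Column, Integer, String, create_engine, select
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Customer(Base):
    __tablename__ = "customer"
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add_all(Customer(name=f"c{i}") for i in range(1000))
    session.commit()

# Core: plain rows, no identity map, no change tracking
with engine.connect() as conn:
    rows = conn.execute(select(Customer.__table__)).fetchall()

# ORM: full Customer instances, each with attribute instrumentation
with Session(engine) as session:
    objs = session.query(Customer).all()
```

Timing the two loops (e.g. with `timeit`) reproduces the kind of difference the FAQ page linked above analyzes in detail.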
Hello,
just in case you're not motivated to share mappings here, I would note that
an incorrect placement of a
flag like remote_side on a relation() may be causing this.
Antoine Pitrou wrote:
Hello,
just in case you're not motivated to share mappings here, I would note
that an incorrect placement of a
flag like remote_side on a relation() may be causing this.
I would have to produce anonymized mappings, but I will do so if it's
useful. What do you mean by incorrect placement of a flag like
`remote_side`? I do have one (exactly one) relation with a
`remote_side` flag, but the class it is defined on isn't involved in
the script I have timed here. (it
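For readers following along, here is what `remote_side` does on a self-referential relation (modern SQLAlchemy spelling; the `Node` model is illustrative, not the poster's mapping). `remote_side=[id]` marks the parent's `id` column as the "remote" side of the join, making `.parent` a many-to-one; placing the flag on the wrong relationship inverts the join direction, which is the kind of misconfiguration being suggested:

```python
# Sketch of a correctly placed remote_side flag on a self-referential
# relationship. The Node model is an assumption for illustration.
from sqlalchemy import Column, ForeignKey, Integer, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

class Node(Base):
    __tablename__ = "node"
    id = Column(Integer, primary_key=True)
    parent_id = Column(Integer, ForeignKey("node.id"))
    # remote_side=[id]: the parent row's id is the remote column,
    # so .parent is many-to-one and the backref is one-to-many.
    parent = relationship("Node", remote_side=[id], backref="children")

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    root = Node(id=1)
    leaf = Node(id=2, parent=root)
    session.add_all([root, leaf])
    session.commit()
```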
On Aug 15, 2009, at 10:26 PM, gizli wrote:
Turning on echo=True spits out all the queries that the application
generates. Currently I am directing this output to a file and then
looking through to derive statistics like what kind of tables are most
frequently accessed. I use this to see
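A variation on the file-grepping approach described above: the SQL echoed by `echo=True` also flows through the standard `sqlalchemy.engine` logger, so a custom handler can capture statements in-process for statistics. A minimal sketch (the handler class name is an assumption):

```python
# Sketch: capture emitted SQL via the "sqlalchemy.engine" logger instead
# of redirecting echo=True output to a file.
import logging

from sqlalchemy import create_engine, text

captured = []

class QueryCollector(logging.Handler):
    def emit(self, record):
        captured.append(record.getMessage())

log = logging.getLogger("sqlalchemy.engine")
log.addHandler(QueryCollector())
log.setLevel(logging.INFO)  # INFO level is where statements are logged

engine = create_engine("sqlite://")  # echo=True would print to stdout instead
with engine.connect() as conn:
    conn.execute(text("SELECT 1"))

select_count = sum(
    1 for m in captured if m.lstrip().upper().startswith("SELECT")
)
```

From `captured` it is straightforward to tally which tables appear most often.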
the Query.all() call only generates a single SQL statement at all
times. It's only when you access attributes on individual rows that a
second statement would occur. If the multiple queries truly occur
within the scope of the all() call, I'd check to see if you have a
@reconstructor or
Hello Mike,
Nailed it! Thanks a million, Mike!
Michael Bayer wrote:
the Query.all() call only generates a single SQL statement at all
times. [...]
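The behavior described in this exchange can be verified directly (modern SQLAlchemy; the `Parent`/`Child` model is illustrative): `Query.all()` emits exactly one SELECT, and a second statement appears only when a lazily loaded attribute is touched afterwards. Statements are counted with a cursor-execute event listener:

```python
# Sketch: count SELECTs to confirm Query.all() is one statement and the
# lazy attribute access is the second. Model names are assumptions.
from sqlalchemy import Column, ForeignKey, Integer, create_engine, event
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

class Parent(Base):
    __tablename__ = "parent"
    id = Column(Integer, primary_key=True)
    children = relationship("Child")  # lazy loading is the default

class Child(Base):
    __tablename__ = "child"
    id = Column(Integer, primary_key=True)
    parent_id = Column(Integer, ForeignKey("parent.id"))

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

selects = []

@event.listens_for(engine, "before_cursor_execute")
def _count(conn, cursor, statement, parameters, context, executemany):
    if statement.lstrip().upper().startswith("SELECT"):
        selects.append(statement)

with Session(engine) as session:
    session.add(Parent(id=1, children=[Child(), Child()]))
    session.commit()
    selects.clear()
    parents = session.query(Parent).all()  # exactly one SELECT
    n_after_all = len(selects)
    kids = parents[0].children             # lazy load: one more SELECT
    n_after_access = len(selects)
```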
On Tue, May 26, 2009 at 5:11 AM, Marcin Krol mrk...@gmail.com wrote:
What I do not get is why, after this query takes place, SQLA runs a lot
of those small queries - I included all (most?) the necessary columns in
the big initial query, so Host data should be filled in by SQLA eager
loading
Hello Mike,
Mike Conley wrote:
Are you assuming eager loading on the relation and actually getting lazy
loading?
Per the docs, lazy loading is the default unless you specify lazy=False
http://www.sqlalchemy.org/docs/05/reference/orm/mapping.html#sqlalchemy.orm.relation
I have changed
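The `lazy=False` setting mentioned above is spelled `lazy="joined"` in modern SQLAlchemy; the sketch below (with an illustrative `Host`/`Address` model echoing the thread's Host data) shows the effect: the collection comes back via a LEFT OUTER JOIN in the same statement, so no per-row queries fire afterwards:

```python
# Sketch: joined eager loading configured on the relation itself, so
# host.addresses is populated by the initial SELECT. Names are assumed.
from sqlalchemy import Column, ForeignKey, Integer, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

class Host(Base):
    __tablename__ = "host"
    id = Column(Integer, primary_key=True)
    # joined eager loading for this relation (lazy=False in 0.5-era API)
    addresses = relationship("Address", lazy="joined")

class Address(Base):
    __tablename__ = "address"
    id = Column(Integer, primary_key=True)
    host_id = Column(Integer, ForeignKey("host.id"))

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(Host(id=1, addresses=[Address(), Address()]))
    session.commit()

with Session(engine) as session:
    host = session.query(Host).first()  # one joined SELECT
    addrs = host.addresses              # already populated, no extra SQL
```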
When you load all those objects, saying query(A, B, C,
D).from_statement(huge statement), what you're not doing is loading
collections on individual objects, and depending on what your mappings
look like you may not be loading enough information for many-to-one
associations to occur either.
Awesome! Thank you very much indeed for the follow-up.
After turning off expire_on_commit, saving a newly created object takes
less than 1/3 of a second even with 100K objects in memory.
It's always nice to find the fast = True switch. :-)
On Sep 23, 6:00 pm, Michael Bayer [EMAIL PROTECTED] wrote:
I misread your ticket and the resolution has been corrected. The
commit() operation expires all objects present in the session as
described in
http://www.sqlalchemy.org/docs/05/session.html#unitofwork_using_committing
. Turn off expire_on_commit to disable the expiration operation,
which
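The fix under discussion looks like this in modern SQLAlchemy (the `Thing` model is illustrative): with `expire_on_commit=False`, `commit()` leaves attribute state in memory instead of expiring every object in the session, so later attribute access does not trigger a refresh SELECT per object:

```python
# Sketch: expire_on_commit=False keeps loaded state after commit().
# Model and names are assumptions for illustration.
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Thing(Base):
    __tablename__ = "thing"
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine, expire_on_commit=False) as session:
    t = Thing(id=1, name="a")
    session.add(t)
    session.commit()
    name_after_commit = t.name  # served from memory, no refresh query
```

With the default (`expire_on_commit=True`), the same attribute access after commit would emit a SELECT to refresh the row, which is what made the poster's 100K-object session slow.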
if you replace the offending function with its plain Python
variant, will it work?
A 50% boost... just because of the Python? You're constructing too many
queries over and over. Try caching them, and reusing them as building
blocks... - or cache their results... or change the model.
e.g. on my model
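The "cache your query constructs" advice can be sketched with SQLAlchemy 1.4+ Core (the `users` table and helper are illustrative assumptions): build the `select()` once with a `bindparam` and reuse it on every call, rather than reconstructing the statement each time:

```python
# Sketch: a statement built once at import time and reused as a
# building block with different parameter values.
from sqlalchemy import (
    Column, Integer, MetaData, String, Table, bindparam, create_engine, select,
)

metadata = MetaData()
users = Table(
    "users", metadata,
    Column("id", Integer, primary_key=True),
    Column("name", String),
)

# constructed once, reused for every lookup
USER_BY_NAME = select(users).where(users.c.name == bindparam("name"))

def find_users(conn, name):
    # only parameter binding happens per call, not statement construction
    return conn.execute(USER_BY_NAME, {"name": name}).fetchall()

engine = create_engine("sqlite://")
metadata.create_all(engine)
```

(Modern SQLAlchemy also caches compiled SQL internally, but avoiding repeated Python-side construction of the expression tree is still the point being made here.)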
On Jun 10, 2008, at 3:13 PM, Artur Siekielski wrote:
On Jun 10, 8:11 pm, Michael Bayer [EMAIL PROTECTED] wrote:
I would first take a look at the SQL
being issued as the first source of speed differences; if in 0.4.5
there's suddenly a whole series of deletes occurring which do not
On Jun 11, 11:53 am, [EMAIL PROTECTED] wrote:
if you replace the offending function with its plain Python
variant, will it work?
Which function do you mean?
A 50% boost... just because of the Python? You're constructing too many
queries over and over.
No, the main module in which we have
these two stack traces appear to show completely different operations
proceeding (in one it's a flush with a delete occurring, in the other
it's performing an INSERT or UPDATE).
my vague understanding of psyco is that it can be quite arbitrary as
to what kind of code it can improve and what
On Jun 10, 8:11 pm, Michael Bayer [EMAIL PROTECTED] wrote:
I would first take a look at the SQL
being issued as the first source of speed differences; if in 0.4.5
there's suddenly a whole series of deletes occurring which do not occur
within 0.4.4, then that's the source of the difference.
On Tuesday 10 June 2008 22:13:04 Artur Siekielski wrote:
[...]
the method that I improved in my checkin is related to the mapper's
construction of a newly loaded instance, and the test case that
improves 20% focuses most of its time loading a list of 2500 items.
the test you have here spends a lot of time doing lots of other things,
such as saving items
Michael Bayer wrote:
[snip]
actually a lot better than they've been in the past. if your tests are
useful, I might add them as well (but note that your attachments didn't
come through, so try again).
I forgot the attachments, sorry. Please find them here:
one thing that could make ORM loads much faster would be if you knew
the objects would not need to be flushed() at a later point, and you
disabled history tracking on those instances. this would prevent the
need to create a copy of the object's attributes at load time.
This reminds me a
if it's truly an issue of security then grants would be more
appropriate, since anything the ORM does to prevent a write
operation can be easily overridden, since it's Python. simplest thing
would be to use a Session that has flush() overridden, or an engine that
overrides execute() to check for
[...] simplest thing
would be to use a Session that has flush() overridden, or an engine that
overrides execute() to check for INSERT/UPDATE/DELETE statements and
throws an error [...]
I tried the ReadOnlySession class which overrides the flush() func. It works
like a charm; this adds a
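A sketch of the ReadOnlySession recipe being described (modern SQLAlchemy spelling; the `Item` model and the error message are assumptions): `flush()` raises whenever there is pending write state, while a flush with nothing pending stays a no-op so that autoflush-on-query still works:

```python
# Sketch: a Session subclass whose flush() refuses pending writes.
from sqlalchemy import Column, Integer, create_engine
from sqlalchemy.orm import Session, declarative_base

class ReadOnlySession(Session):
    def flush(self, *args, **kwargs):
        if self.new or self.dirty or self.deleted:
            raise RuntimeError("writes are not allowed on a read-only session")
        # nothing pending: allow the no-op flush (autoflush on queries)

Base = declarative_base()

class Item(Base):
    __tablename__ = "item"
    id = Column(Integer, primary_key=True)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

session = ReadOnlySession(engine)
items = session.query(Item).all()  # reads pass through normally
```

As the thread notes, this is a guard rail rather than real security; database-level grants are the robust option.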
the ORM is going to be slower in all cases since there is the overhead
of creating new object instances and populating them, as well as
initializing their attribute instrumentation and also a copy of their
attributes for the purposes of tracking changes when you issue a
flush() statement. this
Michael Bayer wrote:
the ORM is going to be slower in all cases since there is the overhead
of creating new object instances [...]
I've committed in r2174 some speed enhancements, not including the
abovementioned change to deferring the on-load copy operation (which
is a more involved change), that affords a 20% speed improvement in
straight instance loads and a 25% speed improvement in instances loaded
via eager