The columns only contain address strings and not much more.
Okay, I did some profiling for the customerorder query plus the two joins on
itemsbought and products:
1008030 function calls (968253 primitive calls) in 2.220 seconds
Ordered by: cumulative time
ncalls  tottime  percall  cumtime  percall filename:lineno(function)
That jump in time between __iter__ and all() seems kind of odd, and I wonder if
some of that is in setting up a larger span of memory to store the entire
result in a list; using yield_per() often gives much better results, though it
isn't compatible with collection-based joined eager loading.
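A minimal sketch of the yield_per() approach, using an in-memory SQLite database and a stripped-down Customerorder model as a stand-in for the thread's actual schema (the column names here are hypothetical):

```python
# Sketch: streaming a large result with yield_per() instead of buffering
# the whole thing with all().  Model and columns are hypothetical
# stand-ins for the thread's customerorder table.
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Customerorder(Base):
    __tablename__ = "customerorder"
    id = Column(Integer, primary_key=True)
    address = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add_all([Customerorder(address="addr %d" % i) for i in range(5000)])
session.commit()

# all() materializes one big list up front; yield_per(100) fetches rows
# in batches of 100 and keeps memory flat.  Note: yield_per() cannot be
# combined with collection-based joined eager loading.
count = 0
for order in session.query(Customerorder).yield_per(100):
    count += 1
print(count)  # 5000
```

The batch size (100 here) is a tuning knob; larger batches mean fewer round trips, smaller ones mean flatter memory.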
Slightly OT -
Why are 5k rows being shown on an admin's order menu?
5k rows sounds like the spec for an offline report (CSV/Excel).
Admin interfaces typically paginate data into much smaller chunks.
Putting 5k entries on a page is going to create a lot of usability concerns.
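For what it's worth, pagination in this setup would just be LIMIT/OFFSET on the query. A sketch, again with a hypothetical in-memory Customerorder model standing in for the real one:

```python
# Sketch: classic LIMIT/OFFSET pagination instead of rendering 5k rows.
# Model and data are hypothetical stand-ins.
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Customerorder(Base):
    __tablename__ = "customerorder"
    id = Column(Integer, primary_key=True)
    address = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add_all([Customerorder(address="addr %d" % i) for i in range(5000)])
session.commit()

def page(query, page_num, per_page=50):
    """Return one zero-based page of results via LIMIT/OFFSET."""
    return query.limit(per_page).offset(page_num * per_page).all()

first = page(session.query(Customerorder).order_by(Customerorder.id), 0)
print(len(first), first[0].address)  # 50 'addr 0'
```

An ORDER BY is important here: without it, LIMIT/OFFSET pages are not guaranteed stable between requests.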
That's what I did before; activating echo=True shows me the correct query, too.
The joinedload takes 1.3 seconds for these 5000 entries, and only 4 queries
in total for the complete eager load. Adding one more joinedload ends up at
2 seconds, which is way too much for 1700 customers and 5k bought items.
OK, keep going with the profiling. What's the total number of rows you are
seeing with the joined load, 5000 total? Sometimes a join is actually
scrolling through many more rows than that, with duplicate rows in the case
of a cartesian product; echo='debug' would show that.
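To make the "more rows than you think" point concrete, here is a small sketch with a hypothetical two-table schema standing in for customerorder/itemsbought: counting the raw joined rows (rather than distinct parent objects) shows the inflation directly.

```python
# Sketch: a join scrolls through one SQL row per (order, item) pair,
# not one per order.  Schema is a hypothetical stand-in.
from sqlalchemy import Column, ForeignKey, Integer, create_engine
from sqlalchemy.orm import declarative_base, relationship, sessionmaker

Base = declarative_base()

class Customerorder(Base):
    __tablename__ = "customerorder"
    id = Column(Integer, primary_key=True)
    itemsbought = relationship("Itemsbought", back_populates="order")

class Itemsbought(Base):
    __tablename__ = "itemsbought"
    id = Column(Integer, primary_key=True)
    order_id = Column(Integer, ForeignKey("customerorder.id"))
    order = relationship("Customerorder", back_populates="itemsbought")

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add_all([Customerorder(itemsbought=[Itemsbought() for _ in range(3)])
                 for _ in range(5)])
session.commit()

# 5 orders x 3 items each: the joined SELECT emits 15 rows, not 5.
rows = session.query(Customerorder.id).join(Customerorder.itemsbought).all()
print(len(rows))  # 15
```

With three joined collections the row count multiplies per relationship, which is exactly the kind of blowup echo='debug' (which logs result rows) would reveal.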
I think you need to profile the application like Mike said. Just to be
thorough, I'd suggest you time the Query execution and the ORM population
separately. Do you have an example of the faster raw SQL that you're trying
to generate?
When you run the query on the command line it is often faster because the
overhead of ORM object population isn't present.
For some tips on isolating where the speed issue is, follow the steps at:
http://docs.sqlalchemy.org/en/latest/faq.html#how-can-i-profile-a-sqlalchemy-powered-application
For too many joins, you also want to look into EXPLAIN PLAN.
On Sep 11, 2014, at 12:57 PM, tatütata wrote:
Okay, yeah, those would have been my next steps. I hoped more for some hints
on whether the setup is correct, or whether there are issues with the
relationship configuration. I thought speed-related issues only come up when
there are 100k entries and more? We are talking about 5000 here, which is why
I think I made a mistake.
Three joins over 100K rows is going to be very slow; two seconds seems in the
ballpark. EXPLAIN will show if there are any table scans taking place.
As far as why the relationship is faster, it might be doing something
different; check the echo=True output.
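A sketch of what checking for table scans looks like, using SQLite's EXPLAIN QUERY PLAN (other backends spell this EXPLAIN or EXPLAIN PLAN); the table and index names are hypothetical:

```python
# Sketch: asking the database for its plan before and after adding an
# index.  Backend here is SQLite; names are hypothetical stand-ins.
from sqlalchemy import create_engine, text

engine = create_engine("sqlite://")
with engine.connect() as conn:
    conn.execute(text(
        "CREATE TABLE customerorder (id INTEGER PRIMARY KEY, address TEXT)"))
    query = "SELECT * FROM customerorder WHERE address = 'somewhere'"

    # Without an index on address, the plan reports a full table scan.
    before = conn.execute(text("EXPLAIN QUERY PLAN " + query)).fetchall()
    print(before)  # contains 'SCAN' -- a full table scan

    conn.execute(text("CREATE INDEX ix_address ON customerorder (address)"))
    after = conn.execute(text("EXPLAIN QUERY PLAN " + query)).fetchall()
    print(after)   # now 'SEARCH ... USING ... INDEX ix_address'
```

The same technique works on the SQL that echo=True prints for the slow joinedload query: paste it after the backend's EXPLAIN keyword and look for scans on the joined tables.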
On Sep 11, 2014, at 2:35 PM,
Okay, I will try it. But maybe you got it wrong: I have only 5k rows.
On Thu, Sep 11, 2014 at 8:55 PM, Michael Bayer mike...@zzzcomputing.com
wrote:
three joins over 100K rows is going to be very slow, two seconds seems in
the ballpark. EXPLAIN will show if there are any table scans taking place.
The query that you are doing:
customerorders = sqlsession.query(Customerorder)\
.join(Itemsbought.order)\
.all()
probably doesn't do what you intend. In particular, it doesn't populate the
Customerorder.itemsbought collection. So when you iterate over customerorders
and access the itemsbought collection on each one, a new lazy load is emitted
for every order.
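A sketch of the contrast being drawn here: join() alone only filters the SELECT, while joinedload() (or contains_eager() on top of an explicit join) actually populates the collection in the same query. The schema below is a hypothetical stand-in for the thread's tables:

```python
# Sketch: loading orders and their items in one query with joinedload(),
# so iterating the collection afterwards emits no extra SQL.
# Schema and data are hypothetical stand-ins.
from sqlalchemy import Column, ForeignKey, Integer, create_engine
from sqlalchemy.orm import (declarative_base, joinedload, relationship,
                            sessionmaker)

Base = declarative_base()

class Customerorder(Base):
    __tablename__ = "customerorder"
    id = Column(Integer, primary_key=True)
    itemsbought = relationship("Itemsbought", back_populates="order")

class Itemsbought(Base):
    __tablename__ = "itemsbought"
    id = Column(Integer, primary_key=True)
    order_id = Column(Integer, ForeignKey("customerorder.id"))
    order = relationship("Customerorder", back_populates="itemsbought")

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add_all([Customerorder(itemsbought=[Itemsbought(), Itemsbought()])
                 for _ in range(3)])
session.commit()
session.expunge_all()

# One SELECT with a LEFT OUTER JOIN; the itemsbought collections come
# back already populated, with no per-order lazy loads afterwards.
orders = (session.query(Customerorder)
          .options(joinedload(Customerorder.itemsbought))
          .all())
total_items = sum(len(o.itemsbought) for o in orders)
print(len(orders), total_items)  # 3 6
```

If you need the join for filtering as well, contains_eager() lets the explicit join() double as the eager load instead of emitting a second JOIN.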