I'd note that those subqueryloads() render the effectiveness of yield_per() to
be almost nil.
This is how subqueryload works:
query(Parent).options(subqueryload(Parent.child)).all()
will give you:
SELECT * FROM parent
then, the moment a parent row is fetched, the first child collection is
populated by a second query that loads the children for the entire parent
result at once, which is why yield_per() gains you almost nothing.
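As a sketch, the two statements emitted for that mapping look roughly like this (table and column names are illustrative, based on the mapping above):

```sql
-- first statement, from query(Parent).all():
SELECT * FROM parent

-- second statement, emitted when the child collections are populated;
-- it wraps the original query as a subquery and joins every child row
-- to it in one shot, regardless of any yield_per() batching:
SELECT child.*, anon.parent_id
FROM (SELECT parent.id AS parent_id FROM parent) AS anon
JOIN child ON child.parent_id = anon.parent_id
ORDER BY anon.parent_id
```

Because the second statement loads all children for the full parent result at once, batching the parent rows saves little memory.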
.oO V Oo.
On 02/23/2012 02:50 PM, Michael Bayer wrote:
I'd note that those subqueryloads() render the effectiveness of yield_per() to
be almost nil.
I know. I've replicated the real use case from the application, which has
yet to see proper optimization, including better query planning
When we want to test if a Python program has a leak, we do that via seeing
how many uncollected objects are present. This is done via gc:
import gc
print("total number of objects:", len(gc.get_objects()))
That's the only real way to measure if the memory used by Python objects is
growing
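That measurement can be sketched as a small, self-contained snippet (the allocation loop below is just a stand-in for one iteration's real work):

```python
import gc

def object_count():
    """Total number of objects currently tracked by the garbage collector."""
    gc.collect()  # collect first, so garbage merely awaiting collection isn't counted
    return len(gc.get_objects())

baseline = object_count()
junk = [[i] for i in range(1000)]  # stand-in for one iteration's allocations
print("objects added:", object_count() - baseline)

del junk
print("objects remaining:", object_count() - baseline)
```

If the second number keeps climbing across iterations, something is keeping objects alive; if it returns to near zero while OS-level memory still grows, the cause lies elsewhere (e.g. fragmentation).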
Hi,
thanks for your reply. I haven't yet tested this with a profiler to see
exactly what is happening, but the bottom line is that the
overall memory use grows with each iteration (or transaction processed),
to the point of grinding the server to a halt, and top shows only the
Yes, definitely growing at a rate of 700-800 per iteration.
.oO V Oo.
On 02/22/2012 07:23 PM, Michael Bayer wrote:
When we want to test if a Python program has a leak, we do that via seeing
how many uncollected objects are present. [...]
On Wed, Feb 22, 2012 at 4:29 PM, Michael Bayer mike...@zzzcomputing.com wrote:
thanks for your reply. [...]
On Feb 22, 2012, at 2:46 PM, Claudio Freire wrote:
On Wed, Feb 22, 2012 at 4:29 PM, Michael Bayer mike...@zzzcomputing.com
wrote:
thanks for your reply. [...]
On Feb 22, 2012, at 3:28 PM, Claudio Freire wrote:
Like I said, it's not so much a leak situation as a fragmentation
situation, where long-lived objects at high memory positions can
prevent the process's heap from shrinking.
On Wed, Feb 22, 2012 at 5:40 PM, Michael Bayer mike...@zzzcomputing.com wrote:
Saw that a bit, but looking at the tips at the bottom, concrete
implementation changes are not coming to mind. An eternal structure is
ubiquitous in any programming language. sys.modules is a big list of all the
On Wed, Feb 22, 2012 at 5:51 PM, Claudio Freire klaussfre...@gmail.com wrote:
Such caches, for instance, are better made limited in lifespan (say,
giving them a finite lifetime, making them expire, actively cleaning
them from time to time). Structures that are truly required to be
eternal are
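As a sketch of what Claudio is describing, a cache can carry a per-entry deadline and be swept periodically (the class name and API here are illustrative, not from any library in this thread):

```python
import time

class ExpiringCache:
    """Dict-like cache whose entries expire after ttl seconds, so the
    structure never becomes one of those eternal, ever-growing ones."""

    def __init__(self, ttl=300.0):
        self.ttl = ttl
        self._data = {}  # key -> (deadline, value)

    def set(self, key, value):
        self._data[key] = (time.monotonic() + self.ttl, value)

    def get(self, key, default=None):
        entry = self._data.get(key)
        if entry is None:
            return default
        deadline, value = entry
        if time.monotonic() >= deadline:
            del self._data[key]  # lazily drop an expired entry on access
            return default
        return value

    def sweep(self):
        """Actively remove expired entries, e.g. from a periodic task."""
        now = time.monotonic()
        for key in [k for k, (d, _) in self._data.items() if now >= d]:
            del self._data[key]
```

Bounding the entry count is the other common variant; for pure functions, `functools.lru_cache(maxsize=...)` gives that for free.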
On Feb 22, 2012, at 3:51 PM, Claudio Freire wrote:
On Wed, Feb 22, 2012 at 5:40 PM, Michael Bayer mike...@zzzcomputing.com wrote:
Saw that a bit, but looking at the tips at the bottom, concrete
implementation changes are not coming to mind. [...]
On Wed, Feb 22, 2012 at 6:21 PM, Michael Bayer mike...@zzzcomputing.com wrote:
IMHO the whole point of using a high level, interpreted language like Python
is that we don't have to be bogged down thinking like C programmers. How
come I've never had a memory fragmentation issue before?
Okay, thanks to this article:
http://neverfear.org/blog/view/155/Investigating_memory_leaks_in_Python
I made a similar plot of object counts over time, showing the top 50 types. The
resulting PDF is here (you might wish to download it first, Google
messes it up for me):
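The per-type counting behind such a plot can be sketched in a few lines (the simulated leak below is illustrative):

```python
import gc
from collections import Counter

def type_counts(top=50):
    """Count live objects grouped by type name, like the plot's top-50 series."""
    gc.collect()
    counts = Counter(type(obj).__name__ for obj in gc.get_objects())
    return dict(counts.most_common(top))

# Take one snapshot per iteration; a type whose count climbs steadily
# across snapshots is the leak candidate to chase.
before = type_counts()
leak = [[] for _ in range(500)]   # simulate an ever-growing structure
after = type_counts()
print("list delta:", after["list"] - before["list"])
```

Diffing consecutive snapshots, rather than eyeballing absolute counts, makes the growing type stand out immediately.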
On Feb 22, 2012, at 6:36 PM, Vlad K. wrote:
Okay, thanks to this article: [...]
This depends upon the execution plan of the query and is really more a
PostgreSQL question. Google "postgresql IN performance" and you will get
a good idea of it.
By the look of your code, the second option would obviously be faster, as
it hits the database once, whereas the first one flushes after every change.
On Tue, Feb 21, 2012 at 3:24 PM, Manav Goel manav.goe...@gmail.com wrote:
This depends upon the execution plan of the query [...]
Thanks for your replies.
Using the IN list definitely speeds up the process, but I hate the
resulting query, which uses a bound variable for each and every element of
the list.
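For illustration, here is roughly how such a per-element-bound IN clause gets built (a hand-rolled sketch, not SQLAlchemy's actual rendering code):

```python
def render_in_clause(column, values):
    """Build 'col IN (:col_1, :col_2, ...)' plus its parameter dict,
    one bound variable per element, which is what the resulting
    statement looks like for a large IN list."""
    names = ["%s_%d" % (column, i + 1) for i in range(len(values))]
    clause = "%s IN (%s)" % (column, ", ".join(":" + n for n in names))
    return clause, dict(zip(names, values))

clause, params = render_in_clause("id", [10, 20, 30])
print(clause)   # id IN (:id_1, :id_2, :id_3)
print(params)   # {'id_1': 10, 'id_2': 20, 'id_3': 30}
```

With hundreds of elements the statement text itself gets large, and every distinct list length produces a distinct statement string.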
But I have another problem with this, there's a massive memory leak
somewhere. Take a look at this model:
class