Sorry Michael, you're just too productive. ;-) We'll need some time to
install this kind of tracking on our production system.

Klaus


On 8 Feb., 17:14, Michael Bayer <[EMAIL PROTECTED]> wrote:
> attached is a new performance test which runs three different session
> cleanup tests.  In all three, 648 ORM-mapped objects are loaded into a
> session which is then discarded; that runs for 100 iterations, so that
> 100 sessions are created and closed.  The test is committed in SVN and
> is also attached.  The profile output shows all three tests spending
> about 11-12 CPU seconds total.
>
> In session_clean, the objects are loaded, not modified, and the
> session is closed before removal.  The __cleanup() method is called
> 3400 times, __resurrect is never called, and the cumulative time for
> __cleanup() is .003 CPU seconds, or about 0.03% of the total time
> spent of around 12 CPU seconds.
>
> session_dirty modifies about 600 of the objects, then deletes them and
> garbage collects.  This is what happens when you dereference dirty
> objects that are still attached to a session.  __resurrect is called
> 3400 times total and takes about 0.019 CPU seconds, and __cleanup
> takes .069 seconds, or around 0.6% of the total time spent.  This is
> the more expensive case, because dereferenced objects are actually
> being brought back in to be strongly referenced so that their pending
> changes are flushed.
>
> session_noclose is like session_clean except the session is garbage
> collected without close() being called.  This comes out about the same
> as session_clean: .002 seconds for __cleanup out of a total of 11.5
> CPU seconds.
>
> So __cleanup is called for pretty much every object that gets GC'ed,
> but I'm hoping these results illustrate that its actual overhead is
> negligible.  The mutex itself is local to the session's identity map,
> so if you had hundreds of sessions all being GC'ed in different
> threads, there is no contention among them since there is no "global"
> lock.  Also, the garbage collector's periodic runs represent just one
> thread of activity, so I don't see how the lock could cause contention
> in any case.
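>
> To illustrate the locality point, a toy sketch (not the actual
> identity map code): each map owns its own lock, so cleanup callbacks
> from different sessions never compete for the same mutex:
>
>     import threading
>
>     class ToyIdentityMap(dict):
>         def __init__(self):
>             dict.__init__(self)
>             self._mutex = threading.Lock()  # per-map, not module-level
>
>         def cleanup(self, key):
>             # invoked by the weakref callback when an instance dies
>             self._mutex.acquire()
>             try:
>                 self.pop(key, None)  # resurrect-or-remove logic goes here
>             finally:
>                 self._mutex.release()
>
>     # two sessions -> two maps -> two independent locks
>     map_a, map_b = ToyIdentityMap(), ToyIdentityMap()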
>
> If, OTOH, you are sharing a single session among many threads, then
> things might become more complicated.  That pattern is not
> recommended; but if you were doing so, then a mutex sounds like a good
> idea in any case.
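>
> (If you do need a session per thread, the usual approach is
> scoped_session rather than one shared Session; sketch below, with
> 'engine' assumed to exist:)
>
>     from sqlalchemy.orm import scoped_session, sessionmaker
>
>     Session = scoped_session(sessionmaker(bind=engine))
>
>     def worker():
>         sess = Session()    # each thread gets its own session
>         # ... do work ...
>         Session.remove()    # discard this thread's session when done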
>
> So maybe you could patch your SA source to not use the mutex and see
> whether that leads to an application speedup; that would be a stronger
> indicator that the mutex itself is the source of the issue.
>
> Attachments:
>  sessions.py (2K)
>  session_clean.txt (51K)
>  session_dirty.txt (51K)
>  session_noclose.txt (50K)
>
> On Feb 8, 2008, at 5:50 AM, klaus wrote:
>
> > Well, we are using SQLAlchemy in a multi-threaded environment. It is
> > integrated into Zope by means of z3c.sqlalchemy. All sessions should
> > be cleared before they are discarded.
>
> > There are no exotic types involved.
>
> > We have no useful profile output; the info comes from the request
> > monitor.  Calls to __cleanup show up in all sorts of unusual places
> > (whenever the garbage collector is activated) and take up
> > considerable time.
>
> > Best regards
> >  Klaus
>
> > On 7 Feb., 16:00, Michael Bayer <[EMAIL PROTECTED]> wrote:
> >> On Feb 7, 2008, at 8:14 AM, klaus wrote:
>
> >>> State tracking, the method __cleanup in sqlalchemy.orm.attributes
> >>> in particular, is among the most time-consuming tasks on our
> >>> platform.  The mutex lock seems to be a real bottleneck.
>
> >>> Is there any way to work around this problem? Apparently, the
> >>> weakrefs cannot be switched off (as in the session). Is there
> >>> anything in particular we should avoid in an application, something
> >>> that triggers this "resurrection"?
>
> >> this is the first I'm hearing about that mutex being an issue (it's
> >> a straight Lock; mutexes like that are extremely fast if used
> >> primarily in just a single thread, which is the case here).  I would
> >> imagine that the checks inside the __resurrect method are what's
> >> actually taking up the time...but even that check is extremely fast
> >> *unless* you are using a "mutable scalar" type, such as a mapped
> >> Pickle column.  And if you're using a mapped Pickle column, you can
> >> set the PickleType to mutable=False to disable the deep checking in
> >> that case - this condition is the one thing that could specifically
> >> make the cleanup operation a significant one versus an almost
> >> instantaneous one.  So I'd want to see more specifically what's
> >> causing a slowdown here.
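>
> >> For example (table and column names made up, 'metadata' assumed):
> >>
> >>     from sqlalchemy import Table, Column, Integer, PickleType
> >>
> >>     stuff = Table('stuff', metadata,
> >>                   Column('id', Integer, primary_key=True),
> >>                   # mutable=False skips the deep copy-and-compare
> >>                   # that makes cleanup expensive
> >>                   Column('data', PickleType(mutable=False)))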
>
> >> To avoid the critical section altogether, calling session.close() or
> >> session.clear() on any session that is about to be discarded should
> >> prevent any cleanup handlers from hitting that section (and of
> >> course, not dropping references to "dirty" objects on the outside
> >> until they are flushed).  If it truly is just the overall latency of
> >> __cleanup, in theory it's not needed when using a strongly-referenced
> >> identity map, so we could perhaps disable it in that case.
>
> >> I'd definitely need to see some profile output to determine more
> >> accurately what the cause of the slowdown is.
> > 
