Thanks, but this doesn't seem to do what I wanted. The merge modifies
the object and therefore tries to update the underlying table on
session.flush(). So it might work if the save were prevented as well. (In
my case, the update is not only a waste, it actually fails because the
table is write-protected.)

I would still prefer a solution that allowed these objects to be as
immutable as their tables. That is, copy.copy could be used, but no
constructor, no setattr/delattr, and no remove. My implementation
currently blocks these operations with a NotImplementedError.
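Roughly along these lines (a plain-Python sketch with illustrative names, not my actual code): attributes are frozen after construction, while copy.copy keeps working because the default copy path restores __dict__ directly and never goes through __setattr__.

```python
import copy

class CachedRecord:
    """Illustrative immutable cache object (hypothetical example)."""
    _frozen = False  # class-level default so __setattr__ works during __init__

    def __init__(self, **values):
        for name, value in values.items():
            object.__setattr__(self, name, value)
        object.__setattr__(self, "_frozen", True)

    def __setattr__(self, name, value):
        if self._frozen:
            raise NotImplementedError("cached objects are immutable")
        object.__setattr__(self, name, value)

    def __delattr__(self, name):
        raise NotImplementedError("cached objects are immutable")

row = CachedRecord(id=1, name="cached")
copied = copy.copy(row)  # allowed: copy restores __dict__ without __setattr__
```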

Best regards
  Klaus

On Oct. 31, 18:11, Michael Bayer <[EMAIL PROTECTED]> wrote:
> On Oct 31, 2007, at 12:43 PM, klaus wrote:
>
> > When trying to cache the contents of some "small" tables, I followed a
> > recipe given by Michael Bayer on this list (at least how I understood
> > it): create a "dead" session without database connection and move all
> > these objects into it.
>
> > However, every "outside" object that references one of these objects
> > pulls it into the current session. That's a problem because objects
> > are created in a multithreaded environment with concurrent and
> > relatively short-lived sessions. (That's the reason for the caching in
> > the first place; otherwise the session alone could handle it).
>
> > So I tried to copy an object on access from the cache and enter it
> > into the session that requested it. (I had to fiddle with
> > _instance_key and _sa_session_id because a plain session.update()
> > wouldn't accept the copy.) session.query.get is overridden in a
> > MapperExtension, so that references from objects in the current
> > session also get copies of cached objects.
>
> > But despite all these precautions, original objects from the cache
> > with their original (i.e. wrong) session ids end up in the current
> > session all the time. How can they leak in? The current session is not
> > even supposed to see these original objects in the cache.
>
> > Is there a hidden connection between different sessions? I hope you
> > can make sense of this vague description.
>
> there's no connection between sessions.  however, trying to shuttle
> objects between two sessions invariably leads to issues like the
> above because of the relations between the objects.   just modifying
> the lead object's session attributes won't help if the lead object
> references whole collections of things that are tied to the old
> session.  you'd have to design your operation to cascade along all
> relations.   we do have some functions which you can make use of for
> cascading an operation, mapper.cascade_iterator() and
> mapper.cascade_callable()....although if you get into those then you
> are pretty much writing us a new library function.
>
> the best function for use here is merge(), which was meant to move
> state between sessions, and it cascades along relations so that
> everything stays on one side of the equation.   however, it's not
> ideal for caching, since it issues queries by itself and defeats the
> purpose of caching.    but there is a ticket in trac to provide a
> merge() hook that is better suited for caching, so it would be a
> great help if you could test the attached patch; it provides a
> "dont_load" flag which should disable loading instances.   if you
> then say myobject = session.merge(cachedobject, dont_load=True), the
> returned object is the copy you're looking for, cascaded all the way
> down across all relations.
>
>  merge_dont_load.patch (3K attachment)


--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~----------~----~----~----~------~----~------~--~---
