On Mon, Jul 15, 2013 at 8:02 PM, Michael Bayer <mike...@zzzcomputing.com> wrote:
> On Jul 15, 2013, at 4:51 PM, Amir Elaguizy <aelag...@gmail.com> wrote:
>
>> I'm having this weird problem using the query caching recipes in which two 
>> instances of a model representing the same underlying dataset will both get 
>> into the session.
>>
>> I know this is happening because I put all of the models in a set() and 
>> there are two instances with the same underlying database row id.
>>
>> I was under the impression that the session itself would handle the case 
>> that an object coming from the query cache is already in the session, 
>> preventing duplication. Is this not the case?
>
> Well, you need to be using the merge() aspect of it, which will reconcile an 
> existing identity that's already in the session. The recipe as written uses 
> merge_result(), so it will ensure this, yes. This only deals with the 
> identity map, though; if you have an object pending with a given identity, 
> it's not in the identity map. I'd advise against heavy usage of cached 
> queries overlapping with lots of pending objects in the same sets, because 
> things can get very crazy.
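
To make Mike's point concrete, here is roughly what that reconciliation
looks like. This is a minimal sketch, not the recipe itself: the User model
and the in-memory sqlite engine are made up for the example, and merge()
with load=False is the same mechanism the recipe's merge_result() call
relies on.

    from sqlalchemy import create_engine, Column, Integer, String
    from sqlalchemy.orm import Session
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class User(Base):
        __tablename__ = 'users'
        id = Column(Integer, primary_key=True)
        name = Column(String)

    engine = create_engine('sqlite://')
    Base.metadata.create_all(engine)

    s1 = Session(engine)
    s1.add(User(id=1, name='ed'))
    s1.commit()

    # simulate the cache: a detached, clean instance
    cached_user = s1.query(User).get(1)
    s1.expunge(cached_user)

    # a second session that already holds identity (User, 1)
    s2 = Session(engine)
    in_session = s2.query(User).get(1)

    # merge() reconciles against the identity map; load=False trusts
    # the cached state instead of emitting a SELECT
    merged = s2.merge(cached_user, load=False)
    assert merged is in_session   # same object, no duplicate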

Because of all that craziness, I came to the realization that you can
only cache detached instances, and you can never merge them into your
session. If you do, you may end up with a concurrency mess when two
threads want to merge the same cached instance into a session.

To put a cached instance into a session, you must first copy it, then
update/merge the copy. How to do that is very application-specific, and
I don't think it can be automated.
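
That said, the copy step tends to look something like the following.
copy_instance() is a hypothetical helper, not part of SQLAlchemy or the
recipe; it copies column attributes only and deliberately ignores
relationships, which is exactly the application-specific part. my_session
stands for whatever session the current thread owns.

    from sqlalchemy import inspect

    def copy_instance(obj):
        # build a brand-new transient instance carrying the same column
        # values, so each thread/session works on its own private copy
        # instead of sharing the cached one
        cls = type(obj)
        fresh = cls()
        for attr in inspect(cls).column_attrs:
            setattr(fresh, attr.key, getattr(obj, attr.key))
        return fresh

    # each thread copies first, then merges the copy into its own session
    private = copy_instance(cached_user)
    merged = my_session.merge(private)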
