Thanks in advance for the discussion; I hope I can continue without you feeling I'm taking too much of your time...

You said
"""
If you are using merge(), and you're trying to use it to figure out "what's changed", that implies that you are starting with a business object containing the "old" data and are populating it with the "new" data.
"""

This isn't quite the case. Rather, I am handed an object (actually, a JSON text representation of an object) describing how it should look *now*. I parse that object and create a transient SQLA object with the exact same properties and relations, recursively, very similar to what merge() is doing.

Then I hand that transient object to merge(), which nicely loads the original from the database (if it existed), or creates it (if it didn't), and attaches it to the session for me. What I am missing is "what did it look like before?". Certainly, I could call session.query.get() before merge() so I would know what it looks like, but the main problem with that is matching up all the related objects with the post-merge ones, especially for list relations. Plus, merge() is already looking up these objects for me, so it seemed smarter to harness what merge() is already doing.
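For what it's worth, here is a minimal sketch of that query-before-merge approach. The Order model, its columns, and the in-memory engine are all illustrative assumptions on my part, not the real application:

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Order(Base):
    __tablename__ = "orders"
    id = Column(Integer, primary_key=True)
    status = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)
session = Session()

# Pretend this row already existed before the web request arrived.
session.add(Order(id=1, status="open"))
session.commit()

# Transient object built from the incoming JSON payload.
incoming = Order(id=1, status="shipped")

# Look up the persistent row first, so the "old" state can be captured
# before merge() copies the new attribute values onto it.
existing = session.get(Order, 1)
old_status = existing.status if existing is not None else None

merged = session.merge(incoming)
print(old_status, "->", merged.status)  # open -> shipped
```

Since the lookup puts the object in the identity map, merge() doesn't issue a second SELECT for that row; the "matching up related objects" problem for collections remains, though.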

I've looked into the examples you provided. Waiting for before_flush() won't work for our use case, since we need the "old" "_original" object available *before* the flush as well as after. I could use attributes.get_history() before the flush, but it quickly becomes problematic if the programmer needs to look in two completely different places: attributes.get_history() for post-flush changes and "_original" for pre-flush changes. That somewhat defeats the purpose.
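For scalar attributes, at least, the pre-flush history is indeed sitting there right after merge() returns. A small sketch, using the same illustrative Order model as above (an assumption, not the real schema):

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import attributes, declarative_base, sessionmaker

Base = declarative_base()

class Order(Base):
    __tablename__ = "orders"
    id = Column(Integer, primary_key=True)
    status = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)
session = Session()

session.add(Order(id=1, status="open"))
session.commit()

# merge() sets attributes through the normal instrumentation,
# so attribute history records the replaced values.
merged = session.merge(Order(id=1, status="shipped"))

# Before the flush, history still distinguishes old from new:
hist = attributes.get_history(merged, "status")
print(list(hist.added), list(hist.deleted))  # ['shipped'] ['open']
```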

I was considering that a SessionExtension hook like def before_merge(session, merged) would be handy for me: one that is called before the properties are updated on "merged". Inside that hook, I could potentially clone "merged", set the clone as "_original" on itself, and check whether it was newly created by inspecting session.new (or a new_instance boolean could be passed to the hook).

However, in the end, I would *still* have the problem that, while each post-merge object would have a detached reference to "_original", the cloned objects' relations would not be populated correctly among each other. So I couldn't call "order._original.calc_points()" and expect the relations to be populated correctly. That's what I'm trying to figure out how to accomplish; it would be ideal: almost an "alternate universe" snapshot in time of the object (now detached) before the changes, with all relations still populated, and where each *current* object has a reference to its "old" version.

Could I look it up in a separate session and then merge it into my main session? Then, after that, merge the transient objects into my main session? Would that get me closer? Ideally, I wouldn't want it to need two trips to the database.
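A hedged sketch of the two-session idea, again assuming the same illustrative Order model (the "_original" attribute name is the hypothetical one from above, not anything SQLAlchemy provides):

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Order(Base):
    __tablename__ = "orders"
    id = Column(Integer, primary_key=True)
    status = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)

session = Session()
session.add(Order(id=1, status="open"))
session.commit()

# Load the "before" picture in a separate, short-lived session...
snapshot_session = Session()
original = snapshot_session.get(Order, 1)
snapshot_session.expunge(original)   # detach: a frozen pre-change snapshot
snapshot_session.close()

# ...then merge the transient (JSON-derived) object into the main session.
merged = session.merge(Order(id=1, status="shipped"))
merged._original = original          # hypothetical attribute, per above

print(merged._original.status, merged.status)  # open shipped
```

The catch: any relations you want readable on the snapshot would have to be eagerly loaded before the expunge, since a detached object can't lazy-load, and this is indeed a second trip to the database.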


On 4/29/2010 4:48 PM, Michael Bayer wrote:
Kent Bower wrote:
It is helpful to know what SQLA was designed for.

Also, you may be interested to know of our project as we are apparently
stretching SQLA's use case/design.  We are implementing a RESTful web
app (using TurboGears) on an already existent legacy database.  Since
our webservice calls (and TurboGears' in general, I believe) are
completely state-less, I think that precludes my being able to harness
"descriptors, custom collections, and/or AttributeListeners, and to
trigger the desired calculations before things change" because I'm just
handed the 'new' version of the object.
This is precisely why so much of my attention has been on merge()
because it, for the most part, works that out magically.
If you are using merge(), and you're trying to use it to figure out
"what's changed", that implies that you are starting with a business
object containing the "old" data and are populating it with the "new"
data.   Custom collection classes configured on relationship() and
AttributeListeners on any attribute are both invoked during the merge()
process (but external descriptors notably are not).

merge() is nothing more than a hierarchical attribute-copying procedure
which can be implemented externally.   The only advantages merge() itself
offers are the optimized "load=False" option which you aren't using here,
support for objects that aren't hashable on identity (a rare use case),
and overall a slightly more "inlined" approach that removes a small amount
of public method overhead.

You also should be able to even recreate the "_original" object from
attribute history before a flush occurs.   This is what the "versioned"
example is doing for scalar attributes.


