On Apr 13, 2012, at 6:08 AM, Ronan Dunklau wrote:

> Hello.
> 
> Prior to sqlalchemy 0.7.6, we implemented our own mappers to override
> the _save_obj and _delete_obj methods.

that's very interesting.  There is an old ticket to make this kind of thing a 
public API, which would work similarly, in that you make your own mapper (note 
the ticket is three years old at this point and may have some out-of-date 
aspects):  
http://www.sqlalchemy.org/trac/ticket/1518 .    I imagine you had to 
reimplement a lot of private things to get that to work, dealing with 
InstanceState, etc.

If you guys had any thoughts on 1518 and/or APIs to contribute, it could 
conceivably go into 0.8.   I'm hoping to have time to really churn out 0.8 in 
the early summer, a lot of it is ready it just needs to be assembled.

> Those methods don't exist anymore, and instead this responsibility is
> transferred to the persistence module.

yeah, what I was actually going for there was to make those really long 
_save_obj() and _delete_obj() methods a lot easier to understand.  I saw a talk 
by Wes McKinney in which he stressed his habit of "make no method more than 50 
lines".   It nagged at me, as it seemed like breaking those methods up that way 
would make them less efficient.   But it actually didn't, and it worked out 
really well.    (except for what you were doing, of course!)

> 
> A little bit of context:
> 
> We are using postgresql foreign tables, which are read only. So, the
> purpose of these custom mappers was to prevent insert/update/delete
> statements from going to the database, and instead save the mapped
> entities in their respective storages.

so what in this case were the "respective storages", was it a different set of 
tables in a different database ?

> 
> Is there a way to achieve the same goal with the new architecture ?
> By the way, is there a documentation for the internals of the new
> architecture ?

in fact the internals of the new system work exactly the same as the old 
system; they are just organized into more discrete methods.    

the details of your storage system would make a big difference here.     
There are two thoughts I have immediately.   One is, just for the time being 
you can monkeypatch persistence.save_obj and persistence.delete_obj so that 
they first check the mappers to see if they are in fact your special overridden 
Mapper objects, use your own _save_obj and _delete_obj in that case, and 
otherwise continue on to the default implementations of these functions.   The 
behavioral contract of these functions hasn't changed.
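To illustrate the shape of that monkeypatch, here's a minimal sketch of the 
dispatch pattern.  Mapper, ReadOnlyMapper, save_obj and the argument names are 
simplified stand-ins for illustration; the real function lives in 
sqlalchemy.orm.persistence and its exact signature may differ:

```python
# Sketch of the monkeypatch idea -- all names here are stand-ins,
# not the real sqlalchemy.orm.persistence contents.

class Mapper(object):
    """Stand-in for the stock Mapper."""

class ReadOnlyMapper(Mapper):
    """Hypothetical custom mapper that persists its objects elsewhere."""
    def _save_obj(self, states, uowtransaction):
        return "saved externally"

def save_obj(base_mapper, states, uowtransaction):
    """Stand-in for the default persistence.save_obj."""
    return "saved to database"

# keep a reference to the stock implementation, then replace it with
# a version that dispatches on the mapper type first
_default_save_obj = save_obj

def save_obj(base_mapper, states, uowtransaction):
    if isinstance(base_mapper, ReadOnlyMapper):
        # our special mapper: use its own persistence routine
        return base_mapper._save_obj(states, uowtransaction)
    # anything else: fall through to the default implementation
    return _default_save_obj(base_mapper, states, uowtransaction)
```

The same wrapping would apply to delete_obj.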

The other is, if these objects really persist in some other way, like you're 
posting them to a web service or something, you could deal with that using 
session events.  In before_flush() you'd pluck them out of the flush plan: do 
what you need with them, then reset their history so that they are no longer 
seen as "dirty", and they won't be included in save_obj at all.  If you're 
dealing with .new and .deleted as well, there'd be slightly different steps to 
handle those.   You might even implement a before_flush()/after_flush() pair 
that removes these objects entirely within the flush process and puts them back 
afterwards in a clean, persistent state.    I haven't tried this kind of scheme 
before, but it would at least use public APIs.
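Here's a sketch of that session-events idea, handling only the pending (.new) 
case by expunging the objects in before_flush() after persisting them 
ourselves.  ExternalThing and external_store are made-up names for 
illustration; the event hook and Session.expunge() are the real public APIs:

```python
from sqlalchemy import Column, Integer, String, create_engine, event
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class ExternalThing(Base):
    """Hypothetical model whose rows live outside the database."""
    __tablename__ = "external_thing"
    id = Column(Integer, primary_key=True)
    name = Column(String)

external_store = []  # stand-in for the real alternate storage

@event.listens_for(Session, "before_flush")
def divert_external(session, flush_context, instances):
    # pluck our special objects out of the flush plan: persist them
    # ourselves, then expunge them so save_obj never sees them
    for obj in list(session.new):
        if isinstance(obj, ExternalThing):
            external_store.append(obj.name)
            session.expunge(obj)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

session = Session(engine)
session.add(ExternalThing(name="foo"))
session.commit()
# external_store now holds the object; the external_thing table stays empty
```

Handling .dirty and .deleted would need analogous steps in the same listener.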

A third is that we actually add the altpersist API.   I've been holding off on 
it, as I typically wait for some real use cases to figure out how these things 
should work.


-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.