I confirm what I said. 

The multiprocessing run was regenerating instances because, after 
deserialization, they were coming back with new identities. I tried 
implementing a custom `__hash__`, but it seems SQLAlchemy does not use it: 
the identity map keys instances on the mapped class and primary key, not 
on `__hash__`.
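
For what it's worth, here is a quick illustration of the behavior that was 
biting me (a sketch using my model names; `session` is assumed to be a live 
Session, and the pickle round-trip stands in for what a multiprocessing 
worker returns):

    import pickle

    sat = session.query(Satellite).first()
    # Simulate what comes back from a worker: a pickled copy.
    copy = pickle.loads(pickle.dumps(sat))

    print(copy is sat)      # False: the copy has a new Python identity
    print(copy in session)  # False: the copy is detached
    # merge() resolves the copy back to the in-session instance by
    # primary key; a custom __hash__ plays no role here.
    print(session.merge(copy) is sat)  # True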

What I did was disable the backref cascade for the `Satellite` and 
`GroundStation` relationships and then, after the optimization, do:

    for passage in results:
        # Merge first: instances coming back from multiprocessing are
        # detached copies with new identities, so merge() re-resolves
        # them against the session by primary key.
        passage.satellite = session.merge(passage.satellite)
        passage.ground_station = session.merge(passage.ground_station)
        session.add(passage)

This seems to be working as expected.
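
For reference, the cascade change boils down to something like this sketch 
(simplified: the real mappings have more columns, `GroundStation` is 
configured the same way, and `cascade_backrefs=False` is what I mean by 
disabling the backref cascade):

    from sqlalchemy import Column, ForeignKey, Integer
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.orm import relationship

    Base = declarative_base()

    class Satellite(Base):
        __tablename__ = 'satellite'
        id = Column(Integer, primary_key=True)

        # cascade_backrefs=False: assigning passage.satellite = sat no
        # longer implicitly cascades the Passage into sat's session;
        # everything is added/merged explicitly after the workers return.
        passages = relationship(
            'Passage', backref='satellite', cascade_backrefs=False)

    class Passage(Base):
        __tablename__ = 'passage'
        id = Column(Integer, primary_key=True)
        satellite_id = Column(Integer, ForeignKey('satellite.id'))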

Thanks to Mike and Simon for pointing me in the right direction!
