I have added a flush every 1024 loads and also set DBSession.autoflush = False, but I have gone and used up all available memory again. There are no references to the objects other than the ones held by the session.
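Roughly, the batching looks like this (a simplified sketch; iter_nodes() is just a placeholder for however I walk the records, not my real code):

    DBSession.autoflush = False

    for count, node in enumerate(iter_nodes(), 1):  # placeholder for the query loop
        # ... process node ...
        if count % 1024 == 0:
            DBSession.flush()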

A bit more information: I am getting stuck in a routine that adds a 'root' pointer to a set of records, so that I can use a loading pattern similar to the elementtree/optimized_al example, but without the separate document table. The routine visits every node in the hierarchy and sets its 'root' node, essentially as sketched below.
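Something like this (simplified sketch only, not the exact code; 'document' and 'children' are the relations from the mapping below):

    def set_root(root):
        # walk the whole hierarchy and point every node at the root
        stack = [root]
        while stack:
            node = stack.pop()
            node.document = root          # the 'document' backref of 'docnodes'
            stack.extend(node.children)   # lazy-loads each level into the session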

nodes = Table('nodes', metadata,
    ...
    Column('parent_id', Integer, ForeignKey('nodes.id')),
    Column('root_id', Integer, ForeignKey('nodes.id')),
)

mapper(Node, nodes, properties={
    ...
    'children': relation(Node, lazy=True, cascade="all, delete-orphan",
                         backref=backref('parent', enable_typechecks=False,
                                         remote_side=[nodes.c.id])),
    'docnodes': relation(Node, lazy=True,
                         cascade="all, delete-orphan",
                         post_update=True,
                         primaryjoin=(nodes.c.id == nodes.c.root_id),
                         backref=backref('document', post_update=True,
                                         enable_typechecks=False,
                                         remote_side=[nodes.c.id])),
})

Could this be creating a cycle that the session cannot get rid of?
