[sqlalchemy] Re: delete children of object w/o delete of object?
On Wednesday 05 December 2007 23:36:12 kris wrote: with sqlalchemy 0.4.1, is there an idiom for deleting the children of an object without actually deleting the object itself? I tried:

    session.delete(obj)
    session.flush()
    # add new children
    session.save(obj)
    session.flush()

But it gave me the error InvalidRequestError: Instance '[EMAIL PROTECTED]' is already persistent, which does not appear correct either. you'll probably have to first detach the children from the parent.relation somehow? --~--~-~--~~~---~--~~ You received this message because you are subscribed to the Google Groups sqlalchemy group. To post to this group, send email to sqlalchemy@googlegroups.com To unsubscribe from this group, send email to [EMAIL PROTECTED] For more options, visit this group at http://groups.google.com/group/sqlalchemy?hl=en -~--~~~~--~~--~--~---
[sqlalchemy] IMPORTANT: Does SA cache objects in memory forever?
Hi All, I am having a problem with memory consumption when using SA. I have 200 MB of RAM allocated for the application, and when I watch the usage statistics with top or any other memory monitor I see that after every request the memory consumption increases and the memory is never returned to the pool. So if consumption starts at 80 MB, then after a request comes in and we do a whole bunch of SA processing it sits at, say, 82 MB and stays there forever, climbing with every further request. Eventually it reaches 200 MB, the application fails, and no more memory is available. Does that mean retrieved objects are cached forever without any timeout? How can I alleviate this problem? It is very high priority right now and needs to be fixed immediately. Thoughts/suggestions/solutions all welcome. Thanks all! -- Cheers, - A
[sqlalchemy] Re: IMPORTANT: Does SA cache objects in memory forever?
FYI: I am using SA 0.3.9. On Dec 6, 2007 6:52 PM, Arun Kumar PG [EMAIL PROTECTED] wrote: Hi All, I am having this problem with memory consumption when using SA. ... -- Cheers, - A
[sqlalchemy] Re: concurrent modification
i can only make comments about fragments like this but i cant address your full design issue since you havent supplied a fully working illustration of what it is youre trying to do. the daemon: http://dpaste.com/hold/27089/ and the required utils.py module: http://dpaste.com/hold/27092/ it was running with revision 3863 the biggest issue I can see is that the code seems to have a weak notion of explicit transaction boundaries...its opening many sessions, one per thread (since youre using scoped_session), and just keeping them opened, with just one commit() when a delete is issued and thats it. How to open one session for all threads? Using a Lock with inserts/updates should help in this case, right? additionally, youre issuing some SQL directly to the database without notifying the session about objects which may have been removed; youre executing a SQL statement through sess.execute() but that has no effect on the User object stored in the session..plain SQL executes dont refresh or expire anything thats currently present in the session. when you commit(), the underlying flush() apparently is hitting that user, or perhaps a different one, and attempting to update it. I was trying to represent this SQL statement in the ORM: DELETE FROM fs_file WHERE path='/' AND user_id=theone.id. If I'd need to delete a user I'll use session.query.delete(user), indeed. But how to perform writing operations in the proper way then, without execute()? I was looking, thinking, asking and thinking again but came to nothing yet ) the locking youre doing doesnt have much effect here, without more context it seems like its not needed and is just adding to the confusion. I was thinking it would exclude a few doubtful assumptions from my inspection list. Its possible that you'd benefit here from using explicit transactions, so that you dont need to be dependent on the Session in order to commit raw SQL which youve executed.
You can begin() and commit() transactions using an Engine or a Connection...such as:

    conn = db.connect()
    trans = conn.begin()
    session = Session(bind=conn)
    # do stuff with the conn, do stuff with the session
    trans.commit()

But I see that sqlalchemy begins a transaction anyway. Transactions should be opened and closed for individual groups of operations. For example, if your arrange thread starts up, performs some work, and then completes, it should be opening a transaction and/or Session. session != transaction? is the difference between them the presence of a cache in the session? strategy=threadlocal is more than likely confusing the issue. removed. In fact I cannot say that I clearly understand what it does. scoped_session is going to accumulate sessions for every thread, even if that thread has died out, which will certainly cause memory to grow unbounded if you keep creating new threads. it would probably be cleaner to have each arrange() thread keep track of its own Session and transaction, which are disposed of when the thread completes. Thanks for the idea, I'll try to elaborate.
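The transaction-per-group-of-operations idea above can be sketched with the stdlib sqlite3 module; the fs_file table mirrors the thread's example, but this is an illustration of the pattern only, not of SQLAlchemy's API:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fs_file (path TEXT, user_id INTEGER)")
conn.commit()

def arrange_once(conn, user_id):
    """One unit of work: run all related statements inside one transaction."""
    try:
        conn.execute("INSERT INTO fs_file VALUES (?, ?)", ("/", user_id))
        conn.execute("DELETE FROM fs_file WHERE path = ? AND user_id = ?",
                     ("/", user_id))
        conn.commit()       # the transaction ends when this group of work ends
    except Exception:
        conn.rollback()     # never leave a transaction half-open
        raise

arrange_once(conn, 1)
print(conn.execute("SELECT COUNT(*) FROM fs_file").fetchone()[0])  # 0
```

The point is the boundary: the transaction opens when the unit of work starts and closes (commit or rollback) when it completes, rather than staying open across the life of a thread.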
[sqlalchemy] Re: IMPORTANT: Does SA cache objects in memory forever?
the session in 0.3 holds onto objects until they are explicitly removed using session.clear() or session.expunge(). as a super quick fix you can use the weak_identity_map flag with the 0.3 session, which will change the identity map to be weak referencing. however, objects which are marked as dirty will also fall out of scope if you remove external references to them before they are flushed. documentation is here: http://www.sqlalchemy.org/docs/03/unitofwork.html#unitofwork_identitymap in version 0.4, the session is weak referencing, so objects which are not referenced elsewhere (and also are not marked as dirty or deleted) fall out of scope automatically. that is documented at: http://www.sqlalchemy.org/docs/04/session.html#unitofwork_using_attributes On Dec 6, 2007, at 8:22 AM, Arun Kumar PG wrote: Hi All, I am having this problem with memory consumption when using SA. ... -- Cheers, - A
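The strong- versus weak-referencing identity map distinction described above can be sketched with the stdlib alone; User here is a stand-in for a mapped class, not SQLAlchemy code:

```python
import gc
import weakref

class User:
    def __init__(self, id):
        self.id = id

strong_map = {}                             # 0.3-style map: holds objects forever
weak_map = weakref.WeakValueDictionary()    # 0.4-style map: lets them go

a = User(1)
b = User(2)
strong_map[1] = a
weak_map[2] = b

del a, b         # the application drops its last references
gc.collect()     # make collection deterministic on non-refcounting interpreters

print(1 in strong_map)  # True  - still cached, memory cannot be reclaimed
print(2 in weak_map)    # False - the entry fell out of scope automatically
```

This is why a long-lived strong-referencing session grows without bound: every loaded object stays reachable through the map until explicitly expunged.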
[sqlalchemy] Re: Filling foreign key with mapping
Thanks for these interesting articles. I was blind! Why was I using a small integer surrogate primary key instead of a two-char primary key, why? =) As described in the article, I think I am overusing surrogate primary keys. Our database schema is not yet finalized, so I am going to fix this right now. Nevertheless, in some other tables of our schema I think a surrogate cannot be avoided. It will appear in some other tables and I am expecting millions of entries. That's why I prefer a small integer surrogate primary key to a 50 or 100 byte string primary key. In such cases, I was asking about an elegant way to find the unique name along with the surrogate primary key.
[sqlalchemy] Re: IMPORTANT: Does SA cache objects in memory forever?
If I recall, your application is using the threadlocal strategy and scoped_session(), isn't it? Doesn't scoped_session() collect references from otherwise transient Session()s and hold on to them between calls?
[sqlalchemy] Re: Filling foreign key with mapping
eh, i use surrogate keys all the time :) On Dec 6, 2007, at 4:45 AM, paftek wrote: Thanks for these interesting articles. I was blind ! ...
[sqlalchemy] UNION of Query objects
Hi. I know that I can do a SQL UNION of two record sets when using SQL Expression objects. But how do I do a UNION of two sqlalchemy.orm.Query objects? I can do the UNION at the SQL Expressions level and then use 'from_statement' from Query, but what can I do if I already have two Queries?
[sqlalchemy] Re: concurrent modification
On Dec 6, 2007, at 8:57 AM, imgrey wrote: i can only make comments about fragments like this but i cant address your full design issue since you havent supplied a fully working illustration of what it is youre trying to do. the daemon: http://dpaste.com/hold/27089/ and the required utils.py module: http://dpaste.com/hold/27092/ it was running with revision 3863 OK, your Watch class is processing events as they occur. a particular event is bounded by the run() method of the watch class; the reasonable place for you to create/destroy a Session is at the boundaries of handling those events - so arrange should open a session at the start of its run() method, and close() it at the end. the scope of the transaction should also be bordered by the run() method. if you want to keep using scoped_session(), thats fine, but i think you need to understand what that means if you are going to use it. I see you have lock.acquire() and lock.release() pairs around every usage of the session - this doesn't do anything since the scoped_session is a thread-local manager of sessions (it might achieve some locking against concurrent database access, but not really very well the way its laid out). heres a wikipedia article on threadlocal: http://en.wikipedia.org/wiki/Thread-local_storage . your app is currently using a unique Session for every thread because you have configured scoped_session(). Theres nothing wrong with using scoped_session(); it means that the start of run() would say Session() to open a new session, and the end would say Session.close() to close it out, release connections/transactions, etc. This lifespan is described here: http://www.sqlalchemy.org/docs/04/session.html#unitofwork_contextual_lifespan How to open one session for all threads? Using a Lock with inserts/updates should help in this case, right? you dont want to open one session for all threads.
the session itself is not threadsafe (read the last item): http://www.sqlalchemy.org/docs/04/session.html#unitofwork_using_faq . Theres no good reason you'd want to share a session among threads, unless you are trying to use it as a cache. if you are trying to use it as a cache for many threads, this is not its intended purpose; id recommend using a plain dict or memcached for that instead. also id get the whole app to work completely before adding any caching. Also, its not typical to have one transaction/connection shared among threads either. if thats what youre trying to go for here, and thats why you have all the locking going on, that implies your app would have just one database connection in total. if you wanted to do that, then bind the session like this:

    the_one_connection = engine.connect()
    sess = Session(bind=the_one_connection)

then youd have many sessions all communicating with the same underlying connection. im not sure how well psycopg2 allows multiple threads to use a single connection (you certainly dont want to do it with mysqldb), but id recommend going for a transaction-per-event approach instead. I was trying to represent this SQL statement in the ORM: DELETE FROM fs_file WHERE path='/' AND user_id=theone.id. If I'd need to delete a user I'll use session.query.delete(user), indeed. But how to perform writing operations in the proper way then, without execute()? I was looking, thinking, asking and thinking again but came to nothing yet ) if you had already looked up the Path with those attributes, and you have a Path instance, you can just session.delete(mypath). If not, then sure, just issue the DELETE. as far as the SELECTs, if you issue your SELECT like this: SELECT * FROM file WHERE path='/' AND user_id=id FOR UPDATE within a transaction, that statement will lock those rows explicitly against other transactions doing the exact same thing (i.e. they say FOR UPDATE also).
Then, if you want to delete those rows: DELETE FROM file WHERE path='/' AND user_id=id; COMMIT; the other transaction then wakes up after the first one issues COMMIT; its SELECT .. FOR UPDATE returns zero rows because the row was deleted (or it returns whatever rows still remain). this is generally how you should be achieving your locking against individual file records. if youd like to select using the Session with this method, use session.query(cls).with_lockmode('update').filter(...).first() . watch your SQL echoing and youll see it do a SELECT .. FOR UPDATE in every case, locking every row it touches against other sessions also using FOR UPDATE. when that session is done working, commit() and close() it. one of the other sleeping transactions will wake up and do whatever it wanted to do. it seems like your arrange() method is generally SELECTing rows which are later deleted within LockingManager. I think if you used FOR UPDATE appropriately, that should handle all the locking.
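Putting the pieces of this reply together as one event's unit of work, a rough sketch against the 0.4 API under discussion; FSFile and the filter values are hypothetical names, and with_lockmode() is the historical call named above, so treat this as an era-specific sketch rather than something to run against a current release:

```python
session = Session()                  # one new session for this event
try:
    rows = (session.query(FSFile)
                   .with_lockmode('update')   # emits SELECT ... FOR UPDATE
                   .filter_by(path='/', user_id=the_user.id)
                   .all())
    for row in rows:
        session.delete(row)
    session.commit()   # releases the row locks; a waiting transaction wakes up
finally:
    session.close()
```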
[sqlalchemy] Re: UNION of Query objects
On Dec 6, 2007, at 11:41 AM, Artur Siekielski wrote: Hi. I know that I can do a SQL UNION of two record sets when using SQL Expression objects. But how do I do a UNION of two sqlalchemy.orm.Query objects? ... you can pull out the select() generated by Query using query.compile(). individual components of the select() are available as well, although in a not-yet-public API, using such data members as query._criterion, query._from_obj, etc. As far as a union() operator directly on Query, we dont have that right now and it falls under a general category of re-generative Query operators (i just made that name up), i.e. Query objects that can interact with themselves and other queries the way select() objects do in SQL (i.e. subquery of itself, etc). We've had a little bit of interest in building that out but we havent begun to address the actual complexity involved as of yet.
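Until Query grows a union() of its own, the workaround is to build the union at the SQL layer and map the result back with from_statement(). The SQL shape involved is just a plain UNION; here it is shown with the stdlib sqlite3 module so the example is self-contained (the users table and both predicates are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", "admin"), ("bob", "user"), ("carol", "admin")])

# UNION of two queries; duplicate rows between the two sides are collapsed
rows = conn.execute(
    "SELECT name FROM users WHERE role = 'admin' "
    "UNION "
    "SELECT name FROM users WHERE name LIKE 'b%'"
).fetchall()
print(sorted(r[0] for r in rows))  # ['alice', 'bob', 'carol']
```

With SQLAlchemy, the same statement would be produced with union() on two select() constructs and then handed to Query via from_statement().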
[sqlalchemy] Re: Representation of boolean logic in a database
Hi, My problem is being able to represent and store relations between options and contents tables in a normalized way. I'd probably just store the relationship as a string. Do you have any particular querying requirements? Sometimes pragmatism beats elegance. Paul
[sqlalchemy] Re: UNION of Query objects
Hi. I know that I can do a SQL UNION of two record sets when using SQL Expression objects. But how do I do a UNION of two sqlalchemy.orm.Query objects? ... you can pull out the select() generated by Query using query.compile(). ... So I see that in SA, when I have to do complicated queries, or queries that must be ready for further enhancements, I should use SQL Expressions, not object-oriented Queries... BTW, Hibernate's HQL doesn't support UNION either.
[sqlalchemy] Re: delete children of object w/o delete of object?
That's what I am looking for. Since the ORM is able to find all the children of a deleted object, I thought I might use the same functionality to find all the children of the object and delete them directly. On Dec 6, 2:55 am, svilen [EMAIL PROTECTED] wrote: On Wednesday 05 December 2007 23:36:12 kris wrote: with sqlalchemy 0.4.1, Is there an idiom for delete the children of the object without actually deleting the object itself? ...
[sqlalchemy] SA 0.4.1 and MS-SQL problem at create time
Hi all, Since SQLAlchemy 0.4.1, I cannot create my database anymore on mssql only (mysql and sqlite are fine). The error is the following:

    File /usr/lib/python2.4/site-packages/SQLAlchemy-0.4.1-py2.4.egg/sqlalchemy/databases/mssql.py, line 945, in get_column_specification
      if (not getattr(column.table, 'has_sequence', False)) and column.primary_key and \
    AttributeError: 'Column' object has no attribute 'foreign_key'

It seems that the mssql backend uses a 'foreign_key' attribute on Column which does not exist anymore. How should this information be checked on a Column? Regards, Christophe de Vienne PS: More complete trace:

    File /usr/lib/python2.4/site-packages/SQLAlchemy-0.4.1-py2.4.egg/sqlalchemy/schema.py, line 303, in create
      self.metadata.create_all(bind=bind, checkfirst=checkfirst, tables=[self])
    File /usr/lib/python2.4/site-packages/SQLAlchemy-0.4.1-py2.4.egg/sqlalchemy/schema.py, line 1232, in create_all
      bind.create(self, checkfirst=checkfirst, tables=tables)
    File /usr/lib/python2.4/site-packages/SQLAlchemy-0.4.1-py2.4.egg/sqlalchemy/engine/base.py, line 1052, in create
      self._run_visitor(self.dialect.schemagenerator, entity, connection=connection, **kwargs)
    File /usr/lib/python2.4/site-packages/SQLAlchemy-0.4.1-py2.4.egg/sqlalchemy/engine/base.py, line 1082, in _run_visitor
      visitorcallable(self.dialect, conn, **kwargs).traverse(element)
    File /usr/lib/python2.4/site-packages/SQLAlchemy-0.4.1-py2.4.egg/sqlalchemy/sql/visitors.py, line 79, in traverse
      meth(target)
    File /usr/lib/python2.4/site-packages/SQLAlchemy-0.4.1-py2.4.egg/sqlalchemy/sql/compiler.py, line 761, in visit_metadata
      self.traverse_single(table)
    File /usr/lib/python2.4/site-packages/SQLAlchemy-0.4.1-py2.4.egg/sqlalchemy/sql/visitors.py, line 30, in traverse_single
      return meth(obj, **kwargs)
    File /usr/lib/python2.4/site-packages/SQLAlchemy-0.4.1-py2.4.egg/sqlalchemy/sql/compiler.py, line 780, in visit_table
      self.append(\t + self.get_column_specification(column, first_pk=column.primary_key and not first_pk))
    File /usr/lib/python2.4/site-packages/SQLAlchemy-0.4.1-py2.4.egg/sqlalchemy/databases/mssql.py, line 945, in get_column_specification
      if (not getattr(column.table, 'has_sequence', False)) and column.primary_key and \
    AttributeError: 'Column' object has no attribute 'foreign_key'
[sqlalchemy] Re: SA 0.4.1 and MS-SQL problem at create time
Hi, It seems that the mssql backend use a 'foreign_key' attribute on Column which does not exist anymore. Yes, it's now foreign_keys. This is fixed in the svn trunk. I still need to sort out a way to have MSSQL unit tests run periodically, so we can pick up this kind of issue before releases. Paul
[sqlalchemy] Re: SA 0.4.1 and MS-SQL problem at create time
column.foreign_key is now a list: column.foreign_keys. Trunk looks correct and should work. Works here.
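The shape of the fix (checking the collection attribute rather than the removed scalar one) can be sketched generically; the classes below are stand-ins for illustration, not SQLAlchemy's own:

```python
class Column03:
    foreign_key = None        # 0.3-era API: a single scalar, possibly None

class Column04:
    foreign_keys = []         # 0.4-era API: always a collection

class Column04WithFK:
    foreign_keys = ["fk_user_id"]   # non-empty list stands in for ForeignKey objects

def has_foreign_key(column):
    # prefer the 0.4 collection; fall back to the removed 0.3 scalar
    if hasattr(column, "foreign_keys"):
        return bool(column.foreign_keys)
    return getattr(column, "foreign_key", None) is not None

print(has_foreign_key(Column03()))        # False
print(has_foreign_key(Column04()))        # False
print(has_foreign_key(Column04WithFK()))  # True
```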
[sqlalchemy] Re: SA 0.4.1 and MS-SQL problem at create time
I still need to sort out a way to have MSSQL unit tests run periodically I'm still planning on hosting a buildbot as I promised some months (how embarrassing) ago. The first one will be Linux + pymssql, but once I get the new VMware host provisioned out here, I can put up a Windows + pyodbc host too. If there are any buildbot wiz kids out there, I could use some setup help/tips.
[sqlalchemy] Re: delete children of object w/o delete of object?
i would think a regular cascade=all, delete-orphan configuration would make this quite simple. http://www.sqlalchemy.org/docs/04/session.html#unitofwork_cascades
[sqlalchemy] Re: SA 0.4.1 and MS-SQL problem at create time
Hi Paul, On 6 déc, 19:31, Paul Johnston [EMAIL PROTECTED] wrote: Hi, It seems that the mssql backend use a 'foreign_key' attribute on Column which does not exist anymore. Yes, it's now foreign_keys. This is fixed in the svn trunk. Thanks ! I still need to sort out a way to have MSSQL unit tests run periodically, so we can pick up this kind of issue before releases. I may be able to help, I'm sending you a mail privately. Regards, Christophe
[sqlalchemy] Re: delete children of object w/o delete of object?
I am using exactly that configuration. However, I wish to delete just the children objects, and I have been unable to do

    session.delete(o); session.flush()
    o.kids = ...
    session.save(o)

which I expected to remove the object o and all the kids, then re-insert o with the new kids. On Dec 6, 10:43 am, Michael Bayer [EMAIL PROTECTED] wrote: i would think a regular cascade=all, delete-orphan configuration would make this quite simple. ...
[sqlalchemy] Re: delete children of object w/o delete of object?
heya - Remove the children from the collection, set the new children onto it, and then flush; they'll be deleted. you can just say parent.children = [newobj1, newobj2, newobj3]. the removed children get deleted and the new children get added. session.delete(obj) does not change the status of any collections. see the boldface text at: http://www.sqlalchemy.org/docs/04/session.html#unitofwork_using_deleting
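Combining this answer with the cascade suggestion earlier in the thread, the configuration and usage would look roughly like the sketch below. Parent, Child, and the table objects are hypothetical; relation() and session.save() are the 0.4-era names used in this thread, so read this as an illustration for that version rather than current code:

```python
mapper(Parent, parent_table, properties={
    'children': relation(Child, cascade="all, delete-orphan"),
})
mapper(Child, child_table)

# replace the collection; do NOT session.delete() the parent
parent = session.query(Parent).get(1)
parent.children = [Child(), Child()]   # the old children become orphans
session.flush()                        # orphans are DELETEd, new rows INSERTed
```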
[sqlalchemy] Re: IMPORTANT: Does SA cache objects in memory forever?
Michael Bayer schrieb: in version 0.4, the session is weak referencing so that objects which are not elsewhere referenced (and also are not marked as dirty or deleted) fall out of scope automatically. that is documented at: http://www.sqlalchemy.org/docs/04/session.html#unitofwork_using_attributes I have a question which I think is similar enough to be asked in the same thread: I have a set of quite simple migration scripts which use SQLAlchemy 0.4 and Elixir 0.4. I extract data from the old legacy (MySQL) database with SQLAlchemy and put this data into new Elixir objects. Currently, these scripts use up to 600 MB of RAM. This is no real problem, as we could probably devote a machine with 4 GB of RAM solely to the automated migration, but it would be nice to use lower-powered machines for our migration tasks. What puzzles me is that I do not (knowingly) keep references to either the old data items or the new Elixir objects. Nevertheless, memory usage increases during the migration. Is there any way to debug this easily, to see why Python needs so much memory / which references prevent the objects from being garbage collected? Running the garbage collector manually did not help much (saving only about 5 MB). fs
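For the which-references-keep-my-objects-alive question, the stdlib gc module can answer it directly. A small self-contained sketch; Record stands in for a mapped/Elixir class, and accidental_cache for a module-level container you may have forgotten about:

```python
import gc

class Record:
    pass

accidental_cache = []        # stand-in for a forgotten module-level list
r = Record()
accidental_cache.append(r)

# 1) how many instances of a suspect class are still alive?
live = [o for o in gc.get_objects() if isinstance(o, Record)]
print(len(live))  # 1

# 2) who is still referring to a specific object?
holders = gc.get_referrers(r)
print(any(h is accidental_cache for h in holders))  # True
```

Running these two probes at checkpoints during the migration (e.g. every N rows) usually pinpoints both the leaking class and the container holding it.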