Re: [sqlalchemy] Removing relations with Many To One

2012-05-23 Thread Michael Wilson
Worked perfectly. Thanks! On May 23, 2012, at 12:10 AM, ThereMichael wrote: I have this relation: # Users user_table = Table('user', self.metadata, Column('id', Integer, primary_key=True), Column('place_id', Integer), mysql_engine='InnoDB' ) # Places
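The quoted question is truncated before the actual fix. For reference, removing a many-to-one association in the ORM is done by assigning None to the relationship attribute; the flush then issues an UPDATE setting the foreign-key column to NULL. A minimal sketch using the declarative style rather than the classical Table/mapper setup in the quote (class and column names are illustrative, not ThereMichael's actual model):

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

class Place(Base):
    __tablename__ = 'place'
    id = Column(Integer, primary_key=True)
    name = Column(String(50))

class User(Base):
    __tablename__ = 'user'
    id = Column(Integer, primary_key=True)
    place_id = Column(Integer, ForeignKey('place.id'))
    place = relationship(Place)  # many-to-one: many users -> one place

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)

session = Session(engine)
user = User(place=Place(name='home'))
session.add(user)
session.commit()

# Removing the relation: assign None to the many-to-one attribute;
# the next flush sets user.place_id to NULL.
user.place = None
session.commit()
assert user.place_id is None
```

Note that this only de-associates the two rows; the Place row itself is not deleted unless a cascade is configured.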

Re: [sqlalchemy] Tutorial/Example Code

2012-05-23 Thread Michael Bayer
On May 22, 2012, at 7:28 PM, David Bowser wrote: The main advantage of the example provided over the one in the documentation is that it's more generic. The examples in SQLAlchemy are purposely not generic, for two reasons. One is to minimize complexity and maintain the focus on usage of the

Re: [sqlalchemy] Tutorial/Example Code

2012-05-23 Thread Michael Bayer
A better idea, still thinking about it: just using __mapper_cls__ to intercept when the mapping is about to be created. This is probably the most appropriate place for this to happen - as the Table is being sent right to mapper(), we can respond to the Table and add more things to the

[sqlalchemy] NullPool for sqlite (Operational error: no such table)

2012-05-23 Thread pr64
Hi, Using SQLAlchemy 0.7.7 with an underlying sqlite database, we configured the engine with poolclass=SingletonThreadPool and pool_size=50. Our unit tests were working fine, but when running the app, some (ProgrammingError) Cannot work on a closed database errors occurred. Reading the docs and mailing list

Re: [sqlalchemy] NullPool for sqlite (Operational error: no such table)

2012-05-23 Thread Michael Bayer
On May 23, 2012, at 5:04 AM, pr64 wrote: Hi, Using SQLAlchemy 0.7.7 with an underlying sqlite database, we configured the engine with poolclass=SingletonThreadPool and pool_size=50. Our unit tests were working fine, but when running the app, some (ProgrammingError) Cannot work on a closed
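The reply is truncated here; for context, the configuration the subject line refers to - NullPool for a file-based SQLite database used from multiple threads - looks roughly like this (the file path is invented for the example):

```python
import os
import tempfile

from sqlalchemy import create_engine, text
from sqlalchemy.pool import NullPool

# NullPool opens a fresh DBAPI connection per checkout and closes it on
# release, so no pooled SQLite connection is ever reused across threads.
db_path = os.path.join(tempfile.mkdtemp(), 'app.db')
engine = create_engine('sqlite:///' + db_path, poolclass=NullPool)

with engine.begin() as conn:
    conn.execute(text('CREATE TABLE t (x INTEGER)'))
    conn.execute(text('INSERT INTO t (x) VALUES (1)'))

# A later checkout is a brand-new connection to the same file.
with engine.connect() as conn:
    rows = conn.execute(text('SELECT x FROM t')).fetchall()
assert rows == [(1, )]
```

The trade-off is a connection open/close per checkout, which is cheap for SQLite files but avoids the cross-thread teardown errors described in the thread.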

[sqlalchemy] Re: NullPool for sqlite (Operational error: no such table)

2012-05-23 Thread pr64
On May 23, 3:46 pm, Michael Bayer mike...@zzzcomputing.com wrote: On May 23, 2012, at 5:04 AM, pr64 wrote: Hi, Using SQLAlchemy 0.7.7 with an underlying sqlite database, we configured the engine with poolclass=SingletonThreadPool and pool_size=50. Our unit tests were working fine, but

Re: [sqlalchemy] Thread local connection garbage collection problem

2012-05-23 Thread Michael Bayer
There's some cleanup code inside of weakref callbacks which has been improved in 0.7 to not complain about unavoidable exceptions during Python's teardown of object state. On May 23, 2012, at 10:01 AM, limodou wrote: Did anyone encountered this problem? In my framework, I'll cache the

Re: [sqlalchemy] Thread local connection garbage collection problem

2012-05-23 Thread limodou
On Wed, May 23, 2012 at 10:35 PM, Michael Bayer mike...@zzzcomputing.com wrote: There's some cleanup code inside of weakref callbacks which has been improved in 0.7 to not complain about unavoidable exceptions during Python's teardown of object state. Oh, I see. Thanks. -- I like python!

Re: [sqlalchemy] Re: NullPool for sqlite (Operational error: no such table)

2012-05-23 Thread Michael Bayer
On May 23, 2012, at 10:34 AM, pr64 wrote: Running on a Linux server, the unit tests use the code above (create_engine('sqlite:///' + database, ...)), but the database storage directory is /dev/shm, which is a RAM-backed directory; I wonder whether SQLite even knows that it is a RAM disk. Anyway,

Re: [sqlalchemy] Re: NullPool for sqlite (Operational error: no such table)

2012-05-23 Thread pr64
We tried to switch from SingletonThreadPool (which sometimes raises errors) to StaticPool... we'll see what happens. The problem is that this error is not raised systematically, and it seems to occur when multiple threads are accessing the db. StaticPool can't be used for
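For comparison, StaticPool maintains exactly one connection for the lifetime of the engine, which is why it is usually reserved for a single in-memory SQLite database rather than a file shared by many threads. A sketch of that configuration, not pr64's actual code (`check_same_thread=False` is the SQLite driver flag that permits cross-thread use of the one connection):

```python
from sqlalchemy import create_engine, text
from sqlalchemy.pool import StaticPool

# One connection for the lifetime of the engine: every checkout returns
# the same underlying DBAPI connection.
engine = create_engine(
    'sqlite://',  # in-memory database
    poolclass=StaticPool,
    connect_args={'check_same_thread': False},
)

with engine.begin() as conn:
    conn.execute(text('CREATE TABLE t (x INTEGER)'))

# Because the connection is shared, a second checkout still sees the
# in-memory table created above.
with engine.connect() as conn:
    count = conn.execute(text('SELECT count(*) FROM t')).scalar()
assert count == 0
```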

Re: [sqlalchemy] Re: NullPool for sqlite (Operational error: no such table)

2012-05-23 Thread Michael Bayer
On May 23, 2012, at 11:27 AM, pr64 wrote: Example code is: orm = OrmManager() session = orm.get_session() my_obj = session.query() my_obj.attribute = 'new value' session.commit() I can see an issue here, talking to you. We used the SingletonThreadPool with pool_size=50. But
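The shape of the quoted code - a session handed out by a (hypothetical) OrmManager and then used from multiple threads - is the classic trigger for this class of error. One common remedy is scoped_session, which gives each thread its own Session. A sketch under that assumption, not pr64's actual code:

```python
from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker

engine = create_engine('sqlite://')

# scoped_session is a thread-local registry: each thread that calls
# Session() gets (and keeps) its own Session instance.
Session = scoped_session(sessionmaker(bind=engine))

s1 = Session()
s2 = Session()
assert s1 is s2  # same thread -> same underlying Session

Session.remove()  # discard this thread's session when the work is done
s3 = Session()
assert s3 is not s1  # after remove(), a fresh session is created
```

With this pattern, no Session (and therefore no pooled connection) is ever shared or torn down across thread boundaries.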

[sqlalchemy] Efficient Inserting to Same Table Across 100s of Processes

2012-05-23 Thread Jeff
Hello, I have hundreds of independent jobs on a cluster all writing entries to the same MySQL database table. Every time one job INSERTs, it locks the table, and the other jobs have to queue up for their turn. So at that point, the massively parallel cluster has turned into a massively serial

[sqlalchemy] Re: Efficient Inserting to Same Table Across 100s of Processes

2012-05-23 Thread Jeff
More data: A typical not-quite-worst-but-in-the-class-of-worst case scenario is half a million rows per insert. Absolute worst-case scenarios could be 10 times that, so that insert will take a while. Would there be any logic to breaking up all the inserts into one row per insert? Would that
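Chunking is a middle ground between one row per statement (maximum per-statement overhead) and half a million rows in one statement (maximum lock hold time). A sketch of chunked executemany-style inserts, demonstrated on SQLite since the MySQL table isn't available here (table name and chunk size are invented):

```python
from sqlalchemy import create_engine, text

engine = create_engine('sqlite://')
with engine.begin() as conn:
    conn.execute(text(
        'CREATE TABLE measurements (id INTEGER PRIMARY KEY, value INTEGER)'
    ))

rows = [{'value': i} for i in range(10000)]  # stand-in for a huge result set
CHUNK = 1000  # tune: larger chunks = fewer statements, longer lock holds

with engine.begin() as conn:
    for start in range(0, len(rows), CHUNK):
        # Passing a list of parameter dicts makes SQLAlchemy use the
        # DBAPI executemany() path: one statement per chunk, not per row.
        conn.execute(
            text('INSERT INTO measurements (value) VALUES (:value)'),
            rows[start:start + CHUNK],
        )

with engine.connect() as conn:
    total = conn.execute(text('SELECT count(*) FROM measurements')).scalar()
assert total == 10000
```

On MySQL, shorter statements mean each job holds its lock for less time, which lets the parallel writers interleave instead of queuing behind one giant INSERT.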

Re: [sqlalchemy] Re: Efficient Inserting to Same Table Across 100s of Processes

2012-05-23 Thread Michael Bayer
My initial thought is that INSERTs shouldn't be locking the whole table, at least not throughout a whole transaction. There are some MySQL hints that can help with this; if you're on MyISAM, take a look at http://dev.mysql.com/doc/refman/5.0/en/concurrent-inserts.html , possibly using the

[sqlalchemy] Re: Efficient Inserting to Same Table Across 100s of Processes

2012-05-23 Thread Jeff
Thanks for the help and links! One additional data point: The table has an id field that autoincrements. A friend thought that might be a barrier to non-locking inserts, but wasn't sure. I'm having difficulty finding any resource explicitly saying that, though, and simply trying it would be