[sqlalchemy] Re: Secondary tables and deleting

2011-06-07 Thread Geoff
On Jun 7, 12:42 am, Michael Bayer mike...@zzzcomputing.com wrote:

 Below is a short test, can you figure out what you might be doing
 differently?

[snip]

Michael, thanks very much. The database I'm looking at is the test
database for the application I'm developing; it gets used and abused
quite often, and it's entirely possible that some idiot programmer (me)
messed things up. If that's the case, I apologise for wasting your time.

However, I will adapt the code from your last post to my schema and
see what happens. If, as I suspect, everything works fine, I'll do a
drop-all, recreate the database, and keep an eye on it to see whether
this problem reoccurs. Cheers.




[sqlalchemy] Re: Secondary tables and deleting

2011-06-06 Thread Geoff
On Jun 6, 11:32 pm, Michael Bayer mike...@zzzcomputing.com wrote:

 The table referenced by secondary= in a relationship() is managed entirely,
 regardless of cascade setting, but only from the perspective of the parent.
 Meaning, if A references a collection of B, the row in a_to_b will be
 deleted if you 1. delete A, or 2. remove a B from A's collection. Deleting
 a B by itself as in session.delete(B) won't have an effect unless you apply
 a backref to the relationship so that each B also knows about its
 collection of A.

I am indeed deleting B, but in this case I do have the backref
specified on the parent class A, e.g.:

class A(Base):
    toB = relationship('B', secondary=a_b, backref='toA')
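
For reference, a minimal self-contained version of the pattern under
discussion might look as follows (a sketch, assuming SQLite and the
declarative imports of this era; the names A, B and a_b are taken from
the thread):

from sqlalchemy import create_engine, Table, Column, Integer, ForeignKey
from sqlalchemy.orm import relationship, sessionmaker
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

a_b = Table('a_b', Base.metadata,
    Column('a_id', Integer, ForeignKey('a.id'), primary_key=True),
    Column('b_id', Integer, ForeignKey('b.id'), primary_key=True))

class A(Base):
    __tablename__ = 'a'
    id = Column(Integer, primary_key=True)
    toB = relationship('B', secondary=a_b, backref='toA')

class B(Base):
    __tablename__ = 'b'
    id = Column(Integer, primary_key=True)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

a, b = A(), B()
a.toB.append(b)
session.add(a)
session.commit()

# Per Michael's explanation: with the backref in place, deleting b
# should also remove its row in a_b.
session.delete(b)
session.commit()
assert session.execute(a_b.select()).fetchall() == []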




[sqlalchemy] one to many back to same table using DeclarativeBase

2009-03-09 Thread Geoff Corey

tContact = Table('contact', metadata,
    Column('user_id', Integer, ForeignKey('user.user_id')),
    Column('contact_id', Integer, ForeignKey('user.user_id'))
)

class User(DeclarativeBase):
    __tablename__ = 'user'
    user_id = Column(Integer, autoincrement=True, primary_key=True)
    name = Column(Unicode(80))
    contacts = relation('User', secondary=tContact)

I know the contacts = relation(...) line is wrong. I have pored
over examples and cannot seem to fathom how to do this relationship in
DeclarativeBase.
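
One way this might be spelled out (a sketch, not tested against the
poster's setup): because the contact table has two foreign keys to
user.user_id, the relation needs explicit primaryjoin/secondaryjoin
conditions to disambiguate them. Here DeclarativeBase and metadata are
stand-ins for whatever the poster's framework provides:

from sqlalchemy import Table, Column, Integer, Unicode, ForeignKey
from sqlalchemy.orm import relation
from sqlalchemy.ext.declarative import declarative_base

DeclarativeBase = declarative_base()
metadata = DeclarativeBase.metadata

tContact = Table('contact', metadata,
    Column('user_id', Integer, ForeignKey('user.user_id')),
    Column('contact_id', Integer, ForeignKey('user.user_id'))
)

class User(DeclarativeBase):
    __tablename__ = 'user'
    user_id = Column(Integer, autoincrement=True, primary_key=True)
    name = Column(Unicode(80))
    # user_id here is the Column object defined above, so it can be
    # used directly in the join conditions
    contacts = relation('User',
        secondary=tContact,
        primaryjoin=user_id == tContact.c.user_id,
        secondaryjoin=user_id == tContact.c.contact_id)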




[sqlalchemy] Possible problem with pool?

2008-06-16 Thread Geoff

I'm getting an odd issue with connection recycling. Here's what's
happening:

20080616 10:50:01,654 INFO  [sqlalchemy.pool.QueuePool.0x..2c]
Connection _mysql.connection open to 'localhost' at 8bf4ddc exceeded
timeout; recycling
20080616 10:50:01,654 INFO  [sqlalchemy.pool.QueuePool.0x..2c] Closing
connection _mysql.connection open to 'localhost' at 8bf4ddc
20080616 10:50:01,655 INFO  [sqlalchemy.pool.QueuePool.0x..2c] Created
new connection _mysql.connection open to 'localhost' at 8bf9f2c
20080616 10:50:01,656 INFO  [sqlalchemy.pool.QueuePool.0x..2c]
Connection _mysql.connection open to 'localhost' at 8bf9f2c checked
out from pool
20080616 10:50:02,249 INFO  [sqlalchemy.pool.QueuePool.0x..2c]
Connection _mysql.connection open to 'localhost' at 8bf9f2c being
returned to pool
20080616 10:50:02,251 INFO  [sqlalchemy.pool.QueuePool.0x..2c]
Connection _mysql.connection open to 'localhost' at 8bf9f2c checked
out from pool
20080616 10:50:02,907 INFO  [sqlalchemy.pool.QueuePool.0x..2c]
Connection _mysql.connection open to 'localhost' at 8bf9f2c being
returned to pool
20080616 10:50:02,909 INFO  [sqlalchemy.pool.QueuePool.0x..2c]
Connection _mysql.connection open to 'localhost' at 8bf9f2c checked
out from pool
20080616 10:50:02,913 INFO  [sqlalchemy.pool.QueuePool.0x..ac]
Connection _mysql.connection open to 'localhost' at 91261e4 exceeded
timeout; recycling
20080616 10:50:02,913 INFO  [sqlalchemy.pool.QueuePool.0x..ac] Closing
connection _mysql.connection open to 'localhost' at 91261e4
20080616 10:50:02,914 INFO  [sqlalchemy.pool.QueuePool.0x..ac] Created
new connection _mysql.connection open to 'localhost' at a3841bc
20080616 10:50:02,915 INFO  [sqlalchemy.pool.QueuePool.0x..ac]
Connection _mysql.connection open to 'localhost' at a3841bc checked
out from pool
20080616 10:50:03,100 INFO  [sqlalchemy.pool.QueuePool.0x..2c]
Invalidate connection _mysql.connection open to 'localhost' at
9c06b0c (reason: OperationalError:(2006, 'MySQL server has gone away'))
20080616 10:50:03,101 INFO  [sqlalchemy.pool.QueuePool.0x..ac]
Invalidate connection _mysql.connection open to 'localhost' at
9c137a4 (reason: OperationalError:(2006, 'MySQL server has gone away'))
20080616 10:50:03,101 INFO  [sqlalchemy.pool.QueuePool.0x..2c] Closing
connection _mysql.connection open to 'localhost' at 9c06b0c
20080616 10:50:03,101 INFO  [sqlalchemy.pool.QueuePool.0x..ac] Closing
connection _mysql.connection open to 'localhost' at 9c137a4
20080616 10:50:03,102 INFO  [sqlalchemy.pool.QueuePool.0x..2c]
Connection None being returned to pool
20080616 10:50:03,102 INFO  [sqlalchemy.pool.QueuePool.0x..ac]
Connection None being returned to pool
20080616 10:50:03,103 INFO  [sqlalchemy.pool.QueuePool.0x..2c] Pool
disposed. Pool size: 5  Connections in pool: 0 Current Overflow: -5
Current Checked out connections: 0
20080616 10:50:03,104 INFO  [sqlalchemy.pool.QueuePool.0x..ac] Pool
disposed. Pool size: 5  Connections in pool: 0 Current Overflow: -5
Current Checked out connections: 0
20080616 10:50:03,104 INFO  [sqlalchemy.pool.QueuePool.0x..2c] Pool
recreating
20080616 10:50:03,105 INFO  [sqlalchemy.pool.QueuePool.0x..ac] Pool
recreating

This is after the app has been left idle for several hours (the recycle
time is 3600) and happens every time. It looks like the pool realises
that connection 8bf4ddc is old, recycles it, and checks out the new
connection OK. The second time it does this (with 91261e4) it likewise
recycles it and checks out the replacement. But then it invalidates some
other connection, 9c06b0c, that doesn't seem to have been checked out,
and, even more strangely, another connection, 9c137a4, again not checked
out. Both of these are old and fail with "MySQL server has gone away".

After this I keep getting errors like: "Can't reconnect until invalid
transaction is rolled back".

I'm about to delve into the pool code to try and figure it out. Anyone
have any ideas what might be causing this problem?
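
For context, the engine configuration implied by the log would be
something like the following sketch; the URL is a placeholder and the
exact settings are an assumption reconstructed from the log output:

from sqlalchemy import create_engine

engine = create_engine(
    'mysql://user:password@localhost/mydb',  # placeholder URL
    pool_size=5,        # matches "Pool size: 5" in the log
    pool_recycle=3600,  # "the recycle time is 3600"
)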



[sqlalchemy] Re: Possible problem with pool?

2008-06-16 Thread Geoff

My bad, turns out to be a threading issue :)
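
If the threading issue was the common one of sharing a session or
connection across threads, the usual fix is thread-local sessions via
scoped_session. A sketch, since the actual bug isn't shown in the
thread:

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker, scoped_session

engine = create_engine('mysql://user:password@localhost/mydb',
                       pool_recycle=3600)
Session = scoped_session(sessionmaker(bind=engine))

def worker():
    session = Session()  # each thread gets its own session
    try:
        # ... do work, session.commit() ...
        pass
    finally:
        Session.remove()  # release the connection back to the pool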

On Jun 16, 11:04 am, Geoff [EMAIL PROTECTED] wrote:
 I'm getting an odd issue with connection recycling. Here's what's
 happening:

[snip]



[sqlalchemy] ordering

2008-05-30 Thread Geoff

Hi!

I've noticed that a very simple query has an ordering applied to it
even though I haven't asked for one. Is there a way to stop it from
doing that?

the query:
Session.query(User).set_shard(shard).filter_by(uuid=uuid).all()

Thanks!
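
If the ordering comes from an order_by configured on the mapper, one
way to cancel it is to pass None to Query.order_by. This works in
current SQLAlchemy; its availability in the version used here is an
assumption. A sketch reusing the query from the post:

users = (Session.query(User)
         .set_shard(shard)
         .filter_by(uuid=uuid)
         .order_by(None)  # drop any mapper-level default ordering
         .all())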





[sqlalchemy] Optimisation Aid Layer

2008-05-28 Thread Geoff

Hi!

I've been optimising my database interaction recently and thought that
SQLA might be able to help out with the process. Something like doing an
EXPLAIN on each query and reporting the result to the developer. It
could even inspect the indices used and flag up any queries that could
need some TLC.

No paradigm shift, but would be a handy little time saver :)
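
The idea in miniature, as raw SQL with literal values. A real helper
would have to render and bind each ORM query's parameters first, so
this is only a sketch of the reporting half, with a made-up table and
value:

result = session.execute(
    "EXPLAIN SELECT * FROM users WHERE external_id = 'abc'")
for row in result:
    print(dict(row))  # shows table, type, possible_keys, key, rows, ...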



[sqlalchemy] mySQL force index?

2008-05-22 Thread Geoff

Does SQLA have any mechanism to use FORCE INDEX?
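
Later SQLAlchemy versions expose per-table dialect hints via
Select.with_hint (likely not available in the version current at the
time of this post); on MySQL the hint text renders after the table
name, so FORCE INDEX can be spelled as follows. A sketch with an
illustrative table and a made-up index name:

from sqlalchemy import MetaData, Table, Column, Integer, String, select

metadata = MetaData()
users = Table('users', metadata,
    Column('uid', Integer, primary_key=True),
    Column('external_id', String(64)))

stmt = (select([users])
        .with_hint(users, 'FORCE INDEX (ix_users_external_id)', 'mysql')
        .where(users.c.external_id == 'abc'))
# renders: SELECT ... FROM users FORCE INDEX (ix_users_external_id) WHERE ...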



[sqlalchemy] Re: Using a non-primary key as ORM identifier

2008-05-22 Thread Geoff

Nice, thanks for the tips guys!

From what I understand, MySQL has some tasty optimisations that kick in
when the primary key is an int. So once it knows which primary key
it's looking for (after looking in the index), it's faster to retrieve
the row.

On May 22, 2:50 pm, Michael Bayer [EMAIL PROTECTED] wrote:
 On May 22, 2008, at 7:10 AM, Geoff wrote:



  Hi,

  I have split up my database horizontally and am using UUIDs to
  uniquely identify a row across databases. Using UUIDs as a primary key
  is slow (InnoDB) so I wanted to use the common trick of having an INT
  primary key using auto_increment on each database. This is all fine,
  until sqlalchemy checks its cache of objects after a query to see if
  the object has already been retrieved. This breaks because the primary
  key is not unique across databases when I use an auto_incremented INT.

  I reckon the solution is going to have to be manually setting the
  field used by sqlalchemy to make the cache decision. Is there any way
  of doing this already, or am I going to have to put it in myself?

 set up the mapper() to have a composite primary key consisting of the
 autoincremented integer and the UUID column. Use the primary_key
 option on mapper() to achieve this.

 (also, why is using a UUID slow? if the column is indexed, the
 difference between int/string would be minuscule compared to the fact
 that you're using Python and not hardcoded C as the application
 platform...)
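
Michael's suggestion in code form (a sketch using the classic mapper
API; User and users_table are illustrative names):

from sqlalchemy.orm import mapper

mapper(User, users_table, primary_key=[
    users_table.c.id,    # per-database auto_increment INT
    users_table.c.uuid,  # globally unique across databases
])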



[sqlalchemy] from_statement doesn't apply convert_unicode

2008-05-22 Thread Geoff

Anyone know why results from using from_statement do not convert
strings? It works fine when I use filter_by etc...
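
One possible workaround, assuming the cause is that a raw statement
carries no result-type information: older SQLAlchemy versions let you
attach a typemap to text() so that Unicode conversion is applied to the
named result columns. A sketch, with session and User assumed from the
surrounding application:

from sqlalchemy import text, Unicode

stmt = text("SELECT id, name FROM users WHERE id = :id",
            typemap={'name': Unicode})
user = session.query(User).from_statement(stmt).params(id=1).first()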





[sqlalchemy] Sharding and COUNT

2008-04-11 Thread Geoff

Hi!

I'm struggling to understand why the shard_chooser gets called for
this query:

SELECT count(users.uid) AS count_1 FROM users WHERE users.external_id
= %s

I would have thought the query_chooser should be called instead.
shard_chooser gets passed a mapper and a clause, but the instance is None.

Thanks for any help,

Geoff



[sqlalchemy] Re: Sharding and COUNT

2008-04-11 Thread Geoff

I've figured out why it doesn't work. The scalar method on Session
doesn't deal with sharding.
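
A blunt workaround sketch until Session.scalar is shard-aware: fetch
the matching rows through the shard-aware Query and count them
client-side. Only sensible for small result sets; names are borrowed
from the query shown earlier in the thread:

n = len(Session.query(User)
        .set_shard(shard)
        .filter_by(external_id=external_id)
        .all())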

On Apr 11, 11:55 am, Geoff [EMAIL PROTECTED] wrote:
 Hi!

 I'm struggling to understand why the shard_chooser gets called for
 this query:

[snip]



[sqlalchemy] Lock wait timeout exceeded

2007-07-02 Thread Geoff

Hi,

I'm getting this exception: 'Lock wait timeout exceeded; try
restarting transaction'.

I've looked through the docs and can't see whether there is a way to
get SA to retry by itself rather than having to catch this exception
and retry. It would be nicer to have the apparatus for retries close to
the actual execution, rather than adding another abstraction layer
between my queries and SA to handle these kinds of timeout, with the
retry code replicated all over the show.

Is there anything in SA to handle lock timeouts, or does anyone have a
decent way of doing so on top?

Thanks!
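
For what it's worth, one way to build such a retry layer on top of SA.
A sketch: the attempt count and the error-code test (MySQL error 1205
is the lock-wait timeout) are assumptions, and the import path is the
modern one:

from sqlalchemy.exc import OperationalError

def with_retry(session, work, attempts=3):
    # Run work(session), retrying on MySQL lock-wait timeouts.
    for attempt in range(attempts):
        try:
            result = work(session)
            session.commit()
            return result
        except OperationalError as e:
            session.rollback()
            if '1205' not in str(e.orig) or attempt == attempts - 1:
                raise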





[sqlalchemy] Re: Lock wait timeout exceeded

2007-07-02 Thread Geoff

Hi! Thanks for replying :)

I agree that transactions shouldn't be timing out. The problem happens
when we do a database dump, which locks all the tables for a while; it
happens once a day.

I think I'll go fix it at the source rather than handling it in the
code - i.e. find another way of dumping the database which doesn't
take so long, and / or increase the timeout on the mysql server...

Cheers :)




On Jul 2, 3:17 pm, Michael Bayer [EMAIL PROTECTED] wrote:
 On Jul 2, 6:37 am, Geoff [EMAIL PROTECTED] wrote:

  Hi,

  I'm getting this exception: 'Lock wait timeout exceeded; try
  restarting transaction.

 you shouldn't have transactions locking for so long that you're
 getting such an error.  meaning, try not to have long-running
 transactions open.



  I've looked through the docs and can't see if there is a way to get SA
  to retry itself rather than having to catch this exception and retry.

 no, if you wanted to retry this is something you'd have to do in your
 own application code. SA does not store a history of every
 statement it issues within a transaction (not to mention all the
 dependent operations around its statements), nor would it ever want to
 assume such behavior, since the error you're getting is generally
 considered to be a failure despite its optimistic "try again" message.

 if you really want to build around a long-running transactions model
 where timeout errors are expected, you'd have to build a layer of
 "here's the things I just did, now I'm doing them again" within your
 application.

 note that I'm not even sure every database even allows transaction
 locks to time out. my experience with Oracle and MS-SQL is that locks
 are held indefinitely until cancelled/killed (since that allows an
 administrator to log in and see what's deadlocked).

