Re: [sqlalchemy] Problems handling an Amazon RDS Multi-AZ failover with SqlSoup and SqlAlchemy 0.7.8

2012-11-15 Thread JonathanQ
Thanks Michael - I think I am starting to see where we have gone wrong with 
our session handling.  Looks like calling remove() on the session at the 
end of a unit of work is probably the best approach.  

Here is a bit more detail on what these applications are doing - and 
perhaps you can comment on whether calling a remove() on the session at the 
end of the scope of work is the best action to take.

Application 1 runs 24/7 and consumes from a messaging queue.  It pulls 10 
messages off the queue and processes each one individually.  Upon 
completion of processing for all 10 messages - we will ACK the ones that 
succeeded and let the ones that failed be re-driven to another consumer 
(after a few attempts we put them aside in a DLQ).

The unit of work here could be each message, or the batch of 10.

Application 2 runs 24/7 and loops over a list of enums and does queries 
based on them.  For example: [blue, green, red, yellow].  We will loop over 
each and do a query for all records with color: blue, process them and then 
move on to the next.  If processing for a single color fails, it shouldn't 
stop the processing of the next one - and the assumption is that the next 
time around the failing ones will work (or will work soon).  We sleep between 
outer loops for ~15 seconds.

The unit of work would be each color, or a single loop through all colors.

Currently in both applications we are not doing anything with the session 
after we finish processing the unit of work, so we are reusing the same 
session for all requests (for many days).  That is why when something goes 
wrong and rollback fails for whatever reason - our application gets into a 
bad state and can't process anything until it is restarted.  

Neither application is multi-threaded.  

Is there any performance hit in doing a session.remove() very often?  I am 
assuming not - after reading your reply I did some searches on 
session.remove() and that seems to be the regular behavior for a web 
request.  If there is a performance impact, we can safely break the unit of 
work into very small chunks to minimize the impact of a DB issue, or batch 
operations together without much cost to data integrity. 
 Either way - reusing the same session for long periods of time (days) 
looks like it's the root cause of our issues.
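
A minimal sketch of that per-unit-of-work lifecycle (the engine URL and the
body of process_batch are placeholders, not our real application code - the
real engine points at MySQL on RDS):

```python
from sqlalchemy import create_engine, text
from sqlalchemy.orm import scoped_session, sessionmaker

engine = create_engine("sqlite://")  # stand-in for the RDS MySQL URL
Session = scoped_session(sessionmaker(bind=engine))

def process_batch(messages):
    """Treat one batch as a unit of work, then discard the session."""
    session = Session()
    processed = 0
    try:
        for msg in messages:
            session.execute(text("SELECT 1"))  # placeholder for real work
            processed += 1
        session.commit()
        return processed
    finally:
        # remove() closes out the session so the next batch starts with a
        # fresh one, instead of reusing a single session for days
        Session.remove()
```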

Thanks again for your help!

Jonathan


On Wednesday, November 14, 2012 5:17:07 PM UTC-8, Michael Bayer wrote:


 On Nov 14, 2012, at 2:38 PM, JonathanQ wrote: 

  We have some code running against MySQL in Amazon RDS.  During 
 an outage or the maintenance window, Amazon issues an automatic failover 
 of our RDS instance.  However, our code doesn't fail over with it. 
  
  We are having issues with 2 of our applications (2 different issues): 
  
  First one: Unable to rollback when we get the error 2006, 'MySQL server 
 has gone away' 
  
  The issue we are having is that a DB operation will fail when the 
 failover occurs.  So in the event of an error we issue a rollback on the 
 SqlSoup object (which delegates to the underlying session).   
  
sqlsoup_engine.rollback() 
  
  However this rollback fails with the error:   
  
  connection.rollback() 
OperationalError: (OperationalError) (2006, 'MySQL server has gone 
 away') None None 
  
  Subsequently our application continues on and then starts failing with: 
StatementError("Can't reconnect until invalid transaction is rolled 
 back (original cause: InvalidRequestError: Can't reconnect until invalid 
 transaction is rolled back)",) 

 OK, well, when you say "in the event of an error", is that an error that 
 precedes the rollback, and is it the same 2006 error?  Or is the 2006 
 error new with the rollback? 

 Basically, what should happen is, any DB operation, like execute, or 
 commit, rollback, can potentially throw a 2006.  SQLAlchemy will detect 
 this as a disconnect situation, and discard all open connections that 
 aren't checked out.  The next operation will use a brand new connection. 

 So without knowing if you're getting this same error multiple times or 
 not, you'd want to either rollback() a second time, or just throw away the 
 Session after the rollback() fails.   With SQLSoup you'd want to get at it 
 by sqlsoup_engine.session.remove(), which will replace the current thread's 
 Session with a new one. 
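
 As a hedged sketch of that recovery path (Session is assumed to be the 
 scoped_session underlying the SQLSoup object; names are illustrative): 

```python
from sqlalchemy import create_engine
from sqlalchemy.exc import OperationalError
from sqlalchemy.orm import scoped_session, sessionmaker

engine = create_engine("sqlite://")  # stand-in for the RDS MySQL engine
Session = scoped_session(sessionmaker(bind=engine))

def recover(Session):
    """Roll back after a failure; if rollback itself dies (e.g. MySQL
    error 2006), throw the session away so the next use gets a new one."""
    try:
        Session().rollback()
        return "rolled back"
    except OperationalError:
        # rollback failed on a dead connection -- replace the session
        Session.remove()
        return "session replaced"
```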




-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To view this discussion on the web visit 
https://groups.google.com/d/msg/sqlalchemy/-/EQPTTqrtih0J.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



[sqlalchemy] v0.7.9 text() and multi-row inserts failing - sqlite driver issue?

2012-11-15 Thread Adam Venturella
Maybe I am just doing something wrong, but I have been fighting with this 
for a while to no avail.

Here's my code:

query = '''INSERT INTO users_roles (user_id, role_id) VALUES (:user_id, 
:perm0), (:user_id, :perm1)'''
t = text(query).execution_options(autocommit=True)
self.conn.execute(t, user_id=1, perm0=1, perm1=2)

Here is the log:
2012-11-15 11:22:12,450 INFO sqlalchemy.engine.base.Engine INSERT INTO 
users_roles (user_id, role_id) VALUES (?, ?), (?, ?)
2012-11-15 11:22:12,451 INFO sqlalchemy.engine.base.Engine (1, 1, 1, 2)

Here is the error:
File "/.../site-packages/sqlalchemy/engine/default.py", line 331, in 
do_execute
cursor.execute(statement, parameters)
OperationalError: (OperationalError) near ",": syntax error u'INSERT INTO 
users_roles (user_id, role_id) VALUES (?, ?), (?, ?)' (1, 1, 1, 2)


If I evaluate this manually, my insert statement is:
INSERT INTO users_roles (user_id, role_id) VALUES (1, 1), (1, 2)

When I run this manually in my sql client (using sqlite for the DB right 
now) it succeeds. If I run it through sqlalchemy, I get the error 
outlined. Is this potentially a driver issue or am I just missing something?

Tried it with the stdlib sqlite3 driver as well, it fails there too:

import sqlite3
conn = sqlite3.connect('example.db')
c = conn.cursor()
c.execute("INSERT INTO users_roles (user_id, role_id) VALUES (1, 1), (1, 
2)")


Table def:

CREATE TABLE users_roles (
id INTEGER NOT NULL, 
user_id INTEGER NOT NULL, 
role_id INTEGER NOT NULL, 
PRIMARY KEY (id)
)
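
For reference, the same statement can be exercised with the stdlib driver
alone against an in-memory database; multi-row VALUES only works when the
linked SQLite library is 3.7.11 or newer:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE users_roles (
    id INTEGER NOT NULL,
    user_id INTEGER NOT NULL,
    role_id INTEGER NOT NULL,
    PRIMARY KEY (id)
)""")

rows = []
# Multi-row VALUES was added to SQLite in 3.7.11 (March 2012); an older
# linked library raises exactly the 'near ",": syntax error' seen above.
if sqlite3.sqlite_version_info >= (3, 7, 11):
    conn.execute(
        "INSERT INTO users_roles (user_id, role_id) VALUES (?, ?), (?, ?)",
        (1, 1, 1, 2),
    )
    rows = conn.execute(
        "SELECT user_id, role_id FROM users_roles ORDER BY id"
    ).fetchall()
```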

Anyone else encounter this?

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To view this discussion on the web visit 
https://groups.google.com/d/msg/sqlalchemy/-/fRQ9vBlKMzIJ.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] v0.7.9 text() and multi-row inserts failing - sqlite driver issue?

2012-11-15 Thread Michael Bayer

On Nov 15, 2012, at 2:43 PM, Adam Venturella wrote:

 Maybe I am just doing something wrong, but I have been fighting with this for 
 a while to no avail.
 
 
 Tried it with the stdlib sqlite3 driver as well, it fails there too:
 
 import sqlite3
 conn = sqlite3.connect('example.db')
 c = conn.cursor()
 c.execute("INSERT INTO users_roles (user_id, role_id) VALUES (1, 1), (1, 2)")

Is it possible that your pysqlite/sqlite3 driver is linked to an older version 
of the sqlite3 libraries?  Check sqlite3.sqlite_version_info for that 
information, then take a look at http://www.sqlite.org/news.html to see if this 
is a more recent feature addition.
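
That check is a one-liner (the value printed will of course vary by install):

```python
import sqlite3

# Version of the SQLite library the driver is linked against -- this,
# not the pysqlite version, determines which SQL syntax is available.
linked = sqlite3.sqlite_version_info
print("linked SQLite:", linked)
```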

In any case, this is either an out-of-date sqlite library or a bug in 
pysqlite/sqlite3.   If it's a bug in the Python driver, that would mean 
pysqlite is parsing the SQL, though that seems very surprising.  But in any 
case the issue is not within SQLAlchemy.

If your sqlite libraries are up to date as well as your sqlite3/pysqlite 
libraries, you'd want to scan bugs.python.org as well as 
http://code.google.com/p/pysqlite/issues/list to see if this has been reported, 
and you can report it there using a pure sqlite3 test case.   Though I have a 
feeling you have two versions of the sqlite database libraries installed.




Re: [sqlalchemy] _DBProxy.dispose() doesn't actually dispose of the pool/connections?

2012-11-15 Thread Michael Bayer

On Nov 15, 2012, at 7:26 PM, Carl Meyer wrote:

 Hi,
 
 I'm using SQLAlchemy's connection pool implementation directly (via
 manage() and _DBProxy), and it seems to me that _DBProxy does not
 dispose of its pools/connections the way that the
 documentation/docstrings/method names seem to imply it would.
 
 The _DBProxy.dispose method docstring says that it will dispose the
 pool referenced by the given connect arguments, but it doesn't ever
 call dispose() on the pool (which would actually close the db
 connections), it just removes the pool from its internal mapping of
 pools. The effect of this is that idle connections in the pool remain
 open until the process ends. Here's example code:

that's possible, it's a pretty old, little-used API, and that document 
probably assumes GC will pick up on it; it would be better if dispose() were 
called on the pool all the way through.   But the garbage collector should be 
reclaiming the pool and those connections (all DBAPIs I'm aware of release 
TCP/IP connections when the connection is garbage collected).  Even calling 
gc.collect(), you're not seeing anything clear out?

 It seems that perhaps _DBProxy is relying on garbage collection to
 somehow clean everything up once it lets go of its reference to the
 pool, but even on CPython with refcounting GC this does not seem to
 work;

CPython clears unreachable cycles periodically as well.  Calling gc.collect() 
will force this process to occur.

 Maybe some
 previous version of Pool had a __del__ method that called its dispose

Oh, actually there's a __del__ on _DBProxy and that's probably why 
gc.collect() isn't doing it (a __del__ method interferes with cyclic GC).

Your best bet for now is to iterate through db.proxy.pools and call 
dispose() on them.   
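
A rough sketch of that workaround. db.proxy.pools is a private 0.7-era
internal, assumed here to be a dict of Pool objects, so the stand-in below
builds such a mapping by hand just to show the dispose loop:

```python
import sqlite3

from sqlalchemy.pool import QueuePool

# Stand-in for db.proxy.pools: a mapping from connect-argument keys to
# Pool objects (the real attribute name follows the 0.7-era internals).
pools = {("example.db",): QueuePool(lambda: sqlite3.connect(":memory:"))}

# Park one idle connection in the pool so dispose() has something to close.
conn = pools[("example.db",)].connect()
conn.close()  # returned to the pool, still open underneath

for pool in list(pools.values()):
    pool.dispose()  # actually closes the idle DBAPI connections

idle = pools[("example.db",)].checkedin()
```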

http://www.sqlalchemy.org/trac/ticket/2609 is added.






Re: [sqlalchemy] Re: Is it possible to define multiple model objects that refer to the same table?

2012-11-15 Thread Michael Bayer

On Nov 15, 2012, at 7:33 PM, Rob Crowell wrote:

 Sorry, that got cut off at the end.
 
 class IssueTag(Base):
 __tablename__ = 'issue_user_tag'
 
 sqlalchemy.exc.InvalidRequestError: Table 'issue_user_tag' is already defined 
 for this MetaData instance.  Specify 'extend_existing=True' to redefine 
 options and columns on an existing Table object.
 
 On Thursday, November 15, 2012 7:32:29 PM UTC-5, Rob Crowell wrote:
 I'm working with a denormalized cache schema, and I've run into a situation 
 where it would be helpful to be able to create multiple classes that extend 
 Base but refer to the same __tablename__.  Is this possible to do?  I am 
 getting this Exception:
 sqlalchemy.exc.InvalidRequestError: Table '[tablename]' is already 
 defined for this MetaData instance.  Specify 'extend_existing=True' to 
 redefine options and columns on an existing Table object.
 
 For a little more insight, we have some attributes that always have exactly 
 one value (user who created the issue), and other attributes that can have 1 
 or more values (user-defined tags for the issue).  If we were being 
 exhaustive, we would create two cached tables for our issues, since we 
 sometimes want to display recent issues by user and sometimes by 
 tag:
 * issue_user
 * issue_tag
 
 However, we can get away with writing just one table and querying it with an 
 appropriate group_by(user_id) to achieve the same end as having 2 tables.  
 Since my application should behave as if there were 2 separate cache tables 
 (and I'd like to keep open the option of adding two separate cache tables in 
 the future), I would like to have 2 different Base classes representing the 
 two ways in which we would query the table.  The obvious way of doing this 
 doesn't work:
 
 class IssueUser(Base):
 __tablename__ = 'issue_user_tag'
 
 class IssueTag(Base):
 
 
 -- 

two^H^H^H three ways:

1. map to a Table:

mytable = Table("mytable", Base.metadata, Column(...))

class A(Base):
__table__ = mytable

class B(Base):
__table__ = mytable


1a: variant of 1, map A as you did but use __table__ on B

class A(Base):
__tablename__ = 'mytable'

x = Column(...)

class B(Base):
__table__ = A.__table__

2. use single table inheritance with no discriminator

class MyTable(Base):
__tablename__ = 'mytable'

class A(MyTable):
   # ...

class B(MyTable):
   # ...

I don't have an understanding of your querying situation yet; discriminating on 
group_by() seems a little strange, as group_by() is only intended to be used to 
group for aggregates.  But #1, #1a or #2 should fit the bill.
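
Under a later declarative setup, option 1a might be sketched like this
(column names here are invented for illustration; the thread's real table
is issue_user_tag):

```python
from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class IssueUser(Base):
    __tablename__ = "issue_user_tag"
    id = Column(Integer, primary_key=True)
    user_id = Column(Integer)
    tag = Column(String)

class IssueTag(Base):
    # second class mapped onto the very same Table object
    __table__ = IssueUser.__table__
```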








[sqlalchemy] adding an association_proxy to mapper properties?

2012-11-15 Thread Gerald Thibault
I have an association proxy set up to proxy an attribute from a parent to a 
one to one child. That proxied column does not show when I do 
class.__mapper__.iterate_properties. How would I go about adding a property 
so the proxied columns are exposed via iterate_properties?

I'll add some code if necessary, but this seems like a pretty 
straightforward question that I probably just overlooked in the docs.
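
As a hedged sketch of where such attributes do surface: mapper.iterate_properties
only yields column and relationship properties, while Mapper.all_orm_descriptors
(available in later SQLAlchemy versions) also lists association proxies. The
model names below are invented for illustration:

```python
from sqlalchemy import Column, ForeignKey, Integer, String, inspect
from sqlalchemy.ext.associationproxy import association_proxy
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()

class Child(Base):
    __tablename__ = "child"
    id = Column(Integer, primary_key=True)
    parent_id = Column(Integer, ForeignKey("parent.id"))
    name = Column(String)

class Parent(Base):
    __tablename__ = "parent"
    id = Column(Integer, primary_key=True)
    child = relationship(Child, uselist=False)
    # proxies Parent.child_name through to Child.name
    child_name = association_proxy("child", "name")

# iterate_properties: columns and relationships only
props = {p.key for p in inspect(Parent).iterate_properties}
# all_orm_descriptors additionally exposes the association proxy
descs = set(inspect(Parent).all_orm_descriptors.keys())
```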
