Re: [sqlalchemy] Bulk Inserts and Unique Constraints

2014-03-27 Thread Cosmia Luna
Wow, I didn't know that... is it a bug?

But, the RETURNING clause will work though :)

stmt = P.__table__.insert(returning=[P.id], values=[{'val': 1}, {'val': 2}])
with engine.connect() as conn:
    result = conn.execute(stmt)
    # do something with result

hmm, maybe performance suffers from the huge SQL string this produces... well, I didn't 
test that.
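
For contrast, here's a rough sketch of the two execution styles being discussed
(the table, column and connection URL are made up for illustration, not from the
real schema):

from sqlalchemy import Table, Column, Integer, MetaData, create_engine

metadata = MetaData()
p = Table('p', metadata,
          Column('id', Integer, primary_key=True),
          Column('val', Integer))

engine = create_engine('postgresql://scott:tiger@localhost/test')
metadata.create_all(engine)

with engine.connect() as conn:
    # DBAPI "executemany": one statement, many parameter sets -- no RETURNING here
    conn.execute(p.insert(), [{'val': 1}, {'val': 2}])

    # single multi-VALUES statement: RETURNING is available, but the SQL string
    # grows with the batch size
    result = conn.execute(
        p.insert().values([{'val': 3}, {'val': 4}]).returning(p.c.id))
    print(result.fetchall())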

Cosmia

On Monday, March 24, 2014 9:39:51 PM UTC+8, Michael Bayer wrote:

 RETURNING doesn’t work with DBAPI’s “executemany” style of execution, 
 however, which is what conn.execute(stmt, [list of parameter sets]) calls.



 On Mar 24, 2014, at 5:33 AM, Cosmia Luna cos...@gmail.com wrote:

 INSERT statement of postgresql supports RETURNING, read this 
 http://docs.sqlalchemy.org/en/rel_0_8/core/dml.html#sqlalchemy.sql.expression.Insert.returning

 On Monday, March 24, 2014 2:43:46 PM UTC+8, James Meneghello wrote:

 Oops, I should add - the reason I can't use an itertools counter to 
 pre-assign IDs is because the table is potentially being dumped to by 
 multiple scripts, which is why I have to commit the parts prior to the 
 segments (since engine.execute can't return multiple insert_ids).

 On Monday, 24 March 2014 14:40:52 UTC+8, James Meneghello wrote:

 Thanks for the quick reply!

 This seems to work pretty well. I took out the batching (as it's already 
 batched at a higher level) and modified it to suit the insertion of 
 children as well (and reduced the unique constraint to a single field), and it 
 appears to work.

 with db_session() as db:
     existing_entries = dict(
         ((entry.subject, entry) for entry in
          db.query(Entry).filter(Entry.subject.in_(entries.keys())).options(subqueryload('segments')).all())
     )

     segment_inserts = []
     for subject, entry in entries.items():
         existing_entry = existing_entries.get(subject, None)
         if existing_entry:
             segments = dict(((s.segment, s) for s in existing_entry.segments))
             for segment_number, segment in entry['segments'].items():
                 if int(segment_number) not in segments:
                     segment['entry_id'] = existing_entry.id
                     segment_inserts.append(segment)
         else:
             entry_id = engine.execute(Entry.__table__.insert(), entry).inserted_primary_key
             for segment in entry['segments'].values():
                 segment['entry_id'] = entry_id[0]
                 segment_inserts.append(segment)

     engine.execute(Segment.__table__.insert(), segment_inserts)


 For 20,000 segments, this ends up being about 45 seconds and 1650 
 queries - 2 to select all the entries and segments, 1 to insert the 
 segments and the rest to insert parts. From here, however, I rewrote it a 
 bit:

 with db_session() as db:
     existing_entries = dict(
         ((entry.subject, entry) for entry in
          db.query(Entry).filter(Entry.subject.in_(entries.keys())).all())
     )

     entry_inserts = []
     for subject, entry in entries.items():
         existing_entry = existing_entries.get(subject, None)
         if not existing_entry:
             entry_inserts.append(entry)

     engine.execute(Entry.__table__.insert(), entry_inserts)

     existing_entries = dict(
         ((entry.subject, entry) for entry in
          db.query(Entry).filter(Entry.subject.in_(entries.keys())).options(subqueryload('segments')).all())
     )

     segment_inserts = []
     for subject, entry in entries.items():
         existing_entry = existing_entries.get(subject, None)
         if existing_entry:
             segments = dict(((s.segment, s) for s in existing_entry.segments))
             for segment_number, segment in entry['segments'].items():
                 if int(segment_number) not in segments:
                     segment['entry_id'] = existing_entry.id
                     segment_inserts.append(segment)
         else:
             log.error("i've made a huge mistake")

     engine.execute(Segment.__table__.insert(), segment_inserts)

 This ends up being about 19 seconds, 6 queries for a clean dump, and a 
 bit less if the table is already populated. Removing the unique indexes on 
 both the entries and segments tables and replacing them with standard 
 indexes saves about a second in a full dump, and about 6 seconds for an 
 update. I'm pretty happy with where it is now, and I suspect most of the 
 time (aside from the two insert calls) is being spent in Python. That said, 
 if you have any tips for improvements I'd be all ears.

 Thanks for the help!

 On Monday, 24 March 2014 09:19:25 UTC+8, Michael Bayer wrote:


 On Mar 23, 2014, at 11:33 AM, James Meneghello muro...@gmail.com 
 wrote:

 I'm having a few issues with unique constraints and bulk inserts. The 
 software I'm writing takes data from an external source (a lot of it, 
 anywhere 

[sqlalchemy] this result object does not return rows. it has been closed automatically

2014-03-27 Thread Ni Wesley
Hi all,
  Today I get the error shown in the subject line.
I use sqlalchemy + mysql.
Here is my code snippet:

def dbquery(_table, whereclause):
    try:
        #_table=Table(tablename, metadata, autoload=True)
        #_table = tables[tablename]
        i = _table.select().where(whereclause)
        if direct_engine:  #direct_engine is True
            res = engine.execute(i)
            return res
        else:
            session = scoped_session(sessionmaker(bind=engine))
            res = session.execute(i)
            return res
            session.close()
    except Exception, e:
        #dba_logger.log(40,'Exception when dbwriter:%s' % str(e))
        #dba_logger.log(20,'Exception detail:%s' % str(kwargs))
        exctrace('db', '1', 'Error happened when querying db', dba_logger,
                 'Exception when dbquery:%s' % str(e),
                 'Exception detail:%s' % str(whereclause))
        #session.rollback()
        if not direct_engine:
            session.close()

Here is a snippet from another file that calls dbquery:

try:
    res = dbquery(tables['sessions_details'],
                  whereclause=and_(tables['sessions_details'].c.app_key == self.app_key,
                                   tables['sessions_details'].c.device_token == self._devicetoken))
except Exception, e:
    exctrace('db', '1', 'Error happened when querying db', dba_logger,
             'Exception when query session_details:%s' % str(e),
             'Exception detail:appkey is %s,devicetoken is %s' % (self.app_key, self._devicetoken))
    self.read_message()
    return

if res is None:
    logger.log(40, 'When query connection,mysql has gone or something, just skip db and go-on')
    #here need to justify 0 or 1, later on
    self.status = '0'
    self.read_message()
    return

if res.first() is None:
    if json_obj['action'] == 'reg':

So, the "if res.first() is None:" line above raises the exception.

Could anyone give a suggestion as to how this happened?


Thanks.

Wesley



Re: [sqlalchemy] Bulk Inserts and Unique Constraints

2014-03-27 Thread Michael Bayer
No DBAPI I've tested does it; at most it would only be possible for the PG and SQL 
Server dialects.

On Mar 27, 2014, at 2:30 AM, Cosmia Luna cosm...@gmail.com wrote:

 Wow, I didn't know that... is it a bug?
 
 But, the RETURNING clause will work though :)
 
 stmt = P.__table__.insert(returning=[P.id], values=[{val: 1}, {val: 2}])
 with engine.connect() as conn:
 result = conn.execute(stmt)
 # do something with result
 
 hmm, maybe perfomance suffers from huge-size-SQL.. well, which I didn't 
 test
 
 Cosmia
 
 On Monday, March 24, 2014 9:39:51 PM UTC+8, Michael Bayer wrote:
 RETURNING doesn't work with DBAPI's executemany style of execution, 
 however, which is what conn.execute(stmt, [list of parameter sets]) calls.
 
 
 
 On Mar 24, 2014, at 5:33 AM, Cosmia Luna cos...@gmail.com wrote:
 
 INSERT statement of postgresql supports RETURNING, read this 
 http://docs.sqlalchemy.org/en/rel_0_8/core/dml.html#sqlalchemy.sql.expression.Insert.returning
 
 On Monday, March 24, 2014 2:43:46 PM UTC+8, James Meneghello wrote:
 Oops, I should add - the reason I can't use an itertools counter to 
 pre-assign IDs is because the table is potentially being dumped to by 
 multiple scripts, which is why I have to commit the parts prior to the 
 segments (since engine.execute can't return multiple insert_ids).
 
 On Monday, 24 March 2014 14:40:52 UTC+8, James Meneghello wrote:
 Thanks for the quick reply!
 
 This seems to work pretty well. I took out the batching (as it's already 
 batched at a higher level) and modified it to suit the insertion of children 
 as well (and reducded the unique to a single field) , and it appears to work.
 
 with db_session() as db:
 existing_entries = dict(
 ((entry.subject, entry) for entry in
 
 db.query(Entry).filter(Entry.subject.in_(entries.keys())).options(subqueryload('segments')).all()
 )
 )
 
 segment_inserts = []
 for subject, entry in entries.items():
 existing_entry = existing_entries.get(subject, None)
 if existing_entry:
 segments = dict(((s.segment, s) for s in 
 existing_entry.segments))
 for segment_number, segment in entry['segments'].items():
 if int(segment_number) not in segments:
 segment['entry_id'] = existing_entry.id
 segment_inserts.append(segment)
 else:
 entry_id = engine.execute(Entry.__table__.insert(), 
 entry).inserted_primary_key
 for segment in entry['segments'].values():
 segment['entry_id'] = entry_id[0]
 segment_inserts.append(segment)
 
 engine.execute(Segment.__table__.insert(), segment_inserts)
 
 For 20,000 segments, this ends up being about 45 seconds and 1650 queries - 
 2 to select all the entries and segments, 1 to insert the segments and the 
 rest to insert parts. From here, however, I rewrote it a bit:
 
 with db_session() as db:
 existing_entries = dict(
 ((entry.subject, entry) for entry in
 
 db.query(Entry).filter(Entry.subject.in_(entries.keys())).all()
 )
 )
 
 entry_inserts = []
 for subject, entry in entries.items():
 existing_entry = existing_entries.get(subject, None)
 if not existing_entry:
 entry_inserts.append(entry)
 
 engine.execute(Entry.__table__.insert(), entry_inserts)
 
 existing_entries = dict(
 ((entry.subject, entry) for entry in
 
 db.query(Entry).filter(Entry.subject.in_(entries.keys())).options(subqueryload('segments')).all()
 )
 )
 
 segment_inserts = []
 for subject, entry in entries.items():
 existing_entry = existing_entries.get(subject, None)
 if existing_entry:
 segments = dict(((s.segment, s) for s in 
 existing_entry.segments))
 for segment_number, segment in entry['segments'].items():
 if int(segment_number) not in segments:
 segment['entry_id'] = existing_entry.id
 segment_inserts.append(segment)
 else:
 log.error('i\'ve made a huge mistake')
 
 engine.execute(Segment.__table__.insert(), segment_inserts)
 
 This ends up being about 19 seconds, 6 queries for a clean dump, and a bit 
 less if the table is already populated. Removing the unique indexes on both 
 the entries and segments tables and replacing them with standard indexes 
 saves about a second in a full dump, and about 6 seconds for an update. I'm 
 pretty happy with where it is now, and I suspect most of the time (aside 
 from the two insert calls) is being spent in Python. That said, if you have 
 any tips for improvements I'd be all ears.
 
 Thanks for the help!
 
 On Monday, 24 March 2014 09:19:25 UTC+8, Michael Bayer wrote:
 
 On Mar 23, 2014, at 11:33 AM, James Meneghello 

Re: [sqlalchemy] this result object does not return rows. it has been closed automatically

2014-03-27 Thread Michael Bayer
when the DBAPI cursor has no .description object, it is determined to not be a 
result-row returning object.   The MySQLDB DBAPI has been observed to 
occasionally have issues in this area, when a connection gets into a bad state. 
There are likely patterns in how you're calling it that lead it to have this 
issue but you might try a different DBAPI like mysql-connector-python just to 
see what's different.
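
A small illustration of that first point, at the raw DBAPI level (this assumes a
PyMySQL connection object named raw_conn already exists; it isn't taken from the
code above):

cur = raw_conn.cursor()

cur.execute("SELECT 1")
print(cur.description)   # column metadata present -> treated as a row-returning result

cur.execute("SET @x := 1")
print(cur.description)   # None -> the result is closed automatically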




On Mar 27, 2014, at 7:41 AM, Ni Wesley nisp...@gmail.com wrote:

 Hi all,
   Today I get an error as title.
 I use sqlalchemy + mysql.
 Here is my code snippet:
 def dbquery(_table,whereclause):
 
 try:
 
 #_table=Table(tablename, metadata, autoload=True)
 
 #_table = tables[tablename]
 
 i=_table.select().where(whereclause) 
 
 if direct_engine:  #direct_engine is True
 
 res = engine.execute(i)
 
 return res
 
 else:
 
 session = scoped_session(sessionmaker(bind=engine))
 
 res = session.execute(i)
 
 return res
 
 session.close()
 
 except Exception,e:
 
 #dba_logger.log(40,'Exception when dbwriter:%s' % str(e))
 
 #dba_logger.log(20,'Exception detail:%s' % str(kwargs))
 
 exctrace('db','1','Error happened when querying 
 db',dba_logger,'Exception when dbquery:%s' % str(e),'Exception detail:%s' % 
 str(whereclause))
 
 #session.rollback()
 
 if not direct_engine:
 
 session.close()
 
 Here is snippet in another file involving dbquery:
 
 try:
 
  res = 
 dbquery(tables['sessions_details'],whereclause=and_(tables['sessions_details'].c.app_key==self.app_key,tables['sessions_details'].c.device_token==self._devicetoken))
 
 except Exception,e:
 
 exctrace('db','1','Error happened when querying db',dba_logger,'Exception 
 when query session_details:%s' % str(e),'Exception detail:appkey is 
 %s,devicetoken is %s' % (self.app_key,self._devicetoken))
 
 self.read_message()
 
 return   
 
 if res is None:
 
logger.log(40,'When query connection,mysql has gone or something, just 
 skip db and go-on')
 
 #here need to justify 0 or 1, later on
 
 self.status='0'
 
 self.read_message()
 
 return
 
 if res.first() is None:
 
if json_obj['action'] == 'reg':
 
 So, the line in pink above raises the exception.
 
 Could anyone give some suggestion how this happened?
 
 
 
 Thanks.
 
 Wesley
 
 


Re: [sqlalchemy] this result object does not return rows. it has been closed automatically

2014-03-27 Thread Ni Wesley
It's not happening every time.

And I am not using mysql-python, instead, I use pymysql.

So, the root cause is connection in bad state?
How to fix or work around from sqlalchemy side?

Wesley

On Thursday, March 27, 2014 at 8:58:40 PM UTC+8, Michael Bayer wrote:

 when the DBAPI cursor has no .description object, it is determined to not 
 be a result-row returning object.   The MySQLDB DBAPI has been observed to 
 occasionally have issues in this area, when a connection gets into a bad 
 state. There are likely patterns in how you’re calling it that lead it to 
 have this issue but you might try a different DBAPI like 
 mysql-connector-python just to see what’s different.




 On Mar 27, 2014, at 7:41 AM, Ni Wesley nis...@gmail.com wrote:

 Hi all,
   Today I get an error as title.
 I use sqlalchemy + mysql.
 Here is my code snippet:

 def dbquery(_table,whereclause):

 try:

 #_table=Table(tablename, metadata, autoload=True)

 #_table = tables[tablename]

 i=_table.select().where(whereclause) 

 if direct_engine:  #direct_engine is True

 res = engine.execute(i)

 return res

 else:

 session = scoped_session(sessionmaker(bind=engine))

 res = session.execute(i)

 return res

 session.close()

 except Exception,e:

 #dba_logger.log(40,'Exception when dbwriter:%s' % str(e))

 #dba_logger.log(20,'Exception detail:%s' % str(kwargs))

 exctrace('db','1','Error happened when querying 
 db',dba_logger,'Exception when dbquery:%s' % str(e),'Exception detail:%s' % 
 str(whereclause))

 #session.rollback()

 if not direct_engine:

 session.close()

 Here is snippet in another file involving dbquery:

 try:

  res = dbquery(tables[*'sessions_details'*],whereclause=and_(tables[
 *'sessions_details'*].c.app_key==*self*.app_key,tables[
 *'sessions_details'*].c.device_token==*self*._devicetoken))

 except Exception,e:

 exctrace(*'db'*,*'1'*,*'Error happened when querying 
 db'*,dba_logger,*'Exception 
 when query session_details:%s'* % str(e),*'Exception detail:appkey is 
 %s,devicetoken is %s'* % (*self*.app_key,*self*._devicetoken))

 *self*.read_message()

 return   

 if res is None:

logger.log(40,*'When query connection,mysql has gone or something, 
 just skip db and go-on'*)

 #here need to justify 0 or 1, later on 

 *self*.status=*'0'*

 *self*.read_message()

 return

 if res.first() is None:

if json_obj[*'action'*] == *'reg'*:

 So, the line in pink above raises the exception.

 Could anyone give some suggestion how this happened?


 Thanks.

 Wesley


[sqlalchemy] What are the ORM mechanics when appending a new association in an M to N relationship?

2014-03-27 Thread Darin Gordon
Let me preface this by saying that I researched to the best of my abilities 
the answers to my following inquiries.  I would not ask for help without 
first trying to investigate this on my own.  Asking through this forum is 
the proverbial end of the line, so thanks for taking any time to read and 
respond.

I understand relational databases and have been bridging a knowledge gap 
with ORMs, using SQLAlchemy.  I've struggled a bit while learning what 
exactly is going on in the ORM when ADDING new associations in a M to N 
relationship.   

My questions are related to SQL Alchemy ORM concepts:
1) Relationships
2) Association Object pattern
3) Association Proxy


To help me grasp the concepts, I pieced together a working example of an M 
to N relation and drew a diagram showing the relationships.   The scenario 
is of a very crude appointment scheduler that someone like a realtor may 
use to keep track of who is visiting what property.  My attribute naming 
convention was chosen to facilitate our discussion.  Link: Github repo 
http://www.github.com/dowwie/sqlalchemyillustrated

There are two files of interest:

   - orm_associations.py -- a working example 
   - SQLAlchemy - Association Object Relations.pdf --  contains a diagram 
   of the M to N relationship in orm_associations.py
   

In this example, a user can be added to a meeting OR a meeting can be added 
to a user using either of these association proxies:

Meeting.attendees_proxy.append(User)
or
User.meetings_proxy.append(Meeting)

These statements create the desired outcome-- a new Appointment record 
inserted into the association table and possibly a new user or meeting 
record if none already exist.

I'd like to know what exactly is happening in the ORM when either of these 
statements is executed.
For instance, what is the association_proxy creator doing?  If I add a user 
to a meeting, the attendees_proxy creator is defined as follows:
creator=lambda user: Appointment(appointment_user_relation=user))
The creator expression appears to be creating a new Appointment object, but 
then defines the RELATIONSHIP as a User object?  I'm missing something 
here.  No idea what is going on. It seems to be all a bit too automagical to 
me.


Thanks

Darin



[sqlalchemy] 'MySQL server has gone away' on closing connection

2014-03-27 Thread Tim Tisdall
Today I found the following traceback in my logs:

2014-03-27 13:55:59,876 ERROR [sqlalchemy.pool.QueuePool _close_connection 
b'uWSGIWorker2Core14'] Exception closing connection <oursql.Connection object at 0x7fecfdf6a140>
Traceback (most recent call last):
  File "/sites/ColanderAlchemy/SQLAlchemy-0.9.3-py3.3-linux-x86_64.egg/sqlalchemy/pool.py", line 243, in _close_connection
    self._dialect.do_close(connection)
  File "/sites/ColanderAlchemy/SQLAlchemy-0.9.3-py3.3-linux-x86_64.egg/sqlalchemy/engine/default.py", line 401, in do_close
    dbapi_connection.close()
  File "connection.pyx", line 193, in oursql.Connection.close (oursqlx/oursql.c:6143)
  File "connection.pyx", line 240, in oursql.Connection.rollback (oursqlx/oursql.c:6562)
  File "connection.pyx", line 207, in oursql.Connection._raise_error (oursqlx/oursql.c:6317)
oursql.OperationalError: (2006, 'MySQL server has gone away', None)

Shouldn't SQLAlchemy closing a connection catch this type of exception and 
ignore it?  If the mysql connection has gone away, can it not be 
considered closed?

-Tim



[sqlalchemy] Re: What are the ORM mechanics when appending a new association in an M to N relationship?

2014-03-27 Thread Tim Tisdall
Have you tried turning the SQLAlchemy logging on to see the SQL queries 
that are run on your sample program?  I've found it very helpful when 
trying to debug why something is not working the way I expected.  You _may_ 
be able to do it as follows:

import logging
logging.getLogger('sqlalchemy.engine').setLevel(logging.INFO)  # change INFO to DEBUG if you want the results of the queries too
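
One caveat (this is my assumption about the logging setup, not something from the
sample program): setting the level alone won't show anything unless a handler has
been configured somewhere, so something like this may be needed first:

import logging
logging.basicConfig()   # installs a default handler so the messages actually appear
logging.getLogger('sqlalchemy.engine').setLevel(logging.INFO)

Alternatively, create_engine(..., echo=True) sets up an equivalent default logger.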

-Tim

On Thursday, 27 March 2014 09:27:42 UTC-4, Darin Gordon wrote:

 Let me preface this by saying that I researched to the best of my 
 abilities the answers to my following inquiries.  I would not ask for help 
 without first trying to investigate this on my own.  Asking through this 
 forum is the proverbial end of the line, so thanks for taking any time to 
 read and respond.

 I understand relational databases and have been bridging a knowledge gap 
 with ORMs, using SQLAlchemy.  I've struggled a bit while learning what 
 exactly is going on in the ORM when ADDING new associations in a M to N 
 relationship.   

 My questions are related to SQL Alchemy ORM concepts:
 1) Relationships
 2) Association Object pattern
 3) Association Proxy


 To help me grasp the concepts, I pieced together a working example of an M 
 to N relation and drew a diagram showing the relationships.   The scenario 
 is of a very crude appointment scheduler that someone like a realtor may 
 use to keep track of who is visiting what property.  My attribute naming 
 convention was chosen to facilitate our discussion.  *Link:   Github repo 
 http://www.github.com/dowwie/sqlalchemyillustrated*

 There are two files of interest:

- orm_associations.py -- a working example 
- SQLAlchemy - Association Object Relations.pdf --  contains a 
diagram of the M to N relationship in orm_associations.py


 In this example, a user can be added to a meeting OR a meeting can be 
 added to a user using either of these association proxies:

 Meeting.attendees_proxy.append(User)
 or
 User.meetings_proxy.append(Meeting)

 These statements create the desired outcome-- a new Appointment record 
 inserted into the association table and possibly a new user or meeting 
 record if none already exist.

 I'd like to know what exactly is happening in the ORM when either of these 
 statements is executed.
 For instance, what is the association_proxy creator doing?  If I add a 
 user to a meeting, the attendees_proxy creator is defined as follows:
 creator=lambda user: Appointment(appointment_user_relation=user))
 The creator expression appears to be creating a new Appointment object, 
 but then defines the RELATIONSHIP as a User object?  I'm missing something 
 here.  No idea what is going on. It seems to be all a bit to automagical 
 to me.


 Thanks

 Darin





[sqlalchemy] Replacing a single column in a select (Core language)

2014-03-27 Thread Josh Kuhn
I have a situation where I need to produce a select object, and then later,
one of the fields needs to be zeroed out conditionally.

so something like:

def select_ab(param):
    from_obj = join(A, B, A.c.b == B.c.b)
    return select([A.c.a, B.c.b, B.c.d], from_obj=from_obj).where(A.c.a == param)

...

sql_query = select_ab(34)

# Would like to do something like this:
if some_condition:
   sql_query = sql_query.replace_column(B.c.d, literal(0))

engine.execute(sql_query)


I tried to hack together a replace_column function using
Select.with_only_columns, but as the documentation indicates, I need the
original list of columns in order to really do what I want to do, I can't
take the select's columns. And in my case, the original select's columns
are trapped inside the function select_ab. I'd like to be able to replace a
column on any arbitrary select if the column exists in the select.

Is there a way to do what I'm trying to do? Or do I just need to keep the
original columns around?



Re: [sqlalchemy] What are the ORM mechanics when appending a new association in an M to N relationship?

2014-03-27 Thread Simon King
On Thu, Mar 27, 2014 at 1:27 PM, Darin Gordon dar...@gmail.com wrote:
 Let me preface this by saying that I researched to the best of my abilities
 the answers to my following inquiries.  I would not ask for help without
 first trying to investigate this on my own.  Asking through this forum is
 the proverbial end of the line, so thanks for taking any time to read and
 respond.

 I understand relational databases and have been bridging a knowledge gap
 with ORMs, using SQLAlchemy.  I've struggled a bit while learning what
 exactly is going on in the ORM when ADDING new associations in a M to N
 relationship.

 My questions are related to SQL Alchemy ORM concepts:
 1) Relationships
 2) Association Object pattern
 3) Association Proxy


 To help me grasp the concepts, I pieced together a working example of an M
 to N relation and drew a diagram showing the relationships.   The scenario
 is of a very crude appointment scheduler that someone like a realtor may use
 to keep track of who is visiting what property.  My attribute naming
 convention was chosen to facilitate our discussion.  Link:   Github repo

 There are two files of interest:

 orm_associations.py -- a working example
 SQLAlchemy - Association Object Relations.pdf --  contains a diagram of
 the M to N relationship in orm_associations.py


 In this example, a user can be added to a meeting OR a meeting can be added
 to a user using either of these association proxies:

 Meeting.attendees_proxy.append(User)
 or
 User.meetings_proxy.append(Meeting)

 These statements create the desired outcome-- a new Appointment record
 inserted into the association table and possibly a new user or meeting
 record if none already exist.

 I'd like to know what exactly is happening in the ORM when either of these
 statements is executed.
 For instance, what is the association_proxy creator doing?  If I add a user
 to a meeting, the attendees_proxy creator is defined as follows:
 creator=lambda user: Appointment(appointment_user_relation=user))
 The creator expression appears to be creating a new Appointment object, but
 then defines the RELATIONSHIP as a User object?  I'm missing something here.
 No idea what is going on. It seems to be all a bit to automagical to me.



The association proxy is purely a shorthand to allow you to get from
Meeting to User and back again without having to worry about the
Appointment object in between. If you didn't have the association
proxy, you would have to do something like this to add a user to a
meeting:

def add_user_to_meeting(user, meeting):
    appointment = Appointment(appointment_user_relation=user,
                              meeting_appointment_relation=meeting)
    return appointment

The association proxy hides the Appointment object, making it look
like Meetings just have a list of Users:

def add_user_to_meeting(user, meeting):
    meeting.attendees_proxy.append(user)
    # or, alternatively
    # user.meetings_proxy.append(meeting)

When you call append, it calls your creator() function, passing the
user. Your creator function creates the Appointment object and sets up
the appointment -> user side of the relationship. The association
proxy sets up the appointment -> meeting side of the relationship
itself, something like this:

# creator function does this:
appointment = Appointment(appointment_user_relation=user)
# Association proxy does this:
appointment.meeting_appointment_relation = meeting
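
If it helps to see the whole pattern in one place, here is a minimal,
self-contained sketch (the class and attribute names are my own stand-ins, not
taken from your repo, so treat it as illustrative rather than authoritative):

from sqlalchemy import Column, Integer, ForeignKey, create_engine
from sqlalchemy.orm import relationship, Session
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.ext.associationproxy import association_proxy

Base = declarative_base()

class Appointment(Base):
    __tablename__ = 'appointment'
    meeting_id = Column(Integer, ForeignKey('meeting.id'), primary_key=True)
    user_id = Column(Integer, ForeignKey('user.id'), primary_key=True)
    appointment_user_relation = relationship("User")

class Meeting(Base):
    __tablename__ = 'meeting'
    id = Column(Integer, primary_key=True)
    appointments = relationship("Appointment",
                                backref="meeting_appointment_relation")
    # append() a User; the creator wraps it in an Appointment for us
    attendees_proxy = association_proxy(
        'appointments', 'appointment_user_relation',
        creator=lambda user: Appointment(appointment_user_relation=user))

class User(Base):
    __tablename__ = 'user'
    id = Column(Integer, primary_key=True)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = Session(engine)

m = Meeting()
m.attendees_proxy.append(User())    # creator() builds the Appointment,
session.add(m)                      # the proxy attaches it to the meeting
session.commit()
print(session.query(Appointment).count())   # 1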

Hope that helps,

Simon



Re: [sqlalchemy] this result object does not return rows. it has been closed automatically

2014-03-27 Thread Michael Bayer

On Mar 27, 2014, at 9:14 AM, Ni Wesley nisp...@gmail.com wrote:

 It's not happening everytime.
 
 And I am not using mysql-python, instead, I use pymysql.
 
 So, the root cause is connection in bad state?
 How to fix or work around from sqlalchemy side?

I don’t really know, I’d need to have a short script which replicates it in 
order to identify the pattern exactly.

if it's pymysql, try another DBAPI, though this may be some artifact of the 
MySQL protocol.

as always, if you’re doing anything with threads, concurrent use of a MySQL 
connection will usually lead to problems like this.  but there can be other 
reasons too.
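
To sketch what I mean about keeping the connection usage local to one thread
(this is only an illustration adapted from the dbquery() helper in the quoted
code, assuming the same module-level engine; it isn't a tested fix):

from sqlalchemy.orm import scoped_session, sessionmaker

Session = scoped_session(sessionmaker(bind=engine))

def dbquery_rows(_table, whereclause):
    session = Session()
    try:
        # fetch all rows while the session/connection is still checked out,
        # instead of handing a live ResultProxy back to the caller
        return session.execute(_table.select().where(whereclause)).fetchall()
    finally:
        Session.remove()   # return the connection to the pool before leaving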




 
 Wesley
 
 On Thursday, March 27, 2014 at 8:58:40 PM UTC+8, Michael Bayer wrote:
 when the DBAPI cursor has no .description object, it is determined to not be 
 a result-row returning object.   The MySQLDB DBAPI has been observed to 
 occasionally have issues in this area, when a connection gets into a bad 
 state. There are likely patterns in how you’re calling it that lead it to 
 have this issue but you might try a different DBAPI like 
 mysql-connector-python just to see what’s different.
 
 
 
 
 On Mar 27, 2014, at 7:41 AM, Ni Wesley nis...@gmail.com wrote:
 
 Hi all,
   Today I get an error as title.
 I use sqlalchemy + mysql.
 Here is my code snippet:
 def dbquery(_table,whereclause):
 
 try:
 
 #_table=Table(tablename, metadata, autoload=True)
 
 #_table = tables[tablename]
 
 i=_table.select().where(whereclause) 
 
 if direct_engine:  #direct_engine is True
 
 res = engine.execute(i)
 
 return res
 
 else:
 
 session = scoped_session(sessionmaker(bind=engine))
 
 res = session.execute(i)
 
 return res
 
 session.close()
 
 except Exception,e:
 
 #dba_logger.log(40,'Exception when dbwriter:%s' % str(e))
 
 #dba_logger.log(20,'Exception detail:%s' % str(kwargs))
 
 exctrace('db','1','Error happened when querying 
 db',dba_logger,'Exception when dbquery:%s' % str(e),'Exception detail:%s' % 
 str(whereclause))
 
 #session.rollback()
 
 if not direct_engine:
 
 session.close()
 
 Here is snippet in another file involving dbquery:
 
 try:
 
  res = 
 dbquery(tables['sessions_details'],whereclause=and_(tables['sessions_details'].c.app_key==self.app_key,tables['sessions_details'].c.device_token==self._devicetoken))
 
 except Exception,e:
 
 exctrace('db','1','Error happened when querying 
 db',dba_logger,'Exception when query session_details:%s' % str(e),'Exception 
 detail:appkey is %s,devicetoken is %s' % (self.app_key,self._devicetoken))
 
 self.read_message()
 
 return   
 
 if res is None:
 
logger.log(40,'When query connection,mysql has gone or something, just 
 skip db and go-on')
 
 #here need to justify 0 or 1, later on
 
 self.status='0'
 
 self.read_message()
 
 return
 
 if res.first() is None:
 
if json_obj['action'] == 'reg':
 
 So, the line in pink above raises the exception.
 
 Could anyone give some suggestion how this happened?
 
 
 
 Thanks.
 
 Wesley
 
 


Re: [sqlalchemy] 'MySQL server has gone away' on closing connection

2014-03-27 Thread Michael Bayer

On Mar 27, 2014, at 10:03 AM, Tim Tisdall tisd...@gmail.com wrote:

 Today I found the following traceback in my logs:
 
 2014-03-27 13:55:59,876 ERROR [sqlalchemy.pool.QueuePool _close_connection 
 b'uWSGIWorker2Core14'] Exception closing connection oursql.Connection object 
 at 0x7fecfdf6a140
 Traceback (most recent call last):
   File 
 /sites/ColanderAlchemy/SQLAlchemy-0.9.3-py3.3-linux-x86_64.egg/sqlalchemy/pool.py,
  line 243, in _close_connection
 self._dialect.do_close(connection)
   File 
 /sites/ColanderAlchemy/SQLAlchemy-0.9.3-py3.3-linux-x86_64.egg/sqlalchemy/engine/default.py,
  line 401, in do_close
 dbapi_connection.close()
   File connection.pyx, line 193, in oursql.Connection.close 
 (oursqlx/oursql.c:6143)
   File connection.pyx, line 240, in oursql.Connection.rollback 
 (oursqlx/oursql.c:6562)
   File connection.pyx, line 207, in oursql.Connection._raise_error 
 (oursqlx/oursql.c:6317)
 oursql.OperationalError: (2006, 'MySQL server has gone away', None)
 
 Shouldn't SQLAlchemy closing a connection catch this type of exception and 
 ignore it?  If the mysql connection has gone away, can it not be considered 
 closed?

there's a difference between "ignored" and "silent". Here's the code:

def _close_connection(self, connection):
    self.logger.debug("Closing connection %r", connection)
    try:
        self._dialect.do_close(connection)
    except (SystemExit, KeyboardInterrupt):
        raise
    except:
        self.logger.error("Exception closing connection %r",
                          connection, exc_info=True)

as you can see, the error is ignored; we just log that something unexpected and 
possibly problematic has happened. I don't think this is necessarily a bad 
thing.





Re: [sqlalchemy] Replacing a single column in a select (Core language)

2014-03-27 Thread Michael Bayer

On Mar 27, 2014, at 10:08 AM, Josh Kuhn deontologic...@gmail.com wrote:

 I have a situation where I need to produce a select object, and then later, 
 one of the fields needs to be zeroed out conditionally.
 
 so something like:
 
 def select_ab(param):
 from_obj = join(A, B, A.c.b == B.c.b)
 return select([A.c.a, B.c.b, B.c.d], from_obj=from_obj).where(A.c.a == 
 param)
 
 ...
 
 sql_query = select_ab(34)
 
 # Would like to do something like this:
 if some_condition:
sql_query = sql_query.replace_column(B.c.d, literal(0))
 
 engine.execute(sql_query)
 
 
 I tried to hack together a replace_column function using 
 Select.with_only_columns, but as the documentation indicates, I need the 
 original list of columns in order to really do what I want to do, I can't 
 take the select's columns. And in my case, the original select's columns are 
 trapped inside the function select_ab. I'd like to be able to replace a 
 column on any arbitrary select if the column exists in the select.
 
 Is there a way to do what I'm trying to do? Or do I just need to keep the 
 original columns around?

the original columns are in the select() object.   They're first in their 
original form in a list called _raw_columns, that is, if you had 
select([sometable, somecol, someothercol]) it would be literally those three 
elements, and then there is a public accessor called inner_columns which is 
derived directly from _raw_columns, it just expands a selectable (e.g. 
something with a .c. collection) into individual columns.

So it should be feasible to say something equivalent to:  
select.with_only_columns([(literal(0) if c is my_special_thing else c) for c in 
select.inner_columns])
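
To make that concrete, here's a rough runnable version of the idea (the tables
below are made up; adapt the "is my_special_thing" test to your real column
object):

from sqlalchemy import Table, Column, Integer, MetaData, select, literal

meta = MetaData()
A = Table('a', meta, Column('a', Integer), Column('b', Integer))
B = Table('b', meta, Column('b', Integer), Column('d', Integer))

q = select([A.c.a, B.c.b, B.c.d]).select_from(A.join(B, A.c.b == B.c.b))

# swap B.c.d for a literal 0, keeping every other column as-is
q2 = q.with_only_columns(
    [literal(0).label('d') if c is B.c.d else c for c in q.inner_columns])
print(q2)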




Re: [sqlalchemy] Replacing a single column in a select (Core language)

2014-03-27 Thread Josh Kuhn
That works great. My one stumbling block was that the column objects
couldn't be compared directly, so I compared their string representation (I
had two separate calls to A.alias() which made `is` not a valid comparison
in the comprehension)


On Thu, Mar 27, 2014 at 11:49 AM, Michael Bayer mike...@zzzcomputing.com wrote:


 On Mar 27, 2014, at 10:08 AM, Josh Kuhn deontologic...@gmail.com wrote:

 I have a situation where I need to produce a select object, and then
 later, one of the fields needs to be zeroed out conditionally.

 so something like:

 def select_ab(param):
 from_obj = join(A, B, A.c.b == B.c.b)
 return select([A.c.a, B.c.b, B.c.d], from_obj=from_obj).where(A.c.a ==
 param)

 ...

 sql_query = select_ab(34)

 # Would like to do something like this:
 if some_condition:
sql_query = sql_query.replace_column(B.c.d, literal(0))

 engine.execute(sql_query)


 I tried to hack together a replace_column function using
 Select.with_only_columns, but as the documentation indicates, I need the
 original list of columns in order to really do what I want to do, I can't
 take the select's columns. And in my case, the original select's columns
 are trapped inside the function select_ab. I'd like to be able to replace a
 column on any arbitrary select if the column exists in the select.

 Is there a way to do what I'm trying to do? Or do I just need to keep the
 original columns around?


 the original columns are in the select() object.   They’re first in their
 original form in a list called _raw_columns, that is, if you had
 select([sometable, somecol, someothercol]) it would be literally those
 three elements, and then there is a public accessor called inner_columns
 which is derived directly from _raw_columns, it just expands a selectable
 (e.g. something with a .c. collection) into individual columns.

 So it should be feasible to say something equivalent to:
  select.with_only_columns([(literal(0) if c is my_special_thing else c) for
 c in select.inner_columns])




Re: [sqlalchemy] 'MySQL server has gone away' on closing connection

2014-03-27 Thread Tim Tisdall
Ah, yeah, that seems like a reasonable way of handling it.

I just don't get why I'm getting those exceptions, though, as I have it set
to recycle pool connections every 4 hrs, whereas the mysql setting is to
expire connections after 8 hrs.  As far as I understand it, I shouldn't be
getting messages like that.  Or are the pool connections recycled just
before the next use, rather than as soon as 4 hrs elapse?
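
(For reference, the setting I'm talking about is just the pool_recycle argument;
the URL below is a placeholder, not my real DSN:)

from sqlalchemy import create_engine

engine = create_engine(
    'mysql+oursql://user:pass@host/db',
    pool_recycle=4 * 3600,   # 4 hours, in seconds
)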

-Tim


On 27 March 2014 11:44, Michael Bayer mike...@zzzcomputing.com wrote:


 On Mar 27, 2014, at 10:03 AM, Tim Tisdall tisd...@gmail.com wrote:

  Today I found the following traceback in my logs:
 
  2014-03-27 13:55:59,876 ERROR [sqlalchemy.pool.QueuePool
 _close_connection b'uWSGIWorker2Core14'] Exception closing connection
 oursql.Connection object at 0x7fecfdf6a140
  Traceback (most recent call last):
File
 /sites/ColanderAlchemy/SQLAlchemy-0.9.3-py3.3-linux-x86_64.egg/sqlalchemy/pool.py,
 line 243, in _close_connection
  self._dialect.do_close(connection)
File
 /sites/ColanderAlchemy/SQLAlchemy-0.9.3-py3.3-linux-x86_64.egg/sqlalchemy/engine/default.py,
 line 401, in do_close
  dbapi_connection.close()
File connection.pyx, line 193, in oursql.Connection.close
 (oursqlx/oursql.c:6143)
File connection.pyx, line 240, in oursql.Connection.rollback
 (oursqlx/oursql.c:6562)
File connection.pyx, line 207, in oursql.Connection._raise_error
 (oursqlx/oursql.c:6317)
  oursql.OperationalError: (2006, 'MySQL server has gone away', None)
 
  Shouldn't SQLAlchemy closing a connection catch this type of exception
 and ignore it?  If the mysql connection has gone away, can it not be
 considered closed?

 there's a difference between ignore and silent. Here's the code:

 def _close_connection(self, connection):
 self.logger.debug(Closing connection %r, connection)
 try:
 self._dialect.do_close(connection)
 except (SystemExit, KeyboardInterrupt):
 raise
 except:
 self.logger.error(Exception closing connection %r,
 connection, exc_info=True)

 as you can see, the error is ignored, we just log that something
 unexpected and possibly problematic has happened.I don't think this is
 necessarily a bad thing.





Re: [sqlalchemy] 'MySQL server has gone away' on closing connection

2014-03-27 Thread Michael Bayer
what this looks like is that the connection is closed by the server due to the 
MySQL timeout, then SQLAlchemy's pool, when accessed, goes to recycle those 
connections, and does a close() on the existing one.  Since they've been closed 
server side, the close() method raises.  The pool recycle feature of course 
otherwise knows nothing about those stale connections and can't assume 
they're already closed, it just knows that connection has to be closed out.
   
I can see how this particular exception would be annoying and I'm not sure if 
other DBAPIs besides OurSQL raise under this condition.   I have a feeling I'd 
have heard about this sooner if MySQLdb did it also.   Under the realm of 
"we're closing a connection we know to have been invalidated and/or stale", it 
seems like we'd totally suppress even any printing of the error because we 
already know this connection is suspect.   But at the same time, it always 
makes me nervous to 100% ignore any error message entirely. Here it might 
be nice if there wasn't the whole stack trace, and maybe if the message 
included context: "exception closing stale/invalidated connection ignored: x".

if you want to propose a pullreq along those lines that could be helpful (e.g. 
pass a flag into the _close_connection()  routine indicating the style of error 
reporting).



On Mar 27, 2014, at 1:24 PM, Tim Tisdall tisd...@gmail.com wrote:

 Ah, yeah, that seems like a reasonable way of handling it.
 
 I just don't get why I'm getting those exceptions, though, as I have it set 
 to recycle pool connections every 4 hrs where the mysql setting is to expire 
 connections after 8hrs.  As far as I understand it, I shouldn't be getting 
 messages like that.  Or are the pool connections recycled just before the 
 next use and not after 4hrs elapse?
 
 -Tim
 
 
 On 27 March 2014 11:44, Michael Bayer mike...@zzzcomputing.com wrote:
 
 On Mar 27, 2014, at 10:03 AM, Tim Tisdall tisd...@gmail.com wrote:
 
  Today I found the following traceback in my logs:
 
  2014-03-27 13:55:59,876 ERROR [sqlalchemy.pool.QueuePool _close_connection 
  b'uWSGIWorker2Core14'] Exception closing connection oursql.Connection 
  object at 0x7fecfdf6a140
  Traceback (most recent call last):
File 
  /sites/ColanderAlchemy/SQLAlchemy-0.9.3-py3.3-linux-x86_64.egg/sqlalchemy/pool.py,
   line 243, in _close_connection
  self._dialect.do_close(connection)
File 
  /sites/ColanderAlchemy/SQLAlchemy-0.9.3-py3.3-linux-x86_64.egg/sqlalchemy/engine/default.py,
   line 401, in do_close
  dbapi_connection.close()
File connection.pyx, line 193, in oursql.Connection.close 
  (oursqlx/oursql.c:6143)
File connection.pyx, line 240, in oursql.Connection.rollback 
  (oursqlx/oursql.c:6562)
File connection.pyx, line 207, in oursql.Connection._raise_error 
  (oursqlx/oursql.c:6317)
  oursql.OperationalError: (2006, 'MySQL server has gone away', None)
 
  Shouldn't SQLAlchemy closing a connection catch this type of exception and 
  ignore it?  If the mysql connection has gone away, can it not be 
  considered closed?
 
 there's a difference between ignore and silent. Here's the code:
 
 def _close_connection(self, connection):
 self.logger.debug(Closing connection %r, connection)
 try:
 self._dialect.do_close(connection)
 except (SystemExit, KeyboardInterrupt):
 raise
 except:
 self.logger.error(Exception closing connection %r,
 connection, exc_info=True)
 
 as you can see, the error is ignored, we just log that something unexpected 
 and possibly problematic has happened.I don't think this is necessarily a 
 bad thing.
 
 
 

Re: [sqlalchemy] What are the ORM mechanics when appending a new association in an M to N relationship?

2014-03-27 Thread Michael Bayer

On Mar 27, 2014, at 9:27 AM, Darin Gordon dar...@gmail.com wrote:

 Let me preface this by saying that I researched to the best of my abilities 
 the answers to my following inquiries.  I would not ask for help without 
 first trying to investigate this on my own.  Asking through this forum is the 
 proverbial end of the line, so thanks for taking any time to read and respond.
 
 I understand relational databases and have been bridging a knowledge gap with 
 ORMs, using SQLAlchemy.  I've struggled a bit while learning what exactly is 
 going on in the ORM when ADDING new associations in a M to N relationship.   
 
 My questions are related to SQL Alchemy ORM concepts:
   1) Relationships
   2) Association Object pattern
   3) Association Proxy
 
 
 To help me grasp the concepts, I pieced together a working example of an M to 
 N relation and drew a diagram showing the relationships.   The scenario is of 
 a very crude appointment scheduler that someone like a realtor may use to 
 keep track of who is visiting what property.  My attribute naming convention 
 was chosen to facilitate our discussion.  Link:   Github repo
 
 There are two files of interest:
 orm_associations.py -- a working example 
 SQLAlchemy - Association Object Relations.pdf --  contains a diagram of the 
 M to N relationship in orm_associations.py
 
 In this example, a user can be added to a meeting OR a meeting can be added 
 to a user using either of these association proxies:
 
   Meeting.attendees_proxy.append(User)
   or
   User.meetings_proxy.append(Meeting)
 
 These statements create the desired outcome-- a new Appointment record 
 inserted into the association table and possibly a new user or meeting record 
 if none already exist.
 
 I'd like to know what exactly is happening in the ORM when either of these 
 statements is executed.
   
 For instance, what is the association_proxy creator doing?  If I add a user 
 to a meeting, the attendees_proxy creator is defined as follows:
   creator=lambda user: 
 Appointment(appointment_user_relation=user))
   
 The creator expression appears to be creating a new Appointment object, but 
 then defines the RELATIONSHIP as a User object?  I'm missing something here.  
 No idea what is going on.  It seems to be all a bit to automagical to me.

I'd note that when you append an item to a collection, or do anything else with 
your objects, not very much happens inside the ORM except that your object 
records that something changed.  It's only at flush time that SQL types of 
things actually occur.   This is under the realm of understanding how the unit 
of work works, and for that I'd recommend my talk "The SQLAlchemy Session - In 
Depth", which is at 
http://techspot.zzzeek.org/2012/11/14/pycon-canada-the-sqlalchemy-session-in-depth/,
 and in particular you can run the linked html demo yourself, which will 
painstakingly graphically draw out every single state change within the 
session/unit of work/object graph if you really want to step through it.
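
As an editorial aside (not from the talk or the post), a minimal sketch of this point using a throwaway in-memory SQLite database: the add() only records pending state, and the INSERT is emitted when the session flushes.

from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class Thing(Base):
    __tablename__ = 'thing'
    id = Column(Integer, primary_key=True)
    name = Column(String(50))

engine = create_engine('sqlite://', echo=True)   # echo=True shows when SQL is actually emitted
Base.metadata.create_all(engine)

session = sessionmaker(bind=engine)()
session.add(Thing(name='pending only'))          # no SQL yet; the change is just recorded
session.flush()                                  # the INSERT happens here
session.commit()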

then let's talk about the association proxy.  this is an entirely different 
subject; the association proxy has relatively little that's specific to the 
ORM, and is just working with Python attributes in such a way as to automate 
the middle.  The following code example illustrates the main idea of what 
the association proxy does, with no ORM or SQLAlchemy at all.  You can copy this 
to a .py script and run it.  What we're making use of here are Python magic 
methods; some introductory material on the ones specific to sequences is at: 
http://www.rafekettler.com/magicmethods.html#sequence .  If you understand 
this script then you know 99% of what the association proxy is doing:

class AssocProxy(object):
    def __init__(self, collection_name, proxied):
        self.collection_name = collection_name
        self.proxied = proxied

    def __get__(self, obj, cls):
        if obj is not None:
            return ProxyList(self, getattr(obj, self.collection_name))

    def _makeitem(self, item):
        return Association(**{self.proxied: item})


class ProxyList(object):
    def __init__(self, parent, collection):
        self.parent = parent
        self.collection = collection

    def __getitem__(self, index):
        item = self.collection[index]
        item = getattr(item, self.parent.proxied)
        return item

    def append(self, item):
        self.collection.append(self.parent._makeitem(item))

    def __setitem__(self, index, item):
        self.collection[index] = self.parent._makeitem(item)

    def __iter__(self):
        return (getattr(item, self.parent.proxied) for item in self.collection)


class Parent(object):
    def __init__(self):
        self.associations = []

    children = AssocProxy('associations', 'element')


class Association(object):
    def __init__(self, element):
        self.element = element
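
A hedged usage sketch (editorial, not part of the original post) exercising the plain-Python classes above:

p = Parent()
p.children.append('some element')    # wraps the element in an Association
print(p.associations[0].element)     # -> 'some element'
print(list(p.children))              # -> ['some element']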

[sqlalchemy] Stored-procedure commit woes

2014-03-27 Thread Dustin Oprea
I'm having a problem where I'm doing writes to MySQL that aren't being 
committed. I'm obviously calling the procedure from an execute(). I'm okay 
with explicit commits, but since all of my data logic is in procedures (all 
operations are one step), autocommit is preferable. I'm using 0.9.1.

Until recently, the application was only doing SELECTs, so I hadn't noticed.

There seem to be dozens of offered solutions online, but none of them work.

I'm testing using the following procedure:

delimiter //

CREATE PROCEDURE test_func()
BEGIN
SELECT @@AUTOCOMMIT `autocommit`;
END//

delimiter ;


I'm doing my tests from the command-line, in a simple script.

- I've tried setting the autocommit to True at the engine level, but I get 
the "Commands out of sync" error:

engine = create_engine(DSN, echo=True).execution_options(autocommit=True)


- I've tried setting the autocommit at the session level, but the result 
indicates that autocommit is 0 (the option was ignored):

Code:

Session = sessionmaker(bind=engine, autocommit=True)
session = Session()

query = 'CALL test_func()'

r = session.execute(text(query, autocommit=True))

Debug output:

2014-03-27 14:45:36,956 INFO sqlalchemy.engine.base.Engine SELECT DATABASE()
2014-03-27 14:45:36,956 INFO sqlalchemy.engine.base.Engine ()
2014-03-27 14:45:37,293 INFO sqlalchemy.engine.base.Engine SHOW VARIABLES 
LIKE 'character_set%%'
2014-03-27 14:45:37,293 INFO sqlalchemy.engine.base.Engine ()
2014-03-27 14:45:37,350 INFO sqlalchemy.engine.base.Engine SHOW VARIABLES 
LIKE 'sql_mode'
2014-03-27 14:45:37,350 INFO sqlalchemy.engine.base.Engine ()
2014-03-27 14:45:37,416 INFO sqlalchemy.engine.base.Engine BEGIN (implicit)
2014-03-27 14:45:37,417 INFO sqlalchemy.engine.base.Engine CALL test_func()
2014-03-27 14:45:37,417 INFO sqlalchemy.engine.base.Engine ()


- I've tried setting the autocommit on a text(), but it's ignored (with the 
same result as the above):

r = session.execute(text(query, autocommit=True))

- I've tried using an execute() on the engine, but the autocommit either got 
ignored or I got the "out of sync" error (like the above); I can't remember.

- I've also tried to do the actual commits, both directly and using the 
context manager, but I get the "out of sync" message every time.

This is driving me crazy. Can anyone give me a short, working example of 
calling a stored procedure with autocommit, as well as with an explicit 
commit?


Dustin



Re: [sqlalchemy] Stored-procedure commit woes

2014-03-27 Thread Michael Bayer

On Mar 27, 2014, at 3:01 PM, Dustin Oprea myselfasun...@gmail.com wrote:

 I'm having a problem where I'm doing writes to MySQL that aren't being 
 committed. I'm obviously calling the procedure from an execute(). I'm okay 
 with explicit commits, but since all of my data logic is in procedures (all 
 operations are one step), autocommit is preferable. I'm using 0.9.1 .
 
 Until recently, the application was only doing SELECTs, so I hadn't noticed.
 
 There seem to be dozens of offered solutions online, but none of them work.

OK, you seem to have found the autocommit flag on text(), this is the correct 
approach in this case.

I can't get your stored proc to run, as creating a proc on my machine produces 
some "thread stack overrun" error I don't have the time to figure out, but a 
simple test of just a SELECT with autocommit=True illustrates the COMMIT being 
called in the log:

from sqlalchemy import *

e = create_engine("mysql://scott:tiger@localhost/test", echo=True)

r = e.execute(text('SELECT 1', autocommit=True))

output:

2014-03-27 15:08:26,951 INFO sqlalchemy.engine.base.Engine SELECT 1
2014-03-27 15:08:26,951 INFO sqlalchemy.engine.base.Engine ()
2014-03-27 15:08:26,951 INFO sqlalchemy.engine.base.Engine COMMIT

 - I've tried setting the autocommit to True at the engine level, but I get 
 the Commands out of sync error:
 
 engine = create_engine(DSN, echo=True).execution_options(autocommit=True)

can't reproduce - test script:

from sqlalchemy import *

e = create_engine("mysql://scott:tiger@localhost/test", 
echo=True).execution_options(autocommit=True)

r = e.execute(text('SELECT 1'))

output, includes the COMMIT as expected:

2014-03-27 15:11:14,324 INFO sqlalchemy.engine.base.OptionEngine SELECT 1
2014-03-27 15:11:14,324 INFO sqlalchemy.engine.base.OptionEngine ()
2014-03-27 15:11:14,324 INFO sqlalchemy.engine.base.OptionEngine COMMIT


 
 - I've tried setting the autocommit at the session level, but the result 
 indicates that autocommit is 0 (the option was ignored):

session autocommit is not the same thing here, it just means that the session 
won't implicitly call begin().  it doesn't imply commit for all statements.
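
As a hedged illustration of that distinction (editorial sketch, 0.9-era API, placeholder DSN): with autocommit=True the Session just stops beginning transactions implicitly, so you still demarcate them yourself.

from sqlalchemy import create_engine, text
from sqlalchemy.orm import sessionmaker

engine = create_engine('mysql://scott:tiger@localhost/test')   # placeholder DSN
Session = sessionmaker(bind=engine, autocommit=True)
session = Session()

session.begin()                            # no implicit BEGIN; start one explicitly
session.execute(text('CALL test_func()'))
session.commit()                           # commits the transaction begun above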

 
 - I've tried setting the autocommit on a text(), but it's ignored (with the 
 same result as the above):
 
 r = session.execute(text(query, autocommit=True))

can't reproduce, as above

 
 - I've also tried to do the actual commits, both directly or using the 
 context-manager, but I get the out of sync message every time.

can't reproduce that either; obviously, there should be no "commands out of 
sync" error.  whatever is going on to cause that is likely related to what the 
actual problem is.  In fact it seems like the attempts you've made to do the 
*correct* thing are where you're getting that issue, meaning your database or 
MySQL DBAPI is not able to run your procedure correctly.

I will note that I'm not able to run a MySQL stored procedure here with 
MySQL-python, but I get a different error.  If I do this:

e.execute("CALL test_func()")

or if I do this (e.g. use MySQL-python directly):

conn = e.connect()
curs = conn.connection.cursor()
curs.callproc("test_func")

I get:

2014-03-27 15:14:03,360 INFO sqlalchemy.engine.base.Engine ()
2014-03-27 15:14:03,361 INFO sqlalchemy.engine.base.Engine COMMIT
Traceback (most recent call last):
  File "test.py", line 17, in <module>
    curs.callproc("test_func")
  File "build/bdist.macosx-10.4-x86_64/egg/MySQLdb/cursors.py", line 272, in callproc
  File "build/bdist.macosx-10.4-x86_64/egg/MySQLdb/cursors.py", line 316, in _query
  File "build/bdist.macosx-10.4-x86_64/egg/MySQLdb/cursors.py", line 280, in _do_query
_mysql_exceptions.OperationalError: (1436, "Thread stack overrun:  11776 bytes 
used of a 131072 byte stack, and 128000 bytes needed.  Use 'mysqld 
--thread_stack=#' to specify a bigger stack.")


In your case, you should use a simple MySQL-python script first, get the SP to 
run and commit, without any "out of sync" messages.   I think that is the 
actual problem you're having.




[sqlalchemy] Re: What are the ORM mechanics when appending a new association in an M to N relationship?

2014-03-27 Thread Darin Gordon
Thank you, everyone, for your extremely helpful responses.  You really 
helped clear this matter up.



On Thursday, March 27, 2014 9:27:42 AM UTC-4, Darin Gordon wrote:

 Let me preface this by saying that I researched to the best of my 
 abilities the answers to my following inquiries.  I would not ask for help 
 without first trying to investigate this on my own.  Asking through this 
 forum is the proverbial end of the line, so thanks for taking any time to 
 read and respond.

 I understand relational databases and have been bridging a knowledge gap 
 with ORMs, using SQLAlchemy.  I've struggled a bit while learning what 
 exactly is going on in the ORM when ADDING new associations in an M to N 
 relationship.

 My questions are related to SQLAlchemy ORM concepts:
 1) Relationships
 2) Association Object pattern
 3) Association Proxy


 To help me grasp the concepts, I pieced together a working example of an M 
 to N relation and drew a diagram showing the relationships.   The scenario 
 is of a very crude appointment scheduler that someone like a realtor may 
 use to keep track of who is visiting what property.  My attribute naming 
 convention was chosen to facilitate our discussion.  Link: Github repo 
 (http://www.github.com/dowwie/sqlalchemyillustrated)

 There are two files of interest:

- orm_associations.py -- a working example 
- SQLAlchemy - Association Object Relations.pdf --  contains a 
diagram of the M to N relationship in orm_associations.py


 In this example, a user can be added to a meeting OR a meeting can be 
 added to a user using either of these association proxies:

 Meeting.attendees_proxy.append(User)
 or
 User.meetings_proxy.append(Meeting)

 These statements create the desired outcome-- a new Appointment record 
 inserted into the association table and possibly a new user or meeting 
 record if none already exist.

 I'd like to know what exactly is happening in the ORM when either of these 
 statements is executed.
 For instance, what is the association_proxy creator doing?  If I add a 
 user to a meeting, the attendees_proxy creator is defined as follows:
 creator=lambda user: Appointment(appointment_user_relation=user))
 The creator expression appears to be creating a new Appointment object, 
 but then defines the RELATIONSHIP as a User object?  I'm missing something 
 here.  No idea what is going on. It seems to be all a bit too automagical 
 to me.


 Thanks

 Darin





Re: [sqlalchemy] Stored-procedure commit woes

2014-03-27 Thread Dustin Oprea
Thanks, Mike.


On Thursday, March 27, 2014 3:16:27 PM UTC-4, Michael Bayer wrote:

...
 

 OK, you seem to have found the autocommit flag on text(), this is the 
 correct approach in this case.

 I can’t get your stored proc to run as creating a proc on my machine 
 produces some “thread stack overrun” error I don’t have the time to figure 
 out, but a simple test of just a SELECT with autocommit=True illustrates 
 the COMMIT being called in the log:


...
 

 can’t reproduce - test script:

 from sqlalchemy import *

 e = create_engine("mysql://scott:tiger@localhost/test", 
 echo=True).execution_options(autocommit=True)

 r = e.execute(text('SELECT 1'))

 output, includes the COMMIT as expected:

 2014-03-27 15:11:14,324 INFO sqlalchemy.engine.base.OptionEngine SELECT 1
 2014-03-27 15:11:14,324 INFO sqlalchemy.engine.base.OptionEngine ()
 2014-03-27 15:11:14,324 INFO sqlalchemy.engine.base.OptionEngine COMMIT


...
 

 In your case, you should use a simple MySQL-python script first, get the 
 SP to run and commit, without any “out of sync” messages.   I think that is 
 the actual problem you’re having.



A SELECT @@AUTOCOMMIT had the same result. However, the same query issued 
through MySQLdb directly, along with a call to autocommit(), worked exactly as 
expected. I should've mentioned that I'm calling into Amazon [RDS].

Let:

query = 'SELECT @@AUTOCOMMIT'

dsn = 'DSN string'


SQLAlchemy code:

def alchemy_test():
    e = create_engine(dsn, echo=True)

    Session = sessionmaker()
    Session.configure(bind=e)
    session = Session()

    r = session.execute(text(query, autocommit=True))
    print(list(r))


Result:

2014-03-27 17:47:16,215 INFO sqlalchemy.engine.base.Engine SELECT DATABASE()
2014-03-27 17:47:16,215 INFO sqlalchemy.engine.base.Engine ()
2014-03-27 17:47:16,548 INFO sqlalchemy.engine.base.Engine SHOW VARIABLES 
LIKE 'character_set%%'
2014-03-27 17:47:16,548 INFO sqlalchemy.engine.base.Engine ()
2014-03-27 17:47:16,604 INFO sqlalchemy.engine.base.Engine SHOW VARIABLES 
LIKE 'sql_mode'
2014-03-27 17:47:16,604 INFO sqlalchemy.engine.base.Engine ()
2014-03-27 17:47:16,670 INFO sqlalchemy.engine.base.Engine BEGIN (implicit)
2014-03-27 17:47:16,670 INFO sqlalchemy.engine.base.Engine SELECT 
@@AUTOCOMMIT
2014-03-27 17:47:16,670 INFO sqlalchemy.engine.base.Engine ()
[(0L,)]


Direct example:

def direct_test():
    import MySQLdb
    conn = MySQLdb.connect(host='db.host.com', user='abc', passwd='def',
                           db='ghi', port=3307)

    conn.autocommit(True)

    c = conn.cursor()
    c.execute(query)

    print(c.fetchone())


Result:

(1L,)

Any idea why SA is falling down on this?


Dustin



Re: [sqlalchemy] Stored-procedure commit woes

2014-03-27 Thread Michael Bayer

On Mar 27, 2014, at 5:51 PM, Dustin Oprea myselfasun...@gmail.com wrote:

 Thanks, Mike.
 
 
 Any idea why SA is falling down on this?
 
 

this is not a SQLAlchemy issue, it is an issue with the DBAPI you're using.  If 
you're using MySQL-python, you need to work up a test script using only 
MySQL-python directly, and figure out what's going on.  you may need to use 
another DBAPI like mysql-connector-python (which is the official DBAPI of MySQL 
now).

if you can show me a MySQL-python script that works, then SQLAlchemy should be 
working too because it doesn't have any special behavior here.
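
For reference, a hedged sketch of what switching the DBAPI looks like on the SQLAlchemy side (credentials and host are placeholders); only the driver portion of the URL changes:

from sqlalchemy import create_engine

# mysql+mysqlconnector selects mysql-connector-python instead of MySQL-python
engine = create_engine('mysql+mysqlconnector://scott:tiger@localhost/test', echo=True)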





Re: [sqlalchemy] Stored-procedure commit woes

2014-03-27 Thread Michael Bayer

On Mar 27, 2014, at 6:43 PM, Michael Bayer mike...@zzzcomputing.com wrote:

 
 On Mar 27, 2014, at 5:51 PM, Dustin Oprea myselfasun...@gmail.com wrote:
 
 Thanks, Mike.
 
 
 Any idea why SA is falling down on this?
 
 
 
 this is not a SQLAlchemy issue, it is an issue with the DBAPI you're using.  
 If you're using MySQL-python, you need to work up a test script using only 
 MySQL-python directly, and figure out what's going on.  you may need to use 
 another DBAPI like mysql-connector-python (which is the official DBAPI of 
 MySQL now).
 
 if you can show me a MySQL-python script that works, then SQLAlchemy should 
 be working too because it doesn't have any special behavior here.

oh, you had one at the bottom there, let me look, sorry.




Re: [sqlalchemy] Stored-procedure commit woes

2014-03-27 Thread Michael Bayer

On Mar 27, 2014, at 5:51 PM, Dustin Oprea myselfasun...@gmail.com wrote:

 
 Direct example:
 
 def direct_test():
 import MySQLdb
 conn = MySQLdb.connect(host='db.host.com', user='abc', passwd=def, 
 db=ghi, port=3307)
 
 conn.autocommit(True)
 
 c = conn.cursor()
 c.execute(query)
 
 print(c.fetchone())
 
 Result:
 
 (1L,)
 
 

One thing I don't understand is: does your *actual* stored procedure need to 
call SELECT @@AUTOCOMMIT?  Because that may be part of the problem.

Above, SQLAlchemy has no direct support for conn.autocommit(), which is not 
part of the DBAPI - you can enable this if you want using an event, but it 
would be better if commit() just worked as expected.  

I would try removing the conn.autocommit() part and just try calling 
conn.commit(). That would reveal whatever bugs are in play, and I'd look to 
see if a DBAPI like mysql-connector-python, which is now the official MySQL 
DBAPI, has the same problem.
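
As a hedged sketch of that suggestion (editorial; the connection parameters are the placeholders from the earlier post): call the procedure with plain MySQL-python, drain the extra result set a CALL produces, and commit explicitly.

import MySQLdb

conn = MySQLdb.connect(host='db.host.com', user='abc', passwd='def', db='ghi', port=3307)
cur = conn.cursor()
cur.execute("CALL test_func()")
print(cur.fetchone())
while cur.nextset():       # a CALL returns an extra status result set; drain it,
    pass                   # otherwise MySQL can report "commands out of sync"
cur.close()
conn.commit()              # explicit commit instead of the non-standard autocommit(True)
conn.close()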

if you really have to turn this non-standard feature on, it needs to be 
across the board for that particular engine, and would be like this:

from sqlalchemy import event

@event.listens_for(engine, "connect")
def conn(conn, rec):
    conn.autocommit(True)








Re: [sqlalchemy] Stored-procedure commit woes

2014-03-27 Thread Michael Bayer

On Mar 27, 2014, at 6:50 PM, Michael Bayer mike...@zzzcomputing.com wrote:

 
 
 if you really have to turn on this non-standard feature on, it needs to be 
 across the board for that particular engine, and would be like this:
 
 from sqlalchemy import event
 
 @event.listens_for(engine, connect)
 def conn(conn, rec):
 conn.autocommit(True)


correction, if you want it per connection, then like this:

conn = engine.connect()
conn.connection.autocommit(True)

conn.execute("whatever")

conn.connection.autocommit(False)




[sqlalchemy] Self-join and autoload

2014-03-27 Thread thatsanicehatyouhave
Hi,

I'm trying to configure a table with autoload but can't quite get the syntax to 
set up a self-relationship. This is my (abbreviated) schema:

CREATE TABLE sdssphoto.photoobj
(
  pk bigint NOT NULL DEFAULT nextval('photoobj_pk_seq'::regclass),
  parent_photoobj_pk bigint,
  CONSTRAINT photoobj_pk PRIMARY KEY (pk),
  CONSTRAINT parent_fk FOREIGN KEY (parent_photoobj_pk)
    REFERENCES sdssphoto.photoobj (pk) MATCH SIMPLE
    ON UPDATE NO ACTION ON DELETE NO ACTION
)

and my code:

class PhotoObj(Base):

    __tablename__ = 'photoobj'
    __table_args__ = {'autoload': True, 'schema': 'sdssphoto'}

    children = relationship('PhotoObj', backref=backref('parent',
                            remote_side=[PhotoObj.pk]))

The error I get is NameError: name 'PhotoObj' is not defined. I've tried many 
iterations, but can't quite seem to get this right. Any suggestions would be 
appreciated!

Cheers,
Demitri
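
An editorial, hedged sketch (not an answer from the thread): the NameError can be avoided by deferring the column reference with a string, which declarative resolves after the class body has been evaluated. This assumes relationship and backref are imported as in the snippet above.

    children = relationship('PhotoObj',
                            backref=backref('parent', remote_side='PhotoObj.pk'))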



Re: [sqlalchemy] this result object does not return rows. it has been closed automatically

2014-03-27 Thread Ni Wesley
MySQLdb has the same problem.
There's no concurrency involved; I just use engine.execute() (with the default 
connection pool).
So, how do I work around the problem for my query operation? If I catch this 
exception, should I just query again?


On Thursday, March 27, 2014 at 11:42:19 PM UTC+8, Michael Bayer wrote:


 On Mar 27, 2014, at 9:14 AM, Ni Wesley nis...@gmail.com 
 wrote:

 It's not happening every time.

 And I am not using mysql-python, instead, I use pymysql.

 So, the root cause is connection in bad state?
 How to fix or work around from sqlalchemy side?


 I don’t really know, I’d need to have a short script which replicates it 
 in order to identify the pattern exactly.

 if its pymysql, try another DBAPI.  though this may be some artifact of 
 the MySQL protocol.

 as always, if you’re doing anything with threads, concurrent use of a 
 MySQL connection will usually lead to problems like this.  but there can be 
 other reasons too.





 Wesley

 On Thursday, March 27, 2014 at 8:58:40 PM UTC+8, Michael Bayer wrote:

 when the DBAPI cursor has no .description object, it is determined to not 
 be a result-row returning object.   The MySQLDB DBAPI has been observed to 
 occasionally have issues in this area, when a connection gets into a bad 
 state. There are likely patterns in how you’re calling it that lead it to 
 have this issue but you might try a different DBAPI like 
 mysql-connector-python just to see what’s different.




 On Mar 27, 2014, at 7:41 AM, Ni Wesley nis...@gmail.com wrote:

 Hi all,
   Today I got an error, as in the subject line.
 I use sqlalchemy + mysql.
 Here is my code snippet:

 def dbquery(_table, whereclause):
     try:
         #_table = Table(tablename, metadata, autoload=True)
         #_table = tables[tablename]
         i = _table.select().where(whereclause)
         if direct_engine:  # direct_engine is True
             res = engine.execute(i)
             return res
         else:
             session = scoped_session(sessionmaker(bind=engine))
             res = session.execute(i)
             return res
             session.close()
     except Exception, e:
         #dba_logger.log(40, 'Exception when dbwriter:%s' % str(e))
         #dba_logger.log(20, 'Exception detail:%s' % str(kwargs))
         exctrace('db', '1', 'Error happened when querying db', dba_logger,
                  'Exception when dbquery:%s' % str(e),
                  'Exception detail:%s' % str(whereclause))
         #session.rollback()
         if not direct_engine:
             session.close()

 Here is a snippet from another file involving dbquery:

 try:
     res = dbquery(tables['sessions_details'],
                   whereclause=and_(tables['sessions_details'].c.app_key == self.app_key,
                                    tables['sessions_details'].c.device_token == self._devicetoken))
 except Exception, e:
     exctrace('db', '1', 'Error happened when querying db', dba_logger,
              'Exception when query session_details:%s' % str(e),
              'Exception detail:appkey is %s,devicetoken is %s' % (self.app_key, self._devicetoken))
     self.read_message()
     return
 if res is None:
     logger.log(40, 'When query connection,mysql has gone or something, just skip db and go-on')
     # here need to justify 0 or 1, later on
     self.status = '0'
     self.read_message()
     return
 if res.first() is None:
     if json_obj['action'] == 'reg':

 So the highlighted line above (the res.first() check) raises the exception.

 Could anyone give some suggestions on how this happened?


 Thanks.

 Wesley










[sqlalchemy] Delete fails using automap with joined inheritance

2014-03-27 Thread Adrian Robert
I have been using the automap extension with postgres, with an inheritance 
structure using the joined inheritance pattern. I could not figure out a 
way to have this reflected from the DB so I define the classes for this 
part of my schema explicitly, and when automap initializes, these classes 
get used (and modified) for those tables and the rest get autogenerated.

It works fine until I try to delete an instance of an inheriting class: 
then I get a circular dependency error which seems to relate to 
relationships and backreferences created for the foreign key relationship 
underlying the joined inheritance.

The attached code demonstrates the issue.  The first run generates the DB 
schema from the classes, and works, any number of times.  For the second 
run, switch the two comments for Base (in two places) to use automap.  The 
output of each run:

% ./test.py
RELATIONSHIPS: []
Run completed successfully.

% ./test.py
RELATIONSHIPS: [('employee', RelationshipProperty at 0x10263d310; 
employee), ('engineer_collection', RelationshipProperty at 0x102663210; 
engineer_collection)]
Circular dependency detected. Cycles: {DeleteState(Engineer at 
0x1026a9d10)} all edges: {(DeleteState(Engineer at 0x1026a9d10), 
DeleteState(Engineer at 0x1026a9d10)), 
(ProcessState(OneToManyDP(Employee.engineer_collection), Engineer at 
0x1026a9d10, delete=True), DeleteState(Engineer at 0x1026a9d10))}

Notice that the mapper in the first case shows no relationships, despite 
the foreign key created for the inheritance.  On the other hand when the 
same structure is read from the DB by automap, we see forward and back 
relationships, which I guess is somehow causing the circular dependency.

Am I doing something wrong in my attempt to use joined inheritance in 
conjunction with automap, or is this a bug or something unsupported?

#!/usr/bin/env python

from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker
from sqlalchemy.schema import ForeignKey

# SWITCH commenting here and below to test
Base = declarative_base()                                   # Run 1
#Base = automap_base(declarative_base=declarative_base())   # Run 2

class Employee(Base):
    __tablename__ = 'employee'
    id = Column(Integer, primary_key=True)
    name = Column(String(50))
    type = Column(String(50))
    __mapper_args__ = {
        'polymorphic_identity': 'employee', 'polymorphic_on': type
    }

class Engineer(Employee):
    __tablename__ = 'engineer'
    id = Column(Integer, ForeignKey('employee.id'), primary_key=True)
    engineer_name = Column(String(30))
    __mapper_args__ = {
        'polymorphic_identity': 'engineer',
    }

engine = create_engine("postgresql://user:pw@localhost/test")
Session = sessionmaker(bind=engine)

# SWITCH commenting to test
Base.metadata.create_all(engine)      # Run 1
#Base.prepare(engine, reflect=True)   # Run 2

# Test
session = Session()
engineer = Engineer(engineer_name='Eng 1', name='Emp 1')
print("RELATIONSHIPS: " + str(engineer.__mapper__.relationships.items()))
session.add(engineer)
session.commit()

session = Session()
engineer = session.query(Engineer).all()[0]
session.delete(engineer)

try:
    session.commit()
    print("Run completed successfully")
except Exception as e:
    print("EXCEPTION: " + str(e))


[sqlalchemy] Best sql platform to use with sqlalchemy on a limited shared network

2014-03-27 Thread Eric Kiser
CURRENT SCENARIO:
- we have been given access to a network share, so we can copy and paste 
files onto it. We access it via ftp://xx.x.x.xx with a username and password.

THE IDEA IN MIND:
- Create a simple program that uses SQLAlchemy to query an SQLite database 
(this is finished, but my SQLite file is on my local drive)
- Put the SQLite file on the shared network so that we can all query the same 
database (there are exactly 10 of us).

WHAT I KNOW SO FAR:
- I am aware that SQLite is not a good fit for use over a network, but then 
again I cannot use PostgreSQL or MySQL because I cannot set them up on the 
network.

QUESTION:
- how do I access an SQLite file located on the network using SQLAlchemy?
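
An editorial, hedged sketch (not an answer from the thread): if the share can be mounted as a local filesystem path, SQLAlchemy only needs a file path in the sqlite URL. The mount point below is hypothetical, and the usual caveats about SQLite over network filesystems still apply.

from sqlalchemy import create_engine

# sqlite:/// plus an absolute path gives four slashes in total
engine = create_engine('sqlite:////mnt/shared/team.db')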
