[sqlalchemy] sqlalchemy how to enlarge current using connections

2014-04-08 Thread Ni Wesley
Hi all,
   I have a question here.

Here is my db init code snippet:

engine = create_engine(db_url, pool_size=100, max_overflow=150, echo=engine_echo, pool_recycle=3600)
session = scoped_session(sessionmaker(bind=engine))
metadata = MetaData(bind=engine)

Then, I use engine to do update/insert operations.

There is no problem functionally, but during performance testing I find that 
inserting/updating the MySQL server is too slow.
While my script is running, the MySQL 'show processlist;' command on the 
server shows only 2 or 3 active connections.

Then, within my script, I use a multiprocessing.Pool (defaulting to the CPU 
core count, which is 4 for me) to do the MySQL insert/update operations.

Finally, I see 4 or 5 connections kept on the MySQL server.

So it seems the db operations are the bottleneck, because I cannot get more 
connections in use.

How do I push the concurrent connection count up? I am sure the script 
generating db tasks is fast enough.

Thanks.
Wesley

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] sqlalchemy how to enlarge current using connections

2014-04-08 Thread Ni Wesley
For me, I just use engine.execute most of the time — any problem with this?

Or if I use a session or connection, do I need to close the session or 
connection every time? Otherwise it will trigger the pool size limit error 
as the connection count keeps increasing, right?


On Tuesday, April 8, 2014 at 8:49:16 PM UTC+8, Michael Bayer wrote:

 To make more connections, call connect(), and/or use more session objects. 
  Each session uses one connection.

 If you see three connections, that means your script has only worked with 
 three connections at once, such as, you opened three Session objects 
 concurrently.

 Sqlalchemy does not initiate any concurrent operations on its own.
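Michael's point — that the number of server-side connections mirrors how many things your program does at once, not the pool_size ceiling — can be sketched with the standard library alone. Here sqlite3 and threads stand in for MySQL and the workers; this illustrates the principle and is not SQLAlchemy API:

```python
# Each worker holds one DBAPI connection; the peak number of simultaneously
# open connections equals the number of concurrent workers, not the pool cap.
import sqlite3
import threading

peak = 0
open_count = 0
lock = threading.Lock()
barrier = threading.Barrier(4)  # force all 4 workers to be connected at once

def worker():
    global peak, open_count
    conn = sqlite3.connect(":memory:")  # stand-in for a MySQL connection
    with lock:
        open_count += 1
        peak = max(peak, open_count)
    barrier.wait()                      # every worker holds a connection here
    conn.close()
    with lock:
        open_count -= 1

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(peak)  # -> 4: four concurrent workers, four simultaneous connections
```

With one worker the peak would be 1 regardless of pool_size=100 — exactly the 2-or-3-connections symptom described above.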


 Sent from my iPhone

 On Apr 8, 2014, at 5:03 AM, Ni Wesley nis...@gmail.com wrote:





Re: [sqlalchemy] sqlalchemy how to enlarge current using connections

2014-04-08 Thread Ni Wesley
...

On Tuesday, April 8, 2014 at 9:52:58 PM UTC+8, Michael Bayer wrote:


 On Apr 8, 2014, at 8:55 AM, Ni Wesley nis...@gmail.com wrote:

 For me, I just use engine.execute most of the time — any problem with this?

 Or if I use a session or connection, do I need to close the session or 
 connection every time? Otherwise it will trigger the pool size limit error 
 as the connection count keeps increasing, right?


 if you use sessions or a Connection object then yes you need to make sure 
 those are closed when you’re done.

 The docs at http://docs.sqlalchemy.org/en/rel_0_9/core/connections.html 
 have background on the difference between engine.execute and 
 connection.execute.
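As a hedged illustration of that difference: the implicit engine.execute used throughout this thread checks a pooled connection out and back in around each statement (and was removed entirely in SQLAlchemy 2.0), while an explicit Connection lets you control the checkout span yourself. A minimal sketch, assuming a recent SQLAlchemy (1.4+) is installed:

```python
# Explicit-connection form: works in SQLAlchemy 1.4 and 2.0.
from sqlalchemy import create_engine, text

engine = create_engine("sqlite://")  # in-memory stand-in for the MySQL URL

# The pooled DBAPI connection is checked out here and returned on exit of
# the `with` block — the caller, not SQLAlchemy, decides its lifespan.
with engine.connect() as conn:
    row = conn.execute(text("select 1")).scalar()

print(row)
```

Because the connection is returned deterministically at the end of the block, this form also avoids the "pool size limit" leak asked about above.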










Re: [sqlalchemy] sqlalchemy how to enlarge current using connections

2014-04-08 Thread Ni Wesley


On Tuesday, April 8, 2014 at 11:15:27 PM UTC+8, Michael Bayer wrote:

 There is no need to use a scoped_session() in a function that is just 
 running a single query.  There’s no need to use a Session in a function 
 that is not running an ORM query - session.execute() and engine.execute() 
 are equivalent.  Most of this code is unnecessary.

   [wni] I see — so I use direct_engine=True so that all the if-branches go 
through engine.execute.


 There is however a need to make a new Engine within each child process if 
 you are using multiprocessing, the code as is isn’t safe for multiple 
 processes.  multiprocessing.Pool provides the “initializer” hook for this 
 purpose.

 A reasonable pattern for that would be:

 engine = create_engine(url)

 def setup_fork():
 engine.dispose()  # ensure all connections are gone

 pool = multiprocessing.Pool(10, initializer=setup_fork)

 Then as always, if the code is only opening 4 or 5 connections, that's 
 because you're only asking it to do 4 or 5 things at once.  If you want it 
 to use 100 connections, you need to have it doing 100 things at a time, 
 which means your multiprocessing.Pool would have to be of size 100 and 
 you'd need to send at least 100 tasks into it.

[wni] I will try creating a new engine for each subprocess later :-) 
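The per-process `initializer` pattern above can be sketched end to end with the standard library. This sketch uses multiprocessing's thread-backed pool (`multiprocessing.pool.ThreadPool`, which takes the same `initializer` hook as `multiprocessing.Pool`) so it stays portable across platforms; `setup_worker` and `task` are illustrative names, and a real app would re-create its engine inside the setup hook:

```python
# The initializer runs once per worker before any tasks are handled —
# the right place for per-worker resources such as a fresh engine.
from multiprocessing.pool import ThreadPool
import threading

setup_calls = []

def setup_worker():
    # per-worker setup; with multiprocessing.Pool this is where
    # engine.dispose() / create_engine() would go
    setup_calls.append(threading.get_ident())

def task(i):
    return i * i

pool = ThreadPool(4, initializer=setup_worker)
results = pool.map(task, range(8))
pool.close()
pool.join()

print(sorted(results))   # squares of 0..7
print(len(setup_calls))  # one setup call per worker -> 4
```

With `multiprocessing.Pool(100, initializer=setup_fork)` the same shape gives each of the 100 child processes its own engine, matching Michael's advice.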




 On Apr 8, 2014, at 10:21 AM, Ni Wesley nis...@gmail.com wrote:

 The above procedure can be performed in a shorthand way by using the 
 execute() method of Engine itself 
 (http://docs.sqlalchemy.org/en/rel_0_9/core/connections.html#sqlalchemy.engine.Engine.execute):

 result = engine.execute("select username from users")
 for row in result:
     print "username:", row['username']

 So it seems engine.execute is shorthand for connection.execute.

 Here are my db APIs:
 #!/usr/bin/env python
 #coding: utf-8

 from sqlalchemy.sql.expression import text, bindparam
 from sqlalchemy.sql import select,insert, delete, update
 from sqlalchemy.schema import Table
 from sqlalchemy.orm import sessionmaker,scoped_session
 from db import dba_logger,metadata,engine#session
 from datetime import datetime
 from exctrace import exctrace
 from sqlalchemy import and_

 direct_engine = True
 use_raw = False

 #import gevent  
 #from gevent import monkey
 #monkey.patch_all()
 import multiprocessing
 from db import tables
 
 def tmp_dbwrite(tablename,**kwargs):
     """
     Used to insert exception info into database.

     Params:
         module: module name, indicating who raises the exception, e.g. android,ios,psg,adpns,db .etc
         type: exception type, 0 means service level while 1 is system level.
         message: exception description, length limit to 256 bytes
     """
     try:
         #_table=Table(tablename, metadata, autoload=True)
         _table = tables[tablename]
         i = _table.insert().values(**kwargs)
         if direct_engine:
             engine.execute(i)
             #gevent.spawn(engine.execute,i)
             #gevent.sleep(0)
             #gevent.joinall([gevent.spawn(engine.execute,i)])
         else:
             session = scoped_session(sessionmaker(bind=engine))
             session.execute(i)
             session.commit()
             session.close()
     except Exception,e:
         #dba_logger.log(40,'Exception when dbwriter:%s' % str(e))
         #dba_logger.log(20,'Exception detail:%s' % str(kwargs))
         exctrace('db','1','Error happened when writing db',dba_logger,'Exception when dbwriter:%s' % str(e),'Exception detail:%s' % str(kwargs))
         if not direct_engine:
             session.rollback()
             session.close()


 
 def tmp_dbupdate(_table,whereclause,**kwargs):
     """
     Used to insert exception info into database.

     Params:
         module: module name, indicating who raises the exception, e.g. android,ios,psg,adpns,db .etc
         type: exception type, 0 means service level while 1 is system level.
         message: exception description, length limit to 256 bytes
     """
     try:
         #_table=Table(tablename, metadata, autoload=True)
         #_table = tables[tablename]
         i = _table.update().values(**kwargs).where(whereclause)
         if direct_engine:
             engine.execute(i)
             #gevent.spawn(engine.execute,i)
         else:
             session = scoped_session(sessionmaker(bind=engine))
             session.execute(i)
             session.commit()
             session.close()
     except Exception,e:
         #dba_logger.log(40,'Exception when dbwriter:%s' % str(e))
         #dba_logger.log(20,'Exception detail:%s' % str(kwargs))
         exctrace('db','1','Error happened when updating db',dba_logger,'Exception when dbupdate:%s' % str(e),'Exception detail:%s' % str(kwargs))
         if not direct_engine:
             session.rollback()
             session.close()
 
 def dbquery(_table,whereclause):
 try

[sqlalchemy] this result object does not return rows. it has been closed automatically

2014-03-27 Thread Ni Wesley
Hi all,
  Today I get an error as title.
I use sqlalchemy + mysql.
Here is my code snippet:

def dbquery(_table,whereclause):
    try:
        #_table=Table(tablename, metadata, autoload=True)
        #_table = tables[tablename]
        i = _table.select().where(whereclause)
        if direct_engine:  #direct_engine is True
            res = engine.execute(i)
            return res
        else:
            session = scoped_session(sessionmaker(bind=engine))
            res = session.execute(i)
            return res
            session.close()  # note: unreachable after the return above
    except Exception,e:
        #dba_logger.log(40,'Exception when dbwriter:%s' % str(e))
        #dba_logger.log(20,'Exception detail:%s' % str(kwargs))
        exctrace('db','1','Error happened when querying db',dba_logger,'Exception when dbquery:%s' % str(e),'Exception detail:%s' % str(whereclause))
        #session.rollback()
        if not direct_engine:
            session.close()

Here is a snippet in another file involving dbquery:

try:
    res = dbquery(tables['sessions_details'],whereclause=and_(tables['sessions_details'].c.app_key==self.app_key,tables['sessions_details'].c.device_token==self._devicetoken))
except Exception,e:
    exctrace('db','1','Error happened when querying db',dba_logger,'Exception when query session_details:%s' % str(e),'Exception detail:appkey is %s,devicetoken is %s' % (self.app_key,self._devicetoken))
    self.read_message()
    return

if res is None:
    logger.log(40,'When query connection,mysql has gone or something, just skip db and go-on')
    #here need to justify 0 or 1, later on
    self.status='0'
    self.read_message()
    return

if res.first() is None:
    if json_obj['action'] == 'reg':

So, the res.first() line above raises the exception.

Could anyone give some suggestion how this happened?


Thanks.

Wesley



Re: [sqlalchemy] this result object does not return rows. it has been closed automatically

2014-03-27 Thread Ni Wesley
It's not happening every time.

And I am not using mysql-python; instead, I use pymysql.

So, the root cause is a connection in a bad state?
How can I fix or work around this from the sqlalchemy side?

Wesley

On Thursday, March 27, 2014 at 8:58:40 PM UTC+8, Michael Bayer wrote:

 when the DBAPI cursor has no .description object, it is determined to not 
 be a result-row returning object.   The MySQLDB DBAPI has been observed to 
 occasionally have issues in this area, when a connection gets into a bad 
 state. There are likely patterns in how you’re calling it that lead it to 
 have this issue but you might try a different DBAPI like 
 mysql-connector-python just to see what’s different.
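The `.description` behavior Michael describes can be observed directly with the stdlib sqlite3 DBAPI (used here as a stand-in for the MySQL drivers; the PEP 249 semantics are the same): a cursor only carries a non-None `.description` after a row-returning statement, which is what wrappers use to decide whether a result can yield rows.

```python
# DDL/DML leave cursor.description as None; a SELECT populates it with
# column metadata even when zero rows match.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.execute("create table t (x integer)")
assert cur.description is None           # DDL: nothing row-like to describe

cur.execute("select x from t")
assert cur.description is not None       # SELECT: column metadata present
print(cur.description[0][0])             # -> x
```

A result object built on a cursor whose `.description` came back None is exactly the "does not return rows" case from the subject line.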




 On Mar 27, 2014, at 7:41 AM, Ni Wesley nis...@gmail.com wrote:








Re: [sqlalchemy] this result object does not return rows. it has been closed automatically

2014-03-27 Thread Ni Wesley
MySQLdb has the same problem.
There is no concurrency — I just use engine.execute (with the default 
connection pool).
Then how do I work around the problem? For my query operation, if I catch 
this exception, should I just query again?


On Thursday, March 27, 2014 at 11:42:19 PM UTC+8, Michael Bayer wrote:


 On Mar 27, 2014, at 9:14 AM, Ni Wesley nis...@gmail.com wrote:

 It's not happening every time.

 And I am not using mysql-python; instead, I use pymysql.

 So, the root cause is a connection in a bad state?
 How can I fix or work around this from the sqlalchemy side?


 I don’t really know, I’d need to have a short script which replicates it 
 in order to identify the pattern exactly.

 if it's pymysql, try another DBAPI, though this may be some artifact of 
 the MySQL protocol.

 as always, if you’re doing anything with threads, concurrent use of a 
 MySQL connection will usually lead to problems like this.  but there can be 
 other reasons too.
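One hedged way to implement the "catch it and query again" idea asked about above is a small bounded-retry wrapper. `run_with_retry` and `is_disconnect_error` are illustrative names; a real predicate should match only the driver's disconnect errors (e.g. pymysql's OperationalError codes 2006/2013), never every exception:

```python
# Retry a callable a bounded number of times when a disconnect-style
# error is raised; re-raise anything else (or the last error) unchanged.
def run_with_retry(fn, retries=2, is_disconnect_error=lambda e: True):
    last = None
    for _ in range(retries + 1):
        try:
            return fn()
        except Exception as e:
            if not is_disconnect_error(e):
                raise          # not a dropped connection: don't mask it
            last = e
    raise last

# Simulated flaky query: fails once with a "gone away"-style error,
# then succeeds on the retry.
calls = {"n": 0}
def flaky_query():
    calls["n"] += 1
    if calls["n"] < 2:
        raise ConnectionError("MySQL server has gone away")
    return 42

print(run_with_retry(flaky_query))  # -> 42, after one retry
```

Retries only make sense for idempotent statements; a retried non-idempotent INSERT could execute twice.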











[sqlalchemy] MySQL server has gone away

2014-03-18 Thread Ni Wesley
Hi all,
  I hit a problem when using sqlalchemy operates mysql.

Engine.execute hangs for about 5 minutes... and then I get the error
'MySQL server has gone away' or 'Lost connection to MySQL server during 
query'.

Actually, while the operation hangs, I can log in to the mysql server just 
fine.

Why does this happen?
And how can I control the execute timeout?

Thanks.
Wesley



[sqlalchemy] Re: MySQL server has gone away

2014-03-18 Thread Ni Wesley
I tried all you said. Here is my code:

engine = create_engine(db_url, pool_size=100, max_overflow=150, echo=engine_echo, pool_recycle=3600)

And, in the beginning, I googled a lot and added this:
def _ping_db():
#session.execute('show variables')
try:
session.execute('select 1')
except (InvalidRequestError,StatementError,OperationalError),e:
dba_logger.log(30,'Exception when pinging db:%s' % str(e))
session.rollback()

if pool_rec:
# ping db, so that mysql won't goaway
PeriodicCallback(_ping_db, pool_recycle_time * 1000).start()

But I still hit the issue often. What's strange is that the exception 
actually occurs in the middle of many db operations:
several db operations succeed, and then, on the very next request, mysql 
has suddenly gone away...


On Wednesday, March 19, 2014 at 7:54:21 AM UTC+8, Jameson Lee wrote:

 Your connection to the database has been idle for too long and MySQL has 
 disconnected it.  You can mitigate this by generating keepalive traffic 
 that uses the connection, by increasing MySQL's timeout, or by setting 
 SQLAlchemy to recycle pooled connections more aggressively (and make sure 
 that value is below the MySQL timeout limit).

 I believe the default is 3600 or 7200 seconds for the above values.
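The recycle idea can be sketched without any database at all: discard and replace a connection once it has outlived a threshold, so a reused connection is never older than the server's idle timeout. `RecyclingHolder` is an illustrative name, not SQLAlchemy API — SQLAlchemy's `pool_recycle` parameter does this inside the connection pool (and newer versions also offer `pool_pre_ping=True` to test connections on checkout):

```python
# Replace the held connection whenever its age exceeds the recycle
# threshold; `connect` is any zero-argument factory for a fresh connection.
import time

class RecyclingHolder:
    def __init__(self, connect, recycle=3600):
        self._connect = connect
        self._recycle = recycle
        self._conn = None
        self._created = 0.0

    def get(self, now=None):
        now = time.time() if now is None else now
        if self._conn is None or now - self._created > self._recycle:
            self._conn = self._connect()   # stale (or first use): reconnect
            self._created = now
        return self._conn

holder = RecyclingHolder(connect=lambda: object(), recycle=10)
first = holder.get(now=0)
assert holder.get(now=5) is first        # still fresh: reused
assert holder.get(now=20) is not first   # past the threshold: replaced
```

For the thread's symptom, the equivalent fix is a `pool_recycle` value comfortably below MySQL's `wait_timeout`.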




Re: [sqlalchemy] sql expression performance is bad ?

2014-03-15 Thread Ni Wesley
So, that's the point: my tcp server is a single process.

Because I have a global dict keeping the active socket connections, where 
the key is the cell device token and the value is the active socket 
associated with the device.

That dict cannot easily be shared between multiple processes...

So, I just pull the db operations out and throw them into a process pool 
which only handles db operations.

On Sunday, March 16, 2014 at 4:14:50 AM UTC+8, Michael Bayer wrote:


 On Mar 14, 2014, at 9:18 PM, Ni Wesley nis...@gmail.com wrote: 

  Seems sqlalchemy engine pool uses thread, eh? 

 SQLAlchemy doesn’t spawn any threads, it only provides library functions 
 which do the thing you ask it to do, synchronously. 








Re: [sqlalchemy] sql expression performance is bad ?

2014-03-14 Thread Ni Wesley
Yes, actually, that's the pattern I used from the beginning: for every 
request (actually a tcp server connection request), create a session, do 
the db operation, commit, and close.

But it's too slow: 1000 connection requests take more than 40 seconds. So, 
just yesterday, when I used multiprocessing to wrap the db operations, it 
became very fast, taking only 8 seconds to handle 1000 requests.
Seems sqlalchemy engine pool uses threads, eh? I know python's GIL.

Here is my code snippet:
from sqlalchemy.schema import Table
from sqlalchemy.orm import sessionmaker,scoped_session
from db import dba_logger,metadata,engine#session
from datetime import datetime
from exctrace import exctrace
from sqlalchemy import and_

direct_engine = True
use_raw = False

#import gevent  
#from gevent import monkey
#monkey.patch_all()
import multiprocessing

def tmp_dbwrite(tablename,**kwargs):
    """
    Used to insert exception info into database.

    Params:
        module: module name, indicating who raises the exception, e.g. android,ios,psg,adpns,db .etc
        type: exception type, 0 means service level while 1 is system level.
        message: exception description, length limit to 256 bytes
    """
    try:
        _table=Table(tablename, metadata, autoload=True)
        i=_table.insert().values(**kwargs)
        if direct_engine:
            engine.execute(i)
            #gevent.spawn(engine.execute,i)
            #gevent.sleep(0)
            #gevent.joinall([gevent.spawn(engine.execute,i)])
        else:
            session = scoped_session(sessionmaker(bind=engine))
            session.execute(i)
            session.commit()
            session.close()
    except Exception,e:
        #dba_logger.log(40,'Exception when dbwriter:%s' % str(e))
        #dba_logger.log(20,'Exception detail:%s' % str(kwargs))
        exctrace('db','1','Error happened when writing db',dba_logger,'Exception when dbwriter:%s' % str(e),'Exception detail:%s' % str(kwargs))
        if not direct_engine:
            session.rollback()
            session.close()



def tmp_dbupdate(tablename,whereclause,**kwargs):

Used to insert exception info into database.

Params:
module : module name, indicating who raises the exception, e.g. 
android,ios,psg,adpns,db .etc
type : exception type, 0 means service level while 1 is system 
level.
message : exception description,length limit to 256 bytes

try:
_table=Table(tablename, metadata, autoload=True)
i=_table.update().values(**kwargs).where(whereclause) 
if direct_engine:
engine.execute(i)
#gevent.spawn(engine.execute,i)
else:
session = scoped_session(sessionmaker(bind=engine))
session.execute(i)
session.commit()
session.close()
except Exception,e:
#dba_logger.log(40,'Exception when dbwriter:%s' % str(e))
#dba_logger.log(20,'Exception detail:%s' % str(kwargs))
exctrace('db','1','Error happened when updating db',dba_logger,'Exception when dbupdate:%s' % str(e),'Exception detail:%s' % str(kwargs))
if not direct_engine:
session.rollback()
session.close()

def dbquery(tablename,whereclause):
try:
_table=Table(tablename, metadata, autoload=True)
i=_table.select().where(whereclause) 
if direct_engine:
res = engine.execute(i)
return res
else:
session = scoped_session(sessionmaker(bind=engine))
res = session.execute(i).fetchall()  # fetch before closing so the rows survive session.close()
session.close()
return res
except Exception,e:
#dba_logger.log(40,'Exception when dbwriter:%s' % str(e))
#dba_logger.log(20,'Exception detail:%s' % str(kwargs))
exctrace('db','1','Error happened when querying db',dba_logger,'Exception when dbquery:%s' % str(e),'Exception detail:%s' % str(whereclause))
#session.rollback()
if not direct_engine:
session.close()
#res = 
session.execute(connection_writer._table.select().where(connection_writer._table.c.app_key==self.app_key).where(connection_writer._table.c.device_token==self._devicetoken))
pool = multiprocessing.Pool()
def dbwrite(tablename,**kwargs):
pool.apply_async(tmp_dbwrite, (tablename,), kwargs)

def dbupdate(tablename,whereclause,**kwargs):
pool.apply_async(tmp_dbupdate, (tablename,whereclause), kwargs)

On Saturday, March 15, 2014 at 12:24:38 AM UTC+8, Jonathan Vanasco wrote:

 You would probably do better with a pattern where you have a Session for 
 every request , and just use that session.  That is how most people 
 implement SqlAlchemy for web.

 * request start
 * create a sqlalchemy session, either scoped or explicit
 * do things with your session : read , write, etc.
 * commit/rollback session
 * request close; cleanup with a session.remove or session.close 
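
The per-request lifecycle listed above can be sketched roughly like this; it is an illustrative sketch using an in-memory SQLite database as a stand-in for MySQL, and handle_request() is a hypothetical handler, not part of the original code:

```python
# A rough sketch of the session-per-request lifecycle listed above, using
# an in-memory SQLite database as a stand-in for MySQL; handle_request()
# is a hypothetical request handler, not part of the original code.
from sqlalchemy import create_engine, text
from sqlalchemy.orm import scoped_session, sessionmaker

engine = create_engine("sqlite://")                   # one engine per process
Session = scoped_session(sessionmaker(bind=engine))   # one factory per process

def handle_request():
    session = Session()                               # request start
    try:
        # do things with the session: read, write, etc.
        session.execute(text("CREATE TABLE IF NOT EXISTS hits (n INTEGER)"))
        session.execute(text("INSERT INTO hits (n) VALUES (1)"))
        session.commit()                              # commit once per request
        return session.execute(text("SELECT COUNT(*) FROM hits")).scalar()
    except Exception:
        session.rollback()
        raise
    finally:
        Session.remove()                              # request close / cleanup
```

In a real deployment the open/remove steps would be hooked into the web framework's request lifecycle rather than wrapped manually like this.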

 Here are two great pieces from the documentation:

 * 
 

Re: [sqlalchemy] sql expression performance is bad ?

2014-03-13 Thread Ni Wesley
I thought of this before.

But actually, setting it to 0 is twice as fast as other settings (say, 50), 
which is really hard to understand.

I tried gevent + sqlalchemy + mysqlconnector, but it was only a small 
improvement.

Tomorrow I want to write some mysql stored procedures and call those, 
instead of Core SQL. 

On Thursday, March 13, 2014 at 8:59:34 PM UTC+8, Michael Bayer wrote:


 On Mar 13, 2014, at 12:10 AM, Ni Wesley nis...@gmail.com wrote: 

  
  We can see that, from the start point to end point, it takes 50 ms, 
 including 40ms for engine commit. 
  
  For 1000 entries, it takes 40 seconds, here I have one question, when 
 1000 connections are on-going, I watch the connections between tcp server 
 and db server, 
  
  via command watch -n 1 netstat -anpto|grep dbserver_ip|wc -l, I find 
 the spike value is just 26, actually, I gave 0 to engine pool_size, why it 
 takes only 26? 

 pool size as zero is a pretty bad setting here, it means for every 
 connect/execute() it has to reconnect to the database from scratch, because 
 connections aren’t pooled. 

 taking off that setting and leaving at the default would help. 





[sqlalchemy] Re: sql expression performance is bad ?

2014-03-13 Thread Ni Wesley
1. Yeah, I do have indexes on app_key and device_token.

2. I realized the issue with how I was using sessions on every request, so 
later, instead of sessions, I directly call engine.execute, where the engine 
is initialized just once in another file.

There seems to be no improvement compared to sessions...

在 2014年3月13日星期四UTC+8下午11时33分31秒,Jonathan Vanasco写道:

 in the database, is there an index on `app_key` ?  I'd imagine selects are 
 going to be slow on a sequential scan. 

 If there is a unique constraint on the app_key or it's a primary key, 
 that's going to be a performance hit too.

 Personally, I've found digest/hash style keys to have terrible 
 performance.  I would probably have the table like this:

 id = primary key + serial/auto-increment
 app_key = unique index

 i'd grab the app_key at the start of the web-request and note the object 
 id.  then i'd do updates using the object id.  

 the database should optimize locating the record by app_key using the 
 index field; and they tend to be faster finding records with an integer key 
 than strings (for your updates)
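
 The suggested layout might look like this in Core; this is a hypothetical sketch against in-memory SQLite, and the table columns and sample app_key are stand-ins for the real schema:

```python
# Hypothetical sketch of the suggested layout: an integer serial primary
# key plus a unique, indexed app_key, so updates can target the fast
# integer key. In-memory SQLite and the sample app_key are stand-ins.
from sqlalchemy import (create_engine, MetaData, Table, Column,
                        Integer, String, select)

engine = create_engine("sqlite://")
metadata = MetaData()

sessions_details = Table(
    "sessions_details", metadata,
    Column("id", Integer, primary_key=True, autoincrement=True),
    Column("app_key", String(64), unique=True, index=True),
    Column("device_token", String(64)),
)
metadata.create_all(engine)

with engine.begin() as conn:
    conn.execute(sessions_details.insert().values(
        app_key="e32c72bab0e4", device_token="1"))
    # look the integer id up once by app_key, then update by id afterwards
    row_id = conn.execute(
        select(sessions_details.c.id)
        .where(sessions_details.c.app_key == "e32c72bab0e4")).scalar()
```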

 Other than that, this looks to me like an anti-pattern.  You're not 
 opening/closing new Sessions for every request, but for every database 
 operation.

 Your session should ideally start at the beginning of the http web 
 request, and end when you're ready to commit/close.

 Your design will not scale.  SqlAlchemy doesn't have much overhead, but 
 you've designed your application in a way that you get hit by virtually all 
 the overhead multiple times on every action.  That's your biggest problem.

 Honestly, I think your best way to improve performance would probably be 
 to toss this module, and start from scratch.  the various python web 
 frameworks integrate sqlalchemy differently, but most have recommended 
 practices.  I'd start with that.




Re: [sqlalchemy] sql expression performance is bad ?

2014-03-13 Thread Ni Wesley
Yes, I don't use gevent now.

I'm just trying to improve things on the sqlalchemy and mysql server side.

It takes 50 ms to commit one entry update. I don't know the exact reason yet.

I'm going to write a mysql stored procedure and see the performance of 
calling the procedure from python.

On Thursday, March 13, 2014 at 11:55:49 PM UTC+8, Michael Bayer wrote:

 OK well if you’re using gevent, then I really have very little experience 
 to help you on this.   I wasn’t aware of what integration gevent has with 
 the MySQL drivers, I assume you’re using a pure Python driver at least.   
  It’s very likely there’s some interaction with gevent + MySQL drivers 
 getting in the way. That’s a pretty advanced configuration so you sort 
 of need to figure out how to get a handle on what it’s doing if you want to 
 roll that way.   I love the idea of gevent but as far as real world use 
 cases, I’ve never had any.  If i need parallelism i just throw up 
 multiprocessing and I’m done.  But i know that’s just not exciting enough 
 for most people.

 you’d need to run what you’re doing under non-gevent, non-parallel 
 circumstances and get a baseline for how long a “commit” takes.

 On Mar 13, 2014, at 9:30 AM, Ni Wesley nis...@gmail.com wrote:

 I thought this before.

 But, actually, set to 0 is twice as fast as other settings, say, maybe 50, 
 really hard to understand.

 I tried to use gevent + sqlalchemy + mysqlconnector, but just a little 
 improve.

 Tomorrow I wanna write some mysql procedures and call the procedures , 
 instead of core sqls. 

 On Thursday, March 13, 2014 at 8:59:34 PM UTC+8, Michael Bayer wrote:


 On Mar 13, 2014, at 12:10 AM, Ni Wesley nis...@gmail.com wrote: 

  
  We can see that, from the start point to end point, it takes 50 ms, 
 including 40ms for engine commit. 
  
  For 1000 entries, it takes 40 seconds, here I have one question, when 
 1000 connections are on-going, I watch the connections between tcp server 
 and db server, 
  
  via command watch -n 1 netstat -anpto|grep dbserver_ip|wc -l, I find 
 the spike value is just 26, actually, I gave 0 to engine pool_size, why it 
 takes only 26? 

 pool size as zero is a pretty bad setting here, it means for every 
 connect/execute() it has to reconnect to the database from scratch, because 
 connections aren’t pooled. 

 taking off that setting and leaving at the default would help. 









Re: [sqlalchemy] sql expression performance is bad ?

2014-03-13 Thread Ni Wesley
Actually I have a question here: I see that the main time consumed is in 
commit...

But how do I control when to commit?

Because I don't know when requests will come or how many will come, my 
current code effectively commits on every engine.execute.



On Friday, March 14, 2014 at 2:14:43 AM UTC+8, Jonathan Vanasco wrote:

 Before writing stored procedures, you might want to test speeds like : 

 1. raw speed within the mysql client for these queries.
 2. speed of pushing these queries through your python dbapi driver  ( ie, 
 mysqldb )

 those should give you a better idea of where the potential speed problems, 
 if any, are.
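
 Point 2 can be approximated with a few lines of timing code. sqlite3 stands in for MySQLdb here, so only the method, not the numbers, carries over:

```python
# Push the same kind of statements straight through a DBAPI driver with
# no SQLAlchemy involved, to get a driver-level baseline. sqlite3 is a
# stand-in for MySQLdb; the table and statement are illustrative.
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE messages (id INTEGER PRIMARY KEY, body TEXT)")

start = time.perf_counter()
for i in range(1000):
    conn.execute("INSERT INTO messages (body) VALUES (?)", ("hello",))
conn.commit()
elapsed = time.perf_counter() - start

count = conn.execute("SELECT COUNT(*) FROM messages").fetchone()[0]
print("1000 raw DBAPI inserts: %.4f s" % elapsed)
```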






[sqlalchemy] sqlalchemy only has 1 or 2 tcp connections to mysql server

2014-03-13 Thread Ni Wesley
Hi all,
   I hit a problem where I cannot use the connection pool effectively.

engine = create_engine(db_url, pool_size=50, max_overflow=60, echo=engine_echo, pool_recycle=3600)

from sqlalchemy.sql.expression import text, bindparam
from sqlalchemy.sql import select,insert, delete, update
from sqlalchemy.schema import Table
from sqlalchemy.orm import sessionmaker,scoped_session
from db import dba_logger,metadata,engine#,session
from datetime import datetime
from exctrace import exctrace
from sqlalchemy import and_

direct_engine = True
use_raw = False

#import gevent
#from gevent import monkey
#monkey.patch_socket()

class DBWriter(object):

def __init__(self,tablename):

self.tablename = tablename
try:
self._table=Table(tablename, metadata, autoload=True)
except Exception,e:
exctrace('db','1','DBWriter init failed',dba_logger,'DBWriter init failed','Exception when DBWriter initing table:%s' % str(e))
#dba_logger.log(40,'Exception when DBWriter initing table:%s' % str(e))

def dbwrite(self,**kwargs):

"""
Used to insert exception info into database.

Params:
module : module name, indicating who raises the exception, e.g. android, ios, psg, adpns, db, etc.
type : exception type, 0 means service level while 1 is system level.
message : exception description, length limited to 256 bytes
"""

try:

i=self._table.insert().values(**kwargs)
if direct_engine:
#engine.execute(i)
conn = engine.connect()
conn.execute(i)
conn.close()
#gevent.spawn(engine.execute,i)
#gevent.sleep(0)
#gevent.joinall([gevent.spawn(engine.execute,i)])
else:
session = scoped_session(sessionmaker(bind=engine,autocommit=True))
session.execute(i)
#session.commit()
#session.close()
except Exception,e:
#dba_logger.log(40,'Exception when dbwriter:%s' % str(e))
#dba_logger.log(20,'Exception detail:%s' % str(kwargs))
exctrace('db','1','Error happened when writing db',dba_logger,'Exception when dbwriter:%s' % str(e),'Exception detail:%s' % str(kwargs))
if not direct_engine:
session.rollback()
session.close()

def dbupdate(self,whereclause,**kwargs):

"""
Used to update a row in the database.

Params:
module : module name, indicating who raises the exception, e.g. android, ios, psg, adpns, db, etc.
type : exception type, 0 means service level while 1 is system level.
message : exception description, length limited to 256 bytes
"""

try:

i=self._table.update().values(**kwargs).where(whereclause)
if direct_engine:
#engine.execute(i)
#gevent.spawn(engine.execute,i)
#gevent.sleep(0)
conn = engine.connect()
conn.execute(i)
conn.close()
#gevent.joinall([gevent.spawn(engine.execute,i)])
else:
session = scoped_session(sessionmaker(bind=engine,autocommit=True))
session.execute(i)
#session.commit()
#session.close()
except Exception,e:
#dba_logger.log(40,'Exception when dbwriter:%s' % str(e))
#dba_logger.log(20,'Exception detail:%s' % str(kwargs))
exctrace('db','1','Error happened when updating db',dba_logger,'Exception when dbupdate:%s' % str(e),'Exception detail:%s' % str(kwargs))
if not direct_engine:
session.rollback()
session.close()

def dbquery(self,whereclause):
try:

i=self._table.select().where(whereclause)
if direct_engine:
#res = engine.execute(i)
conn = engine.connect()
res=conn.execute(i)
conn.close()
return res
else:
session = scoped_session(sessionmaker(bind=engine,autocommit=True))
res = session.execute(i).fetchall()  # fetch before closing so the rows survive session.close()
session.close()
return res
except Exception,e:
#dba_logger.log(40,'Exception when dbwriter:%s' % str(e))
#dba_logger.log(20,'Exception detail:%s' % str(kwargs))
exctrace('db','1','Error happened when querying db',dba_logger,'Exception when dbquery:%s' % str(e),'Exception detail:%s' % str(whereclause))
#session.rollback()
if not direct_engine:
session.close()
#res = 
session.execute(connection_writer._table.select().where(connection_writer._table.c.app_key==self.app_key).where(connection_writer._table.c.device_token==self._devicetoken))


messages_writer = 

Re: [sqlalchemy] sqlalchemy only has 1 or 2 tcp connections to mysql server

2014-03-13 Thread Ni Wesley
No, I don't use gevent anymore; everything goes through the engine. You can 
see I have a direct_engine=True flag to control that.


On Friday, March 14, 2014 at 12:22:09 PM UTC+8, Michael Bayer wrote:


 On Mar 13, 2014, at 11:51 PM, Ni Wesley nis...@gmail.com wrote: 

  Hi all, 
 I hit a problem that cannot use the connection pool effectively. 
  
  messages_writer = DBWriter('messages') 
  connection_writer = DBWriter('sessions_details') 
  messages_details_writer = DBWriter('messages_details') 
  
  
  In other place, I call the connection_writer to do rapid db operations. 
  For example, keep operatons rapidly. 
  
  And, I watch from the client by netstat -anpto|grep mysql_server_ip|wc 
 -l 
  
  I see the number is always 1 or 2. 

 the code you sent does not do anything concurrently?  it only has code 
 referring to “gevent”, but that code is all commented out. 

 if you’d like to see a connection pool open more than two connections, run 
 this program: 

 from sqlalchemy import create_engine 

 e = create_engine("mysql://scott:tiger@localhost/test") 

 conn1 = e.connect() 
 conn2 = e.connect() 
 conn3 = e.connect() 

 raw_input('take a look at your netstat now') 


 run your netstat in a separate window at the prompt, and you’ll see three 
 connections open.   

 Add this: 

 conn4 = e.connect() 

 then you’ll see four.   Simple right? 

 that’s all I can give you.   There is nothing mysterious going on, so as 
 to why your program isn’t concurrent, I have no idea, you aren’t really 
 sharing the actual program here (and you don’t have to, I don’t have time 
 to debug a whole gevent program).   

 But SQLAlchemy likely has nothing to do with the issue.  If it does, you’d 
 need to boil it down to a simple test case that any of us can run which 
 confirms a specific issue in SQLAlchemy.   Maybe QueuePool locks in some 
 funny way with gevent (noone has reported that, but it does have some 
 mutexes in there on checkout and I’ve no idea what mutexes do when you’re 
 using gevent).Try using NullPool instead then.   That’s about it.  Or 
 just use a straight MySQL connection and cursor, this program is so 
 simplistic running just three SQL statements, I think you’ll find if you 
 run it using no SQLAlchemy at all, it will do exactly the same thing with 
 the same concurrency/performance issues. 
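
 Trying NullPool is a one-line change; this sketch uses in-memory SQLite, and with the real MySQL URL the poolclass argument is identical:

```python
# NullPool opens and closes a fresh DBAPI connection for every checkout,
# which takes the QueuePool mutexes out of the picture at the cost of
# reconnect overhead. SQLite here is a stand-in for the MySQL URL.
from sqlalchemy import create_engine, text
from sqlalchemy.pool import NullPool

engine = create_engine("sqlite://", poolclass=NullPool)

with engine.connect() as conn:
    value = conn.execute(text("SELECT 1")).scalar()
```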


  Actually, the other script that is calling the db operations is slow; why 
 aren't the tcp connections increased to improve the performance? 

 As I said I’ve not used gevent very much, maybe gevent has a community 
 that can be consulted also. 






[sqlalchemy] sql expression performance is bad ?

2014-03-12 Thread Ni Wesley
Hi guys,
   I hit a problem when using sqlalchemy to operate on mysql.

First, let me describe my code structure.

I have a web service that receives http requests; it then sends tasks to a 
tcp server via zeromq push/pull, and the tcp server pulls them and does 
push work to the cell phones.

The problem is that the tcp server's pushes to the cell phones are too 
slow... and I finally found that the bottleneck is sqlalchemy operating on 
mysql.

Without mysql query/insert/update, 1000 requests take 1.5 seconds to 
handle; with the db operations, they take about 100 seconds...

So, here paste my sqlalchemy code:

engine = create_engine(db_url, pool_size=0, max_overflow=200, echo=engine_echo, pool_recycle=3600)
session = scoped_session(sessionmaker(bind=engine))
metadata = MetaData(bind=engine)

class DBWriter(object):

def __init__(self,tablename):

self.tablename = tablename
try:
self._table=Table(tablename, metadata, autoload=True)
except Exception,e:
exctrace('db','1','DBWriter init failed',dba_logger,'DBWriter init failed','Exception when DBWriter initing table:%s' % str(e))
#dba_logger.log(40,'Exception when DBWriter initing table:%s' % str(e))

def dbwrite(self,**kwargs):

"""
Used to insert exception info into database.

Params:
module : module name, indicating who raises the exception, e.g. android, ios, psg, adpns, db, etc.
type : exception type, 0 means service level while 1 is system level.
message : exception description, length limited to 256 bytes
"""

try:
session = scoped_session(sessionmaker(bind=engine))
i=self._table.insert().values(**kwargs) 

session.execute(i)
session.commit()
session.close()
except Exception,e:
#dba_logger.log(40,'Exception when dbwriter:%s' % str(e))
#dba_logger.log(20,'Exception detail:%s' % str(kwargs))
exctrace('db','1','Error happened when writing db',dba_logger,'Exception when dbwriter:%s' % str(e),'Exception detail:%s' % str(kwargs))
session.rollback()
session.close()

def dbupdate(self,whereclause,**kwargs):

"""
Used to update a row in the database.

Params:
module : module name, indicating who raises the exception, e.g. android, ios, psg, adpns, db, etc.
type : exception type, 0 means service level while 1 is system level.
message : exception description, length limited to 256 bytes
"""

try:
session = scoped_session(sessionmaker(bind=engine))
i=self._table.update().values(**kwargs).where(whereclause) 

session.execute(i)
session.commit()
session.close()
except Exception,e:
#dba_logger.log(40,'Exception when dbwriter:%s' % str(e))
#dba_logger.log(20,'Exception detail:%s' % str(kwargs))
exctrace('db','1','Error happened when updating db',dba_logger,'Exception when dbupdate:%s' % str(e),'Exception detail:%s' % str(kwargs))
session.rollback()
session.close()

def dbquery(self,whereclause):
try:
session = scoped_session(sessionmaker(bind=engine))
i=self._table.select().where(whereclause) 

res = session.execute(i).fetchall()  # fetch before closing so the rows survive session.close()
session.close()
return res
except Exception,e:
#dba_logger.log(40,'Exception when dbwriter:%s' % str(e))
#dba_logger.log(20,'Exception detail:%s' % str(kwargs))
exctrace('db','1','Error happened when querying db',dba_logger,'Exception when dbquery:%s' % str(e),'Exception detail:%s' % str(whereclause))
#session.rollback()
session.close()
#res = 
session.execute(connection_writer._table.select().where(connection_writer._table.c.app_key==self.app_key).where(connection_writer._table.c.device_token==self._devicetoken))


messages_writer = DBWriter('messages')
connection_writer = DBWriter('sessions_details')
messages_details_writer = DBWriter('messages_details')

So, any place that needs db access, for example query/update/insert on the 
table 'sessions_details', imports connection_writer and uses its 
dbwrite/dbupdate/dbquery methods.

So, does anybody have a suggestion to improve the performance?

Thanks.
Wesley



Re: [sqlalchemy] sql expression performance is bad ?

2014-03-12 Thread Ni Wesley
I am sorry...
The reason why I close the session every time is that, in some docs I 
googled, people say the engine pool fills up because some sessions are not 
closed...

I just realized that I can try using engine.execute directly, instead of a 
session...

I will look at the link you pasted later.

Thanks.
Wesley


On Wednesday, March 12, 2014 at 9:35:08 PM UTC+8, Michael Bayer wrote:


 On Mar 12, 2014, at 8:32 AM, Ni Wesley nis...@gmail.com wrote: 

  Hi guys, 
 I hit a problem when using sqlalchemy operating mysql. 
  
  First, let me clarify my code frames. 
  
  I have a web service to receive http requests, then, it send task to a 
 tcp server via zeromq pull/push mode, tcp server pull and do some push work 
 to cell phone. 
  
  I hit a problem that, tcp server pushing to cell phone is too slow...and 
 finally I find the bottleneck is sqlalchemy operating on mysql. 
  
  if without query/insert/update mysql, for 1000 requests, takes 1.5 
 seconds to handle all, but if with db operation, takes about 100 seconds... 
  
  So, here paste my sqlalchemy code: 
  
  engine = create_engine(db_url, 
 pool_size=0,max_overflow=200,echo=engine_echo,pool_recycle=3600) 
  session = scoped_session(sessionmaker(bind=engine)) 
  metadata = MetaData(bind=engine) 
  
  def dbwrite(self,**kwargs): 
   
  Used to insert exception info into database. 
  
  Params: 
  module : module name, indicating who raises the exception, 
 e.g. android,ios,psg,adpns,db .etc 
  type : exception type, 0 means service level while 1 is 
 system level. 
  message : exception description,length limit to 256 bytes 
   
  try: 
  session = scoped_session(sessionmaker(bind=engine)) 
  i=self._table.insert().values(**kwargs) 
  
  session.execute(i) 
  session.commit() 
  session.close() 
  except Exception,e: 
  #dba_logger.log(40,'Exception when dbwriter:%s' % str(e)) 
  #dba_logger.log(20,'Exception detail:%s' % str(kwargs)) 
  exctrace('db','1','Error happened when writing 
 db',dba_logger,'Exception when dbwriter:%s' % str(e),'Exception detail:%s' 
 % str(kwargs)) 
  session.rollback() 
  session.close() 


 I’m not thrilled about the pattern here where you’re opening/closing new 
 Sessions for every request, if you’re just using Core you can stick with 
 engine.connect() at least which will do less work, though the overhead of a 
 Session used just for execute() shouldn’t be terrible. 

 for deep performance tuning you first need to look at queries and how fast 
 they are and then beyond that look at python profiling.An introduction 
 to all these techniques is here: 
 http://stackoverflow.com/questions/1171166/how-can-i-profile-a-sqlalchemy-powered-application/1175677#1175677
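
A minimal version of the profiling approach that answer describes, with a sqlite3 workload standing in for the real MySQL code path:

```python
# Run the DB-heavy function under cProfile and inspect cumulative times.
# workload() and its sqlite3 body are illustrative stand-ins.
import cProfile
import io
import pstats
import sqlite3

def workload():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE t (n INTEGER)")
    for i in range(500):
        conn.execute("INSERT INTO t (n) VALUES (?)", (i,))
    conn.commit()
    conn.close()

profiler = cProfile.Profile()
profiler.enable()
workload()
profiler.disable()

out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(10)
report = out.getvalue()
print(report)
```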
  






Re: [sqlalchemy] sql expression performance is bad ?

2014-03-12 Thread Ni Wesley
I just gave it a try.

I moved all operations on the session directly to engine.execute.

Almost the same performance.

How can I improve? :-(
 

On Wednesday, March 12, 2014 at 9:35:08 PM UTC+8, Michael Bayer wrote:


 On Mar 12, 2014, at 8:32 AM, Ni Wesley nis...@gmail.com wrote: 

  Hi guys, 
 I hit a problem when using sqlalchemy operating mysql. 
  
  First, let me clarify my code frames. 
  
  I have a web service to receive http requests, then, it send task to a 
 tcp server via zeromq pull/push mode, tcp server pull and do some push work 
 to cell phone. 
  
  I hit a problem that, tcp server pushing to cell phone is too slow...and 
 finally I find the bottleneck is sqlalchemy operating on mysql. 
  
  if without query/insert/update mysql, for 1000 requests, takes 1.5 
 seconds to handle all, but if with db operation, takes about 100 seconds... 
  
  So, here paste my sqlalchemy code: 
  
  engine = create_engine(db_url, 
 pool_size=0,max_overflow=200,echo=engine_echo,pool_recycle=3600) 
  session = scoped_session(sessionmaker(bind=engine)) 
  metadata = MetaData(bind=engine) 
  
  def dbwrite(self,**kwargs): 
   
  Used to insert exception info into database. 
  
  Params: 
  module : module name, indicating who raises the exception, 
 e.g. android,ios,psg,adpns,db .etc 
  type : exception type, 0 means service level while 1 is 
 system level. 
  message : exception description,length limit to 256 bytes 
   
  try: 
  session = scoped_session(sessionmaker(bind=engine)) 
  i=self._table.insert().values(**kwargs) 
  
  session.execute(i) 
  session.commit() 
  session.close() 
  except Exception,e: 
  #dba_logger.log(40,'Exception when dbwriter:%s' % str(e)) 
  #dba_logger.log(20,'Exception detail:%s' % str(kwargs)) 
  exctrace('db','1','Error happened when writing 
 db',dba_logger,'Exception when dbwriter:%s' % str(e),'Exception detail:%s' 
 % str(kwargs)) 
  session.rollback() 
  session.close() 


 I’m not thrilled about the pattern here where you’re opening/closing new 
 Sessions for every request, if you’re just using Core you can stick with 
 engine.connect() at least which will do less work, though the overhead of a 
 Session used just for execute() shouldn’t be terrible. 

 for deep performance tuning you first need to look at queries and how fast 
 they are and then beyond that look at python profiling.An introduction 
 to all these techniques is here: 
 http://stackoverflow.com/questions/1171166/how-can-i-profile-a-sqlalchemy-powered-application/1175677#1175677
  






Re: [sqlalchemy] sql expression performance is bad ?

2014-03-12 Thread Ni Wesley
I have not used cProfile before, but I enabled engine echo and some logging; 
here is the output:
INFO:devsrv:httpMQ started...
DEBUG:devsrv:Got connection from ('10.0.3.96', 62560)
One device connected:('10.0.3.96', 62560)
Connection 
object,appkey:e32c72bab0e4d8e225318f98,devicetoken:1,response:{msg:**#*,id:1,flag:*#*},address:('10.0.3.96',
 
62560)
current clients number is : 1
DEBUG:devsrv:one entry start point : 2014-03-12 22:25:20.432274
2014-03-12 22:25:20,434 INFO sqlalchemy.engine.base.Engine BEGIN (implicit)
INFO:sqlalchemy.engine.base.Engine:BEGIN (implicit)
2014-03-12 22:25:20,435 INFO sqlalchemy.engine.base.Engine SELECT 
sessions_details.device_token, sessions_details.app_key, 
sessions_details.create_time, sessions_details.end_time, 
sessions_details.session_status 
FROM sessions_details 
WHERE sessions_details.app_key = %s AND sessions_details.device_token = %s
INFO:sqlalchemy.engine.base.Engine:SELECT sessions_details.device_token, 
sessions_details.app_key, sessions_details.create_time, 
sessions_details.end_time, sessions_details.session_status 
FROM sessions_details 
WHERE sessions_details.app_key = %s AND sessions_details.device_token = %s
2014-03-12 22:25:20,435 INFO sqlalchemy.engine.base.Engine 
(u'e32c72bab0e4d8e225318f98', u'1')
INFO:sqlalchemy.engine.base.Engine:(u'e32c72bab0e4d8e225318f98', u'1')
2014-03-12 22:25:20,448 INFO sqlalchemy.engine.base.Engine BEGIN (implicit)
INFO:sqlalchemy.engine.base.Engine:BEGIN (implicit)
2014-03-12 22:25:20,450 INFO sqlalchemy.engine.base.Engine UPDATE 
sessions_details SET create_time=%s, end_time=%s, session_status=%s WHERE 
sessions_details.app_key = %s AND sessions_details.device_token = %s
INFO:sqlalchemy.engine.base.Engine:UPDATE sessions_details SET 
create_time=%s, end_time=%s, session_status=%s WHERE 
sessions_details.app_key = %s AND sessions_details.device_token = %s
2014-03-12 22:25:20,450 INFO sqlalchemy.engine.base.Engine 
(datetime.datetime(2014, 3, 12, 22, 25, 20, 439990), 
datetime.datetime(2014, 3, 12, 22, 25, 20, 439990), '0', 
u'e32c72bab0e4d8e225318f98', u'1')
INFO:sqlalchemy.engine.base.Engine:(datetime.datetime(2014, 3, 12, 22, 25, 
20, 439990), datetime.datetime(2014, 3, 12, 22, 25, 20, 439990), '0', 
u'e32c72bab0e4d8e225318f98', u'1')
2014-03-12 22:25:20,453 INFO sqlalchemy.engine.base.Engine COMMIT
INFO:sqlalchemy.engine.base.Engine:COMMIT
DEBUG:devsrv:one entry end point : 2014-03-12 22:25:20.510025

This is just one device connection operation.
So it seems the commit operation takes 60 ms.

Even so, if every device takes 80 ms, 1000 should take 8 seconds if 
everything goes well; but actually, 1000 devices take 100 seconds.

BTW, do you have a Google Talk account? Mine is nisp...@gmail.com; could I 
invite you to my friend list?
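
Since cProfile came up above, here is a minimal profiling sketch; 
do_one_request is a hypothetical stand-in for the real per-device 
select/update handler:

```python
import cProfile
import io
import pstats

def do_one_request():
    # stand-in for the real per-device select/update work
    return sum(i * i for i in range(1000))

profiler = cProfile.Profile()
profiler.enable()
for _ in range(100):
    do_one_request()
profiler.disable()

# sort by cumulative time to see where the time actually goes:
# SQLAlchemy, the DBAPI driver, or waiting on the server
out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(10)
report = out.getvalue()
print("do_one_request" in report)
```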


On Thursday, March 13, 2014 at 10:18:01 AM UTC+8, Michael Bayer wrote:

 how's your SQL debug output and your profiling going?


 On Mar 12, 2014, at 9:56 PM, Ni Wesley nis...@gmail.com wrote:

 I just gave it a try.

 I moved all operations from the session directly to engine.execute.

 Almost the same performance.

 How to improve? :-(
  

 On Wednesday, March 12, 2014 at 9:35:08 PM UTC+8, Michael Bayer wrote:


 On Mar 12, 2014, at 8:32 AM, Ni Wesley nis...@gmail.com wrote: 

  Hi guys, 
  I hit a problem when using sqlalchemy operating on mysql. 
 
  First, let me clarify my code structure. 
 
  I have a web service that receives http requests and then sends tasks to a 
 tcp server via zeromq pull/push mode; the tcp server pulls and does push 
 work to cell phones. 
 
  I hit the problem that the tcp server pushing to cell phones is too 
 slow... and finally I found the bottleneck is sqlalchemy operating on mysql. 
 
  Without mysql query/insert/update, 1000 requests take 1.5 seconds to 
 handle; with db operations, they take about 100 seconds... 
  
  
  So, here is my sqlalchemy code: 

  engine = create_engine(db_url, 
      pool_size=0, max_overflow=200, echo=engine_echo, pool_recycle=3600) 
  session = scoped_session(sessionmaker(bind=engine)) 
  metadata = MetaData(bind=engine) 

  def dbwrite(self, **kwargs): 
      """ 
      Used to insert exception info into database. 

      Params: 
          module : module name, indicating who raises the exception, 
                   e.g. android,ios,psg,adpns,db .etc 
          type : exception type, 0 means service level while 1 is 
                 system level. 
          message : exception description, length limit to 256 bytes 
      """ 
      try: 
          session = scoped_session(sessionmaker(bind=engine)) 
          i = self._table.insert().values(**kwargs) 
          session.execute(i) 
          session.commit() 
          session.close() 
      except Exception, e: 
          #dba_logger.log(40,'Exception when dbwriter:%s' % str(e)) 
          #dba_logger.log(20,'Exception detail:%s' % str(kwargs)) 
          exctrace('db','1','Error happened when writing db', 
                   dba_logger, 'Exception when dbwriter:%s' % str(e), 
                   'Exception detail:%s

[sqlalchemy] Re: sql expression performance is bad ?

2014-03-12 Thread Ni Wesley
To rule out network problems, I tested within a VLAN.

Here is the engine output and my logging:
DEBUG:devsrv:Got connection from ('192.168.1.7', 63181)
One device connected:('192.168.1.7', 63181)
Connection 
object,appkey:e32c72bab0e4d8e225318f98,devicetoken:1,response:{msg:**#*,id:1,flag:*#*},address:('192.168.1.7',
 
63181)
current clients number is : 1
DEBUG:devsrv:one entry start point : 2014-03-13 12:01:27.322528
2014-03-13 12:01:27,325 INFO sqlalchemy.engine.base.Engine SELECT 
sessions_details.device_token, sessions_details.app_key, 
sessions_details.create_time, sessions_details.end_time, 
sessions_details.session_status 
FROM sessions_details 
WHERE sessions_details.app_key = %s AND sessions_details.device_token = %s
INFO:sqlalchemy.engine.base.Engine:SELECT sessions_details.device_token, 
sessions_details.app_key, sessions_details.create_time, 
sessions_details.end_time, sessions_details.session_status 
FROM sessions_details 
WHERE sessions_details.app_key = %s AND sessions_details.device_token = %s
2014-03-13 12:01:27,325 INFO sqlalchemy.engine.base.Engine 
(u'e32c72bab0e4d8e225318f98', u'1')
INFO:sqlalchemy.engine.base.Engine:(u'e32c72bab0e4d8e225318f98', u'1')
2014-03-13 12:01:27,331 INFO sqlalchemy.engine.base.Engine UPDATE 
sessions_details SET create_time=%s, end_time=%s, session_status=%s WHERE 
sessions_details.app_key = %s AND sessions_details.device_token = %s
INFO:sqlalchemy.engine.base.Engine:UPDATE sessions_details SET 
create_time=%s, end_time=%s, session_status=%s WHERE 
sessions_details.app_key = %s AND sessions_details.device_token = %s
2014-03-13 12:01:27,331 INFO sqlalchemy.engine.base.Engine 
(datetime.datetime(2014, 3, 13, 12, 1, 27, 330055), datetime.datetime(2014, 
3, 13, 12, 1, 27, 330055), '0', u'e32c72bab0e4d8e225318f98', u'1')
INFO:sqlalchemy.engine.base.Engine:(datetime.datetime(2014, 3, 13, 12, 1, 
27, 330055), datetime.datetime(2014, 3, 13, 12, 1, 27, 330055), '0', 
u'e32c72bab0e4d8e225318f98', u'1')
2014-03-13 12:01:27,334 INFO sqlalchemy.engine.base.Engine COMMIT
INFO:sqlalchemy.engine.base.Engine:COMMIT
DEBUG:devsrv:one entry end point : 2014-03-13 12:01:27.373776

We can see that, from the start point to the end point, it takes 50 ms, 
including 40 ms for the engine commit.

For 1000 entries, it takes 40 seconds. Here I have one question: while 1000 
connections are on-going, I watch the connections between the tcp server and 
the db server via the command watch -n 1 "netstat -anpto|grep dbserver_ip|wc -l", 
and I find the peak value is just 26. Actually, I gave 0 to the engine 
pool_size, so why does it reach only 26?

Is this related
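
On the pool_size question: if I read the QueuePool docs correctly, 
pool_size=0 means no upper bound for QueuePool, so the ceiling of ~26 
connections likely reflects how many workers run concurrently rather than a 
pool limit. One way to watch pool usage from inside the process instead of 
netstat, sketched with SQLite as a stand-in for the MySQL db_url:

```python
from sqlalchemy import create_engine
from sqlalchemy.pool import QueuePool

# QueuePool is forced here only so that status() shows pool counters;
# the real engine would use the MySQL db_url
engine = create_engine("sqlite:///:memory:", poolclass=QueuePool,
                       pool_size=5, max_overflow=10)
conn = engine.connect()
status = engine.pool.status()  # checked-in / checked-out / overflow counts
print(status)
conn.close()
```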



On Wednesday, March 12, 2014 at 8:32:53 PM UTC+8, Ni Wesley wrote:

 Hi guys,
    I hit a problem when using sqlalchemy operating on mysql.

 First, let me clarify my code structure.

 I have a web service that receives http requests and then sends tasks to a 
 tcp server via zeromq pull/push mode; the tcp server pulls and does push 
 work to cell phones.

 I hit the problem that the tcp server pushing to cell phones is too 
 slow... and finally I found the bottleneck is sqlalchemy operating on mysql.

 Without mysql query/insert/update, 1000 requests take 1.5 seconds to 
 handle; with db operations, they take about 100 seconds...

 So, here is my sqlalchemy code:

 engine = create_engine(db_url, 
     pool_size=0, max_overflow=200, echo=engine_echo, pool_recycle=3600)
 session = scoped_session(sessionmaker(bind=engine))
 metadata = MetaData(bind=engine)

 class DBWriter(object):

     def __init__(self, tablename):
         self.tablename = tablename
         try:
             self._table = Table(tablename, metadata, autoload=True)
         except Exception, e:
             exctrace('db','1','DBWriter init failed', dba_logger,
                      'DBWriter init failed',
                      'Exception when DBWriter initing table:%s' % str(e))
             #dba_logger.log(40,'Exception when DBWriter initing table:%s' % str(e))

     def dbwrite(self, **kwargs):
         """
         Used to insert exception info into database.

         Params:
             module : module name, indicating who raises the exception, 
                      e.g. android,ios,psg,adpns,db .etc
             type : exception type, 0 means service level while 1 is 
                    system level.
             message : exception description, length limit to 256 bytes
         """
         try:
             session = scoped_session(sessionmaker(bind=engine))
             i = self._table.insert().values(**kwargs)
             session.execute(i)
             session.commit()
             session.close()
         except Exception, e:
             #dba_logger.log(40,'Exception when dbwriter:%s' % str(e))
             #dba_logger.log(20,'Exception detail:%s' % str(kwargs))
             exctrace('db','1','Error happened when writing db',
                      dba_logger, 'Exception when dbwriter:%s' % str(e),
                      'Exception detail:%s' % str(kwargs))
             session.rollback()
             session.close()

     def dbupdate(self, whereclause, **kwargs):
         """
         Used to insert exception info
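
A hedged rework of the dbwrite() above: the per-call 
scoped_session(sessionmaker(...)) builds a brand-new session registry on 
every write, which is wasted work; running the insert through the engine 
keeps the same behavior with less overhead. SQLite and the exceptions table 
are stand-ins, and exctrace/dba_logger from the original are replaced by a 
plain print:

```python
from sqlalchemy import create_engine, MetaData, Table, text

engine = create_engine("sqlite://")  # stand-in for the real db_url
with engine.begin() as conn:  # pretend this table already exists in MySQL
    conn.execute(text("CREATE TABLE exceptions (id INTEGER PRIMARY KEY, "
                      "module VARCHAR(32), message VARCHAR(256))"))

metadata = MetaData()

class DBWriter(object):
    def __init__(self, tablename):
        # reflect the existing table once, at construction time
        self._table = Table(tablename, metadata, autoload_with=engine)

    def dbwrite(self, **kwargs):
        try:
            # engine.begin() commits on success and rolls back on error,
            # so no explicit session.rollback()/close() is needed
            with engine.begin() as conn:
                conn.execute(self._table.insert().values(**kwargs))
        except Exception as e:
            print("dbwrite failed: %s" % e)  # exctrace(...) in the original

writer = DBWriter("exceptions")
writer.dbwrite(module="db", message="boom")
with engine.connect() as conn:
    count = len(conn.execute(writer._table.select()).fetchall())
print(count)
```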

[sqlalchemy] Session commit block?

2014-03-01 Thread Ni Wesley
Hi all,
  I hit a problem these days where a sqlalchemy session execute sometimes hangs.
And if one db operation hangs, all the following operations are blocked...

Here is exception snippet:
ERROR:tornado.application:Uncaught exception, closing connection.
Traceback (most recent call last):
  File /root/workspace/Push Server/server/tornado/iostream.py, line 341, 
in wrapper
callback(*args)
  File /root/workspace/Push Server/server/tornado/stack_context.py, line 
331, in wrapped
raise_exc_info(exc)
  File /root/workspace/Push Server/server/tornado/stack_context.py, line 
302, in wrapped
ret = fn(*args, **kwargs)
  File deviceserver.py, line 241, in send_message
res = 
session.execute(connection_writer._table.select().where(connection_writer._table.c.app_key==self.app_key).where(connection_writer._table.c.device_token==self._devicetoken))
  File /root/workspace/Push Server/server/sqlalchemy/orm/scoping.py, line 
149, in do
return getattr(self.registry(), name)(*args, **kwargs)
  File /root/workspace/Push Server/server/sqlalchemy/orm/session.py, line 
978, in execute
clause, params or {})
  File /root/workspace/Push Server/server/sqlalchemy/engine/base.py, line 
717, in execute
return meth(self, multiparams, params)
  File /root/workspace/Push Server/server/sqlalchemy/sql/elements.py, 
line 317, in _execute_on_connection
return connection._execute_clauseelement(self, multiparams, params)
  File /root/workspace/Push Server/server/sqlalchemy/engine/base.py, line 
814, in _execute_clauseelement
compiled_sql, distilled_params
  File /root/workspace/Push Server/server/sqlalchemy/engine/base.py, line 
927, in _execute_context
context)
  File /root/workspace/Push Server/server/sqlalchemy/engine/base.py, line 
1076, in _handle_dbapi_exception
exc_info
  File /root/workspace/Push Server/server/sqlalchemy/util/compat.py, line 
185, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb)
  File /root/workspace/Push Server/server/sqlalchemy/engine/base.py, line 
920, in _execute_context
context)
  File /root/workspace/Push Server/server/sqlalchemy/engine/default.py, 
line 425, in do_execute
cursor.execute(statement, parameters)
  File /root/workspace/Push Server/server/pymysql/cursors.py, line 102, 
in execute
result = self._query(query)
  File /root/workspace/Push Server/server/pymysql/cursors.py, line 202, 
in _query
conn.query(q)
  File /root/workspace/Push Server/server/pymysql/connections.py, line 
729, in query
self._affected_rows = self._read_query_result(unbuffered=unbuffered)
  File /root/workspace/Push Server/server/pymysql/connections.py, line 
852, in _read_query_result
result.read()
  File /root/workspace/Push Server/server/pymysql/connections.py, line 
1055, in read
first_packet = self.connection._read_packet()
  File /root/workspace/Push Server/server/pymysql/connections.py, line 
820, in _read_packet
packet = packet_type(self)
  File /root/workspace/Push Server/server/pymysql/connections.py, line 
240, in __init__
self._recv_packet(connection)
  File /root/workspace/Push Server/server/pymysql/connections.py, line 
246, in _recv_packet
packet_header = connection._read_bytes(4)
  File /root/workspace/Push Server/server/pymysql/connections.py, line 
829, in _read_bytes
Lost connection to MySQL server during query (%r) % (e,))
OperationalError: (OperationalError) (2013, Lost connection to MySQL 
server during query (error(110, 'Connection timed out'))) 'SELECT 
sessions_details.device_token, sessions_details.app_key, 
sessions_details.create_time, sessions_details.end_time, 
sessions_details.session_status \nFROM sessions_details \nWHERE 
sessions_details.app_key = %s AND sessions_details.device_token = %s' 
(u'e32c72bab0e4d8e225318f98', u'b8066d12-f34d-4464-bff7-7ff6dcd2e196')

I don't know why this happens. Actually, the session blocked for about 5 
minutes, but the mysql server worked fine at that moment. Is 5 minutes 
special? Or is there any timeout setting to control session execution 
timeout?

Then I checked the code and found that, in my whole project, every db 
operation uses the same global session. Is this related? I cannot access the 
code now, but I raise the question here for discussion. I see some articles 
on the net saying to use a newly created session every time a db operation 
is required.

Does anybody have such experience? Should I use a newly created session 
every time?
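
A minimal sketch of the new-session-per-operation pattern being asked about, 
with the close guaranteed in a finally block (assuming a modern SQLAlchemy; 
with scoped_session, calling .remove() at the end of each request achieves 
the same effect):

```python
from contextlib import contextmanager

from sqlalchemy import create_engine, text
from sqlalchemy.orm import sessionmaker

engine = create_engine("sqlite://")  # stand-in for the real db_url
Session = sessionmaker(bind=engine)

@contextmanager
def session_scope():
    # one short-lived session per operation; always returned to the pool
    session = Session()
    try:
        yield session
        session.commit()
    except Exception:
        session.rollback()
        raise
    finally:
        session.close()

with session_scope() as s:
    value = s.execute(text("SELECT 1")).scalar()
print(value)
```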

Thanks.
Wesley



Re: [sqlalchemy] Session commit block?

2014-03-01 Thread Ni Wesley
Here is my code:

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker,scoped_session
from sqlalchemy.schema import MetaData

engine = create_engine(db_url, echo=engine_echo,pool_recycle=300)
session = scoped_session(sessionmaker(bind=engine))

So, for code blocks that need db operations, they all refer to the session 
above to do select, update, and insert.



Re: [sqlalchemy] Sqlalchemy ORM operates on database not created from sqlalchemy

2014-01-22 Thread Ni Wesley
I mean the sqlalchemy ORM, that is, using python class mapping to database 
tables.

I know the sqlalchemy core expression language works; it doesn't care how 
the database was created.

Wesley
On Jan 22, 2014 at 7:04 PM, Simon King si...@simonking.org.uk wrote:

 On Wed, Jan 22, 2014 at 3:41 AM, Ni Wesley nisp...@gmail.com wrote:
  Hi guys,
 I have a question here.
  Just as the title, I mean, for example:
 
  I am using mysql as the database engine.
  Now, I use sqlalchemy ORM to map python class to database table.
 
  My question is here:
   Actually, there are dedicated teams doing the database design and table
   creation work.
   Which means that I won't sync table structures from sqlalchemy with
  something like metadata.create_all().
  
   So, if I keep my model classes synced with the database table structure,
  can I still use ORM operations, say, session.query(Class), session.add,
   session.commit, etc.?
  
 

 Yes, absolutely. SQLAlchemy doesn't care how the database was created.
 You should be able to use all the normal operations on a database
 created by some other means.

 Hope that helps,

 Simon

 --
 You received this message because you are subscribed to a topic in the
 Google Groups sqlalchemy group.
 To unsubscribe from this topic, visit
 https://groups.google.com/d/topic/sqlalchemy/l94LE3Am2f8/unsubscribe.
 To unsubscribe from this group and all its topics, send an email to
 sqlalchemy+unsubscr...@googlegroups.com.
 To post to this group, send email to sqlalchemy@googlegroups.com.
 Visit this group at http://groups.google.com/group/sqlalchemy.
 For more options, visit https://groups.google.com/groups/opt_out.




[sqlalchemy] Sqlalchemy ORM operates on database not created from sqlalchemy

2014-01-21 Thread Ni Wesley
Hi guys,
   I have a question here.
Just as the title says, I mean, for example:

I am using mysql as the database engine.
Now, I use the sqlalchemy ORM to map python classes to database tables.

My question is this:
Actually, there are dedicated teams doing the database design and table 
creation work.
Which means that I won't sync table structures from sqlalchemy with 
something like metadata.create_all().

So, if I keep my model classes synced with the database table structure, can 
I still use ORM operations, say, session.query(Class), session.add, 
session.commit, etc.?
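
A sketch of the affirmative answer, assuming a recent SQLAlchemy: simulate a 
table created outside SQLAlchemy with raw DDL, then map an ORM class onto it 
by reflecting the table into __table__. The device table and columns here 
are hypothetical:

```python
from sqlalchemy import create_engine, Table, text
from sqlalchemy.orm import Session, declarative_base

engine = create_engine("sqlite://")
with engine.begin() as conn:  # pretend this DDL was run by the DBA team
    conn.execute(text("CREATE TABLE device ("
                      "id INTEGER PRIMARY KEY, devicetoken VARCHAR(64))"))

Base = declarative_base()

class Device(Base):
    # reflect the existing table instead of declaring columns by hand
    __table__ = Table("device", Base.metadata, autoload_with=engine)

with Session(engine) as session:
    session.add(Device(devicetoken="323"))
    session.commit()
    found = session.query(Device).filter_by(devicetoken="323").first()
    token = found.devicetoken
print(token)
```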

Thanks.
Wesley





[sqlalchemy] Re: Mysql server has gone away

2014-01-13 Thread Ni Wesley
I just found a solution: ping the database server with select 1 at a given 
interval.
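
That is essentially the classic pessimistic-disconnect recipe, sketched here 
as a pool checkout listener: ping with SELECT 1 whenever a connection is 
checked out, and discard dead ones so the pool opens fresh connections 
(newer SQLAlchemy versions bake this in as create_engine(..., 
pool_pre_ping=True)):

```python
from sqlalchemy import create_engine, event, exc, text
from sqlalchemy.pool import Pool

@event.listens_for(Pool, "checkout")
def ping_connection(dbapi_conn, connection_record, connection_proxy):
    cursor = dbapi_conn.cursor()
    try:
        cursor.execute("SELECT 1")
    except Exception:
        # tell the pool this connection is dead; it retries with a new one
        raise exc.DisconnectionError()
    cursor.close()

engine = create_engine("sqlite://")  # stand-in for the real MySQL db_url
with engine.connect() as conn:
    ok = conn.execute(text("SELECT 1")).scalar()
print(ok)
```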

Thanks all you guys.



[sqlalchemy] Mysql server has gone away

2014-01-12 Thread Ni Wesley
Hi all,
  I hit a problem when using sqlalchemy with mysql.
I found the problem is mysql's automatic closing of connections that have 
been inactive for over 8 hours.

And I googled a lot and found the pool_recycle option for create_engine.
But it seems it does not work.

Here is my code:
engine = create_engine(db_url, echo=engine_echo, pool_recycle=30)
session = scoped_session(sessionmaker(bind=engine))

And in mysql, I set wait_timeout to 60 secs for testing.

After this, I still got this exception:
2014-01-12 22:32:45,325 INFO sqlalchemy.engine.base.Engine SELECT device.id 
AS device_id, device.devicetoken AS device_devicetoken 
FROM device 
WHERE device.devicetoken = %s 
 LIMIT %s
INFO:sqlalchemy.engine.base.Engine:SELECT device.id AS device_id, 
device.devicetoken AS device_devicetoken 
FROM device 
WHERE device.devicetoken = %s 
 LIMIT %s
2014-01-12 22:32:45,326 INFO sqlalchemy.engine.base.Engine ('323', 1)
INFO:sqlalchemy.engine.base.Engine:('323', 1)
ERROR:tornado.application:Uncaught exception, closing connection.
Traceback (most recent call last):
  File /usr/lib/python2.6/site-packages/tornado/iostream.py, line 341, in 
wrapper
callback(*args)
  File /usr/lib/python2.6/site-packages/tornado/stack_context.py, line 
331, in wrapped
raise_exc_info(exc)
  File /usr/lib/python2.6/site-packages/tornado/stack_context.py, line 
302, in wrapped
ret = fn(*args, **kwargs)
  File deviceserver.py, line 182, in send_message
tmp_usr = 
session.query(Device).filter_by(devicetoken=self._devicetoken).first()
  File 
/usr/lib/python2.6/site-packages/SQLAlchemy-0.8.3-py2.6-linux-x86_64.egg/sqlalchemy/orm/query.py,
 
line 2282, in first
ret = list(self[0:1])
  File 
/usr/lib/python2.6/site-packages/SQLAlchemy-0.8.3-py2.6-linux-x86_64.egg/sqlalchemy/orm/query.py,
 
line 2149, in __getitem__
return list(res)
  File 
/usr/lib/python2.6/site-packages/SQLAlchemy-0.8.3-py2.6-linux-x86_64.egg/sqlalchemy/orm/query.py,
 
line 2353, in __iter__
return self._execute_and_instances(context)
  File 
/usr/lib/python2.6/site-packages/SQLAlchemy-0.8.3-py2.6-linux-x86_64.egg/sqlalchemy/orm/query.py,
 
line 2368, in _execute_and_instances
result = conn.execute(querycontext.statement, self._params)
  File 
/usr/lib/python2.6/site-packages/SQLAlchemy-0.8.3-py2.6-linux-x86_64.egg/sqlalchemy/engine/base.py,
 
line 662, in execute
params)
  File 
/usr/lib/python2.6/site-packages/SQLAlchemy-0.8.3-py2.6-linux-x86_64.egg/sqlalchemy/engine/base.py,
 
line 761, in _execute_clauseelement
compiled_sql, distilled_params
  File 
/usr/lib/python2.6/site-packages/SQLAlchemy-0.8.3-py2.6-linux-x86_64.egg/sqlalchemy/engine/base.py,
 
line 874, in _execute_context
context)
  File 
/usr/lib/python2.6/site-packages/SQLAlchemy-0.8.3-py2.6-linux-x86_64.egg/sqlalchemy/engine/base.py,
 
line 1024, in _handle_dbapi_exception
exc_info
  File 
/usr/lib/python2.6/site-packages/SQLAlchemy-0.8.3-py2.6-linux-x86_64.egg/sqlalchemy/util/compat.py,
 
line 196, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb)
  File 
/usr/lib/python2.6/site-packages/SQLAlchemy-0.8.3-py2.6-linux-x86_64.egg/sqlalchemy/engine/base.py,
 
line 867, in _execute_context
context)
  File 
/usr/lib/python2.6/site-packages/SQLAlchemy-0.8.3-py2.6-linux-x86_64.egg/sqlalchemy/engine/default.py,
 
line 324, in do_execute
cursor.execute(statement, parameters)
  File /usr/lib64/python2.6/site-packages/MySQLdb/cursors.py, line 201, 
in execute
self.errorhandler(self, exc, value)
  File /usr/lib64/python2.6/site-packages/MySQLdb/connections.py, line 
36, in defaulterrorhandler
raise errorclass, errorvalue
OperationalError: (OperationalError) (2006, 'MySQL server has gone away') 
'SELECT device.id AS device_id, device.devicetoken AS device_devicetoken 
\nFROM device \nWHERE device.devicetoken = %s \n LIMIT %s' ('323', 1)
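
A hedged note on why pool_recycle=30 may appear not to work here: the 
recycle check runs only at checkout time, so a connection idle inside the 
pool past the window is replaced on its next checkout, but a connection held 
checked out by a long-lived global session is not protected. The setting 
itself, sketched (the _recycle attribute is internal and read here only to 
show the value was applied):

```python
from sqlalchemy import create_engine

# keep the recycle window well under MySQL's wait_timeout
# (wait_timeout=60 in the test above -> pool_recycle=30)
engine = create_engine("sqlite://", pool_recycle=30)
print(engine.pool._recycle)
```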
