[sqlalchemy] Re: filtering with an association_proxy

2009-11-18 Thread scott
Great, thanks, I found a ticket that was already open for this and
added to it.

http://www.sqlalchemy.org/trac/ticket/1372


On Nov 17, 7:23 am, Michael Bayer mike...@zzzcomputing.com wrote:
 scott wrote:

  Is there a way to filter a query involving an association_proxy?

  For example, say I have a one to many relation between Pages and Tags,
  and an association_proxy like this to let me represent tags as a list
  of strings.

  tag_objects = orm.relation('Tag')
  tags = association_proxy('tag_objects', 'name')

  Now I want to find all the pages tagged with 'foo'. As far as I know I
  have to break the abstraction barrier provided by the
  association_proxy and do something like:

  sess.query(Page).filter(Page.tag_objects.any(name='foo'))

  Is there any mechanism for doing something like this instead?

  sess.query(Page).filter(Page.tags.any('foo'))

  If there's nothing similar already existing, is this functionality
  desirable? It seems like it could be really useful for clarifying
  complex filtering, especially involving many to many relations with
  association objects. I wrote an example patch implementing this
  for .any() and .has(), with tests. I'm happy to post a ticket and
  flesh it out more if it seems reasonable.

 http://web.mit.edu/storborg/Public/better-associationproxy-filtering

 we absolutely would want associationproxy to provide the standard
 comparison functions for free - right now it's a do-it-yourself thing.   If
 you want to work on that, that would be great!





  Thanks,
  Scott
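The mapping under discussion, as a minimal self-contained sketch (in-memory SQLite, modern SQLAlchemy 1.4+). The Page/Tag names come from the thread; the proxied filtering at the end is the feature this thread proposed, which the released API spells `contains()` for a proxy of scalar values:

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.ext.associationproxy import association_proxy
from sqlalchemy.orm import declarative_base, relationship, sessionmaker

Base = declarative_base()

class Tag(Base):
    __tablename__ = "tags"
    id = Column(Integer, primary_key=True)
    page_id = Column(Integer, ForeignKey("pages.id"))
    name = Column(String)

    def __init__(self, name):
        self.name = name

class Page(Base):
    __tablename__ = "pages"
    id = Column(Integer, primary_key=True)
    title = Column(String)
    tag_objects = relationship("Tag")
    # represent the page's tags as a plain list of strings
    tags = association_proxy("tag_objects", "name")

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
sess = sessionmaker(bind=engine)()

p1, p2 = Page(title="a"), Page(title="b")
p1.tags.append("foo")   # the proxy creates Tag("foo") behind the scenes
p2.tags.append("bar")
sess.add_all([p1, p2])
sess.commit()

# breaking the abstraction: filter through the underlying relation
pages = sess.query(Page).filter(Page.tag_objects.any(name="foo")).all()
# filtering through the proxy itself (what this thread asked for)
pages2 = sess.query(Page).filter(Page.tags.contains("foo")).all()
```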

--

You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=.




[sqlalchemy] mysql + query.execute memory usage

2009-11-18 Thread James Casbon
Hi,

I'm using sqlalchemy to generate a query that returns lots of data.
The trouble is, when calling query.execute() instead of returning
the resultproxy straight away and allowing me to fetch data as I would
like, query.execute blocks and the memory usage grows to gigabytes
before getting killed for too much memory.  This looks to me like
execute is prefetching the entire result.

Is there any way to prevent query.execute loading the entire result?

Thanks,
James





Re: [sqlalchemy] mysql + query.execute memory usage

2009-11-18 Thread Michael Bayer

On Nov 18, 2009, at 9:57 AM, James Casbon wrote:

 Hi,
 
 I'm using sqlalchemy to generate a query that returns lots of data.
 The trouble is, when calling query.execute() instead of returning
 the resultproxy straight away and allowing me to fetch data as I would
 like, query.execute blocks and the memory usage grows to gigabytes
 before getting killed for too much memory.  This looks to me like
 execute is prefetching the entire result.
 
 Is there any way to prevent query.execute loading the entire result?

for ORM look into using yield_per() or applying limit()/offset().  without the 
ORM no rows are buffered on the SQLA side.  Note however that MySQLdb is likely 
prefetching the entire result set in any case (this is psycopg2's behavior but 
haven't confirmed for MySQLdb).
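A sketch of pulling rows in batches on the Core side, using fetchmany() on the result object (shown here against in-memory SQLite as a stand-in; whether rows truly stream from the server depends on the DBAPI driver, and MySQLdb's default cursor buffers the whole result client-side regardless):

```python
from sqlalchemy import create_engine, text

engine = create_engine("sqlite://")
with engine.connect() as conn:
    conn.execute(text("CREATE TABLE t (x INTEGER)"))
    conn.execute(text("INSERT INTO t (x) VALUES (1), (2), (3), (4), (5)"))
    result = conn.execute(text("SELECT x FROM t ORDER BY x"))
    total = 0
    while True:
        chunk = result.fetchmany(2)   # pull a small batch from the cursor
        if not chunk:                  # empty list means we're done
            break
        total += sum(row[0] for row in chunk)
```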





[sqlalchemy] Re: mysql + query.execute memory usage

2009-11-18 Thread James Casbon


On Nov 18, 3:01 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 On Nov 18, 2009, at 9:57 AM, James Casbon wrote:

  Hi,

  I'm using sqlalchemy to generate a query that returns lots of data.
  The trouble is, when calling query.execute() instead of returning
  the resultproxy straight away and allowing me to fetch data as I would
  like, query.execute blocks and the memory usage grows to gigabytes
  before getting killed for too much memory.  This looks to me like
  execute is prefetching the entire result.

  Is there any way to prevent query.execute loading the entire result?

 for ORM look into using yield_per() or applying limit()/offset().  without 
 the ORM no rows are buffered on the SQLA side.  Note however that MySQLdb is 
 likely prefetching the entire result set in any case (this is psycopg2's 
 behavior but haven't confirmed for MySQLdb).

Thanks, but I'm not using the ORM.

Looks like you have to specify a server-side cursor - see SSCursor in
http://mysql-python.sourceforge.net/MySQLdb.html

Is there any way of forcing SQLAlchemy to use a particular
cursor?
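One commonly cited way to do this - sketched here untested, since it needs MySQLdb and a live MySQL server - is to pass the cursor class through create_engine's documented connect_args parameter (the URL below is a placeholder; note the caveats about SSCursor's table locks later in this thread):

```python
# Sketch only: route MySQLdb's unbuffered SSCursor in via connect_args,
# so every DBAPI connection the pool creates uses it.
import MySQLdb.cursors
from sqlalchemy import create_engine

engine = create_engine(
    "mysql://user:passwd@localhost/dbname",   # placeholder URL
    connect_args={"cursorclass": MySQLdb.cursors.SSCursor},
)
```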

James





Re: [sqlalchemy] Re: UnboundExecutionError with object on session

2009-11-18 Thread Fernando Takai
This is the exact code that is running now:

log.info("Creating proxies for file %s" % job.file)

job = session.merge(job)

if agent.transcoder.run(job.local_filename):
    log.info("Transcoding finished for file %s" % job.local_filename)
else:
    log.error("Occurred a problem while transcoding the file. Please, verify the logs.")
    job.error_status = "Occurred a problem while transcoding the file. Please, verify the logs."

(The problem is occurring on the log.info line)
As you can see, I *am* merging the job into the current session - so, the 
exception shouldn't be happening.

Also, I use the local_filename variable before, so it's not expired (afaik).
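The expiry theory can be checked concretely with the job.__dict__ test Michael suggests in the quoted reply below. A minimal sketch (in-memory SQLite, modern SQLAlchemy; the Job model and "movie.avi" filename are placeholders): commit() expires loaded attributes by default, and touching the attribute reloads it.

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Job(Base):
    __tablename__ = "jobs"
    id = Column(Integer, primary_key=True)
    local_filename = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
sess = sessionmaker(bind=engine)()

job = Job(local_filename="movie.avi")
sess.add(job)
sess.commit()   # expire_on_commit=True: attributes are now expired

expired = "local_filename" not in job.__dict__   # True right after commit
job.local_filename                               # touching it emits a SELECT
reloaded = "local_filename" in job.__dict__      # repopulated by the load
```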

On Nov 17, 2009, at 1:35 PM, Michael Bayer wrote:

 
 Fernando Takai wrote:
 
 I have changed my code to be like this:
 
 job = session.merge(job)
 # Merge docs says that object does not get into the session
 session.add(job)
 
  log.info("Job finished! %s" % job.file)
 
 When using latest SQLAlchemy trunk and 0.5.6 sometimes i get
 UnboundExecutionError.
 
 Also, i can assure that all my attributes are loaded - i use all of them
 before and no error occurs.
 
 but they may be expired at some point.  If their host session is rolled
 back or committed, for example.   The stack trace will illustrate exactly
 what action is initiating the load operation. For example, in the stack
 trace you posted, the job.local_filename attribute is unloaded or
 expired.  You can test for this by seeing that local_filename is not
 present in job.__dict__.
 
 
 
 Maybe there's something wrong with the way i'm getting the session:
 
 session = Session.object_session(job)
 if not session:
session = Session()
 
 there's nothing wrong with it per se except it appears to be guessing as
 to what session should be used - "well, let's see if this object has a
 session from somewhere, or otherwise we'll just make one, maybe not."
 
 Sessions can be created as much as you like but they work best when they
 are in charge of their scope, i.e.:
 
 def do_something():
# build the one session we care about for this thread/operation
sess = session()
 
# ensure everyone from outside is merged into our one session
o1 = sess.merge(o1)
o2 = sess.merge(o2)
o3 = sess.merge(o3)
 
# commit with authority
sess.commit()
 
 
 as opposed to:
 
 def do_something():
s1 = object_session(o1) or Session()
s2 = object_session(o2) or Session()
s3 = object_session(o3) or Session()
 
s3.commit()
    s2.commit()   # or no?  where did this session come from?
 
 
 
 
 
 
 

--
Fernando Takai
http://twitter.com/fernando_takai









Re: [sqlalchemy] Re: mysql + query.execute memory usage

2009-11-18 Thread Michael Bayer
James Casbon wrote:


 On Nov 18, 3:01 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 On Nov 18, 2009, at 9:57 AM, James Casbon wrote:

  Hi,

  I'm using sqlalchemy to generate a query that returns lots of data.
  The trouble is, when calling query.execute() instead of returning
  the resultproxy straight away and allowing me to fetch data as I would
  like, query.execute blocks and the memory usage grows to gigabytes
  before getting killed for too much memory.  This looks to me like
  execute is prefetching the entire result.

  Is there any way to prevent query.execute loading the entire result?

 for ORM look into using yield_per() or applying limit()/offset().
  without the ORM no rows are buffered on the SQLA side.  Note however
 that MySQLdb is likely prefetching the entire result set in any case
 (this is psycopg2's behavior but haven't confirmed for MySQLdb).

 Thanks, but I'm not using the ORM.

 Looks like you have to specify a server side cursor - see SSCursor in
 http://mysql-python.sourceforge.net/MySQLdb.html

 Is there any way of forcing SQLAlchemy to use a particular
 cursor?


Here's a comment from Jason Kirtland on this feature:

http://www.sqlalchemy.org/trac/ticket/1089

AFAIK mysqldb (as of 1.2.2 anyhow) doesn't support mysql server side
cursors. its SSCursor fetches rows in 'use_result' mode, which basically
just directly streams out the query results on demand, holding locks on
the tables for the entire duration. it's IMHO nigh useless if you have
multiple database readers. mysql has real server side cursors that
materialize as temporary tables and hold no locks, but mysqldb doesn't use
them.

Ticket #1619 discusses an enhancement to select(), query() and text() that
would instruct SQLA to use streaming results as available for that
particular execution.  Right now the feature is only available on the PG
dialect as an always-on feature.



 James









Re: [sqlalchemy] Re: UnboundExecutionError with object on session

2009-11-18 Thread Michael Bayer
Fernando Takai wrote:
 This is the exact code that is running now:

 log.info("Creating proxies for file %s" % job.file)

 job = session.merge(job)

 if agent.transcoder.run(job.local_filename):
     log.info("Transcoding finished for file %s" % job.local_filename)
 else:
     log.error("Occurred a problem while transcoding the file. Please,
 verify the logs.")
     job.error_status = "Occurred a problem while transcoding the file.
 Please, verify the logs."

 (The problem is occurring on the log.info line)
 As you can see, I *am* merging the job into the current session - so, the
 exception shouldn't be happening.

 Also, I use the local_filename variable before, so it's not expired
 (afaik).

from that code above, the error message you are seeing is impossible. 
Unless a concurrent thread is calling close() or similar on that session,
perhaps.




 On Nov 17, 2009, at 1:35 PM, Michael Bayer wrote:


 Fernando Takai wrote:

 I have changed my code to be like this:

 job = session.merge(job)
 # Merge docs says that object does not get into the session
 session.add(job)

  log.info("Job finished! %s" % job.file)

 When using latest SQLAlchemy trunk and 0.5.6 sometimes i get
 UnboundExecutionError.

 Also, i can assure that all my attributes are loaded - i use all of
 them
 before and no error occurs.

 but they may be expired at some point.  If their host session is rolled
 back or committed, for example.   The stack trace will illustrate
 exactly
 what action is initiating the load operation. For example, in the stack
 trace you posted, the job.local_filename attribute is unloaded or
 expired.  You can test for this by seeing that local_filename is not
 present in job.__dict__.



 Maybe there's something wrong with the way i'm getting the session:

 session = Session.object_session(job)
 if not session:
session = Session()

 there's nothing wrong with it per se except it appears to be guessing as
 to what session should be used - "well, let's see if this object has a
 session from somewhere, or otherwise we'll just make one, maybe not."

 Sessions can be created as much as you like but they work best when they
 are in charge of their scope, i.e.:

 def do_something():
# build the one session we care about for this thread/operation
sess = session()

# ensure everyone from outside is merged into our one session
o1 = sess.merge(o1)
o2 = sess.merge(o2)
o3 = sess.merge(o3)

# commit with authority
sess.commit()


 as opposed to:

 def do_something():
s1 = object_session(o1) or Session()
s2 = object_session(o2) or Session()
s3 = object_session(o3) or Session()

s3.commit()
    s2.commit()   # or no?  where did this session come from?














[sqlalchemy] create_engine causes exception in urllib?

2009-11-18 Thread gottfried
When I run the following three lines in a standalone test.py

import urllib
from sqlalchemy import create_engine
urllib.urlopen("http://www.google.com")

then I get the following exception:

Exception AttributeError: "'NoneType' object has no attribute
'print_exc'" in <bound method FancyURLopener.__del__ of
<urllib.FancyURLopener instance at 0x026F2558>> ignored

Does anyone know why this happens?

Cheers,
Gottfried





Re: [sqlalchemy] create_engine causes exception in urllib?

2009-11-18 Thread Michael Bayer
gottfried wrote:
 When I run the following three lines in a standalone test.py

 import urllib
 from sqlalchemy import create_engine
 urllib.urlopen("http://www.google.com")

 then I get the following exception:

 Exception AttributeError: "'NoneType' object has no attribute
 'print_exc'" in <bound method FancyURLopener.__del__ of
 <urllib.FancyURLopener instance at 0x026F2558>> ignored

 Does anyone know why this happens?

I get

Python 2.6.2 (r262:71600, Apr 16 2009, 09:17:39)
[GCC 4.0.1 (Apple Computer, Inc. build 5250)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import urllib
>>> from sqlalchemy import create_engine
>>> urllib.urlopen("http://www.google.com")
<addinfourl at 16056888 whose fp = <socket._fileobject object at 0xf2b4b0>>





 Cheers,
 Gottfried




