[sqlalchemy] Re: problem with server_default (and/or sa.PassiveDefault) in 0.5.beta1

2008-06-18 Thread Werner F. Bruhin

I did some more research on this.

If I change back to 0.4.3, i.e. change my model to use:
##pkg_resources.require("sqlalchemy") # get latest version
pkg_resources.require("sqlalchemy==0.4.3") # use a specific version

Then I can debug my code, i.e. my application starts.

So, I changed it back to 0.5.beta1 and tried to find out where the 
problem starts in my code; it is on the following line.

self.prefs = self.ds.query(db.prefminimal).get(1)

prefminimal is a redefinition of the preferences table, i.e. it 
only maps columns which are present in the preferences table regardless 
of the version of my application.

The model for it is:
metadataMin = sa.MetaData()
prefminimal_table = sa.Table(u'preferences', metadataMin,
    sa.Column(u'prefid', sa.Integer(),
              sa.Sequence('gen_preferences_prefid'),
              primary_key=True, nullable=False),
    sa.Column(u'fk_langid', sa.Integer(),
              sa.ForeignKey(u'language.langid'), nullable=False),
    sa.Column(u'dbstructure', sa.Integer()),
    sa.Column(u'dbdata', sa.Integer()),
)


class Prefminimal(OrmObject):
pass


prefminimal = sao.mapper(Prefminimal, prefminimal_table,
properties={
})

All other tables (including the full version of preferences) are using 
metadata = sa.MetaData(), instead of the metadataMin I use for the 
prefminimal definition.

I got this working some months ago, but maybe this is not the right way 
to go about it and it is biting me now?

Werner






[sqlalchemy] Re: problem with server_default (and/or sa.PassiveDefault) in 0.5.beta1

2008-06-18 Thread Werner F. Bruhin

Werner F. Bruhin wrote:
 I did some more research on this.

 If I change back to 0.4.3, i.e. change my model to use:
 ##pkg_resources.require("sqlalchemy") # get latest version
 pkg_resources.require("sqlalchemy==0.4.3") # use a specific version

 Then I can debug my code, i.e. my application starts.

 So, I changed it back to 0.5.beta1 and tried to find out where the 
 problem starts in my code; it is on the following line.

 self.prefs = self.ds.query(db.prefminimal).get(1)
I think it has nothing to do with prefminimal; it happens on whatever 
the first query is, on whatever table.

I just ran a small test script under the IDLE debugger and I see the same 
exception.

If I run it in IDLE with the debugger but use SA 0.4.3, then it runs o.k.

I hope this info helps.

Werner




[sqlalchemy] Problem: maximum recursion depth exceeded

2008-06-18 Thread Marin

I get this error when I try to execute the following code (Test,
Section, Question, Answer are mapped tables):

test = Test()
section = Section()
section.Test = test
for i in range(60):
    q = Question()
    q.Section = section
    for j in range(6):
        a = Answer()
        a.Question = q

db_session.save(test)
db_session.commit()   # <-- this command generates the error, and only
                      # if the number of Questions is above circa 45

But if I do a first commit before the "for i" loop, and one commit after
each question has been created, then no error is generated. Is this a
bug in SQLAlchemy?


  File "/usr/lib/python2.5/site-packages/SQLAlchemy-0.4.6-py2.5.egg/sqlalchemy/orm/unitofwork.py", line 930, in execute
    self.execute_save_steps(trans, task)
  File "/usr/lib/python2.5/site-packages/SQLAlchemy-0.4.6-py2.5.egg/sqlalchemy/orm/unitofwork.py", line 947, in execute_save_steps
    self.execute_per_element_childtasks(trans, task, False)
  File "/usr/lib/python2.5/site-packages/SQLAlchemy-0.4.6-py2.5.egg/sqlalchemy/orm/unitofwork.py", line 970, in execute_per_element_childtasks
    self.execute_element_childtasks(trans, element, isdelete)
  File "/usr/lib/python2.5/site-packages/SQLAlchemy-0.4.6-py2.5.egg/sqlalchemy/orm/unitofwork.py", line 974, in execute_element_childtasks
    self.execute(trans, [child], isdelete)
  File "/usr/lib/python2.5/site-packages/SQLAlchemy-0.4.6-py2.5.egg/sqlalchemy/orm/unitofwork.py", line 930, in execute
    self.execute_save_steps(trans, task)
  File "/usr/lib/python2.5/site-packages/SQLAlchemy-0.4.6-py2.5.egg/sqlalchemy/orm/unitofwork.py", line 945, in execute_save_steps
    self.save_objects(trans, task)
  File "/usr/lib/python2.5/site-packages/SQLAlchemy-0.4.6-py2.5.egg/sqlalchemy/orm/unitofwork.py", line 936, in save_objects
    task.mapper._save_obj(task.polymorphic_tosave_objects, trans)
  File "/usr/lib/python2.5/site-packages/SQLAlchemy-0.4.6-py2.5.egg/sqlalchemy/orm/mapper.py", line 1161, in _save_obj
    c = connection.execute(statement.values(value_params), params)
  File "/usr/lib/python2.5/site-packages/SQLAlchemy-0.4.6-py2.5.egg/sqlalchemy/engine/base.py", line 844, in execute
    return Connection.executors[c](self, object, multiparams, params)
  File "/usr/lib/python2.5/site-packages/SQLAlchemy-0.4.6-py2.5.egg/sqlalchemy/engine/base.py", line 895, in execute_clauseelement
    return self._execute_compiled(elem.compile(dialect=self.dialect,
column_keys=keys, inline=len(params) > 1), distilled_params=params)
  File "/usr/lib/python2.5/site-packages/SQLAlchemy-0.4.6-py2.5.egg/sqlalchemy/sql/expression.py", line 1132, in compile
    compiler.compile()
  File "/usr/lib/python2.5/site-packages/SQLAlchemy-0.4.6-py2.5.egg/sqlalchemy/sql/compiler.py", line 181, in compile
    self.string = self.process(self.statement)
  File "/usr/lib/python2.5/site-packages/SQLAlchemy-0.4.6-py2.5.egg/sqlalchemy/sql/compiler.py", line 189, in process
    return meth(obj, **kwargs)
  File "/usr/lib/python2.5/site-packages/SQLAlchemy-0.4.6-py2.5.egg/sqlalchemy/databases/postgres.py", line 712, in visit_insert
    text = super(PGCompiler, self).visit_insert(insert_stmt)
  File "/usr/lib/python2.5/site-packages/SQLAlchemy-0.4.6-py2.5.egg/sqlalchemy/sql/compiler.py", line 598, in visit_insert
    colparams = self._get_colparams(insert_stmt)
  File "/usr/lib/python2.5/site-packages/SQLAlchemy-0.4.6-py2.5.egg/sqlalchemy/sql/compiler.py", line 674, in _get_colparams
    values.append((c, create_bind_param(c, None)))
  File "/usr/lib/python2.5/site-packages/SQLAlchemy-0.4.6-py2.5.egg/sqlalchemy/sql/compiler.py", line 634, in create_bind_param
    return self.bindparam_string(self._truncate_bindparam(bindparam))
  File "/usr/lib/python2.5/site-packages/SQLAlchemy-0.4.6-py2.5.egg/sqlalchemy/sql/compiler.py", line 440, in bindparam_string
    return self.bindtemplate % {'name':name,
'position':len(self.positiontup)}
RuntimeError: maximum recursion depth exceeded




[sqlalchemy] Re: problem with server_default (and/or sa.PassiveDefault) in 0.5.beta1

2008-06-18 Thread Michael Bayer


On Jun 18, 2008, at 4:05 AM, Werner F. Bruhin wrote:

 All other tables (including the full version of preferences) are using
 metadata = sa.MetaData(), instead of the metadataMin I use for the
 prefminimal definition.

 I got this working some months ago, but maybe this is not the right way
 to go about it and it is biting me now?


using two separate MetaData objects, if I correctly understand that's  
what you're doing, is not going to work at all if that Table has any  
ForeignKey objects (which it does).  I don't see the use case for two  
distinct MDs.
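
To make the failure mode concrete: ForeignKey targets are resolved only
within the owning MetaData, so the split arrangement breaks as soon as the
key is used. A small editorial illustration, with names taken from the
earlier post:

import sqlalchemy as sa

metadata = sa.MetaData()      # full model, including 'language', lives here
metadataMin = sa.MetaData()   # the minimal 'preferences' lives here

prefminimal_table = sa.Table(u'preferences', metadataMin,
    sa.Column(u'prefid', sa.Integer(), primary_key=True),
    sa.Column(u'fk_langid', sa.Integer(),
              sa.ForeignKey(u'language.langid'), nullable=False),
)

# 'language' is registered on `metadata`, not `metadataMin`, so resolving
# the key (e.g. during mapper compilation) raises an error along the lines
# of "could not find table 'language' with which to generate a foreign key"
fk = list(prefminimal_table.c.fk_langid.foreign_keys)[0]
print fk.column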




[sqlalchemy] Re: Problem: maximum recursion depth exceeded

2008-06-18 Thread Michael Bayer


On Jun 18, 2008, at 4:16 AM, Marin wrote:


 I get this error when I try to execute the following code (Test,
 Section, Question, Answer are mapped tables):
 test = Test()
 section = Section()
 section.Test = test
 for i in range(60):
     q = Question()
     q.Section = section
     for j in range(6):
         a = Answer()
         a.Question = q

 db_session.save(test)
 db_session.commit()   # <-- this command generates the error, and only
 #     if the number of Questions is above circa 45

 But if I do a first commit before the "for i" loop, and one commit after
 each question has been created, then no error is generated. Is this a
 bug in SQLAlchemy?

can't really tell unless you provide a fully functional, as simple as  
possible test case, preferably against sqlite, which can be run with  
no dependencies.
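
For reference, a self-contained skeleton of the kind of test case being
requested; every table, class, and relation name below is an assumption
reconstructed from the original post (0.4-era API, in-memory SQLite):

import sqlalchemy as sa
from sqlalchemy.orm import mapper, relation, create_session

engine = sa.create_engine('sqlite://')
metadata = sa.MetaData(engine)

tests = sa.Table('tests', metadata,
    sa.Column('id', sa.Integer, primary_key=True))
sections = sa.Table('sections', metadata,
    sa.Column('id', sa.Integer, primary_key=True),
    sa.Column('test_id', sa.Integer, sa.ForeignKey('tests.id')))
questions = sa.Table('questions', metadata,
    sa.Column('id', sa.Integer, primary_key=True),
    sa.Column('section_id', sa.Integer, sa.ForeignKey('sections.id')))
answers = sa.Table('answers', metadata,
    sa.Column('id', sa.Integer, primary_key=True),
    sa.Column('question_id', sa.Integer, sa.ForeignKey('questions.id')))
metadata.create_all()

class Test(object): pass
class Section(object): pass
class Question(object): pass
class Answer(object): pass

# backrefs give the default save-update cascade a path from Test to Answer
mapper(Test, tests)
mapper(Section, sections,
       properties={'Test': relation(Test, backref='Sections')})
mapper(Question, questions,
       properties={'Section': relation(Section, backref='Questions')})
mapper(Answer, answers,
       properties={'Question': relation(Question, backref='Answers')})

session = create_session()
test = Test()
section = Section()
section.Test = test
for i in range(60):
    q = Question()
    q.Section = section
    for j in range(6):
        a = Answer()
        a.Question = q
session.save(test)
session.flush()   # in the original report, this is where the flush recurses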





[sqlalchemy] memory and cpu usage growing like crazy

2008-06-18 Thread Arun Kumar PG
hi all folks,

I have a search form that allows users to search for records.  I am eager
loading 4 attributes on the master object, which results in 4 left outer
joins in SA's SQL query.  The problem is that when I look at the memory
consumption using the top command, it looks crazy.

The memory shoots up by 50-60 MB instantly (sometimes even 100+ MB).  I
executed the query on the db directly and the results are returned in 3 secs
(close to around 60,000 rows).  SA is spending a good amount of time
processing the results, and while it is doing that I see abnormal memory
growth.  The CPU is also at almost 98% during this time.

The interesting thing is that after processing the request the memory does
not come down.  It just stays there.  I don't know why it's not gc'ed.

my environment:
- mysql 4.1
- SA 0.3.9
- python 2.4

Is there any chance that memory is being leaked?  I don't see memory come
down even after some time.

Please give me some pointers, as my app is totally based on SA.  Memory and
CPU consumption is too high.

thanks.

cheers,

- a




[sqlalchemy] Re: Querying and Session.

2008-06-18 Thread Michael Bayer


On Jun 18, 2008, at 6:23 AM, Harish wrote:


 Hello,

 I am new to SQL Alchemy and this is a terrific tool for my project. I
 am facing problems when I run a multi threaded application. The
 situation is as below :

 I have a table in the DB to generate a unique key everytime I request
 for it. DB.py is the file which interfaces with the database. The code
 to get object ID is below :

 def getObjId(self):
     # ObjId is not Persistent
     DB.SessionLock.acquire()
     g = ''

     print "Currently Servicing : %s" % threading.currentThread().getName()
     if len(self.session.query(GlobalObjectID).all()) == 0:
         print "First Time"
         g = GlobalObjectID(GlobalObjectID.START_VAL)
     # If persistent in DB, Increment, Update and return.
     else:
         print "Querying DB"
         g = self.session.query(GlobalObjectID).one()
         print "Got : %d" % g.getCurObjId()
         g.setCurObjId(g.getCurObjId() + GlobalObjectID.INCR_VAL)

     print g
     # Update DB. Make the Object persistent in DB
     try:
         print repr(self.session.identity_map.values())
         self.session.save_or_update(g)
         print repr(self.session.identity_map.values())
         self.session.commit()
         print repr(self.session.identity_map.values())
     except:
         printException("Error Saving ObjectID Counter to DB!", False)
         self.session.rollback()
         os.system("pause")
         DB.SessionLock.release()
         return -1   # return value truncated in the original post; -1 assumed

     DB.SessionLock.release()
     return g.getCurObjId()


 (DB is the class where this method is defined).
 GlobalObjectID is a class linked with the globalobjectid table in the
 database. From my test code, I create multiple threads, each of which
 gets the same DB object (say dbo) and they print dbo.getObjId() 10
 times.

 After a while, I start getting exceptions during session.commit:

 KeyError: (<class 'Model.GlobalObjectID'>, (1078,), None).

 I don't understand why, because the critical operation of fetching an
 object ID from the DB, incrementing it and saving it back is under a lock.
 Also, I don't understand what it means when I see objects in
 session.identity_map.values() and don't see anything in session.new or
 session.dirty or session.deleted. What other state could the object
 be in?

 The session above is a transactional session with autoflush set to
 True. Is there anything wrong with the code above? Please let me know.



if there is any access to that Session, or any objects loaded from  
that Session which are still attached, outside of your critical  
section, then the Session is being accessed in a non-threadsafe  
manner; and because you have autoflush set to True, a flush could  
occur anytime the database is queried, including when an attribute is  
lazily loaded upon first access.  It is strongly recommended not to  
share Sessions between threads.

The locking you're doing here would be better served using database-
level locking; in this case, you can just issue a SELECT...FOR UPDATE  
which will lock your guid row upon read, thereby making it safe  
against any other thread or process for when you next issue an  
UPDATE.  You can do this with the ORM but it's probably simpler to use  
direct SQL expressions, such as:

def nextid():
    conn = engine.connect()
    trans = conn.begin()
    try:
        id = conn.execute(select([mytable.c.id],
                                 for_update=True)).scalar()
        id += 1
        conn.execute(mytable.update().values(id=id))
        trans.commit()
        return id
    except:
        trans.rollback()
        raise
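
For context, a set of supporting definitions under which the sketch above
runs; the table, engine URL, and seed row are editorial assumptions, not
from the thread:

from sqlalchemy import MetaData, Table, Column, Integer, create_engine, select

engine = create_engine('postgres://user:pass@localhost/test')
metadata = MetaData()

# hypothetical single-row counter table; any backend honoring FOR UPDATE
# works.  `select` is the construct used inside nextid() above.
mytable = Table('guids', metadata, Column('id', Integer, primary_key=True))
metadata.create_all(engine)
engine.execute(mytable.insert().values(id=0))   # seed the counter row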








[sqlalchemy] Re: memory and cpu usage growing like crazy

2008-06-18 Thread az

do you have any cache in the pipeline?

can you run that part of the code in some newer environment, e.g. SA 
0.4-latest / python 2.5, and see how it works?

if you do clear_mappers(), will it drop?


On Wednesday 18 June 2008 16:59:20 Arun Kumar PG wrote:
 hi all folks,

 i have a search form that allows user to search for records.  i am
 eager loading 4 attributes on the master object which results in 4
 left outer joins in the sa's sql query. the problem is that when i
 look at the memory consumption using top command it looks crazy.

 the memory shoots up by 50-60 MB instantly (some times even 100+
 MB). i executed the query on db directly and the results are
 returned in 3 secs (close to around 60,000 rows). sa is spending a
 good amount of time processing the results and while it is doing
 that i see abnormal memory growth. also the cpu is used almost 98%
 during this time.

 the interesting thing is that after processing the request the
 memory does not comes down. it stays there only. i dont know why
 its not gc'ed.

 my environment:
 - mysql 4.1
 - sa 3.9
 - python 2.4

 is there any chance that memory is getting leaked as i don't see
 memory come down even after some time.

 please give me some pointers  as my app is totally based on sa.
 memory and cpu consumption is too high.

 thanks.

 cheers,

 - a

 





[sqlalchemy] Re: memory and cpu usage growing like crazy

2008-06-18 Thread az

another thing: reduce the query filter to return say 5-10 rows, and 
execute it 1000 times, and see if the memory goes up and up or stays flat.
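
something like this rough sketch, with names assumed from the thread
(0.3-era API, Python 2.4):

import gc

# run the same small query many times; a leak shows up as monotonic growth
for i in range(1000):
    rows = session.query(Master).select(master_table.c.id < 10)
    session.clear()    # the 0.3 Session keeps strong references until cleared
    if i % 100 == 0:
        gc.collect()
        print i, len(gc.get_objects())   # the count should plateau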

On Wednesday 18 June 2008 16:59:20 Arun Kumar PG wrote:
 hi all folks,

 i have a search form that allows user to search for records.  i am
 eager loading 4 attributes on the master object which results in 4
 left outer joins in the sa's sql query. the problem is that when i
 look at the memory consumption using top command it looks crazy.

 the memory shoots up by 50-60 MB instantly (some times even 100+
 MB). i executed the query on db directly and the results are
 returned in 3 secs (close to around 60,000 rows). sa is spending a
 good amount of time processing the results and while it is doing
 that i see abnormal memory growth. also the cpu is used almost 98%
 during this time.

 the interesting thing is that after processing the request the
 memory does not comes down. it stays there only. i dont know why
 its not gc'ed.

 my environment:
 - mysql 4.1
 - sa 3.9
 - python 2.4

 is there any chance that memory is getting leaked as i don't see
 memory come down even after some time.

 please give me some pointers  as my app is totally based on sa.
 memory and cpu consumption is too high.

 thanks.

 cheers,

 - a

 





[sqlalchemy] Re: memory and cpu usage growing like crazy

2008-06-18 Thread Michael Bayer


On Jun 18, 2008, at 9:59 AM, Arun Kumar PG wrote:

 hi all folks,

 i have a search form that allows user to search for records.  i am  
 eager loading 4 attributes on the master object which results in 4  
 left outer joins in the sa's sql query. the problem is that when i  
 look at the memory consumption using top command it looks crazy.

 the memory shoots up by 50-60 MB instantly (some times even 100+  
 MB). i executed the query on db directly and the results are  
 returned in 3 secs (close to around 60,000 rows). sa is spending a  
 good amount of time processing the results and while it is doing  
 that i see abnormal memory growth. also the cpu is used almost 98%  
 during this time.

 the interesting thing is that after processing the request the  
 memory does not comes down. it stays there only. i dont know why its  
 not gc'ed.

 my environment:
 - mysql 4.1
 - sa 3.9
 - python 2.4

 is there any chance that memory is getting leaked as i don't see  
 memory come down even after some time.


The Session in 0.3 does not lose references to any data loaded  
automatically; it has to be cleaned out manually using  
session.expunge(obj) or session.clear().  From 0.4 on forward the  
Session is weak referencing, so that unreferenced, clean objects fall  
out of scope automatically.  0.4 also eager loads many rows about 30%  
faster than 0.3, and 0.5 is then about 15% faster than 0.4.  ORMs in  
general are designed for rich in-memory functionality and are not  
optimized for loads of many tens of thousands of rows, so for better  
performance overall consider non-ORM access to these rows.
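
A minimal sketch of that non-ORM path, assuming an engine-bound table; the
table and column names are placeholders, not from the thread:

# plain SQL-expression query: rows come back as lightweight RowProxy
# objects, with no identity map or Session holding on to them
rows = engine.execute(
    master_table.select(master_table.c.col1 == 'xyz')).fetchall()
for row in rows:
    print row['col1']   # access by column name; no ORM instances created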




[sqlalchemy] Re: problem with server_default (and/or sa.PassiveDefault) in 0.5.beta1

2008-06-18 Thread Werner F. Bruhin

Michael Bayer wrote:
 On Jun 18, 2008, at 4:05 AM, Werner F. Bruhin wrote:

   
 All other tables (including the full version of preferences) are using
 metadata = sa.MetaData(), instead of the metadataMin I use for the
 prefminimal definition.

 I got this working some months ago, but maybe this is not the right way
 to go about it and it is biting me now?

 

 using two separate MetaData objects, if I correctly understand that's  
 what you're doing, is not going to work at all if that Table has any  
 ForeignKey objects (which it does).   I dont see the use case for two  
 distinct MDs.
   
What I need to do at startup of the application is to check a couple of 
columns in the preferences table to find out what version the database is; 
then, if it is an older version, I do the upgrade of the database 
(metadata etc.) and update the columns in the table.

I can not read all the columns of the table, as there might be columns 
defined for this new release which are not present until the upgrade is 
finished.

Would something like this work for me: session.query(Preferences.col1, 
Preferences.col2), with columns defined in the model which are not 
yet present in the database?

If the query approach does not work, what alternatives would I have?

Werner
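
One alternative, sketched as an editorial suggestion rather than something
from the thread: a lightweight table() construct reads just the
always-present version columns without any MetaData or ForeignKey,
sidestepping the two-MetaData problem entirely (column names taken from the
earlier model):

from sqlalchemy.sql import table, column, select

# ad-hoc, MetaData-free description of only the always-present columns
prefs = table('preferences',
              column('prefid'), column('dbstructure'), column('dbdata'))

row = engine.execute(
    select([prefs.c.dbstructure, prefs.c.dbdata],
           prefs.c.prefid == 1)).fetchone()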





[sqlalchemy] Re: problem with server_default (and/or sa.PassiveDefault) in 0.5.beta1

2008-06-18 Thread az

On Wednesday 18 June 2008 17:21:00 Werner F. Bruhin wrote:
 Michael Bayer wrote:
  On Jun 18, 2008, at 4:05 AM, Werner F. Bruhin wrote:
  All other tables (including the full version of preferences) are
  using metadata = sa.MetaData(), instead of the metadataMin I
  use for the prefminimal definition.
 
  I got this working some months ago, but maybe this is not the
  right way to go about it and it is biting me now?
 
  using two separate MetaData objects, if I correctly understand
  that's what you're doing, is not going to work at all if that
  Table has any ForeignKey objects (which it does).   I dont see
  the use case for two distinct MDs.

 What I need to do at start up of the application is to check a
 couple of columns in table preferences to check what version the
 database is, then if it is an older version I do the upgrade of the
 database (metadata etc) and update the columns in the table.

 I can not read all the columns of the table as there might be
 columns defined for this new release which are not present until
 the upgrade is finished.

 Would something like this work for me
 session.query(Preferences.col1, Preferences.col2) and have
 columns defined in the model which are not yet present in the
 database?

 If the query approach does not work what alternatives would I have?


can't you drop the Min version once you've finished using it, thus having 
only one metadata on the same engine at any one time?




[sqlalchemy] Re: memory and cpu usage growing like crazy

2008-06-18 Thread Arun Kumar PG
thanks for the instant reply guys!

As my app is in production, I cannot afford to bring things down right
away for a 0.4/0.5 migration.  Eventually (in the next month) I will move to
0.4/0.5.  So for the time being (at least for the next one month) I am
looking for the best solution on 0.3.x, so that users are not affected.

Michael, as you mentioned explicit cleaning of the session, I am doing
that currently.  Let me quickly describe the flow of a request so that you
guys have more information:

- search request comes in
- if the ORM mapping is not created, it gets created now (only happens one
time)
- a new session is created and attached to the current thread (this is done
so that different DAOs can access the same session from the current thread)
- all ORM queries are fired.. results processed
- finally, the current thread is accessed again, the session attached
earlier is accessed, session.clear() is invoked, and del session is done.

What's the best way to deal with the problem now...

thanks,

- A



On Wed, Jun 18, 2008 at 7:49 PM, Michael Bayer [EMAIL PROTECTED]
wrote:



 On Jun 18, 2008, at 9:59 AM, Arun Kumar PG wrote:

  hi all folks,
 
  i have a search form that allows user to search for records.  i am
  eager loading 4 attributes on the master object which results in 4
  left outer joins in the sa's sql query. the problem is that when i
  look at the memory consumption using top command it looks crazy.
 
  the memory shoots up by 50-60 MB instantly (some times even 100+
  MB). i executed the query on db directly and the results are
  returned in 3 secs (close to around 60,000 rows). sa is spending a
  good amount of time processing the results and while it is doing
  that i see abnormal memory growth. also the cpu is used almost 98%
  during this time.
 
  the interesting thing is that after processing the request the
  memory does not comes down. it stays there only. i dont know why its
  not gc'ed.
 
  my environment:
  - mysql 4.1
  - sa 3.9
  - python 2.4
 
  is there any chance that memory is getting leaked as i don't see
  memory come down even after some time.
 

 The Session in 0.3 does not lose references to any data loaded
 automatically, it has to be cleaned out manually using
 session.expunge(obj) or session.clear().  From 0.4 on forward the
 Session is weak referencing so that unreferenced, clean objects fall
 out of scope automatically.  0.4 also eager loads many rows about 30%
 faster than 0.3 and 0.5 is then about 15% faster than 0.4. ORMs in
 general are designed for rich in-memory functionality and are not
 optimized for loads of many tens of thousands of rows, so for better
 performance overall consider non-ORM access to these rows.

 



-- 
cheers,

- a




[sqlalchemy] Re: memory and cpu usage growing like crazy

2008-06-18 Thread az

try cutting things piece by piece until it stops growing, regardless of 
whether the app still works correctly for the user - this is just to find 
the hole.
 a) what i suggested: reduce one 10-row query into 1000 small (same) 
ones - to see if it's because of orm.query or something else
 b) from the workflow below, cut things one by one until the mem stops 
growing - e.g. does it grow?
  - if no query
  - if no eager loading
  - if 10 objects are made by you (not by query)
  - ... whatever you guess

On Wednesday 18 June 2008 17:40:14 Arun Kumar PG wrote:
 thanks for the instant reply guys!

 as my app is on production so i cannot afford to bring things down
 right away for 0.4/0.5 migration. eventually, i will be going to
 (in next month) use 0.4/0.5. so for the time being (at least for
 the next one month) i am looking for the best solution on 0.3.x so
 that users are not affected.

 michael, as you mentioned about explicit cleaning of session, i am
 doing that currently. let me quickly mention the flow of request so
 that you guys can have more information:

 - search request comes
 - if orm mapping is not created it's get created now (only happens
 one time) - new session is created and attached to the current
 thread (this is done so that different DAOs can access the same
 session from the current thread) - all orm queries are fired..
 results processed
 - finally, current thread is accessed again, session attached
 earlier is accessed, session.clear() invoked and del session done.

 what's the best way to deal with the problem now...

 thanks,

 - A



 On Wed, Jun 18, 2008 at 7:49 PM, Michael Bayer
 [EMAIL PROTECTED]

 wrote:
  On Jun 18, 2008, at 9:59 AM, Arun Kumar PG wrote:
   hi all folks,
  
   i have a search form that allows user to search for records.  i
   am eager loading 4 attributes on the master object which
   results in 4 left outer joins in the sa's sql query. the
   problem is that when i look at the memory consumption using top
   command it looks crazy.
  
   the memory shoots up by 50-60 MB instantly (some times even
   100+ MB). i executed the query on db directly and the results
   are returned in 3 secs (close to around 60,000 rows). sa is
   spending a good amount of time processing the results and while
   it is doing that i see abnormal memory growth. also the cpu is
   used almost 98% during this time.
  
   the interesting thing is that after processing the request the
   memory does not comes down. it stays there only. i dont know
   why its not gc'ed.
  
   my environment:
   - mysql 4.1
   - sa 3.9
   - python 2.4
  
   is there any chance that memory is getting leaked as i don't
   see memory come down even after some time.
 
  The Session in 0.3 does not lose references to any data loaded
  automatically, it has to be cleaned out manually using
  session.expunge(obj) or session.clear().  From 0.4 on forward
  the Session is weak referencing so that unreferenced, clean
  objects fall out of scope automatically.  0.4 also eager loads
  many rows about 30% faster than 0.3 and 0.5 is then about 15%
  faster than 0.4. ORMs in general are designed for rich
  in-memory functionality and are not optimized for loads of many
  tens of thousands of rows, so for better performance overall
  consider non-ORM access to these rows.






[sqlalchemy] Re: memory and cpu usage growing like crazy

2008-06-18 Thread Michael Bayer


On Jun 18, 2008, at 11:17 AM, Arun Kumar PG wrote:

 one more thing here: I noticed now that the query formed by SA when  
 we do an eager load has got some problems (maybe I am not doing the  
 right thing).

 here's the problem:

 I have entities A, B, where A -> B (1:N relationship).

 I form a query like this:

 clauses = []
 clauses.append(A.c.col1 == 'xyz')
 clauses.append(B.c.col == 'xcv')

 qry = session.query(B).filter_by(*clauses)
 eager_qry = qry.options(sqlalchemy.eagerload('a'))
 eager_qry.all()

 the sql shows:

 select ... from A, B left outer join A as alias on alias.key == B.key

 why is A included in the join two times?  I understand eager load might  
 be creating the outer join, but it looks like, because I am having a  
 clause on A, A is included in the first join as well.

 what is the right way to use it, so that I can get rid of the first join  
 and eager load A?

 this is creating a huge result set.


the joins created by eager loading are insulated from anything you do  
with filter(), order_by(), etc.  This is so that your collections load  
properly even if criterion were applied that would otherwise limit the  
collection from loading fully.

See 
http://www.sqlalchemy.org/trac/wiki/FAQ#ImusinglazyFalsetocreateaJOINOUTERJOINandSQLAlchemyisnotconstructingthequerywhenItrytoaddaWHEREORDERBYLIMITetc.whichreliesupontheOUTERJOIN

You have two options here:  either use an explicit join in your query,  
so that the Query works regardless of the eager loading being present  
or not.  Or, if you truly want the eager loaded collection to reflect  
the query criterion, instead of using eagerload you can use  
contains_eager, as described in 
http://www.sqlalchemy.org/docs/05/mappers.html#advdatamapping_relation_strategies_containseager
Those are the 0.5 docs but the same technique applies to the most  
recent 0.4 version.
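
A rough sketch of the two options, reusing the names from the post (the
relation name 'a' and the column names come from it; the exact join call is
an assumption against the 0.4/0.5 API):

from sqlalchemy.orm import eagerload, contains_eager

# Option 1: criterion through an explicit join; the eager loader still
# adds its own aliased LEFT OUTER JOIN, but the query now works with
# or without the eager loading present
qry = (session.query(B).join('a')
       .filter(A.c.col1 == 'xyz')
       .filter(B.c.col == 'xcv')
       .options(eagerload('a')))

# Option 2: let the filtered join itself populate the 'a' attribute
qry = (session.query(B).join('a')
       .filter(A.c.col1 == 'xyz')
       .filter(B.c.col == 'xcv')
       .options(contains_eager('a')))
rows = qry.all()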






[sqlalchemy] understanding thread-safe connections, engines and sessions

2008-06-18 Thread Hans

Hi -

This is a general clarification question.  I have read the docs, but
am still a little unclear on the exact relationship between sessions,
engines, and connections -- at least as this concerns thread safety.

Background:  I'm using sqlalchemy in a web application context
(cherrypy) and hence figure that I should make sure that I'm doing
things in a thread-safe way :)

Currently, I am just creating a normal engine and then using that to
initialize a scoped session.  (I am setting Session =
scoped_session(sessionmaker(..., bind=engine)).)  I understand that
this is the way to ensure that Session instances are thread-local.

Now, if I want to do some stuff with the engine or connection
directly, should I create my engine with strategy="threadlocal"
additionally?  (And then pass that threadlocal engine off to my
scoped_session?)

And when I get a connection from such an engine, do I need to always
specify that I want a contextual connection, i.e. conn =
engine.contextual_connect()?  (What happens if I just run
engine.connect() as normal with a threadlocal engine?)

I assume all of this is necessary in a multi-threaded environment?

Thanks for the help!

Hans



[sqlalchemy] Re: understanding thread-safe connections, engines and sessions

2008-06-18 Thread Michael Bayer


On Jun 18, 2008, at 11:43 AM, Hans wrote:


 Now, if I want to do some stuff with the engine or connection
 directly, should I create my engine with context=threadlocal
 additionally?  (And then pass that threadlocal engine off to my
 scoped_session ?)

if you deal with the engine, it uses connection resources on an as-
needed basis, such as when you say engine.execute(somestatement).  the  
threadlocal context doesn't really grant any additional thread  
safety; it's just a convenience so that operations common to a  
thread can all implicitly share a transaction without the need to pass  
an explicit Connection object.  It is typically useful in a non-ORM-
oriented application.  In an ORM-oriented application, the Session  
tends to be the home base for transactional/database context.

I'm not entirely sure what you mean by "pass that engine to my  
scoped_session".   You typically bind the underlying sessionmaker() to  
an engine when it's first created, threadlocal or not, and you're done.

 And when I get a connection from such an engine, do I need to always
 specify that I want a contextual connection?  i.e. conn =
 engine.contextual_connect() ?  (What happens if I just run
 engine.connect() as normal with a threadlocal connection?)

the difference between connect() and contextual_connect() is only  
significant if you're using the threadlocal context.  I would advise  
not using threadlocal for now, as you should familiarize yourself with the  
basics first.   If you are using Connections, it's best to stick with  
connect().  contextual_connect() is only for applications that are  
trying to interact with the engine in such a way that they participate  
within the threadlocal context if used that way.  But normally you  
wouldn't use explicit Connections with threadlocal, since the whole  
point of threadlocal is to remove the need to pass around Connection  
objects.

I would comment that if you are working with ORM transactions and wish  
to have direct execute() access within the context of that  
transaction, you're best sticking with the execute() and connection()  
methods off of Session itself; this will ensure that those operations  
are participating with whatever transaction the current Session may  
have in progress.

 I assume all of this is necessary in a multi-threaded environment?

just don't share a single Session or Connection between threads, and  
you're good to go.
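
Putting the advice together, a minimal sketch of the recommended wiring: a
plain engine, a thread-local Session via scoped_session, and raw SQL routed
through Session.execute().  The URL, table, and function names are
placeholders (0.4-era sessionmaker flags assumed):

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker, scoped_session

# a plain (non-threadlocal) engine is enough; the pool is thread-safe
engine = create_engine('postgres://user:pass@localhost/app')
Session = scoped_session(sessionmaker(bind=engine, autoflush=True,
                                      transactional=True))

def handle_request():
    session = Session()   # each thread gets its own Session
    # raw SQL routed through the Session joins its transaction
    count = session.execute("SELECT count(*) FROM users").scalar()
    # ... ORM queries here ...
    Session.remove()      # discard the thread's session at request end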





[sqlalchemy] How can I execute alter session commands at application initialization?

2008-06-18 Thread Dr.T

I want to make my oracle 10g queries case insensitive.

To do this, I need to execute:

alter session set NLS_SORT=BINARY_CI;
alter session set NLS_COMP=LINGUISTIC;

at application initialization.

How might I do this via SQLAlchemy?

Thanks for your help,

Tim



[sqlalchemy] Re: How can I execute alter session commands at application initialization?

2008-06-18 Thread jason kirtland

Dr.T wrote:
 I want to make my oracle 10g queries case insensitive.
 
 To do this, I need to execute:
 
 alter session set NLS_SORT=BINARY_CI;
 alter session set NLS_COMP=LINGUISTIC;
 
 at application initialization.
 
 How might I do this via SQLAlchemy?
 
 Thanks for your help,

class MySetup:
    def connect(self, dbapi_con, con_record):
        dbapi_con.execute('alter session set NLS_SORT=BINARY_CI')
        dbapi_con.execute('alter session set NLS_COMP=LINGUISTIC')

engine = create_engine('oracle:...', listeners=[MySetup()])
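
A note on the sketch above: if the DBAPI connection object itself does not
expose an execute() method (cx_Oracle, for instance, runs SQL through
cursors), a cursor-based variant of the same listener may be needed; this is
an editorial assumption, not something from the thread:

class MySetup(object):
    def connect(self, dbapi_con, con_record):
        # cx_Oracle executes SQL via cursors rather than on the connection
        cursor = dbapi_con.cursor()
        try:
            cursor.execute('alter session set NLS_SORT=BINARY_CI')
            cursor.execute('alter session set NLS_COMP=LINGUISTIC')
        finally:
            cursor.close()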




[sqlalchemy] Re: understanding thread-safe connections, engines and sessions

2008-06-18 Thread Hans

On Jun 18, 12:12 pm, Michael Bayer [EMAIL PROTECTED] wrote:
 On Jun 18, 2008, at 11:43 AM, Hans wrote:



  Now, if I want to do some stuff with the engine or connection
  directly, should I create my engine with context=threadlocal
  additionally?  (And then pass that threadlocal engine off to my
  scoped_session ?)

 if you deal with the engine, it uses connection resources on an as
 needed basis, such as when you say engine.execute(somestatement).  the
 threadlocal context doesn't really grant any additional thread
 safety, its just a convenience method so that operations common to a
 thread can all implicitly share a transaction without the need to pass
 an explicit Connection object.  It is typically useful in a non-ORM
 oriented application.  In an ORM oriented application, the Session
 tends to be the home base for transactional/database context.

Ok, I think I understand that.  So threadlocal basically lets the
Engine reuse connection objects, and is really for implicit
connections rather than creating and using connections explicitly.

 Im not entirely sure what you mean by pass that engine to my
 scoped_session.   You typically bind the underlying sessionmaker() to
 an engine when its first created, threadlocal or not, and you're done.

Yeah, that's what I meant -- I would bind the threadlocal engine to
the sessionmaker in the app setup phase.

  And when I get a connection from such an engine, do I need to always
  specify that I want a contextual connection?  i.e. conn =
  engine.contextual_connect() ?  (What happens if I just run
  engine.connect() as normal with a threadlocal connection?)

 the difference between connect() and contextual_connect() is only
 significant if you're using the threadlocal context.  I would advise
 not using threadlocal for now as you should familiarize with the
 basics first.   If you are using Connections, its best to stick with
 connect().  contextual_connect() is only for applications that are
 trying to interact with the engine in such a way that they participate
 within the threadlocal context if used that way.  But normally you
 wouldn't use explicit Connections with threadlocal since the whole
 point of threadlocal is to remove the need to pass around Connection
 objects.

Ok, this makes sense.  I guess I don't need to worry about the
threadlocal stuff if I'm always creating and disposing of connections.

For example, if I have a DAO method (classmethod) that performs some
custom SQL query, it sounds like the clean & simple way to do this
would be to just create and dispose of the connection within that
method.  Then I don't need to worry about thread safety issues
(right?).

def get_some_data(cls, params):
    conn = engine.connect()
    rs = conn.execute('stored proc or complex SQL here')
    return rs

(And then close the connection via rs.close() in the calling code, for
example.)

On a follow-up note: if I am creating a new connection (calling
engine.connect()) for every method, will that actually be creating a
new connection (assuming I'm not using strategy="threadlocal")?  If
so, is there a better convention?  -- I assume that something like a
singleton pattern would be right out, since that would imply sharing
between threads.

 I would comment that if you are working with ORM transactions and wish
 to have direct execute() access within the context of that
 transaction, you're best sticking with the execute() and connection()
 methods off of Session itself; this will ensure that those operations
 are participating with whatever transaction the current Session may
 have in progress.

Yeah, for the ORM stuff, I definitely plan to stick with the
ScopedSession and let that handle it all for me.  We have a lot of
legacy SQL at this point and for the sake of simplicity (and also
because rewriting some of this SQL is not going to be any prettier
with the sql builder api) it will be necessary to use connections
directly for some components of the application.

Thanks again for the help in understanding this.

Hans



[sqlalchemy] Re: understanding thread-safe connections, engines and sessions

2008-06-18 Thread Michael Bayer


On Jun 18, 2008, at 12:41 PM, Hans wrote:


 Ok, this makes sense.  I guess I don't need to worry about the
 threadlocal stuff if I'm always creating and disposing of connections.

 For example, if I have a DAO method (classmethod) that peforms some
 custom SQL query, it sounds like the clean  simple way to do this
 would be to just create and dispose of the connection within that
 method.  Then I don't need to worry about thread safety issues
 (right?).

 def get_some_data(cls, params):
     conn = engine.connect()
     rs = conn.execute('stored proc or complex SQL here')
     return rs

you could just as easily use engine.execute() for a case like the  
above, which does basically the same thing.  there are no thread safety  
issues inherent.


 On a follow-up note, If I am creating a new connection (calling
 engine.connect()) for every method, will that actually be creating a
 new connection (assuming I'm not using strategy=threadlocal)?  If
 so, is there a better convention?  -- I assume that something like a
 singleton pattern would be right out, since that would imply sharing
 between threads.

when you say engine.connect() it draws the connection from the  
connection pool.  So whether or not a new connection is created at  
that point depends on the configuration and current state of the pool.
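
For illustration, the engine.execute() shortcut mentioned above, with a
placeholder statement; the pool hands the connection back once the result
is fully consumed:

from sqlalchemy import text

rs = engine.execute(text("SELECT count(*) FROM some_table"))
print rs.scalar()   # consumes the result; the connection returns to the pool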




[sqlalchemy] Re: understanding thread-safe connections, engines and sessions

2008-06-18 Thread Hans



On Jun 18, 1:18 pm, Michael Bayer [EMAIL PROTECTED] wrote:
 On Jun 18, 2008, at 12:41 PM, Hans wrote:



  Ok, this makes sense.  I guess I don't need to worry about the
  threadlocal stuff if I'm always creating and disposing of connections.

  For example, if I have a DAO method (classmethod) that peforms some
  custom SQL query, it sounds like the clean  simple way to do this
  would be to just create and dispose of the connection within that
  method.  Then I don't need to worry about thread safety issues
  (right?).

  def get_some_data(cls, params):
      conn = engine.connect()
      rs = conn.execute('stored proc or complex SQL here')
      return rs

 you could just as easily use engine.execute() for a case like the
 above, which does basically the same thing.  theres no thread safety
 issues inherent.

Ok, that's good to know.  I thought that somehow that was more for ORM-
related queries (or rather, for using the metadata Tables).  So I suppose
the only reason I'd need to work with a Connection is if I wanted the
transaction control.  For most of my single-statement cases, I don't
care about the transactions (or rather, want each to happen in its own
single transaction).

  On a follow-up note, If I am creating a new connection (calling
  engine.connect()) for every method, will that actually be creating a
  new connection (assuming I'm not using strategy=threadlocal)?  If
  so, is there a better convention?  -- I assume that something like a
  singleton pattern would be right out, since that would imply sharing
  between threads.

 when you say engine.connect() it draws the connection from the
 connection pool.  So whether or not a new connection is created at
 that point depends on the configuration and current state of the pool.

Ok, that's what I suspected, but wanted to make sure I understood it
correctly.  I read the section on connection pooling, but that seemed
to involve wrapping the low-level driver (e.g. psycopg) with a proxy
object.  Not sure if that's a completely different beast.  In any
event, it sounds like I can worry about the pooling (and general
resource performance) at a later point in the development -- and make
those changes w/o having to change the API my app uses to get
connections at all.

Thanks again -
Hans



[sqlalchemy] Re: Problem: maximum recursion depth exceeded

2008-06-18 Thread Marin

 cant really tell unless you provide a fully functional, as simple as
 possible test case, preferably against sqlite, which can be run with
 no dependencies.

I'll try to make one. I tried to fix it today and found that the following
code does not produce the error:

test = Test()
section = Section()
section.Test = test
db_session.save(test)
for i in range(60):
    q = Question()
    q.Section = section
    for j in range(6):
        a = Answer()
        a.Question = q

db_session.commit()

Is there a maximum limit on the number of SQL commands that SA can issue
between a BEGIN and a COMMIT?




[sqlalchemy] Re: Problem: maximum recursion depth exceeded

2008-06-18 Thread Michael Bayer

the error you're getting suggests an unusually long chain of
dependencies between individual rows.  There is a recursive element to
the way that a flush plan is executed, but it is generally a shallow
recursion.  Your stack trace isn't making it clear where the bulk of
the recursion is occurring.

Since the concepts you're representing don't seem like they should be
like this, this is why we need to see the exact mapping that produces
this error.
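
(As a stopgap while preparing the test case, the interpreter's recursion
ceiling can be raised; this is an editorial aside, not a suggestion from the
thread, and it masks rather than fixes whatever is recursing:)

import sys
sys.setrecursionlimit(5000)   # CPython's default limit is 1000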

On Jun 18, 3:37 pm, Marin [EMAIL PROTECTED] wrote:
  cant really tell unless you provide a fully functional, as simple as
  possible test case, preferably against sqlite, which can be run with
  no dependencies.

 I'll try to make one. I tried to fix it today and found that the
 following code does not produce the error:
 test = Test()
 section = Section()
 section.Test = test
 db_session.save(test)
 for i in range(60):
     q = Question()
     q.Section = section
     for j in range(6):
         a = Answer()
         a.Question = q

 db_session.commit()

 Is there a maximum limit on the number of SQL commands that SA can issue
 between a BEGIN and a COMMIT?



[sqlalchemy] Re: memory and cpu usage growing like crazy

2008-06-18 Thread Arun Kumar PG
One more point that I forgot to mention in the workflow, so I re-wrote it
again:

- search request comes in
- if the ORM mapping is not created, it gets created now (only happens one
time)
- a new session is created using orm.create_session(weak_identity_map=True).
now this new session is added to a python dict like this:

resources = {
    'SESSION': session,
    'OTHER_RESOURCE': obj,
}

and then this resources dict is attached to the current request thread (this
is done so that different DAOs can access the same session and other
resources from the current thread).
- all ORM queries are fired.. results processed
- finally, the current thread is accessed again and tear-down happens as
below:

resources = currentThread().resources
resources['SESSION'].clear()
del resources

My question is: I am deleting the resources dict but not
resources['SESSION'] (the session object), which might still be pointed to
by an SA data structure set up as part of the initial orm.create_session
call?  I have not done a deep dive into the SA source code; I'm just
guessing.
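
A teardown sketch along those lines, which also drops the dict's own
reference to the Session (names taken from the post; whether SA internally
retains the Session is exactly the open question above):

from threading import currentThread

def teardown():
    resources = currentThread().resources
    session = resources.pop('SESSION')   # remove the dict's reference too
    session.clear()                      # empty the identity map (0.3 API)
    del session                          # drop the last local reference
    del currentThread().resources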


On Wed, Jun 18, 2008 at 8:57 PM, Michael Bayer [EMAIL PROTECTED]
wrote:



 On Jun 18, 2008, at 11:17 AM, Arun Kumar PG wrote:

  one more thing here, i noticed now that the query formed by sa when
  we do an eager load has got some problems (may be i am not doing the
  right thing)
 
  here's the problem
 
  i have entities A, B, where A -> B (1:N relationship)
 
  i form a query like this
 
  clauses = []
  clauses.append(A.c.col1 == 'xyz')
  clauses.append(B.c.col == 'xcv')
 
  qry = session.query(B).filter_by(*clauses)
  eager_qry = qry.options(sqlalchemy.eagerload('a'))
  eager_qry.all()
 
  the sql shows:
 
  select ... from A, B left outer join A as alias on alias.key == B.key
 
  why is A included for join two times ? i understand eager load might
  be creating the outer join but looks like because i am having a
  clause on A, A  is included in the first join as well.
 
  what is the right way to use it so that i can get rid off first join
  and eager load A.
 
  this is creating a huge result set.


 the joins created by eager loading are insulated from anything you do
 with filter(), order_by(), etc.  This is so that your collections load
 properly even if criterion were applied that would otherwise limit the
 collection from loading fully.

 See
 http://www.sqlalchemy.org/trac/wiki/FAQ#ImusinglazyFalsetocreateaJOINOUTERJOINandSQLAlchemyisnotconstructingthequerywhenItrytoaddaWHEREORDERBYLIMITetc.whichreliesupontheOUTERJOIN
  .

 You have two options here:  either use an explicit join in your query,
 so that the Query works regardless of the eager loading being present
 or not.  Or, if you truly want the eager loaded collection to reflect
 the query criterion, instead of using eagerload you can use
 contains_eager, as described in
 http://www.sqlalchemy.org/docs/05/mappers.html#advdatamapping_relation_strategies_containseager
  .  Those are the 0.5 docs but the same technique applies to the most
 recent 0.4 version.



 



-- 
cheers,

- a
