[sqlalchemy] memory and cpu usage growing like crazy

2008-06-18 Thread Arun Kumar PG
hi all folks,

i have a search form that allows users to search for records. i am eager
loading 4 attributes on the master object, which results in 4 left outer
joins in sa's sql query. the problem is that when i look at the memory
consumption using the top command, it looks crazy.

the memory shoots up by 50-60 MB instantly (sometimes even 100+ MB). i
executed the query on the db directly and the results come back in 3 secs
(close to 60,000 rows). sa spends a good amount of time processing the
results, and while it is doing that i see abnormal memory growth. the cpu
is also at almost 98% during this time.

the interesting thing is that after processing the request the memory does
not come down; it just stays there. i don't know why it's not gc'ed.

my environment:
- mysql 4.1
- sa 0.3.9
- python 2.4

is there any chance that memory is getting leaked? i don't see memory come
down even after some time.

please give me some pointers, as my app is totally based on sa and the
memory and cpu consumption is too high.

thanks.

cheers,

- a

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: memory and cpu usage growing like crazy

2008-06-18 Thread Arun Kumar PG
thanks for the instant reply guys!

as my app is in production i cannot afford to bring things down right away
for a 0.4/0.5 migration. eventually (next month) i will move to 0.4/0.5. so
for the time being (at least for the next month) i am looking for the best
solution on 0.3.x so that users are not affected.

michael, as you mentioned explicitly cleaning the session, i am doing that
currently. let me quickly describe the request flow so that you guys have
more information:

- search request comes in
- if the orm mapping is not created, it gets created now (only happens once)
- a new session is created and attached to the current thread (this is done
so that different DAOs can access the same session from the current thread)
- all orm queries are fired and the results processed
- finally, the current thread is accessed again, the session attached
earlier is retrieved, session.clear() is invoked and `del session` is done.
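That per-request lifecycle can be sketched in plain Python (FakeSession here is a stand-in for the real SA session, which the app would get from sqlalchemy.create_session(); all names are illustrative):

```python
import threading

class FakeSession(object):
    """Stand-in for an SA 0.3 session: holds strong refs until cleared."""
    def __init__(self):
        self.identity_map = {1: object()}   # pretend loaded instances
    def clear(self):
        self.identity_map.clear()

def handle_request():
    t = threading.current_thread()
    t.session = FakeSession()   # attach so DAOs can find it on the thread
    # ... DAOs run queries via threading.current_thread().session ...
    t.session.clear()           # drop references to loaded instances
    del t.session               # detach the session from the thread

handle_request()
```

Note that `del session` on a local name alone would not be enough; the attribute on the thread object is what keeps the session reachable.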

what's the best way to deal with the problem now...

thanks,

- A



On Wed, Jun 18, 2008 at 7:49 PM, Michael Bayer [EMAIL PROTECTED]
wrote:



 On Jun 18, 2008, at 9:59 AM, Arun Kumar PG wrote:

  hi all folks,
 
  i have a search form that allows user to search for records.  i am
  eager loading 4 attributes on the master object which results in 4
  left outer joins in the sa's sql query. the problem is that when i
  look at the memory consumption using top command it looks crazy.
 
  the memory shoots up by 50-60 MB instantly (some times even 100+
  MB). i executed the query on db directly and the results are
  returned in 3 secs (close to around 60,000 rows). sa is spending a
  good amount of time processing the results and while it is doing
  that i see abnormal memory growth. also the cpu is used almost 98%
  during this time.
 
  the interesting thing is that after processing the request the
  memory does not comes down. it stays there only. i dont know why its
  not gc'ed.
 
  my environment:
  - mysql 4.1
  - sa 3.9
  - python 2.4
 
  is there any chance that memory is getting leaked as i don't see
  memory come down even after some time.
 

 The Session in 0.3 does not lose references to any loaded data
 automatically; it has to be cleaned out manually using
 session.expunge(obj) or session.clear().  From 0.4 onward the
 Session is weak-referencing, so unreferenced, clean objects fall
 out of scope automatically.  0.4 also eager loads many rows about 30%
 faster than 0.3, and 0.5 is then about 15% faster than 0.4.  ORMs in
 general are designed for rich in-memory functionality and are not
 optimized for loading many tens of thousands of rows, so for better
 performance overall consider non-ORM access to these rows.
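A non-ORM fetch of the same rows might look like this (sqlite3 is used here only to keep the sketch self-contained and runnable; the real app would go through its MySQL connection or SA's SQL expression layer):

```python
import sqlite3

# Fetch plain tuples instead of ORM instances: no identity map, no
# attribute instrumentation, no unit-of-work bookkeeping per row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE master (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO master (name) VALUES (?)",
                 [("row%d" % i,) for i in range(60000)])

rows = conn.execute("SELECT id, name FROM master").fetchall()
print(len(rows))   # 60000 tuples, far cheaper than 60000 mapped objects
```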

 



-- 
cheers,

- a




[sqlalchemy] Re: memory and cpu usage growing like crazy

2008-06-18 Thread Arun Kumar PG
one more point that i forgot to mention in the workflow, so i have
re-written it:

- search request comes
- if the orm mapping is not created, it gets created now (only happens once)
- new session is created using orm.create_session(weak_identity_map=True).
now this new session is added to a python dict like this:

resources = {
    SESSION: session,
    OTHER_RESOURCE: obj,
}

and then this resources dict is attached to the current request thread (this
is done so that different DAOs can access the same session and other
resources from the current thread).
- all orm queries are fired and the results processed
- finally, the current thread is accessed again and tear-down happens as
below:

resources = currentThread().resources
resources[SESSION].clear()
del resources

my question: i am deleting the resources dict but not resources[SESSION]
(the session object), which might still be referenced by sa data structures
created as part of the initial orm.create_session call? i have not done a
deep dive into the sa source code; i am just guessing.
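On that guess: `del resources` only removes the local *name*; the dict (and the session inside it) stays alive as long as the thread attribute still points at it. A quick demonstration in plain Python (object() stands in for the session):

```python
import threading

SESSION = "session"
t = threading.current_thread()
t.resources = {SESSION: object()}   # object() stands in for the session

resources = t.resources             # a second name for the same dict
resources[SESSION] = None           # analogous to session.clear()
del resources                       # unbinds the local name only

assert hasattr(t, "resources")      # dict still reachable via the thread
del t.resources                     # this is the delete that frees it
```

So the tear-down should also remove the attribute from the thread object itself, not just delete the local name.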


On Wed, Jun 18, 2008 at 8:57 PM, Michael Bayer [EMAIL PROTECTED]
wrote:



 On Jun 18, 2008, at 11:17 AM, Arun Kumar PG wrote:

  one more thing here, i noticed now that the query formed by sa when
  we do an eager load has got some problems (may be i am not doing the
  right thing)
 
  here's the problem
 
  i have entities A, B. where A - B (1:N relationship)
 
  i form a query like this
 
  clauses = []
  clauses.append(A.c.col1 == 'xyz')
  clauses.append(B.c.col == 'xcv')
 
  qry = session.query(B).filter_by(*clauses)
  eager_qry = qry.options(sqlalchemy.eagerload('a'))
  eager_qry.all()
 
  the sql shows:
 
  select ... from A, B left outer join A as alias on alias.key = B.key
 
  why is A included in the join two times? i understand eager load might
  be creating the outer join, but it looks like, because i have a
  clause on A, A is included in the first join as well.
 
  what is the right way to use it so that i can get rid of the first join
  and eager load A?
 
  this is creating a huge result set.


 the joins created by eager loading are insulated from anything you do
 with filter(), order_by(), etc.  This is so that your collections load
 properly even if criterion were applied that would otherwise limit the
 collection from loading fully.

 See
 http://www.sqlalchemy.org/trac/wiki/FAQ#ImusinglazyFalsetocreateaJOINOUTERJOINandSQLAlchemyisnotconstructingthequerywhenItrytoaddaWHEREORDERBYLIMITetc.whichreliesupontheOUTERJOIN
  .

 You have two options here:  either use an explicit join in your query,
 so that the Query works regardless of the eager loading being present
 or not.  Or, if you truly want the eager loaded collection to reflect
 the query criterion, instead of using eagerload you can use
 contains_eager, as described in
 http://www.sqlalchemy.org/docs/05/mappers.html#advdatamapping_relation_strategies_containseager
  .  Those are the 0.5 docs but the same technique applies to the most
 recent 0.4 version.
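Applied to the A/B example above, the two options might look roughly like this (untested sketch in 0.4-era query syntax; 'a' is the relation name from the original query):

```python
# option 1: explicit join; works the same with or without eagerload
session.query(B).join('a') \
    .filter(A.c.col1 == 'xyz').filter(B.c.col == 'xcv').all()

# option 2: reuse the joined rows to populate the 'a' attribute,
# so the loaded relation reflects the query criterion
session.query(B).join('a') \
    .filter(A.c.col1 == 'xyz') \
    .options(contains_eager('a')).all()
```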



 



-- 
cheers,

- a




[sqlalchemy] IMPORTANT: Does SA caches objects in memory forever?

2007-12-06 Thread Arun Kumar PG
Hi All,

I am having a problem with memory consumption when using SA. I have 200 MB
of RAM allocated for the application, and when I look at usage statistics
using top or any other memory monitor I see that after every request to the
application the memory consumption increases and the memory is not returned
to the pool. So if memory consumption is initially 80 MB, then after a
request comes in and we do a whole bunch of SA processing it will be, let's
say, 82 MB, and it stays there forever, going up and up as more requests
come in. After a while, when it reaches 200 MB, the application fails and
no more memory is available.

Does that mean that retrieved objects are cached forever without any
timeout? How can I alleviate this problem? It is very high priority right
now and needs to be fixed immediately.

thoughts/suggestions/solutions all welcome.

Thanks all!

-- 
Cheers,

- A




[sqlalchemy] Re: IMPORTANT: Does SA caches objects in memory forever?

2007-12-06 Thread Arun Kumar PG
FYI: I am using SA 0.3.9.

On Dec 6, 2007 6:52 PM, Arun Kumar PG [EMAIL PROTECTED] wrote:

 Hi All,

 I am having this problem with memory consumption when using SA. I have 200
 MB ram allocated for the application and when I look at the usage statistics
 using top or any other memory monitor I see that after every request to the
 application the memory consumption is increasing and the memory is not
 getting returned back to the pool. So if the memory consumption is 80 MB
 initially and after a request comes and we do a whole bunch of SA processing
 then if I look at the memory it will be let's say 82 MB and will stay there
 forever and it keeps on going up and up with many requests coming in. And
 after a while when it reaches 200 MB the application fails and no more
 memory is available.

 Does that means that objects retrieved are cached forever without any
 timeout? How can I alleviate this problem as this is very high priority
 right now and needs to be fixed immediately.

 thoughts/suggestions/solutions all welcome.

 Thanks all!

 --
 Cheers,

 - A




-- 
Cheers,

- A




[sqlalchemy] Profiling code to understand memory utilisation

2007-10-12 Thread Arun Kumar PG
I want to profile my code to understand how much memory the SA ORM uses
while handling a request. Does SA expose some logging mechanism which can
dump the usage, like echo on the engine?
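There is no built-in memory logger comparable to echo=True (echo logs SQL, not memory), but the interpreter can be inspected directly with the gc module. A rough, SA-agnostic sketch: take a census of live objects before and after the request and diff the counts (the throwaway dicts below stand in for loaded ORM instances):

```python
import gc
from collections import Counter

def census():
    """Count live, gc-tracked objects by type name."""
    gc.collect()
    return Counter(type(o).__name__ for o in gc.get_objects())

before = census()["dict"]
hoard = [{"row": [i]} for i in range(1000)]   # stand-in for ORM rows
after = census()["dict"]
print(after - before)   # at least 1000: the "request" left these behind
```

Anything a request leaves behind (sessions, identity maps, mapped instances) shows up in the diff; running the census before and after session.clear() shows whether the cleanup actually released objects.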

-- 
Cheers,

- A




[sqlalchemy] Many tables in eagerloading

2007-09-04 Thread Arun Kumar PG
Guys,

I was wondering: if we have 10 or so tables which are related to each other
and are all required during, let's say, report generation, and I specify
eager loading for all the attributes related to these tables, then as the
records in the tables grow, the temp tables generated by the join (left
outer join in eager loading) will be massive before the where clause is
applied. Should we worry about this, or is it fine as long as the tables
are joined on primary/foreign keys and the query plan looks decent?

I am doing this for 7-8 tables, a couple of which are growing continuously
by a factor of 2xN every month. I am worried that eager loading may be a
problem, in the sense that it could bring the db server to its knees some
day, considering the joins happening. FYI: the eager loading is specified
at the Query level so that I can optimize where I really need it.

Currently, though, it's faster compared to executing individual queries;
in my case, going with individual queries via lazy load takes forever, and
in many cases the request times out when using a web browser. So eager
loading is better, but I'm just worried about speed. Any good guidelines on
how we should use eager loading, best practices, limitations, etc.?
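One limitation worth knowing is the row fan-out: each 1:N eager load adds an outer join, and independent 1:N joins multiply rows before the ORM de-duplicates them. Illustrated with plain SQL (sqlite3 here just to keep the sketch runnable; table names are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE parent (id INTEGER PRIMARY KEY);
    CREATE TABLE child_a (parent_id INTEGER);
    CREATE TABLE child_b (parent_id INTEGER);
    INSERT INTO parent VALUES (1);
    INSERT INTO child_a VALUES (1), (1), (1);
    INSERT INTO child_b VALUES (1), (1), (1), (1);
""")
rows = conn.execute("""
    SELECT p.id FROM parent p
    LEFT OUTER JOIN child_a a ON a.parent_id = p.id
    LEFT OUTER JOIN child_b b ON b.parent_id = p.id
""").fetchall()
print(len(rows))   # 3 * 4 = 12 rows for one parent: the joins multiply
```

With 7-8 eagerly loaded 1:N relations the multiplication compounds, which is why the result set (and the ORM's processing time) can blow up even when each individual table is modest.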

-- 
Cheers,

- A




[sqlalchemy] Re: Many tables in eagerloading

2007-09-04 Thread Arun Kumar PG
bit tough, as many apps are on that server; it won't be easy as of now.

On 9/4/07, Michael Bayer [EMAIL PROTECTED] wrote:



 On Sep 4, 2007, at 10:15 AM, Arun Kumar PG wrote:

  i thought so earlier but unfortunately i am on a lower version of
  mysql :(

 upgrade.


 



-- 
Cheers,

- A




[sqlalchemy] Join in where and using .join()

2007-08-23 Thread Arun Kumar PG
Guys,

Quick clarification:

If we have two tables A and B with relationship key 'XYZ' in both (B
references A), then which is faster:

1) session.query(A).select_by(*[A.c.XYZ == B.c.XYZ])

or

2) session.query(A, B).join('XYZ')

2 should be faster, as 1 may require more row scans?

Also, the below one is returning multiple records when it should return one
(maybe the join is not happening correctly; is anything missing?).
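On the speed question: most query planners treat the two spellings identically, so any real difference comes from what the ORM loads, not from row scans. And if B has several rows with the same XYZ, either form returns A once per matching B row, which would explain the multiple records. A self-contained check of the equivalence (sqlite3 stands in for MySQL; schema is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE A (XYZ INTEGER, name TEXT);
    CREATE TABLE B (XYZ INTEGER);
    INSERT INTO A VALUES (1, 'one'), (2, 'two');
    INSERT INTO B VALUES (2);
""")
implicit = conn.execute(
    "SELECT A.name FROM A, B WHERE A.XYZ = B.XYZ").fetchall()
explicit = conn.execute(
    "SELECT A.name FROM A JOIN B ON A.XYZ = B.XYZ").fetchall()
print(implicit == explicit)   # True: both forms return the same rows
```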

-- 
Cheers,

- A




[sqlalchemy] Re: Ascii codec instead of unicode ?

2007-08-21 Thread Arun Kumar PG
Sounds good. Thanks Jason!

On 8/20/07, jason kirtland [EMAIL PROTECTED] wrote:


 Arun wrote:
  So in short if I specify use_unicode=True at the SA engine level
  then I can skip specifying use_unicode and specify only
  charset=utf8 at mysqldb level ?

 If you configure this DB-API driver for all-Unicode (which is what
 happens when you only give it a 'charset'), all strings will come
 back from the database to SQLAlchemy as Unicode.  You can ask the
 Engine and/or types to convert_unicode=True, but it won't do
 anything except add processing overhead; the strings are already
 Unicode from the driver.

 Try playing with the following to find a combination that suits
 your needs.  The first two engine configurations aren't options for
 you obviously, but they make a good demo.

 from sqlalchemy import *
 e = create_engine('mysql:///test')
 #e = create_engine('mysql:///test', convert_unicode=True)
 #e = create_engine('mysql:///test?charset=utf8')
 #e = create_engine('mysql:///test?charset=utf8',
 #                  convert_unicode=True)
 #e = create_engine('mysql:///test?charset=utf8&use_unicode=0')
 #e = create_engine('mysql:///test?charset=utf8&use_unicode=0',
 #                  convert_unicode=True)

 m = MetaData(e)
 t = Table('unicodings', m,
           Column('string', String(32)),
           Column('unicode', Unicode(32)))

 if not t.exists():
     t.create()
     t.insert().execute({'string': 'foo',
                         'unicode': 'bar'})

 print repr(list(t.select().execute()))


 



-- 
Cheers,

- A




[sqlalchemy] Re: Ascii codec instead of unicode ?

2007-08-20 Thread Arun Kumar PG
So, in short, if I specify use_unicode=True at the SA engine level then I
can skip specifying use_unicode and specify only charset=utf8 at the
mysqldb level?

On 8/19/07, jason kirtland [EMAIL PROTECTED] wrote:


 Arun Kumar PG wrote:
  Ok, you need to get that charset to the driver.  Try removing SET
  NAMES from your init_command, and instead pass charset=utf8 and
  use_unicode=0 in your database connection URL.
 
  why do we want to say use_unicode=0 instead or use_unicode=True here?

 You can go either way with that.  The MySQLdb driver's default behavior
 when given a 'charset' is to also turn on its "return all strings in
 Unicode" mode.  If you want all of your strings as Unicode that's just
 dandy, but if you're expecting them to come back as regular strings
 encoded in the charset you requested, you'd be in for a surprise...

 In my own code I enable use_unicode and I don't specify any Unicode
 options or column types at the SQLAlchemy level.

 -j


 



-- 
Cheers,

- A




[sqlalchemy] Re: Ascii codec instead of unicode ?

2007-08-19 Thread Arun Kumar PG
why do we want to say use_unicode=0 instead or use_unicode=True here?

On 8/16/07, jason kirtland [EMAIL PROTECTED] wrote:


 Arun wrote:
  I am using mysqldb-1.2.2. I am passing 'SET NAMES' to connect
  method as a value for init_command parameter. All tables have
  utf8 charset. And I pass convert_unicode=True to engine.
 
  Let me know if anything else is required.

 Ok, you need to get that charset to the driver.  Try removing SET
 NAMES from your init_command, and instead pass charset=utf8 and
 use_unicode=0 in your database connection URL.




 



-- 
Cheers,

- A




[sqlalchemy] Re: Ascii codec instead of unicode ?

2007-08-16 Thread Arun Kumar PG
Whoops, the problem is occurring on the box which is running mysqldb 1.2.0.
It's actually a non-upgraded server with mysqldb 1.2.0; I was looking at
the code on my own box, which contains 1.2.2.

But I am sure that even with mysqldb 1.2.0 the existing SA version was
working fine, and that's why init_command is being used there, as charset
is not available in 1.2.0 as a connect kwarg.

Any clue?


On 8/16/07, jason kirtland [EMAIL PROTECTED] wrote:


 Arun wrote:
  I am using mysqldb-1.2.2. I am passing 'SET NAMES' to connect
  method as a value for init_command parameter. All tables have
  utf8 charset. And I pass convert_unicode=True to engine.
 
  Let me know if anything else is required.

 Ok, you need to get that charset to the driver.  Try removing SET
 NAMES from your init_command, and instead pass charset=utf8 and
 use_unicode=0 in your database connection URL.




 



-- 
Cheers,

- A




[sqlalchemy] Ascii codec instead of unicode ?

2007-08-15 Thread Arun Kumar PG
Hi All,

Recently I upgraded to version 0.3.9 of SA. After that, whenever I try to
save characters in a different language in the table I get the below
exception:

 File /src/sqlalchemy/engine/base.py,
line 601, in _execute
   raise exceptions.SQLError(context.statement, context.parameters, e)
SQLError: (UnicodeDecodeError) 'ascii' codec can't decode byte 0xc3 in
position 1: ordinal not in range(128)

I am wondering why it is using the ascii codec instead of unicode?

FYI: I am using MySQL 4.1 and the charset of table is utf-8.

-- 
Cheers,

- A




[sqlalchemy] Re: Ascii codec instead of unicode ?

2007-08-15 Thread Arun Kumar PG
All tables have a charset of utf8. Additionally, I am issuing a SET NAMES
'utf8' statement as part of connection establishment.

Is anything wrong or missing here?

On 8/15/07, jason kirtland [EMAIL PROTECTED] wrote:


 Arun Kumar PG wrote:
  Hi All,
 
  Recently I upgraded to the version 3.9 of SA. Post that whenever I am
  trying to save characters in different language in the table I am
  getting the below exception:
 
   File /src/sqlalchemy/engine/base.py,
  line 601, in _execute
 raise exceptions.SQLError(context.statement, context.parameters, e)
  SQLError: (UnicodeDecodeError) 'ascii' codec can't decode byte 0xc3 in
  position 1: ordinal not in range(128)
 
  I am wondering why it is using ascii codec instead of unicode ?
 
  FYI: I am using MySQL 4.1 and the charset of table is utf-8.

 Odd to see ascii there instead of latin1.  Is your database configured
 for utf-8 client connections?


 



-- 
Cheers,

- A




[sqlalchemy] Re: Ascii codec instead of unicode ?

2007-08-15 Thread Arun Kumar PG
Yes, it's being done: I create the engine and then set
convert_unicode=True.

On 8/15/07, Michael Bayer [EMAIL PROTECTED] wrote:

 are you using convert_unicode=True and/or the Unicode type ?
 On Aug 15, 2007, at 12:01 PM, Arun Kumar PG wrote:

 All tables are having a charset of utf8. Additionally, I am issuing SET
 NAMES 'utf8' statement as a part of connection establishment.

 Anything that is wrong here or missing ?

 On 8/15/07, jason kirtland [EMAIL PROTECTED] wrote:
 
 
  Arun Kumar PG wrote:
   Hi All,
  
   Recently I upgraded to the version 3.9 of SA. Post that whenever I am
   trying to save characters in different language in the table I am
   getting the below exception:
  
   File /src/sqlalchemy/engine/base.py,
   line 601, in _execute
   raise exceptions.SQLError(context.statement, context.parameters, e)
   SQLError: (UnicodeDecodeError) 'ascii' codec can't decode byte 0xc3 in
 
   position 1: ordinal not in range(128)
  
   I am wondering why it is using ascii codec instead of unicode ?
  
   FYI: I am using MySQL 4.1 and the charset of table is utf-8.
 
  Odd to see ascii there instead of latin1. Is your database configured
  for utf-8 client connections?
 
 
 


 --
 Cheers,

 - A




 



-- 
Cheers,

- A




[sqlalchemy] Re: Ascii codec instead of unicode ?

2007-08-15 Thread Arun Kumar PG
Any other clue that may be helpful in troubleshooting the cause ?

On 8/15/07, Arun Kumar PG [EMAIL PROTECTED] wrote:

 Yes. it's being done. I create the engine and then set convert unicode =
 True.

 On 8/15/07, Michael Bayer  [EMAIL PROTECTED] wrote:
 
  are you using convert_unicode=True and/or the Unicode type ?
  On Aug 15, 2007, at 12:01 PM, Arun Kumar PG wrote:
 
  All tables are having a charset of utf8. Additionally, I am issuing SET
  NAMES 'utf8' statement as a part of connection establishment.
 
  Anything that is wrong here or missing ?
 
  On 8/15/07, jason kirtland [EMAIL PROTECTED] wrote:
  
  
   Arun Kumar PG wrote:
Hi All,
   
Recently I upgraded to the version 3.9 of SA. Post that whenever I
   am
trying to save characters in different language in the table I am
getting the below exception:
   
File /src/sqlalchemy/engine/base.py,
line 601, in _execute
raise exceptions.SQLError(context.statement, context.parameters, e)
SQLError: (UnicodeDecodeError) 'ascii' codec can't decode byte 0xc3
   in
position 1: ordinal not in range(128)
   
I am wondering why it is using ascii codec instead of unicode ?
   
FYI: I am using MySQL 4.1 and the charset of table is utf-8.
  
   Odd to see ascii there instead of latin1. Is your database configured
   for utf-8 client connections?
  
  
  
 
 
  --
  Cheers,
 
  - A
 
 
 
 
   
 


 --
 Cheers,

 - A




-- 
Cheers,

- A




[sqlalchemy] Re: Ascii codec instead of unicode ?

2007-08-15 Thread Arun Kumar PG
I am using mysqldb-1.2.2. I am passing 'SET NAMES' to the connect method as
the value of the init_command parameter. All tables have the utf8 charset.
And I pass convert_unicode=True to the engine.

Let me know if anything else is required.

thanks!

On 8/15/07, jason kirtland [EMAIL PROTECTED] wrote:


 So to recap, you are setting the character set on the dbapi
 connection via the MySQLdb method in addition to issuing a manual
 SET NAMES query?


 Arun wrote:
  Any other clue that may be helpful in troubleshooting the cause ?
 
 
  On 8/15/07, Arun Kumar PG [EMAIL PROTECTED]  wrote:
 
  Yes. it's being done. I create the engine and then set convert
  unicode = True.
 
 
 
  On 8/15/07, Michael Bayer  [EMAIL PROTECTED] wrote:
 
 
  are you using convert_unicode=True and/or the Unicode type ?
 
 
 
  On Aug 15, 2007, at 12:01 PM, Arun Kumar PG wrote:
 
  All tables are having a charset of utf8. Additionally, I am
  issuing SET NAMES 'utf8' statement as a part of connection
  establishment.
 
  Anything that is wrong here or missing ?
 
 
  On 8/15/07, jason kirtland [EMAIL PROTECTED] wrote:
 
 
  Arun Kumar PG wrote:
  Hi All,
 
  Recently I upgraded to the version 3.9 of SA. Post that whenever
  I am trying to save characters in different language in the
  table I am getting the below exception:
 
  File /src/sqlalchemy/engine/base.py,
  line 601, in _execute
  raise exceptions.SQLError(context.statement, context.parameters,
  e) SQLError: (UnicodeDecodeError) 'ascii' codec can't decode
  byte 0xc3 in  position 1: ordinal not in range(128)
 
  I am wondering why it is using ascii codec instead of unicode ?
 
  FYI: I am using MySQL 4.1 and the charset of table is utf-8.
 
  Odd to see ascii there instead of latin1. Is your database
  configured
  for utf-8 client connections?
 
 
 
 
 
 
  --
  Cheers,
 
  - A
 
 
 
 
 
 
 
 
 
 
  --
  Cheers,
 
  - A



 



-- 
Cheers,

- A




[sqlalchemy] Re: *all* *new* *tutorials* !!!!

2007-08-06 Thread Arun Kumar PG
Cool. thx Michael!

On 8/7/07, Michael Bayer [EMAIL PROTECTED] wrote:


 Hi gang -

 The documentation for 0.4 is undergoing tremendous changes, and is now
 released, in its almost-there format, at
 http://www.sqlalchemy.org/docs/04/
 .  The goal with these docs is not just to update to new 0.4
 paradigms, but to also raise the bar for accuracy and clarity.

 Of major note is that the datamapping and sql construction sections,
 as well as the old tutorial, have been entirely replaced by two new
 and very comprehensive tutorials, one targeted at ORM and the other at
 SQL Expression Language.  Both have no prerequisites to start, they
 each can be the first thing you ever read about SQLAlchemy.  Both are
 also fully executable doctest format, so they are guaranteed not to
 have my usual array of mistakes.

 Also here is a rewritten mapper configuration document to replace
 advanced datamapping.  It includes clearer, up-to-date, and more
 correct examples of virtually every major mapper pattern we have,
 including all the new stuff like dynamic relations.

 With recently updated engine and metadata sections, the only major
 section left is sessions, which already includes information about
 the new autoflush and transactional sessions, as well as two-phase
 and SAVEPOINT recipes...I hope to simplify some of the older content
 here as well as standardize on the new sessionmaker function and its
 cousin, scoped_session, which replaces SessionContext as well as
 assignmapper (both are deprecated in 0.4).

 I hope everyone can check out the docs, come back with feedback/
 corrections/questions, and start getting ready for 0.4 !

 - mike


 



-- 
Cheers,

- A

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: ConcurrentModificationError: Updated rowcount 0 does not match number of objects updated 1

2007-07-31 Thread Arun Kumar PG
I am using pool.QueuePool for managing connections, and this pool is fed
with a creator function which returns an instance of my own custom DBAPI
class (I need this because of some logging stuff that I am doing). This
custom DBAPI class returns a Connection object returned by MySQLdb.connect.

 The FOUND_ROWS client flag must be enabled on MySQL connections to make
rowcount return what you (and SA) are expecting.

Has this been enabled recently, in SA 0.3.9? Because the previous version
was working fine.
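With a custom creator, the flag would be enabled at connect time, roughly like this (untested sketch; MySQLdb's client-flag constants live in MySQLdb.constants.CLIENT, and the host/user/db names are placeholders):

```python
import MySQLdb
from MySQLdb.constants import CLIENT

def creator():
    # FOUND_ROWS makes UPDATE rowcount report rows *matched* rather
    # than rows *changed*, which is what SA's concurrency check expects.
    return MySQLdb.connect(host="localhost", user="app", db="mydb",
                           client_flag=CLIENT.FOUND_ROWS)
```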


On 7/31/07, jason kirtland [EMAIL PROTECTED] wrote:


 Arun Kumar PG wrote:
  Looks like the problem is coming because of the fact when we are
  updating a row in table with the same data the rowcount returned by
  mysql is 0. Only when there is a change in data the rowcount is
 returned.

 Are you creating connections outside of SQLAlchemy?  (I seem to recall
 you were using a custom creator function.)  The FOUND_ROWS client flag
 must be enabled on MySQL connections to make rowcount return what you
 (and SA) are expecting.
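A hedged sketch of that advice (the make_creator helper and its keyword arguments are hypothetical; FOUND_ROWS mirrors MySQLdb.constants.CLIENT.FOUND_ROWS, whose value is 2):

```python
# Sketch: enabling FOUND_ROWS through a custom creator function.
# FOUND_ROWS mirrors MySQLdb.constants.CLIENT.FOUND_ROWS (value 2);
# make_creator and the connect kwargs are illustrative, not SA API.
FOUND_ROWS = 2

def make_creator(base_flags=0, **connect_kwargs):
    """Build a creator callable whose connections report matched rows."""
    client_flag = base_flags | FOUND_ROWS

    def creator():
        import MySQLdb  # assumed available in the real application
        return MySQLdb.connect(client_flag=client_flag, **connect_kwargs)

    return creator, client_flag
```

The returned creator can then be handed to the pool. Without FOUND_ROWS, MySQL reports only *changed* rows, so an UPDATE that rewrites identical values yields rowcount 0 and trips SA's concurrency check.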


 



-- 
Cheers,

- A




[sqlalchemy] Re: ConcurrentModificationError: Updated rowcount 0 does not match number of objects updated 1

2007-07-31 Thread Arun Kumar PG
The MySQLdb library has CLIENT.FOUND_ROWS = 2. What value should it ideally
have?

I am still not clear why this problem was not occurring with the earlier SA
version!

On 7/31/07, Arun Kumar PG [EMAIL PROTECTED] wrote:

 I am using pool.QueuePool for managing connections and this pool is fed
 with a creator function which returns an instance of my own custom DBAPI
 class (I need this because of some logging stuff that I am doing). This
 custom DBAPI class returns a Connection object returned by MySQLdb.connect
 .

  The FOUND_ROWS client flag must be enabled on MySQL connections to
 make rowcount return what you (and SA) are expecting.

 This has been enabled recently on SA 3.9 ? because the previous version
 was working fine.


 On 7/31/07, jason kirtland [EMAIL PROTECTED] wrote:
 
 
  Arun Kumar PG wrote:
   Looks like the problem is coming because of the fact when we are
   updating a row in table with the same data the rowcount returned by
   mysql is 0. Only when there is a change in data the rowcount is
  returned.
 
  Are you creating connections outside of SQLAlchemy?  (I seem to recall
  you were using a custom creator function.)  The FOUND_ROWS client flag
  must be enabled on MySQL connections to make rowcount return what you
  (and SA) are expecting.
 
 
   
 


 --
 Cheers,

 - A




-- 
Cheers,

- A




[sqlalchemy] Re: ConcurrentModificationError: Updated rowcount 0 does not match number of objects updated 1

2007-07-31 Thread Arun Kumar PG
Passing client_flag = 2 to MySQLdb.connect solves the problem, but I am still
confused about why this was working with the earlier version. It looks like
the 0.3.9 version has an update that makes code that does not specify
client_flag fail?



On 7/31/07, King Simon-NFHD78 [EMAIL PROTECTED] wrote:

  databases/mysql.py has this snippet in create_connect_args (0.3.10):

 # FOUND_ROWS must be set in CLIENT_FLAGS to enable
 # supports_sane_rowcount.
 client_flag = opts.get('client_flag', 0)
 if self.dbapi is not None:
     try:
         import MySQLdb.constants.CLIENT as CLIENT_FLAGS
         client_flag |= CLIENT_FLAGS.FOUND_ROWS
     except:
         pass
 opts['client_flag'] = client_flag
 So CLIENT.FOUND_ROWS is a constant that should be passed as part of the
 'client_flag' option to MySQLdb's connect method. I don't know why you didn't
 need this before, though.

 Hope that helps,

 Simon

  --
 From: sqlalchemy@googlegroups.com [mailto:[EMAIL PROTECTED] On Behalf Of Arun Kumar PG
 Sent: 31 July 2007 09:47
 To: sqlalchemy@googlegroups.com
 Subject: [sqlalchemy] Re: ConcurrentModificationError: Updated rowcount
 0 does not match number of objects updated 1

 The MySQLdb library has CLIENT.FOUND_ROWS = 2. What value ideally it
 should have ?

 I am still now clear why this problem was not coming in the earlier SA
 version!

 On 7/31/07, Arun Kumar PG [EMAIL PROTECTED] wrote:
 
  I am using pool.QueuePool for managing connections and this pool is fed
  with a creator function which returns an instance of my own custom DBAPI
  class (I need this because of some logging stuff that I am doing). This
  custom DBAPI class returns a Connection object returned by
  MySQLdb.connect.
 
   The FOUND_ROWS client flag must be enabled on MySQL connections to
  make rowcount return what you (and SA) are expecting.
 
  This has been enabled recently on SA 3.9 ? because the previous version
  was working fine.
 
 
  On 7/31/07, jason kirtland  [EMAIL PROTECTED] wrote:
  
  
   Arun Kumar PG wrote:
Looks like the problem is coming because of the fact when we are
updating a row in table with the same data the rowcount returned by
mysql is 0. Only when there is a change in data the rowcount is
   returned.
  
   Are you creating connections outside of SQLAlchemy?  (I seem to recall
   you were using a custom creator function.)  The FOUND_ROWS client flag
   must be enabled on MySQL connections to make rowcount return what you
   (and SA) are expecting.
  
  
  
 
 
  --
  Cheers,
 
  - A




 --
 Cheers,

 - A
 



-- 
Cheers,

- A




[sqlalchemy] Re: ConcurrentModificationError: Updated rowcount 0 does not match number of objects updated 1

2007-07-31 Thread Arun Kumar PG
As mentioned in my earlier email thread:

I am using pool.QueuePool for managing connections and this pool is fed with
a creator function which returns an instance of my own custom DBAPI class (I
need this because of some logging stuff that I am doing). This custom DBAPI
class returns a Connection object returned by MySQLdb.connect.

 we explicitly set an option on the MySQLDB connection
 so that rowcount works properly.
Are you referring to the changes in sa/databases/mysql.py wherein we are
setting the 'client_flag'?

Quick question: the rowcount-related update you mentioned will work even if I
am creating an engine by passing in a pool, right?

Looks like something is going wrong somewhere in my code... any helpful
pointers for tracking this down?



On 7/31/07, Michael Bayer [EMAIL PROTECTED] wrote:



 On Jul 31, 2007, at 3:40 AM, Arun Kumar PG wrote:

  Looks like the problem is coming because of the fact when we are
  updating a row in table with the same data the rowcount returned by
  mysql is 0. Only when there is a change in data the rowcount is
  returned.
 
  Assuming RollValue column is changing from 99 to 100 if I execute
  the following statements:
 
  UPDATE TABLE_X Set RollValue=100 WHERE ID=100;
 
   this will return rowcount = 1
 
  Again running this, UPDATE TABLE_X Set RollValue=100 WHERE ID=100;
 
   will return rowcount = 0 and therefore the transaction fails.
 
  Has anything changed in latest version of SA and I recently moved
  to SA 3.9 after post which I am facing this problem.
 

 are you using a custom connection function ?  or some ancient version
 of MySQLDB ?  we explicitly set an option on the MySQLDB connection
 so that rowcount works properly.  very old versions of SA didnt set
 this variable and ignored MySQLDB's rowcount...but that would be
 early versions of 0.3 at the most recent.




 



-- 
Cheers,

- A




[sqlalchemy] Re: ConcurrentModificationError: Updated rowcount 0 does not match number of objects updated 1

2007-07-31 Thread Arun Kumar PG
Yeah this is what I did and it works. Was curious to know where the problem
was. I guess I need to run through the code. Thanks Michael and all others!

On 7/31/07, Michael Bayer [EMAIL PROTECTED] wrote:



 On Jul 31, 2007, at 10:48 AM, Arun Kumar PG wrote:

  As mentioned in my earlier email thread:
 
  I am using pool.QueuePool for managing connections and this pool is
  fed with a creator function which returns an instance of my own
  custom DBAPI class (I need this because of some logging stuff that
  I am doing). This custom DBAPI class returns a Connection object
  returned by MySQLdb.connect.
 
   we explicitly set an option on the MySQLDB connection
   so that rowcount works properly.
  Are you referring to the changes in sa/databases/mysql.py where in
  we are setting the 'client_flag'  ?
 
  Quick question: the rowcount related update as mentioned by you
  will work even if I am creating an engine passing in a pool - right?

 if you set the flag yourself, (and your custom DBAPI class is passing
 the flag all the way back to MySQLDB), itll work fine



 



-- 
Cheers,

- A




[sqlalchemy] ConcurrentModificationError: Updated rowcount 0 does not match number of objects updated 1

2007-07-30 Thread Arun Kumar PG
Guys,

I am using SA with MySQL. I am trying to update a record by making a call to
session.update(obj). In response I am getting ConcurrentModificationError:
Updated rowcount 0 does not match number of objects updated 1

What are the possible reasons for this error?

-- 
Cheers,

- A




[sqlalchemy] UnicodeDecodeError: 'utf8' codec can't decode bytes in position 94-96: invalid data

2007-07-25 Thread Arun Kumar PG
Guys,

I am getting this error when reading data from a MySQL table. I have
specified the table's charset as utf8 and its collation as utf8_general_ci.
Do I need to do anything else? Will convert_unicode=True help?
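A pseudocode sketch of the two knobs usually involved here (0.3-era keyword; the connection URL is hypothetical): passing the connection charset through to MySQLdb, and asking SA to decode result strings.

```python
# Pseudocode sketch: ask MySQLdb for utf8 on the wire and have
# SQLAlchemy hand back unicode strings (URL is hypothetical).
engine = create_engine(
    'mysql://user:pass@host/dbname?charset=utf8',
    convert_unicode=True,  # wrap String types so results decode to unicode
)
```

If the bytes stored in the table were never valid utf8 to begin with, no charset setting will fix the decode error; the data itself has to be repaired.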
-- 
Cheers,

- A




[sqlalchemy] Re: URGENT: 2014, Commands out of sync; you can't run this command now

2007-07-19 Thread Arun Kumar PG
Apologies for not responding for a while; I was stuck in the project.

OK, so this is what is happening: the mapped objects are created during the
first request to the application. So create_engine is called only once,
passing in the creator as lambda: db_obj, where db_obj is a reference to a
method returning a MySQLdb connection.

Starting over: a request comes in and is handled by a thread in the thread
pool, which checks whether the mapped objects have already been created. If
yes, it returns them; else it creates the mapped objects (via create_engine(),
as mentioned above) and the thread is returned to the pool. (FYI: this is an
HTTP server with a thread pool for request handling.)

Subsequent requests do not create mapped objects or an engine. They simply
use the existing mapped objects and perform ORM actions.

The problem was occurring when lazy loading happened during multiple
requests. I guess the underlying connection pool (in the creator approach) is
not using a threadlocal strategy, as the pool log shows different connections
being checked in and out and exchanged among the different request-handling
threads.

Can that be the problem?

Also, does the underlying connection pool use a threadlocal strategy when the
engine is created with a creator? I don't know if the strategy flag is for
that.

However, when I passed in a pool.QueuePool instance with use_threadlocal=True,
everything worked just fine.

Any thoughts where the problem could be ?
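A toy model of what use_threadlocal=True buys (pure Python, no database; MiniPool is hypothetical and not an SA class): repeated checkouts from the same thread return the same connection until it is checked back in, while other threads get their own.

```python
import itertools
import threading

# Toy model (hypothetical MiniPool) of a threadlocal checkout policy:
# a thread keeps getting the same "connection" back until checkin.
class MiniPool:
    def __init__(self):
        self._ids = itertools.count(1)      # stand-in for real connections
        self._local = threading.local()     # per-thread checkout slot

    def checkout(self):
        conn = getattr(self._local, 'conn', None)
        if conn is None:
            conn = next(self._ids)
            self._local.conn = conn
        return conn

    def checkin(self):
        self._local.conn = None
```

As discussed elsewhere in this thread, the real QueuePool's threadlocal mode behaves analogously for checkouts, but it cannot stop one thread from handing its checked-out connection (or a session holding one) to another thread.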


On 7/16/07, Michael Bayer [EMAIL PROTECTED] wrote:



 On Jul 15, 2007, at 11:24 PM, Arun Kumar PG wrote:

  Hi Michael,
 
  I figured out the problem. It was a connection sharing issue. Looks
  like different connection objects were getting returned from the
  pool (which was created using the creator approach in create_engine
  ()) when relations were getting loaded as a part of processing. Due
  to this sometimes connection swapping was happening among the
  different request threads.

 do you mean, multiple create_engine() calls were occuring ?  or are
 you talking about the issue i mentioned earlier, that lazy-loaders
 were firing off against a session in a different thread ?  does that
 mean your mapped objects *are* in fact being used in threads other
 than where they were created ?

 
  I resolve this I created a threadsafe QueuePool and passed a class
  wrapping the same while creating engine. This helps the same
  connection getting returned for the same thread.

 can you please describe specifically what you mean here ?  QueuePool,
 i would hope, is threadsafe already.  Or do you just mean you passed
 the threadlocal flag to QueuePool ?  that doesnt seem like it would
 fix the session-related problem since that issue occurs when it holds
 onto a single connection while flushing.

 i just need to understand what you did, since if theres any way i can
 defensively prevent or at least document the situation its pretty
 important.





 



-- 
Cheers,

- A




[sqlalchemy] Re: URGENT: 2014, Commands out of sync; you can't run this command now

2007-07-19 Thread Arun Kumar PG
On 7/20/07, Michael Bayer [EMAIL PROTECTED] wrote:



 On Jul 19, 2007, at 2:08 PM, Arun Kumar PG wrote:

  Apologies for not responding for a while Was stuck in the project.
 
  Ok. So this is what happening The mapped objects are created during
  the first time request to the application. So create_engine is
  getting called one time only passing in the creator as a lambda:
  db_obj where db_obj is the ref to method returning MySQldb connection.
 
  starting over again.. request comes, handled by a thread in the
  threadpool, check if mapped objects are already created or not. If
  yes return else create mapped objects (create_engine()... as
  mentioned above) and thread returned back to the pool. (FYI: this
  is an httpserver having a threadpool for request handling)
 
  Subsequent request now does not create mapped objects or create
  engine. It simply uses the existing mapped objects and does ORM
  actions.
 
  The problem was coming when lazy loading happens during multiple
  requests. I guess the underlying connection pool (in case of using
  creator approach) is not using threadlocal approach as different
  connections are checked in/out when I look at the pool log and
  exchanged as well among different request handling threads.
 

 OK, this is exactly the issue; youre caching mapped objects, which
 have unfired lazy loaders, and then sharing those mapped objects
 among threads.   The lazy loader needs to consult a session in order
 to load its contents, since thats where the ORM locates information
 about how to get a connection (for example, if your sessions are
 bound to engines, and not your tables, this would be essential).  The
 session, when its inside of a SessionTransaction as well as within a
 flush() process, holds onto a single pooled connection to do its
 work.  If another thread accesses the session during this time, youll
 get a conflict.


Will this be a problem even if I attach a new session per incoming request,
i.e. per request-handling thread? So basically it's because the same copy of
the mapped objects is shared? How can I solve the above problem in the
existing setup without using a QueuePool? By creating mapped objects per
request?

 Also, is the underlying connection pool use threadlocal strategy in
  case of using creator approach while creating engine  ? don't know
  if strategy flag is for that ?
 
  However, when i passed in a pool.QueuePool instance with
  use_threadlocal= True everything worked just fine.

 when you do that, the QueuePool will return the same connection for a
 particular thread which was already in use.  this is part of what
 happens when you use create_engine('...', strategy='threadlocal').
 However it doesnt have any ability to stop you from sharing one of
 those checked-out connections with another thread.  It shouldn't
 change anything here, actually; the session still checks out a
 connection, and holds onto it during a transaction or flush() and
 that's still the same connection it will hand out to any other thread
 during that time.




 



-- 
Cheers,

- A




[sqlalchemy] Re: URGENT: 2014, Commands out of sync; you can't run this command now

2007-07-19 Thread Arun Kumar PG
Or, you can create your mapped objects per request, yes, or perhaps
per thread.

 how much can this cost in terms of performance ?

On 7/20/07, Michael Bayer [EMAIL PROTECTED] wrote:



 On Jul 19, 2007, at 4:39 PM, Arun Kumar PG wrote:

 
  Will this be a problem even if I attach a new session per incoming
  request i.e. thread handling request ? So basically it's because of
  having the same copy of mapped objects ? How can I solve the above
  problem in existing way without using a QueuePool ? By creating
  mapped objects per request ?

 the objects that were loaded within a particular session stay there
 until you remove them.  therefore, whatever session you are using to
 load the objects, you should dispose of before putting the objects
 into a thread-global scope (you can call clear() on it to empty it
 out).  Also, you probably want to load all of their related items
 either explicitly or through eager loading - since when the objects
 are detached, the lazy loaders will raise errors when called.

 Or, you can create your mapped objects per request, yes, or perhaps
 per thread.
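In 0.3-era pseudocode (the Parent mapper, 'children' relation, and shared_cache are hypothetical), the advice above amounts to:

```python
# Pseudocode sketch: eagerly load what detached objects will need,
# then empty the session before caching objects thread-globally.
objs = session.query(Parent).options(eagerload('children')).select()
session.clear()                  # detach everything; no session to conflict
shared_cache['parents'] = objs   # lazy loaders would now raise if triggered
```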



 



-- 
Cheers,

- A




[sqlalchemy] Re: URGENT: 2014, Commands out of sync; you can't run this command now

2007-07-19 Thread Arun Kumar PG
Or maybe just keep using the QueuePool approach, since it will always make
sure to return the same connection to the current thread?


On 7/20/07, Arun Kumar PG [EMAIL PROTECTED] wrote:

 Or, you can create your mapped objects per request, yes, or perhaps
 per thread.

  how much can this cost in terms of performance ?

 On 7/20/07, Michael Bayer [EMAIL PROTECTED] wrote:
 
 
 
  On Jul 19, 2007, at 4:39 PM, Arun Kumar PG wrote:
 
  
   Will this be a problem even if I attach a new session per incoming
   request i.e. thread handling request ? So basically it's because of
   having the same copy of mapped objects ? How can I solve the above
   problem in existing way without using a QueuePool ? By creating
   mapped objects per request ?
 
  the objects that were loaded within a particular session stay there
  until you remove them.  therefore, whatever session you are using to
  load the objects, you should dispose of before putting the objects
  into a thread-global scope (you can call clear() on it to empty it
  out).  Also, you probably want to load all of their related items
  either explicitly or through eager loading - since when the objects
  are detached, the lazy loaders will raise errors when called.
 
  Or, you can create your mapped objects per request, yes, or perhaps
  per thread.
 
 
 
   
 


 --
 Cheers,

 - A




-- 
Cheers,

- A




[sqlalchemy] Re: URGENT: 2014, Commands out of sync; you can't run this command now

2007-07-19 Thread Arun Kumar PG
That is what I am trying to figure out. It works perfectly when I do this.

On 7/20/07, Michael Bayer [EMAIL PROTECTED] wrote:



 On Jul 19, 2007, at 5:06 PM, Arun Kumar PG wrote:

  or may be just keep on using the QueuePool approach as it will
  always make sure to return the same connection to the current thread ?
 

 like i said, i dont see how that helps any.  a single Session thats
 in flush() holds onto a single connection and returns it regardless
 of what thread accesses it.  the threadlocal pool setting doesnt have
 any effect on threadsafety.



 



-- 
Cheers,

- A




[sqlalchemy] Re: URGENT: 2014, Commands out of sync; you can't run this command now

2007-07-19 Thread Arun Kumar PG
BoundMetaData is what I am using.

On 7/20/07, Michael Bayer [EMAIL PROTECTED] wrote:

 and how is your session connected to the database ? are you using
 create_session(bind_to=something) ? or are you binding your MetaData to
 the engine ? are you using BoundMetaData ?
 On Jul 19, 2007, at 11:16 PM, Arun Kumar PG wrote:

 the stack trace points to pool.py (I will get the exact stack trace as I
 am away from my box currently)

  does the conflict occur frequently and easily with just a little bit of
 concurrency or is it
 something that only happens under very high load conditions ?

 This is happening primarily in a use case wherein the logic does some
 processing (this includes accessing many relations; I believe many lazy
 loaders fire here). Since this use case generates some CSV data, it takes
 about 6-7 seconds depending on the data set, so when other requests come in
 while one is in progress we encounter the 2014 error.

 However, as mentioned earlier, when I use a threadlocal QueuePool the error
 just vanishes, and no matter how many requests I send after that it works
 fine.


 On 7/20/07, Michael Bayer [EMAIL PROTECTED] wrote:
 
 
  perhaps the nature of the conflict is different, then. are you able
  to observe what stack traces or at least approximately what operations
  are taking place when the conflict occurs ? does the conflict occur
  frequently and easily with just a little bit of concurrency or is it
  something that only happens under very high load conditions ?
 
 
 
 
 
 


 --
 Cheers,

 - A




 



-- 
Cheers,

- A




[sqlalchemy] Re: URGENT: 2014, Commands out of sync; you can't run this command now

2007-07-15 Thread Arun Kumar PG
Hi Michael,

I figured out the problem. It was a connection sharing issue. Looks like
different connection objects were getting returned from the pool (which was
created using the creator approach in create_engine()) when relations were
getting loaded as a part of processing. Due to this sometimes connection
swapping was happening among the different request threads.

To resolve this, I created a threadsafe QueuePool and passed a class wrapping
it while creating the engine. This keeps the same connection being returned
for the same thread.

Programming error!

Hopefully I have tested everything and this does not crop up again :)

Thanks for the support!

- A



On 7/13/07, Arun Kumar PG [EMAIL PROTECTED] wrote:

 Sure Michael I will get back on this in a while as I am researching on
 this. Thanks for your support. I hope this gets resolved sooner as I am very
 much dependent on this and the application is really critical and should be
 up in next couple days!

 Thanks and i will get back on this in next couple hours.

 On 7/13/07, Michael Bayer [EMAIL PROTECTED]  wrote:
 
 
  also send along a full stack trace so at least it can be seen where
  this is occuring.
 
 
   
 


 --
 Cheers,

 - A




-- 
Cheers,

- A




[sqlalchemy] Re: URGENT: 2014, Commands out of sync; you can't run this command now

2007-07-13 Thread Arun Kumar PG
Thx Michael.

Well, I don't think that I am doing that. To give you a picture of the
object model this is how the hierarchy is:


BaseOrmDao (+GetSession(): returns the session attached to the current thread)
                              ^
                              |
Request -> PreProcessor -> Controller -> Manager -> DaoFactory -> DAOs
   |
   v
(orm.session attached to thread here)

To clarify: the DaoFactory always returns a new DAO object to the manager.
Also, a new instance of Controller and Manager is made per request.

To answer your question there is no point in the communication where two
threads share the same object. (Not sure if SQLAlchemy does so when mapped
objects are used in the DAO layer ?)

Any thoughts?

On 7/13/07, Michael Bayer [EMAIL PROTECTED] wrote:



 On Jul 13, 2007, at 2:27 AM, Arun Kumar PG wrote:

  I create an orm session as soon as the request comes in and store
  the reference to the same in the curent thread i.e.
  threading.currentThread().session = new_session. This session is
  stored in the current thread so that I can get the same session
  across all DAO objects. so basically ever DAO in the request chain
  can simply get the session by saying
  threading.currenrThread.session and use it. Finally, once the
  request is over this session object is removed from the current
  thread i.e. del session.
 
  I can see that during multiple request the thread ids are different
  so I believe that all of them are having their own copy of session.
  Further, I am using pool module of sqlalchemy from where a
  connection is returned to orm session.
 
  My guess is that somewhere in that connection management things are
  getting schewed up -

 there was a bug like this at one time in the pool, but it was fixed
 probably a year ago, and nobody has reported this issue since.  are
 the objects which you load from the session being shared between
 threads ?  i.e. a second thread issues a lazy-load operation on an
 object's attribute ?  that counts as multi-threaded session access.
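The per-thread session storage described in this exchange can be sketched with threading.local() (the "session" below is a stand-in string; in the real application it would be an SQLAlchemy session created at the start of each request and removed at the end):

```python
import threading

# Sketch of the per-thread session pattern, using threading.local()
# instead of setting attributes on threading.currentThread().
_ctx = threading.local()

def bind_session(session):
    """Attach a session to the current thread (start of request)."""
    _ctx.session = session

def current_session():
    """What every DAO calls instead of threading.currentThread().session."""
    return getattr(_ctx, 'session', None)

def clear_session():
    """Drop the thread's session (end of request)."""
    if hasattr(_ctx, 'session'):
        del _ctx.session
```

Each thread sees only its own slot, which matches the observation above that the thread ids differ; the conflict Michael describes arises not from this registry but from detached objects (and their lazy loaders) crossing threads.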



 



-- 
Cheers,

- A




[sqlalchemy] Re: URGENT: 2014, Commands out of sync; you can't run this command now

2007-07-13 Thread Arun Kumar PG
Sure, Michael, I will get back on this in a while, as I am researching it.
Thanks for your support. I hope this gets resolved soon, as I am very much
dependent on this, and the application is really critical and should be up in
the next couple of days!

Thanks, and I will get back on this in the next couple of hours.

On 7/13/07, Michael Bayer [EMAIL PROTECTED] wrote:


 also send along a full stack trace so at least it can be seen where
 this is occuring.


 



-- 
Cheers,

- A




[sqlalchemy] how to call group_concat function in MySQL?

2007-04-22 Thread Arun Kumar PG
Hi Guys,

How can I call the group_concat() function in MySQL using the function gateway?

Something like select([child, func.group_concat(Child.c.xxx).label('count')],
group_by=[...]) ?

any clue ?

Thx

- A
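For what it's worth, any attribute looked up on SQLAlchemy's func gateway compiles to a SQL function of that name, so func.group_concat(...) needs no special dialect support. The SQL itself can be tried against SQLite's built-in group_concat using only the standard library (table and column names below are illustrative):

```python
import sqlite3

# SQLite also ships group_concat, so the SQL that a
# func.group_concat(...) expression renders can be tried directly.
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE child (class_id INTEGER, name TEXT)')
conn.executemany('INSERT INTO child VALUES (?, ?)',
                 [(1, 'a'), (1, 'b'), (2, 'c')])
rows = conn.execute(
    'SELECT class_id, group_concat(name) FROM child '
    'GROUP BY class_id ORDER BY class_id').fetchall()
# e.g. [(1, 'a,b'), (2, 'c')]; group_concat's concatenation order
# is unspecified unless the query forces one
```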




[sqlalchemy] Re: detached instance or is already persistent in a different Sess

2007-04-19 Thread Arun Kumar PG
Looks like you are trying to use objects across different sessions.

Try an explicit session.expunge(obj) on the first object returned before
using the object in the other session.
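In pseudocode (0.3-era API; the session names are hypothetical and Comune/get come from the report below), that advice looks like:

```python
# Pseudocode sketch: detach from the loading session before reusing
# the object elsewhere.
record = Comune.get(pk)           # loaded into session A implicitly
session_a.expunge(record)         # detach it from session A
session_b.save_or_update(record)  # now session B can own and save it
session_b.flush()
```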

On 4/19/07, jose [EMAIL PROTECTED] wrote:


 hi group,

 I have the following error that I don't know how to solve...
  --

  self.record = Comune.get(pk)
  ...
  self.record.get_from_dict(data=data, update=True)
  if self.record._state['modified']:
      self.record.save()

  sqlalchemy.exceptions.InvalidRequestError: (Instance 'Comune 8150'
  is a detached instance or is already persistent in a different Session,
  bound method Controller.save of sicer.controllers.comune.Controller
  instance at 0xb6b541ac)

  any ideas?

 jo


 





[sqlalchemy] Re: Child to parent - Uni-directional relationships - Is that supported ?

2007-04-13 Thread Arun Kumar PG
cool thx.

On 4/13/07, svilen [EMAIL PROTECTED] wrote:



  I have a Parent - Child (1:N) relationship between Class and Exam
  table.
 
  Class - Exam
1   :N
 
  Now since a Class could have millions of Exam I don't want have
  an attribute on Class called exams. Instead I only want an
  attribute on Exam to the parent Class.
 
  Can we do this in SA ?
 
  Will do the below do the job ? Just asking out of curiosity without
  testing.
 
  mapper(Exam, examtable, properties = {'class': relation(Class)})

 yes, u'll get the other side only if u require backref=something

 also do specify relation(..., uselist=False) as it guesses wrong
 sometimes.
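Put together, a child-to-parent-only mapping along those lines might look like this (a sketch against the 0.3-era API, not runnable as-is; table names are from the question, and 'class_' is used because 'class' is a Python reserved word):

```python
# Sketch, 0.3-era API: only Exam gets an attribute pointing at its
# parent Class; no 'exams' collection is created on Class because no
# backref is specified, so the millions of Exam rows are never loaded
# as a collection.
mapper(Class, classtable)
mapper(Exam, examtable, properties={
    'class_': relation(Class, uselist=False),  # many-to-one, scalar
})
```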

 





[sqlalchemy] Re: Bit strange - no expunge required?

2007-04-12 Thread Arun Kumar PG
I think SessionContext makes sense, especially for architectures involving
multiple layers like mine where manager - DAO interaction happens.

Thx Michael.

On 4/12/07, Michael Bayer [EMAIL PROTECTED] wrote:



 On Apr 11, 2007, at 11:15 PM, Arun Kumar PG wrote:

  Hi Michael,
 
  So how can I prevent this? I can't access the lazy-load attributes
  in my manager class once I get the result set from the DAO, as I get a
  no parent session/contextual session exists exception.
 
  should I maintain a reference to the session object in  the DAO
  class so that it is not garbage collected ?

 heres the approaches in descending order of inconvenience:

 you can, as a workaround, immediately access all the lazy load
 relations in your getResults() method...i do this with hibernate a lot.

 otherwise, one option is to explicitly keep a Session around that
 doesnt get garbage collected, like you mention.

 but what a lot of people do is use the SessionContext extension with
 your mappers.  that way when the lazy loader fires off, it looks for
 the Session, but if it can't find it, it calls mapper.get_session(), which
 then calls the SessionContext.  SessionContext then creates a new
 session if one does not exist already for the current thread.  so
 using SessionContext is kind of like replacing the weakly-referenced
 Session with a Session that is bound to a thread.
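The thread-bound lookup SessionContext performs can be illustrated with plain Python; here a dummy Session stands in for the real one, and get_session is a hypothetical name for what the extension does internally:

```python
import threading

class Session(object):
    """Dummy stand-in for a SQLAlchemy Session."""
    pass

# One slot per thread; each thread sees its own attributes.
_context = threading.local()

def get_session():
    # Create a session the first time the current thread asks for one,
    # then keep handing back that same per-thread instance.
    if not hasattr(_context, "session"):
        _context.session = Session()
    return _context.session

# The same thread always sees the same session...
assert get_session() is get_session()

# ...while another thread gets its own.
other = []
t = threading.Thread(target=lambda: other.append(get_session()))
t.start()
t.join()
assert other[0] is not get_session()
```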




 





[sqlalchemy] session.expunge() detaches all objects recursively?

2007-04-11 Thread Arun Kumar PG
Hi Guys,

not tried but quick answer would help me:

session.expunge()'ing an object would also expunge the child attributes
which are relations? e.g. expunging a User object would also expunge
user_addresses (a list of UserAddress objects)?

Also, if I modify the detached instance -- both the User and its
UserAddress child attributes -- and then re-attach and flush, the changes
will propagate to both the parent and child tables -- right?

thx.

- A




[sqlalchemy] Bit strange - no expunge required?

2007-04-11 Thread Arun Kumar PG
Hi Guys,

I am having two layers in my application: Manager and DAO.

DAO deals with ORM and manager simply calls method on DAO.

This is what I am doing in manager

manager:
  rs = dao.getResults(params)
  obj = rs[0]
  some logic.
  obj.name = 'some name'
  dao.Update(obj)


The getResults() in the DAO is implemented like this:

def getResults(params):
  s = create_session()
  rs = s.query()
  return rs

def Update(obj):
  s = create_session()
  s.update(obj)
  s.flush()

As per the SQLAlchemy documentation, we should explicitly expunge() the
object and then use it with a different session. The above logic works fine
even when the object getting updated actually belongs to the session that
was created in the getResults() call.

When I do the same on Python interactive interpreter prompt *without having
the above methods* it throws:

raise exceptions.InvalidRequestError("Object '%s' is already attached to
session '%s' (this is '%s')" % (repr(obj), old, id(self)))

Can anyone help here.

- A




[sqlalchemy] Re: session.expunge() detaches all objects recursively?

2007-04-11 Thread Arun Kumar PG
cool. thx Michael!

On 4/12/07, Michael Bayer [EMAIL PROTECTED] wrote:



 On Apr 11, 2007, at 7:27 AM, Arun Kumar PG wrote:

  Hi Guys,
 
  not tried but quick answer would help me:
 
  session.expunge()'ing an object would also expunge the child
  attributes which are relations ? e.g. expunging User object would
  also expunge user_addresses (a list of UserAddress object) ?
 

 expunge is part of the all cascade on a relationship, so if you
 have a cascade of all set up then an expunge operation will also
 follow along that relationship.


  Also, if I make any modification in the detached instance both to
  User and the User Address child attributes within the object and
  then try to attach and flush will propagate the changes to both the
  parent and child table -- right ?
 

 if you have save-update cascade turned on, which is the default
 setting and is also part of the all cascade.
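For reference, the cascade setting Michael describes is configured on the relation; a sketch against the 0.3-era API, not runnable as-is (the User/UserAddress names are from the question above):

```python
mapper(User, users_table, properties={
    # "all" includes save-update, expunge, delete, etc., so
    # session.expunge(user) also detaches user.user_addresses.
    'user_addresses': relation(UserAddress, cascade="all"),
})
```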


 





[sqlalchemy] Re: ORM dynamic query ?

2007-04-11 Thread Arun Kumar PG
Thx Michael.

this looks better but can't I do a join here and specify absolute column
names like {User.c.UserId..}

session.query(User).select_by(**{"id": 1, "foo": "bar"})

thx

- A

On 4/12/07, Michael Bayer [EMAIL PROTECTED] wrote:


 On Apr 11, 2007, at 9:56 AM, Arun Kumar PG wrote:

 Guys,

 I have a search form and I want to compose the select_by() query
 dynamically as per the parameters, but select_by() on the query object does
 not accept a string-type query, i.e. session.query(User).select_by(
 "User.c.Id == 1 and ...")


 the specific thing you want to do there would look like:

 session.query(User).select_by(text("User.c.Id == 1 and ..."))

 but maybe you mean:

 session.query(User).select_by(**{"id": 1, "foo": "bar"})

 ?
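Composing that keyword-argument dict dynamically from the search form is plain Python; the field names below are made up for illustration:

```python
# Hypothetical raw form input; blank fields mean "no filter on this column".
form = {"id": "1", "name": "arun", "email": ""}

# Keep only the criteria the user actually filled in.
criteria = dict((k, v) for k, v in form.items() if v)

# The query would then be composed as (0.3-era API, assuming a mapped User):
#   session.query(User).select_by(**criteria)
```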




 





[sqlalchemy] Re: Bit strange - no expunge required?

2007-04-11 Thread Arun Kumar PG
Hi Michael,

So how can I prevent this? I can't access the lazy-load attributes in my
manager class once I get the result set from the DAO, as I get a no parent
session/contextual session exists exception.

should I maintain a reference to the session object in  the DAO class so
that it is not garbage collected ?

thx.

- A

On 4/12/07, Michael Bayer [EMAIL PROTECTED] wrote:


 On Apr 11, 2007, at 11:45 AM, Arun Kumar PG wrote:

 Hi Guys,

 I am having two layers in my application: Manager and DAO.

 DAO deals with ORM and manager simply calls method on DAO.

 This is what I am doing in manager

 manager:
   rs = dao.getResults (params)
   obj = rs[0]
   some logic.
   obj.name = 'some name'
   dao.Update(obj)


 The getResults() in the DAO is implemented like this:

 def getResults(params):
   s = create_session()
   rs = s.query()
   return rs

 def Update(obj):
   s = create_session()
   s.update(obj)
   s.flush()

 As per the sqlalchemy documentation, we should explicitly expunge() the
 object and then use it with different session. The above logic works fine
 even when the object getting updated actually belongs to the session that
 was created in the getResults() call.

 When I do the same on Python interactive interpreter prompt *without
 having the above methods* it throws:

 raise exceptions.InvalidRequestError("Object '%s' is already attached to
 session '%s' (this is '%s')" % (repr(obj), old, id(self)))


 most likely that the session created in getResults() is falling out of
 scope in your application, therefore the entity is no longer attached to
 it (the operation thats throwing that error looks at a session_id attached
 to the object, looks in a WeakValueDictionary for that key to get the
 session).  variable scope is more sticky in the python interpreter.




 





[sqlalchemy] Re: Putting Session.flush in a separate thread

2007-04-09 Thread Arun Kumar PG
Maybe a threadlocal strategy.

On 4/9/07, Koen Bok [EMAIL PROTECTED] wrote:


 We are building a GUI app, and we were thinking about wrapping
 session.flush() in a thread with a timer that detects a timeout. That
 way we would have better performace and we can generate warnings if
 the connection goes down. Do you guys think this is smart, or are
 there complications?

 I tried to build this already but I cannot figure out how to assign
 the shared connection to a thread. I always get 'No connection
 defined'.
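Independent of SQLAlchemy, the timeout-detection part of that idea can be sketched with a worker thread and join(timeout); here the `work` callable stands in for something like session.flush(), and the caveat from this thread still applies: the session's connection must actually be usable from the worker thread.

```python
import threading, time

def flush_with_timeout(work, timeout):
    """Run `work` in a worker thread; return True if it finished in time.

    `work` is a stand-in for session.flush(); on timeout the worker is
    still running, so the caller can only warn, not cancel it.
    """
    t = threading.Thread(target=work)
    t.start()
    t.join(timeout)          # wait at most `timeout` seconds
    return not t.is_alive()  # dead thread == work completed in time

assert flush_with_timeout(lambda: None, timeout=1.0) is True
assert flush_with_timeout(lambda: time.sleep(0.3), timeout=0.05) is False
```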


 





[sqlalchemy] One database Metadata - multiple sessions?

2007-04-08 Thread Arun Kumar PG
Hi All,

Can we initialize the metadata one time (let's say during the first
application access), store it in the global context, and then reuse the
already-initialized metadata to serve multiple requests? Any
threading-related issues?

So here is what I want to do and wanna confirm if this is right:

As soon as the application boots up:

Application Initialization - create engine and bound metadata - Load all
tables (i.e. create Table objects, autoload=True) - Map tables and classes
- Store Metadata and loaded Tables in a user defined Factory class

When ever application gets subsequent requests the below will happen:

Get the required table objects/metadata from the above factory - create
session - do whatever -- flush/close session.

Also, the application can get multiple concurrent requests.

Any thread-safety issues that you guys see here in reusing the one-time
instantiated Table/mapper objects?
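The one-time-initialization part of this plan is a generic pattern; a sketch with a lock guarding first-time setup (the Factory name and _load_tables are hypothetical, and the dict of objects stands in for real Table/mapper setup):

```python
import threading

class Factory(object):
    """Initializes shared, read-only resources exactly once."""
    _lock = threading.Lock()
    _tables = None

    @classmethod
    def get_tables(cls):
        # Double-checked locking: cheap fast path once initialized,
        # lock only taken during first-time setup so concurrent first
        # requests don't both build the metadata.
        if cls._tables is None:
            with cls._lock:
                if cls._tables is None:
                    cls._tables = cls._load_tables()
        return cls._tables

    @classmethod
    def _load_tables(cls):
        # Stand-in for: create engine, bound MetaData,
        # Table(..., autoload=True) and mapper setup.
        return {"users": object(), "exams": object()}

# Every caller, on every request, gets the same shared objects.
assert Factory.get_tables() is Factory.get_tables()
```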




[sqlalchemy] Re: One database Metadata - multiple sessions?

2007-04-08 Thread Arun Kumar PG
Thx Michael!

On 4/8/07, Michael Bayer [EMAIL PROTECTED] wrote:



 On Apr 8, 2007, at 5:54 AM, Arun Kumar PG wrote:

  Hi All,
 
  Can we initialize the metadata one time (let's say during the first
  application access), store it in the global context, and
  then reuse the already-initialized metadata to serve multiple
  requests? Any threading-related issues?
 

 metadata / engine is meant to be a global object that's used across
 threads.  it's the individual Connections and Transactions that usually
 need to be kept in one thread (and Sessions if you're using the ORM).

 
  Application Initialization - create engine and bound metadata -
  Load all tables (i.e. create Table objects, autoload=True) - Map
  tables and classes - Store Metadata and loaded Tables in a user
  defined Factory class
 
  When ever application gets subsequent requests the below will happen:
 
  Get the required table objects/metadata from the above factory -
  create session - do whatever -- flush/close session.
 
  Also, the application can get multiple concurrent requests.
 

 thats pretty much the model, yeah.



 





[sqlalchemy] Re: Ordering by field in related object

2007-04-08 Thread Arun Kumar PG
Since I am new to SA, I just want to confirm: does that mean that even if we
have an eager load on a 1:N relationship, we should still do an explicit JOIN
if the query involves columns from both sides of the relation?

On 4/9/07, Michael Bayer [EMAIL PROTECTED] wrote:


 eagerly loaded relationships are not part of the main Query criterion/
 select.  the eager loads are always tacked on secondary to the inner
 query.  the main goal being that if you had lazy or eagerly loaded
 relationships, in both cases youd get the identical result.  so any
 tables that  you add to the Query criterion are completely distinct
 from their possible appearance in an eager loaded relationship (it
 has to be this way, otherwise eager loads would change the result of
 the query..eager loads are meant to be an optimization only).  thats
 why the StoryStats' table is getting added in to the inner query.

 so the approaches to take are:

 1. explicitly join against StoryStats:

 session.query(Story).join('storystatrelation').order_by(StoryStats.c.rating)

 2. create whatever query you want and load its instances via instances
 ():

 s = story_table.outerjoin(story_stats_table).select(
     order_by=story_stats_table.c.rating)
 session.query(Story).options(
     contains_eager('storystatrelation')).instances(s.execute())

 3. a little less dynamic, specify order_by in the eagerly loaded
 relation() in the mapper setup, and specify None for the query
 ordering (this is more of a trick).

 mapper(Story, story_table, properties={
 'storystatrelation':relation(StoryStats,
 story_stats_table,
 lazy=False, order_by=[story_stats_table.c.rating])
 })

 session.query(Story).order_by(None)


 On Apr 8, 2007, at 5:39 PM, Norjee wrote:

 
  It seems I don't understand how i can order the results of a query.
  Assume that i have two object Story and StoryStats. Each Story has
  one
  StoryStats, mapped by ForeignKey. The relation is eagerloaded
  (lazy=False)
  (The actual model is a tad more complicated, but the idea is the
  same)
 
  When i now try to select Stories, ordering by create_date goes fine,
  e.g.
  session.query(Story).order_by(Story.c.create_date)
 
 
  But ordering by the related StoryStats goes awry :/
  session.query(Story).order_by(StoryStats.c.rating) -- only a single Story
  is returned.
  No errors are thrown, however.
 
 
  Is there something I'm missing here? (I know I probably could do
  session.query(StoryStats).order_by(StoryStats.c.rating), but that
  kind
  of defeats the purpose as the ordering is dynamic)
 
 
  


 

