[sqlalchemy] Re: Error with autoload=True and ForeignKey for postgresql

2007-07-19 Thread jdu

 ah, ok...are you saying you changed your own search_path ?

Yes, I've changed my search_path so that my 'runtime' user (different
from user 'jdu') doesn't need the schema prefix in queries (schema
'jdu' was prepended to the search_path). I reverted this change to the
default search_path and it worked!

 since i didn't write these PG queries, is there a way to modify them
 such that it will always give us the schema ?  (we can always detect
 if it's the default schema)

You can detect whether the schema was prefixed, but when it isn't, I
don't know how to detect which schema was matched first in the
search_path.
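
For what it's worth, the server itself can report both things. A rough
sketch with psycopg2 (the DSN and table name here are invented for
illustration):

import psycopg2

# Sketch: ask PostgreSQL for the active search_path and for the schema
# that an unqualified table name actually resolves to.
conn = psycopg2.connect("dbname=mydb user=runtime")
cur = conn.cursor()

cur.execute("SHOW search_path")
path = cur.fetchone()[0]        # e.g. 'jdu, public'

cur.execute("SELECT current_schema()")
schema = cur.fetchone()[0]      # first schema on the path that exists

cur.execute("""
    SELECT n.nspname
    FROM pg_class c
    JOIN pg_namespace n ON n.oid = c.relnamespace
    WHERE c.relname = %s AND pg_table_is_visible(c.oid)
""", ('mytable',))
resolved = cur.fetchone()       # the schema 'mytable' is found in first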





[sqlalchemy] Insert through ORM performance (was: Performance question)

2007-07-19 Thread Gaetan de Menten

[in response to a batch-insert-is-slow complaint on the Elixir list]

On 7/19/07, AndCycle [EMAIL PROTECTED] wrote:

 I don't think the DB definition is the major problem;
 it could be a sqlalchemy problem,
 because currently it hasn't implemented real transaction commands in
 most DB implementations;
 all the do_begin definitions are bypassed, so you won't get any
 efficient batch db access right now.

I'm not sure how much of this is true. I'd like to hear comments on
this from people more knowledgeable than me on the topic.

-- 
Gaëtan de Menten
http://openhex.org




[sqlalchemy] Re: [sqlalchemy] Insert through ORM performance (was: Performance question)

2007-07-19 Thread Andreas Kostyrka

Sqlalchemy almost certainly implements transactions. The point is that INSERT
is bad for bulk loading data. (I presume you are bulk loading because you want
to use transactions for batch processing.)

Correctly and quickly loading data strongly depends upon the DB.
E.g. for PostgreSQL you can achieve an order-of-magnitude speedup by using
COPY FROM STDIN.

But these kinds of hacks are out of scope for sqlalchemy.
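
A rough sketch of the COPY approach with psycopg2 (the file, table and
DSN are invented for illustration):

import psycopg2

# Sketch: bulk-load a tab-delimited file via COPY instead of INSERTs.
conn = psycopg2.connect("dbname=mydb user=loader")
cur = conn.cursor()
f = open('/tmp/rows.tsv')       # one row per line, tab-delimited
cur.copy_from(f, 'temptable')   # issues COPY temptable FROM STDIN
f.close()
conn.commit()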

Andreas








[sqlalchemy] Re: dbcook 0.1

2007-07-19 Thread Jonas

Isn't that what Elixir already does?





[sqlalchemy] Re: dbcook 0.1

2007-07-19 Thread svilen

 Isn't that what Elixir already does?
not really. Frankly, I don't know much Elixir, just some impressions.
Elixir is sort of syntax sugar over SA, with very little
decision-making inside. It leaves all the decisions - the routine
ones too - to the programmer. At least that's how I got it.

This one hides / automates _everything_ possible - the very concept of
relational SQL existing underneath shows only through side-effects,
e.g. the DB inheritance types: concrete-, joined-, single-table. It
decides things like where to break cyclical references with
alter_table/post_update; it sets up the polymorphic inheritances, etc.

Of course this is only the declaration/creation part (building the DB
model); after that it can cover only a small/simple part of the queries
(model usage) - the combinative possibilities there are endless.
That's why you have plain SA stuff, once the "python function over
objects converted into SA-expression over tables" path gets too
narrow.

dbcook does not have assign_mapper-like things putting query methods
on the objects; it leaves all that to you. Although one day there
will be a usage-case/example of some way to do it - once I get there.

Elixir is lighter, this one might be heavier - depends on how you
measure it.

There are maybe more differences - no idea; someone has to have time
to try both (:-)

ciao
svil




[sqlalchemy] Re: Insert through ORM performance (was: Performance question)

2007-07-19 Thread Michael Bayer

I will comment that DBAPI has no begin() method.  When you use DBAPI
with autocommit=False, you're in a transaction - always.  SQLAlchemy
defines a transaction abstraction on top of this that pretends to
have a begin.  It's when there's *not* a sqlalchemy transaction going
on that you'll see a COMMIT issued after every insert/update/delete;
otherwise you're transactional.
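
In other words, roughly (a sketch only; "engine" and "table" stand in
for whatever you already have):

# Sketch: an explicit SA transaction - one COMMIT at the end instead
# of one after every statement.
conn = engine.connect()
trans = conn.begin()
try:
    conn.execute(table.insert(), col1=1, col2='a')
    conn.execute(table.insert(), col1=2, col2='b')
    trans.commit()
except:
    trans.rollback()
    raise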

Anyway, if the email is talking about batched inserts of this type
being slow (i.e. non-ORM inserts):

table.insert().execute({params1}, {params2}, {params3}, ...)

that's because SA still does a lot of work on each batch of {params} to
check for defaults and also to process bind parameters.   We might
look into optimizing some of the redundant work which occurs within
this process in 0.4; however, as long as people still want their
unicodes converted to utf-8, their datetimes converted to strings on
sqlite, their binaries correctly massaged, and their Python-side
defaults to fire off, this overhead will still be present for those types.

So, if you truly want DBAPI-speed inserts, use the raw connection
(note that executemany() lives on the DBAPI cursor):

cursor = engine.connect().connection.cursor()
cursor.executemany("your statement", [{params1}, {params2},
{params3}, ...])






[sqlalchemy] Re: Data types specific to engines

2007-07-19 Thread Michael Bayer

this capability already exists.

For example, if you want specific SQL types, those are implemented.
If you want specifically a CHAR column, use types.CHAR.  Or for
VARCHAR, use types.VARCHAR.  Other implemented SQL types are TIMESTAMP,
CLOB and BLOB.

But that's not all.  For types that are totally specific to a certain
database, those are available as well, within the module for each
database.

for example,

import sqlalchemy.databases.mysql as mysql
import sqlalchemy.databases.postgres as postgres

mysql.MSEnum - mysql's ENUM type
mysql.MSBigInteger - mysql's BIGINTEGER type
postgres.PGInet  - Pg's INET type
postgres.PGArray -  Pg's ARRAY type

We also have a newer class of types that are generic, but will use a
more specific DB type if one is available.  An example is the Interval
type.  This will use a regular date column on most databases,
subtracting the difference from 1/1/1970 to get the result.  But on
postgres, it uses the PGInterval type, which is PG's INTERVAL type.
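
e.g., a sketch of mixing a dialect-specific type into an ordinary
table definition (the table itself is made up for illustration):

import sqlalchemy.databases.mysql as mysql
from sqlalchemy import Table, Column, Integer, MetaData

# Sketch: a MySQL-only ENUM column alongside generic types.  This
# table will of course only create properly against a mysql engine.
metadata = MetaData()
addresses = Table('addresses', metadata,
    Column('id', Integer, primary_key=True),
    Column('kind', mysql.MSEnum("'home'", "'work'")))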





[sqlalchemy] Re: Data types specific to engines

2007-07-19 Thread Michael Bayer

I just put up a little bit of new docs to this effect.





[sqlalchemy] What am I missing?

2007-07-19 Thread Ian Wilson

Can anyone explain why this http://paste.turbogears.org/paste/1510
fails at the last assert but this
http://paste.turbogears.org/paste/1511 works ?




[sqlalchemy] simple bulk insert question

2007-07-19 Thread one.person

Sorry for the noob question.  Why do I only end up with 2 inserted
rows when doing this?

>>> from sqlalchemy import *
>>> db = create_engine('mysql://login:[EMAIL PROTECTED]/database')
>>> metadata = BoundMetaData(db)
>>> temptable = Table('temptable', metadata,
...     Column('col1', Integer),
...     Column('col2', String(10)))
>>> temptable.create()
>>> data = [(1, 'blah1'), (2, 'blah2'), (2, 'blah2'), (3, 'blah3'),
...     (3, 'blah3'), (3, 'blah3')]
>>> temptable.insert(values=data).execute()
<sqlalchemy.engine.base.ResultProxy object at 0x01169C70>
>>> s = temptable.select()
>>> e = s.execute()
>>> e.fetchall()
[(1L, 'blah1'), (2L, 'blah2')]

But if I continue with another table and insert via a loop, it works
fine:

>>> temptable2 = Table('temptable2', metadata,
...     Column('col1', Integer),
...     Column('col2', String(10)))
>>> temptable2.create()
>>> for i in data:
...     temptable2.insert(values=i).execute()
...
<sqlalchemy.engine.base.ResultProxy object at 0x011745B0>
<sqlalchemy.engine.base.ResultProxy object at 0x011747D0>
<sqlalchemy.engine.base.ResultProxy object at 0x01174910>
<sqlalchemy.engine.base.ResultProxy object at 0x01174A50>
<sqlalchemy.engine.base.ResultProxy object at 0x01174B90>
<sqlalchemy.engine.base.ResultProxy object at 0x01174CD0>


Is there some better way to do bulk inserts that I am missing?

Thanks in advance





[sqlalchemy] Re: URGENT: 2014, Commands out of sync; you can't run this command now

2007-07-19 Thread Arun Kumar PG
Apologies for not responding for a while. I was stuck in the project.

Ok. So this is what is happening. The mapped objects are created during
the first request to the application. So create_engine is getting called
only one time, passing in the creator as a lambda: db_obj, where db_obj
is the ref to a method returning a MySQLdb connection.

Starting over again: a request comes in, is handled by a thread in the
threadpool, which checks if the mapped objects are already created or
not. If yes, return; else create the mapped objects (create_engine()...
as mentioned above) and the thread is returned back to the pool. (FYI:
this is an httpserver having a threadpool for request handling.)

A subsequent request now does not create mapped objects or create an
engine. It simply uses the existing mapped objects and does ORM actions.

The problem was coming when lazy loading happens during multiple
requests. I guess the underlying connection pool (in the case of the
creator approach) is not using the threadlocal approach, as different
connections are checked in/out when I look at the pool log, and
exchanged as well among different request handling threads.

Can that be the problem ?

Also, does the underlying connection pool use the threadlocal strategy
in the case of the creator approach while creating the engine ? I don't
know if the *strategy* flag is for that ?

However, when I passed in a pool.QueuePool instance with
use_threadlocal=True, everything worked just fine.

Any thoughts on where the problem could be ?
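
For reference, the two setups described above would look roughly like
this (a sketch only; the connection parameters and names are invented):

import MySQLdb
import sqlalchemy.pool as pool
from sqlalchemy import create_engine

def db_obj():
    # hypothetical stand-in for the method returning a MySQLdb connection
    return MySQLdb.connect(host='localhost', user='app', db='mydb')

# original approach: create_engine builds its own pool around the creator
engine = create_engine('mysql://', creator=db_obj)

# workaround that made the errors disappear: an explicit threadlocal pool
my_pool = pool.QueuePool(db_obj, use_threadlocal=True)
engine = create_engine('mysql://', pool=my_pool)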


On 7/16/07, Michael Bayer [EMAIL PROTECTED] wrote:



 On Jul 15, 2007, at 11:24 PM, Arun Kumar PG wrote:

  Hi Michael,
 
  I figured out the problem. It was a connection sharing issue. Looks
  like different connection objects were getting returned from the
  pool (which was created using the creator approach in create_engine
  ()) when relations were getting loaded as a part of processing. Due
  to this sometimes connection swapping was happening among the
  different request threads.

 do you mean, multiple create_engine() calls were occurring ?  or are
 you talking about the issue i mentioned earlier, that lazy-loaders
 were firing off against a session in a different thread ?  does that
 mean your mapped objects *are* in fact being used in threads other
 than where they were created ?

 
  To resolve this, I created a threadsafe QueuePool and passed a class
  wrapping it while creating the engine. This helps the same
  connection get returned for the same thread.

 can you please describe specifically what you mean here ?  QueuePool,
 I would hope, is threadsafe already.  Or do you just mean you passed
 the threadlocal flag to QueuePool ?  That doesn't seem like it would
 fix the session-related problem, since that issue occurs when it holds
 onto a single connection while flushing.

 I just need to understand what you did, since if there's any way I can
 defensively prevent or at least document the situation, it's pretty
 important.





 



-- 
Cheers,

- A




[sqlalchemy] Re: Insert through ORM performance (was: Performance question)

2007-07-19 Thread Mike Orr

Andreas Kostyrka wrote:
 Correctly and quickly loading data strongly depends upon the DB.
 E.g. for PostgreSQL you can achieve an order-of-magnitude speedup by
 using COPY FROM STDIN.

 But these kinds of hacks are out of scope for sqlalchemy.


On 7/19/07, Michael Bayer [EMAIL PROTECTED] wrote:
 Anyway, if the email is talking about batched inserts of this type
 being slow (i.e. non-ORM inserts):

 table.insert().execute({params1}, {params2}, {params3}, ...)

 that's because SA still does a lot of work on each batch of {params} to
 check for defaults and also to process bind parameters.   We might
 look into optimizing some of the redundant work which occurs within
 this process in 0.4; however, as long as people still want their
 unicodes converted to utf-8, their datetimes converted to strings on
 sqlite, their binaries correctly massaged, and their Python-side
 defaults to fire off, this overhead will still be present for those types.

Andreas is pointing out that bulk inserts are intrinsically slow in
some database engines, which adds an additional level of overhead that
SQLAlchemy has no control over.  MySQL suggests LOAD DATA INFILE
... for these situations, to read data from a tab-delimited or CSV
file (with SELECT INTO OUTFILE ... for writing).  PostgreSQL has an
equivalent but with different syntax.  Unfortunately that means
putting the data in still *another* format which may have quirks, and
it will have to be an encoded bytestring rather than Unicode.

Perhaps SQLAlchemy could add a side feature to load/save data in this
manner, to smooth out the differences between engines.  But I'm not
sure that's worth much effort.  To do it with SQLAlchemy now you can
create a raw SQL string with the full path of the file to be
read/written.
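
e.g., something along these lines (a sketch; the path, table and
delimiters are invented):

# Sketch: issue MySQL's bulk loader through SA as a raw SQL string.
db.execute(
    "LOAD DATA INFILE '/tmp/rows.tsv' INTO TABLE temptable "
    "FIELDS TERMINATED BY '\\t' LINES TERMINATED BY '\\n'")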

I'm amazed at the speed of mysqldump and its reloading.  It packs a
bunch of rows into one INSERT statement.  I don't see why that's so
much faster than executemany, but it provides another potential
avenue for speed.  I'm not sure if MySQL is the only engine that does
this.

-- 
Mike Orr [EMAIL PROTECTED]




[sqlalchemy] Re: URGENT: 2014, Commands out of sync; you can't run this command now

2007-07-19 Thread Michael Bayer


On Jul 19, 2007, at 2:08 PM, Arun Kumar PG wrote:

 Apologies for not responding for a while. I was stuck in the project.

 Ok. So this is what is happening. The mapped objects are created
 during the first request to the application. So create_engine is
 getting called only one time, passing in the creator as a lambda:
 db_obj, where db_obj is the ref to a method returning a MySQLdb
 connection.

 Starting over again: a request comes in, is handled by a thread in
 the threadpool, which checks if the mapped objects are already
 created or not. If yes, return; else create the mapped objects
 (create_engine()... as mentioned above) and the thread is returned
 back to the pool. (FYI: this is an httpserver having a threadpool
 for request handling.)

 A subsequent request now does not create mapped objects or create an
 engine. It simply uses the existing mapped objects and does ORM
 actions.

 The problem was coming when lazy loading happens during multiple
 requests. I guess the underlying connection pool (in the case of the
 creator approach) is not using the threadlocal approach, as
 different connections are checked in/out when I look at the pool
 log, and exchanged as well among different request handling threads.


OK, this is exactly the issue; you're caching mapped objects, which
have unfired lazy loaders, and then sharing those mapped objects
among threads.   The lazy loader needs to consult a session in order
to load its contents, since that's where the ORM locates information
about how to get a connection (for example, if your sessions are
bound to engines, and not your tables, this would be essential).  The
session, when it's inside of a SessionTransaction as well as within a
flush() process, holds onto a single pooled connection to do its
work.  If another thread accesses the session during this time, you'll
get a conflict.
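
The usual way out is to scope sessions, and the objects loaded through
them, to a single thread - roughly like this sketch (the names here
are invented, not your code):

from sqlalchemy import create_session

def handle_request(req):
    # Sketch: one session per request, never shared across threads.
    session = create_session()
    try:
        items = session.query(Item).select()   # lazy loads are safe here
        return render(items)                   # hypothetical renderer
    finally:
        session.clear()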

 Also, does the underlying connection pool use the threadlocal
 strategy in the case of the creator approach while creating the
 engine ? I don't know if the strategy flag is for that ?

 However, when I passed in a pool.QueuePool instance with
 use_threadlocal=True, everything worked just fine.

When you do that, the QueuePool will return the same connection for a
particular thread which was already in use.  This is part of what
happens when you use create_engine('...', strategy='threadlocal').
However, it doesn't have any ability to stop you from sharing one of
those checked-out connections with another thread.  It shouldn't
change anything here, actually; the session still checks out a
connection, and holds onto it during a transaction or flush(), and
that's still the same connection it will hand out to any other thread
during that time.







[sqlalchemy] Re: simple bulk insert question

2007-07-19 Thread Michael Bayer


On Jul 19, 2007, at 2:00 PM, one.person wrote:


 Sorry for the noob question.  Why do I only end up with 2 inserted
 rows when doing this?

 >>> from sqlalchemy import *
 >>> db = create_engine('mysql://login:[EMAIL PROTECTED]/database')
 >>> metadata = BoundMetaData(db)
 >>> temptable = Table('temptable', metadata,
 ...     Column('col1', Integer),
 ...     Column('col2', String(10)))
 >>> temptable.create()
 >>> data = [(1, 'blah1'), (2, 'blah2'), (2, 'blah2'), (3, 'blah3'),
 ...     (3, 'blah3'), (3, 'blah3')]
 >>> temptable.insert(values=data).execute()
 <sqlalchemy.engine.base.ResultProxy object at 0x01169C70>
 >>> s = temptable.select()
 >>> e = s.execute()
 >>> e.fetchall()
 [(1L, 'blah1'), (2L, 'blah2')]


the multiple values get passed to execute(), and with constructed SQL
are always passed as dictionaries:

temptable.insert().execute({'col1': 1, 'col2': 'blah1'}, {'col1': 2,
'col2': 'blah2'}, ...)

the way you were doing it, using values, which expects a list of
items in the same order as the columns, was binding (1, 'blah1') to
col1 and (2, 'blah2') to col2.  but then later on in the chain, when
converted to positional DBAPI arguments, they came out as
[(1, 'blah1'), (2, 'blah2')], which then got picked up as a list of
tuples, which is the clue that it should use executemany().
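
so for the original example, the fix is roughly (a sketch, reusing the
data and temptable from the question):

# Convert the tuples into dicts keyed by column name, then let
# execute() detect the list of dicts and do an executemany().
rows = [{'col1': c1, 'col2': c2} for (c1, c2) in data]
temptable.insert().execute(*rows)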





[sqlalchemy] Re: Misspell elixir in SAContext

2007-07-19 Thread Mike Orr

On 7/18/07, Olli Wang [EMAIL PROTECTED] wrote:
 I just found your SAContext has a misspelling of elixir; you spell it
 as exilir,

Fixed in 0.3.3.  I tend to pronounce that word the other way so that's
how I spelled it.

http://sluggo.scrapping.cc/python/sacontext/

 Also, I have a little question about how to use the ElixirStrategy.
 It said under pylons we should use sac = PylonsSAContext(), but the
 ElixirStrategy tells us to use sac = SAContext(strategy=ExilirStrategy);
 that way, it is not PylonsSAContext(). Does it work fine with Pylons,
 too?

It should.  You'll need the strategy argument.

 And, could you tell me where to put the sqlalchemy config below?

 sqlalchemy.default.uri = mysql://[EMAIL PROTECTED]/mydb
 sqlalchemy.default.echo = true
 sqlalchemy.default.echo_pool = false
 sqlalchemy.default.pool_recycle = 3600

 I put it in development.ini but it doesn't seem to work. :(

That's right (development.ini).  It appears to be a bug in Pylons or
PasteDeploy that I haven't figured out; it loses the configuration in
some circumstances.  We can discuss it on the other thread in
pylons-discuss.

-- 
Mike Orr [EMAIL PROTECTED]




[sqlalchemy] Re: URGENT: 2014, Commands out of sync; you can't run this command now

2007-07-19 Thread Arun Kumar PG

Will this be a problem even if I attach a new session per incoming
request, i.e. per thread handling a request ? So basically it's because
of having the same copy of the mapped objects ? How can I solve the
above problem in the existing setup without using a QueuePool ? By
creating the mapped objects per request ?


-- 
Cheers,

- A




[sqlalchemy] Re: URGENT: 2014, Commands out of sync; you can't run this command now

2007-07-19 Thread Michael Bayer


On Jul 19, 2007, at 4:39 PM, Arun Kumar PG wrote:


 Will this be a problem even if I attach a new session per incoming
 request, i.e. per thread handling a request ? So basically it's
 because of having the same copy of the mapped objects ? How can I
 solve the above problem in the existing setup without using a
 QueuePool ? By creating the mapped objects per request ?

The objects that were loaded within a particular session stay there
until you remove them.  Therefore, whatever session you are using to
load the objects, you should dispose of it before putting the objects
into a thread-global scope (you can call clear() on it to empty it
out).  Also, you probably want to load all of their related items
either explicitly or through eager loading - since when the objects
are detached, the lazy loaders will raise errors when called.

Or, you can create your mapped objects per request, yes, or perhaps
per thread.
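
i.e., a sketch of that load-everything-then-detach pattern (Parent and
'children' are invented names; the eagerload import location is an
assumption about your version):

from sqlalchemy import create_session
from sqlalchemy.orm import eagerload

# Sketch: pull the related items in up front, then empty the session
# so the cached objects are detached with everything already loaded.
session = create_session()
cache = session.query(Parent).options(eagerload('children')).select()
session.clear()    # detached now; 'children' is already loaded, so
                   # no lazy loader needs to fire later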






[sqlalchemy] Re: URGENT: 2014, Commands out of sync; you can't run this command now

2007-07-19 Thread Arun Kumar PG
 Or, you can create your mapped objects per request, yes, or perhaps
 per thread.

How much can this cost in terms of performance ?

-- 
Cheers,

- A




[sqlalchemy] Re: URGENT: 2014, Commands out of sync; you can't run this command now

2007-07-19 Thread Arun Kumar PG
Or maybe just keep on using the QueuePool approach, as it will always
make sure to return the same connection to the current thread ?



-- 
Cheers,

- A




[sqlalchemy] Re: URGENT: 2014, Commands out of sync; you can't run this command now

2007-07-19 Thread Michael Bayer


On Jul 19, 2007, at 5:06 PM, Arun Kumar PG wrote:

 Or maybe just keep on using the QueuePool approach, as it will
 always make sure to return the same connection to the current thread ?


Like I said, I don't see how that helps any.  A single Session that's
in flush() holds onto a single connection and returns it regardless
of what thread accesses it.  The threadlocal pool setting doesn't have
any effect on threadsafety.






[sqlalchemy] Re: URGENT: 2014, Commands out of sync; you can't run this command now

2007-07-19 Thread Arun Kumar PG
That is what I am trying to figure out. It works perfectly when I do this.


-- 
Cheers,

- A




[sqlalchemy] Re: URGENT: 2014, Commands out of sync; you can't run this command now

2007-07-19 Thread Michael Bayer

Perhaps the nature of the conflict is different, then.  Are you able
to observe what stack traces, or at least approximately what
operations, are taking place when the conflict occurs ?  Does the
conflict occur frequently and easily with just a little bit of
concurrency, or is it something that only happens under very high
load conditions ?






[sqlalchemy] Re: URGENT: 2014, Commands out of sync; you can't run this command now

2007-07-19 Thread Michael Bayer
And how is your session connected to the database ?  Are you using
create_session(bind_to=something) ?  Or are you binding your
MetaData to the engine ?  Are you using BoundMetaData ?

On Jul 19, 2007, at 11:16 PM, Arun Kumar PG wrote:

 The stack trace points to pool.py (I will get the exact stack trace
 as I am away from my box currently).

  Does the conflict occur frequently and easily with just a little
  bit of concurrency, or is it something that only happens under
  very high load conditions ?

 This is happening primarily in a use case wherein the logic does
 some processing (this includes accessing many relations - I believe
 many lazy loaders fire here). Since this use case generates some
 csv data, it takes about 6-7 secs depending on the data set, so when
 other requests come in while another is in progress we encounter the
 2014 error.

 However, as mentioned earlier, when I use the threadlocal queue pool
 it just vanishes, and no matter how many requests I send after that
 it just works fine.






[sqlalchemy] Re: URGENT: 2014, Commands out of sync; you can't run this command now

2007-07-19 Thread Arun Kumar PG
BoundMetaData is what I am using.


-- 
Cheers,

- A




[sqlalchemy] Re: URGENT: 2014, Commands out of sync; you can't run this command now

2007-07-19 Thread Michael Bayer
 passing in the creator as a lambda: db_obj where db_obj is the ref
 to a method returning a MySQLdb connection.

...can we see the exact code for this please ?


