your s.close() is not being reached
I've checked and found that every thread closes its connection. And this
traceback:
QueuePool limit of size 5 overflow 10 reached, connection timed out
is most probably caused by a long-running thread which is walking over a list of
filenames ~35,000 long, renaming them.
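The 5/10 numbers in that error are QueuePool's defaults (pool_size=5, max_overflow=10): at most 15 connections can be checked out at once, and the 16th request waits and then times out. A minimal sketch of that failure mode, with sqlite standing in for Postgres and small limits so it runs quickly:

```python
import sqlite3

from sqlalchemy.exc import TimeoutError as PoolTimeoutError
from sqlalchemy.pool import QueuePool

# pool_size + max_overflow bounds the total checked-out connections;
# tiny numbers here just to demonstrate the timeout.
pool = QueuePool(lambda: sqlite3.connect(":memory:"),
                 pool_size=2, max_overflow=1, timeout=0.2)

held = [pool.connect() for _ in range(3)]   # pool_size + max_overflow, all taken
exhausted = False
try:
    pool.connect()                          # nothing left to hand out
except PoolTimeoutError:
    exhausted = True                        # "QueuePool limit ... reached"

held[0].close()                             # a thread closing its connection...
replacement = pool.connect()                # ...frees a slot for someone else
```

So a single long-running thread that holds its connection for the whole 35,000-file walk, while other threads keep checking connections out, will starve the pool even though every thread eventually closes its connection.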
On Dec 14, 2007, at 9:24 PM, imgrey wrote:
as far as I understand, a 'SHARE ROW EXCLUSIVE' lock (
http://www.postgresql.org/docs/8.1/static/sql-lock.html ) is
necessary here, but I was unable to find how to set it up in
sqlalchemy:
connection.execute("LOCK TABLE ... IN SHARE ROW EXCLUSIVE MODE")
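For what it's worth, a runnable sketch of taking a coarse lock up front before the conflicting work. In PostgreSQL the statement would be the LOCK TABLE shown above (with a table name filled in), issued through connection.execute() inside an open transaction; sqlite stands in here so the sketch runs, with BEGIN IMMEDIATE as its closest analogue:

```python
import sqlite3

# sqlite has no LOCK TABLE; BEGIN IMMEDIATE grabs the write lock up front
# instead of at the first write, which is the same spirit as
# "LOCK TABLE <tablename> IN SHARE ROW EXCLUSIVE MODE" in PostgreSQL.
conn = sqlite3.connect(":memory:", isolation_level=None)  # manual transactions
conn.execute("CREATE TABLE files (id INTEGER PRIMARY KEY)")

conn.execute("BEGIN IMMEDIATE")            # take the lock before the work
conn.execute("INSERT INTO files (id) VALUES (1)")
conn.execute("COMMIT")

count = conn.execute("SELECT count(*) FROM files").fetchone()[0]
```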
oops, this part I've added accidentally:
from threading import Thread, Lock
from sqlalchemy import create_engine, MetaData, Table, Column, types,
...
In fact I've removed the Lock's, replaced scoped_session(...) with
create_session(bind=db) in each thread, and was running the app on
revision 3912.
still getting pool overflows? turn on echo_pool=True to trace that
out.
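A sketch of what that looks like (sqlite standing in for the real engine):

```python
from sqlalchemy import create_engine

# echo_pool=True makes the pool log every connection created, checked out
# and returned, which is how you spot the code path that never gives its
# connection back.
engine = create_engine("sqlite://", echo_pool=True)

conn = engine.connect()   # logged as a checkout
conn.close()              # a leaking thread would never produce this checkin
```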
On Dec 11, 2007, at 4:44 PM, imgrey wrote:
2007-12-12 00:19:24,099 INFO sqlalchemy.pool.QueuePool.0x..90 Created
new connection <connection object at 0xda9bd0; dsn: 'dbname=df
host=localhost user=grey password=', closed: 0>
2007-12-12 00:19:24,099 INFO sqlalchemy.pool.QueuePool.0x..90
Connection <connection object at 0xda9bd0; dsn:
On Dec 11, 2007, at 5:53 PM, imgrey wrote:
Thanks a lot, it seems I've managed to resolve the problem with
concurrent modifications by commit(), clear() and close() in each
thread, but I'm stuck with another one:
Exception in thread Thread-62:
Traceback (most recent call last):
  File "threading.py", line 442, in __bootstrap
    self.run()
  File
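The commit()/clear()/close() per-thread lifecycle can be sketched like this (names and the sqlite engine are stand-ins; clear() was the 0.3/0.4 spelling of what later became expunge_all()):

```python
import threading

from sqlalchemy import create_engine, text
from sqlalchemy.orm import sessionmaker

engine = create_engine("sqlite://")
Session = sessionmaker(bind=engine)
results = []

def worker():
    session = Session()            # one Session per thread, never shared
    try:
        results.append(session.execute(text("SELECT 1")).scalar())
        session.commit()           # end the transaction...
    finally:
        session.close()            # ...and give the connection back to the pool

threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The finally block matters: a worker that raises between checkout and close is exactly the leak that eventually overflows the pool.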
i can only make comments about fragments like this but i can't address
your full design issue since you haven't supplied a fully working
illustration of what it is you're trying to do.
the daemon: http://dpaste.com/hold/27089/
and the required utils.py module: http://dpaste.com/hold/27092/
it
On Dec 6, 2007, at 8:57 AM, imgrey wrote:
I think the error throw of the IntegrityError is totally expensive
from both a DB perspective as well as a python perspective. if
missing data is truly so rare then it might be fine, but the
necessity of then using SAVEPOINT seems to complicate things more
than necessary. but you'd have to
On Dec 3, 2007, at 6:33 PM, imgrey wrote:
On Dec 1, 7:24 pm, imgrey [EMAIL PROTECTED] wrote:
INFO sqlalchemy.engine.threadlocal.TLEngine.0x..6c ROLLBACK
in postgresql-8.2-main.log: EET LOG: unexpected EOF on client
connection
I'll note that if the issue here is an EOF error in the PG logs, we
can look into seeing why that is;
you should select() for the record first, and if not present, insert()
it. if you're concerned about lots of select()s you can implement a
simple cache in your application, such as a dictionary of ids that
already exist. if there's concern over the cache growing too large,
you can use a
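A sketch of that select-first-plus-cache idea with plain sqlite3 (table and helper names invented here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE files "
             "(user_id INTEGER, path TEXT, PRIMARY KEY (user_id, path))")

seen = set()  # in-process cache of keys known to exist already

def ensure_row(user_id, path):
    """Insert (user_id, path) if absent; return True if we inserted."""
    key = (user_id, path)
    if key in seen:
        return False           # cache hit: no SELECT, no INSERT
    row = conn.execute("SELECT 1 FROM files WHERE user_id=? AND path=?",
                       key).fetchone()
    if row is None:
        conn.execute("INSERT INTO files (user_id, path) VALUES (?, ?)", key)
    seen.add(key)
    return row is None

first_call = ensure_row(1, "/")    # selects, finds nothing, inserts
second_call = ensure_row(1, "/")   # answered from the cache
```

An unbounded set is the "cache growing too large" concern from the message; capping it (dropping old entries) only costs extra SELECTs, never correctness.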
On Dec 1, 7:24 pm, imgrey [EMAIL PROTECTED] wrote:
OK, i guess you are using the SAVEPOINT then, since the postgres docs
seem to recommend that as a workaround for the transaction abort it's
doing at the IntegrityError point...and the abort is bad for you
because you want to recover from
hey there -
this is line 179 right now:
if bind in conn_dict:
similar mismatches in the stacktrace are present for lines 505 and 509
(line 505 is a blank line now).
the full snip of code is:
if bind in conn_dict:
(conn, trans, autoclose) =
one immediate flaw I see in your application is that you are using
implicit execution of SQL constructs without them having any
connection to the ongoing transaction, such as:
f_table.delete(f_table.c.user_id==theone.id).execute()
f_table.insert().execute(user_id=theone.id, path='/', ls=ls)
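A sketch of the fix, in current SQLAlchemy spelling: route the statement through the session so it executes on the session's own connection and joins the ongoing transaction (table and column names invented here):

```python
from sqlalchemy import (Column, Integer, MetaData, String, Table,
                        create_engine)
from sqlalchemy.orm import sessionmaker

engine = create_engine("sqlite://")
metadata = MetaData()
f_table = Table("files", metadata,
                Column("user_id", Integer),
                Column("path", String))
metadata.create_all(engine)
session = sessionmaker(bind=engine)()

# session.execute() runs on the session's connection, so the statement is
# part of the session's transaction -- unlike the implicit
# f_table.insert().execute(), which may grab an unrelated pool connection.
session.execute(f_table.insert().values(user_id=1, path="/"))
session.rollback()   # the insert is discarded with the rest of the transaction

rows = session.execute(f_table.select()).fetchall()
```

With the implicit form, the insert would have happened on a different connection and survived the rollback, which is exactly the kind of surprise described above.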
On Nov 30, 21:07, Michael Bayer [EMAIL PROTECTED] wrote:
On Nov 30, 6:21 pm, imgrey [EMAIL PROTECTED] wrote:
But how to determine if a record exists, and update it if it does or
insert it otherwise, without executing a select, which would be very
slow? I'm using a nested begin_nested() here to avoid rollback of the
whole transaction.
you should select() for
based on your stacktrace, here's your bug, fixed in r3839:
I cannot understand how this is working:
from sqlalchemy import *
from sqlalchemy.orm import *
session = scoped_session(sessionmaker(transactional=True,
autoflush=False))
engine = create_engine('postgres://scott:[EMAIL
i think it's impossible for you to be getting the same stacktrace that
you were, can you post what you're getting now please...
Exception in thread Thread-6:
Traceback (most recent call last):
  File "threading.py", line 442, in __bootstrap
    self.run()
  File "./camper.py", line 104, in run
On Nov 30, 2007, at 3:06 PM, imgrey wrote:
On Nov 30, 21:07, Michael Bayer [EMAIL PROTECTED] wrote:
imgrey wrote:
Based on my observations it happens only with concurrent inserts/
updates.
One thread:
{{{
f_table.insert().execute()
session.flush()
transaction = session.begin()
nested = session.begin_nested()
try:
    f_table.insert().execute()
except IntegrityError:
    # record is
}}}
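The SAVEPOINT mechanics that begin_nested() wraps can be sketched with plain sqlite3 (savepoint name and table are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)  # manual transactions
conn.execute("CREATE TABLE f (id INTEGER PRIMARY KEY)")

conn.execute("BEGIN")
conn.execute("INSERT INTO f (id) VALUES (1)")
conn.execute("SAVEPOINT before_retry")              # what begin_nested() emits
try:
    conn.execute("INSERT INTO f (id) VALUES (1)")   # duplicate key
    conn.execute("RELEASE SAVEPOINT before_retry")
except sqlite3.IntegrityError:
    # only the work since the SAVEPOINT is undone; the outer
    # transaction (and the first insert) survives
    conn.execute("ROLLBACK TO SAVEPOINT before_retry")
conn.execute("COMMIT")

count = conn.execute("SELECT count(*) FROM f").fetchone()[0]
```

This is the point of the SAVEPOINT workaround in the thread: without it, Postgres aborts the whole transaction at the IntegrityError, not just the failed insert.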
On Nov 27, 2007, at 2:40 PM, imgrey wrote:
default isolation mode settings usually don't need any changes. we
don't have an official API for that yet, so you can apply it to all
connections using a custom connect() function sent to create_engine,
or you can try setting it individually as
I'm probably completely wrong about this, but in your example above, I
don't think the statement f_table.insert().execute() necessarily uses
the same connection as the ORM-level transaction and queries.
If I got it right, then, as far as I know, two transactions are
impossible in one
On Nov 28, 2007, at 10:39 AM, imgrey wrote:
the 0.4 pattern for using scoped_session is:
session = scoped_session(sessionmaker(transactional=(True|False),
autoflush=(True|False)))
defined globally:
session = scoped_session(sessionmaker(transactional=True,
autoflush=False))
result:
Exception in thread Thread-3:
Traceback (most recent
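A runnable sketch of the thread-local behavior that pattern gives you. The transactional=True / autoflush=False flags are the 0.4-era spelling; in current SQLAlchemy the transactional behavior is the default and only autoflush remains:

```python
from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker

engine = create_engine("sqlite://")
Session = scoped_session(sessionmaker(bind=engine, autoflush=False))

# the registry hands back one and the same Session per thread, so a
# globally defined Session object is safe to call from worker threads
same = Session() is Session()
Session.remove()   # each thread calls this when its work is done
```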
On Nov 28, 2007, at 12:03 PM, imgrey wrote:
it seems like you're doing something else in there that you're not
supposed to (like accessing SessionTransaction perhaps? don't use the
old 0.3 patterns anymore).
That's the first I've heard about SessionTransaction. Also I've looked
through the changes between the 0.3 and 0.4 versions and made the changes.
heres a
default isolation mode settings usually don't need any changes. we
don't have an official API for that yet, so you can apply it to all
connections using a custom connect() function sent to create_engine,
or you can try setting it individually as conn = session.connection();
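A sketch of the custom connect() function approach: it runs for every new pooled connection, so per-connection settings land on all of them. With PostgreSQL/psycopg2 this is where the isolation level would be adjusted; the sqlite PRAGMA below is just a stand-in knob so the sketch runs:

```python
import sqlite3

from sqlalchemy import create_engine, text

def connect():
    # with psycopg2 you would adjust the isolation level here instead
    conn = sqlite3.connect(":memory:")
    conn.execute("PRAGMA read_uncommitted = 1")
    return conn

engine = create_engine("sqlite://", creator=connect)
with engine.connect() as conn:
    value = conn.execute(text("SELECT 1")).scalar()
```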
I was thinking to change the isolation level, in fact.
But first, I don't know how to do this with sqlalchemy; second, with
the default isolation level commits should work properly whether or not
something was inserted or updated during the commit; and third, I was
using a nested transaction like this:
On Nov 23, 2007, at 3:59 AM, imgrey wrote:
I think all one can do is just forget and start over again when a
concurrent modifications error occurs. That means catching the
exception and just showing some failure message to the user is all one
can do.
Sanjay
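That "forget and start over" strategy is just a bounded retry around the whole unit of work; a sketch with invented names (ConcurrentModificationError and apply_change stand in for the database's serialization-failure error and for the transaction body):

```python
class ConcurrentModificationError(Exception):
    """Hypothetical stand-in for the DB's concurrent-update error."""

def apply_change(attempt, fail_first_n=2):
    if attempt < fail_first_n:           # simulate two concurrent conflicts
        raise ConcurrentModificationError
    return "saved"

def run_with_retry(max_attempts=5):
    for attempt in range(max_attempts):
        try:
            return apply_change(attempt)
        except ConcurrentModificationError:
            continue                     # discard everything and re-run
    return "failed"                      # give up and tell the user

outcome = run_with_retry()
```

Showing the failure message is only the last resort, after the bounded retries are exhausted.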