Kamil Gorlo wrote:
> On Thu, Jun 4, 2009 at 4:20 PM, Michael Bayer wrote:
>> the connection went from good to dead within a few seconds (assuming SQL
>> was successfully emitted on the previous checkout). Your database was
>> restarted or a network failure occurred.
>
> There is no other option? I'm pretty sure that DB
Kamil Gorlo wrote:
>
> Hi,
>
> I know this problem shows up on the group from time to time, but the
> suggested solutions
Hello,
I am getting the same error as well. From my research on Google,
pool_recycle seemed like the option to look into.
For testing, I have a MySQL server with wait_timeout set to 10 seconds.
http://paste.pocoo.org/show/112820/
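For a test setup like this, the usual idea is to keep pool_recycle below the server's wait_timeout so the pool discards connections before MySQL does. A minimal sketch of that arithmetic (the two-second margin is just an assumption, not an official rule):

```python
# MySQL drops idle connections after wait_timeout seconds; pool_recycle
# must be shorter so the pool replaces stale connections first.
wait_timeout = 10            # the server setting used in this test
margin = 2                   # safety margin (illustrative, not an official rule)
pool_recycle = max(1, wait_timeout - margin)
print(pool_recycle)  # 8
```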
On Wed, Apr 15, 2009 at 11:36 PM, reetesh nigam
wrote:
>
> Hi
As the ticket I mentioned says, the main problem was a bug in the
connection pool whereby rollback() was being called twice on a
connection that was already being used in another thread. So I don't
think there's any similarity to Zope here.
--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups
"sqlalchemy" group.
Michael Bayer wrote:
> ok, dont change anything in turbogears yet. i think i have fixed the
> problem, please try out rev 2133. ticket #387 explains the issue.
This issue seems very similar to one I found using the MySQL Zope
database adapter. The symptom was the same (MySQL has gone away), an
> i think i have fixed the
> problem, please try out rev 2133. ticket #387 explains the issue.
This works perfectly for me!!! *Many* thanks for this bugfix.
Cheers
Seb
--
Sébastien LELONG
sebastien.lelong[at]sirloon.net
On 12/8/06, Michael Bayer <[EMAIL PROTECTED]> wrote:
>
> ok, dont change anything in turbogears yet. i think i have fixed the
> problem, please try out rev 2133. ticket #387 explains the issue.
>
Well, I've made the changes already to work with the current release
of SQLAlchemy and I'll revert
OK, don't change anything in TurboGears yet. I think I have fixed the
problem; please try out rev 2133. Ticket #387 explains the issue.
Well, that makes my day a *lot* harder, since it means the problem can
be fixed. This is now ticket #387.
Good to hear some work is being done on this. I'm having the exact
same problem as the OP describes. In my case, downgrading to SA
0.2.7, not changing anything else, makes the problem go away - except
of course I don't have many of the wonderful SA 0.3.0 features :o)
OK, I REALLY can't narrow down the exact thing that causes this to happen
with MySQLdb. I have gone through pool.py carefully, adding flags and
assertions to the code to ensure that it's not sharing connections *or*
cursors between threads; it's not. All cursors are closed before the
connection is e
Yes, it appears that MySQLdb, among its many limitations, is also
completely non-threadsafe. Running a barebones web test against a MySQL
database produces the same error. It's a little amazing SA has made it for
over a year and nobody has really hit this problem before.
So, for non-threadsafe DBAPIs we use th
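For context, the standard workaround for a non-threadsafe DBAPI is to confine each connection to a single thread. A conceptual sketch using `threading.local` (names are illustrative only; this is not SQLAlchemy's actual pool implementation):

```python
import itertools
import threading

# Each thread lazily opens its own connection and never shares it.
_local = threading.local()
_counter = itertools.count()

def connect():
    # Stand-in for dbapi.connect(...); returns a unique token per call.
    return next(_counter)

def get_connection():
    if not hasattr(_local, "conn"):
        _local.conn = connect()
    return _local.conn
```

Repeated calls within one thread return that thread's connection, while a second thread transparently gets its own.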
> I'm still having this problem with MySQL (both 4.1 and 5.0), SA
> (tested with both 0.2.8 and 0.3.1) and TurboGears 1.0b1.
Almost the same for me (MySQL 5.0.22, SA 0.2.8 and 0.3.1, using CherryPy).
Using pool_recycle works nicely in non-high-load environments. But while I
benchmark my app (us
I'm still having this problem with MySQL (both 4.1 and 5.0), SA
(tested with both 0.2.8 and 0.3.1) and TurboGears 1.0b1.
I've gone over the TG connection code a number of times and I can't
see anything wrong with it. This isn't after a long idle time either.
Sometimes I can start the app and ha
You can control the amount of time before your connection is
terminated with the MySQL parameter wait_timeout, although the
fundamental problem of the app handling dropped connections should
obviously still be addressed. In higher load environments with MySQL it
may actually behoove you to r
pool_recycle is an integer number of seconds to wait before
reopening a connection. However, setting it to "True" is equivalent to
1, which means it will reopen connections constantly.
check out the FAQ entry on this:
http://www.sqlalchemy.org/trac/wiki/FAQ#MySQLserverhasgoneaway/psycopg.I
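The recycle check described here amounts to an age test at checkout time. A conceptual sketch of that logic (a simplified model, not SQLAlchemy's actual code):

```python
import time

def needs_recycle(connected_at, pool_recycle, now=None):
    """True if a pooled connection is older than pool_recycle seconds."""
    if now is None:
        now = time.time()
    return (now - connected_at) > pool_recycle

# With pool_recycle=3600, a connection opened 4000 seconds ago is stale
# and gets reopened; one opened 3000 seconds ago is still usable:
print(needs_recycle(connected_at=1000, pool_recycle=3600, now=5000))  # True
print(needs_recycle(connected_at=2000, pool_recycle=3600, now=5000))  # False
```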
Still same error with:
db= create_engine('mysql://[EMAIL PROTECTED]/test', pool_recycle=True)
meta = BoundMetaData(db)
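The snippet above misbehaves because Python's True is the integer 1, so pool_recycle=True discards every connection older than one second. A quick check, with an explicit value of the kind presumably intended (3600 is an illustrative figure, not a recommendation from the thread):

```python
# True is an int in Python, so pool_recycle=True means pool_recycle=1,
# and the pool reopens connections on almost every checkout.
print(int(True))  # 1

# The intended usage is an explicit number of seconds, e.g.:
# db = create_engine('mysql://[EMAIL PROTECTED]/test', pool_recycle=3600)
pool_recycle = 3600
```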
Use the "pool_recycle" setting on your create_engine call.
On Oct 15, 2006, at 8:33 PM, jlowery wrote:
>
> This isn't unique to SQLAlchemy, but when using MySQL after long
> periods of inactivity (in a web-app server) with the objectstore
> session, the error:
>
> MySQL server has gone away
>
>