Thank you for the explanation; with ROLLBACK / pool_reset_on_return I
understand it now.
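As a side note, that reset behavior is set at engine creation time. A minimal sketch, with a placeholder URL (pool_reset_on_return="rollback" is the default; None disables the reset, which is only safe if nothing leaves a transaction open):

```python
from sqlalchemy import create_engine

# "Reset on return": the pool emits a ROLLBACK each time a connection
# is checked back in, which costs one network round trip.
e = create_engine(
    "postgresql://user:pass@host/db",   # placeholder URL
    pool_reset_on_return="rollback",    # the default; None skips the reset
)
```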
With this snippet:
import time
from sqlalchemy import create_engine
e = create_engine("postgresql://postgres:sandbox@104.154.217.229/postgres")
def go():
    # time a single checkout / execute / checkin cycle
    now = time.time()
    with e.connect() as conn:
        conn.execute('select 1').fetchall()
    print(time.time() - now)
But why is it doing any kind of network activity on a .connect(), if
the previous connection was released back into the pool?
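To make that concrete, here is a toy, stdlib-only model of a queue pool (an illustration, not SQLAlchemy's actual implementation): "closing" a checked-out connection just resets it and puts it back, and in the real pool that reset is a ROLLBACK sent over the wire.

```python
import queue

class ToyConn:
    def __init__(self, n):
        self.n = n
        self.resets = 0

    def reset(self):
        # stands in for the ROLLBACK the pool emits on checkin
        self.resets += 1

class ToyPool:
    def __init__(self, size=5):
        self.q = queue.Queue()
        for i in range(size):
            self.q.put(ToyConn(i))

    def connect(self):
        return self.q.get()          # checkout: reuse a pooled connection

    def release(self, conn):
        conn.reset()                 # checkin: reset, then return to pool
        self.q.put(conn)

pool = ToyPool(size=1)
first = pool.connect()
pool.release(first)
second = pool.connect()
print(second is first, second.resets)  # -> True 1: same connection, reset once
```

So the second connect() hands back the very same connection; the work happened at checkin time, not checkout time.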
OK, I created a sandbox server; it'll probably be quicker for you since
the server is in the US, but this also took 18 seconds for me.
import time
from sqlalchemy import create_engine
e = create_engine("postgresql://postgres:sandbox@104.154.217.229/postgres")
On Fri, Jan 18, 2019 at 4:54 PM Mike Bayer wrote:
>
> On Fri, Jan 18, 2019 at 4:49 PM Zsolt Ero wrote:
> >
> > I know I'm far away from a remote server, as the server is a Google
> > Cloud SQL instance, and I'm in a residential cable connection. But
> > ping times are only 145 ms, nothing
I know I'm far away from a remote server, as the server is a Google
Cloud SQL instance, and I'm in a residential cable connection. But
ping times are only 145 ms, nothing extreme I'd say. If you'd like I
can quickly set up a sandbox instance on GCP for trying this out.
Still, I don't even
On Fri, Jan 18, 2019 at 3:58 PM Zsolt Ero wrote:
>
> I thought I finally got to understand what it means to use a connection
> pool / QueuePool, but this performance problem puzzles me.
>
> If I run:
> for i in range(100):
>     with pg_engine.connect() as conn:
>
I thought I finally got to understand what it means to use a connection
pool / QueuePool, but this performance problem puzzles me.
If I run:
for i in range(100):
    with pg_engine.connect() as conn:
        conn.execute('select 1').fetchall()
it takes 47 seconds.
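For what it's worth, the arithmetic is consistent with a few latency-bound round trips per iteration. A back-of-the-envelope check, assuming each round trip costs roughly one ping (~145 ms, from the numbers above):

```python
ping = 0.145              # seconds per network round trip (measured ping)
total = 47.0              # seconds for the 100-iteration loop
per_iter = total / 100    # -> 0.47 s per iteration
round_trips = per_iter / ping
print(round(round_trips, 1))  # -> 3.2, i.e. roughly three round trips each
```

Roughly three round trips would line up with, say, the execute, the fetch, and a checkin-time ROLLBACK each paying full latency, though which statements actually hit the wire is exactly what's in question here.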
If I run
with