I use standalone connection pooling for PostgreSQL in some cases. It is
faster and more efficient than
doing a full connect to PostgreSQL directly in each request.
https://www.pgbouncer.org/
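For reference, pointing an app at PgBouncer just means running it in front of Postgres and connecting to its port instead of the database's; a minimal sketch (every name, path, and port here is a placeholder, not a recommendation):

```ini
; pgbouncer.ini -- minimal sketch
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type   = md5
auth_file   = /etc/pgbouncer/userlist.txt
; "transaction" pooling gives the best connection reuse, but is
; incompatible with session-level features (prepared statements, etc.)
pool_mode   = transaction
```

The application then connects to port 6432 instead of 5432; nothing else in the code changes.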
Arndt.
On Tue, Nov 21, 2023 at 4:14 PM Eldav wrote:
> Thank you Jonathan,
>
> after asking my question, I did more googling and found this:
> https://docs.sqlalchemy.org/en/20/core/pooling.html#pooling-multiprocessing
This should not happen. Do you know which cookiecutter you used, and when?
This should not happen in the most recent cookiecutter.
As far as I remember, I used the official cookiecutter, but that was a few
years ago (around the time when Pyramid 2.0 was released, and I felt the
need to sync
Thank you Jonathan,
after asking my question, I did more googling and found this :
https://docs.sqlalchemy.org/en/20/core/pooling.html#pooling-multiprocessing
It does mention Engine.dispose :) I tried their solution #4, which seemed
to be the one which fit best in my code. But somehow I feel i
> Namely, if you deploy with Gunicorn a Pyramid + PostgreSQL app based on
> the standard cookiecutter, you will run into problems, because the
> connection to the DB can't be shared between the processes, so each process
> needs to have its own connection to the DB.
I forgot to mention...
This should not happen in the most recent cookiecutter.
On Mon, Nov 20, 2023 at 4:14 PM Jonathan Vanasco wrote:
>
> SQLAlchemy supports this via `Engine.dispose()`, which is the documented way
> of handling a post-fork connection:
>
> https://docs.sqlalchemy.org/en/13/core/connections.html#engine-disposal
Yes, that sounds familiar.
SQLAlchemy supports this via `Engine.dispose()`, which is the documented
way of handling a post-fork connection:
https://docs.sqlalchemy.org/en/13/core/connections.html#engine-disposal
Invoking `Pool.recreate()` should be fine, but the documented pattern is to
call `Engine.dispose()`.
> How sh
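In Gunicorn terms, that documented pattern slots into a server-hook config file; a sketch, assuming the engine lives at `myapp.db.engine` (a made-up module path):

```python
# gunicorn.conf.py -- sketch only; "myapp.db" is a hypothetical module
from myapp.db import engine

def post_fork(server, worker):
    # Discard any connections inherited from the master process; each
    # worker then opens its own connections lazily on first use.
    # (Newer SQLAlchemy releases also accept engine.dispose(close=False),
    # which leaves the parent's sockets untouched.)
    engine.dispose()
```

This only matters with `--preload`; without it, each worker imports the app and builds its engine after the fork anyway.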
Thank you Theron,
I'm not using "--preload", actually not doing anything special, since I'm
trying to use Gunicorn as a drop-in replacement for Waitress, like I always
did, BUT I'm realizing that I was using `psycopg2` in the past, whereas I'm
using `psycopg` (i.e. version 3) now, and version 3
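One detail worth spelling out: SQLAlchemy picks the DBAPI driver from the URL scheme, so switching between psycopg2 and psycopg 3 is a one-token change (credentials and database name below are placeholders):

```python
# SQLAlchemy dialect names for the two PostgreSQL drivers.
URL_V2 = "postgresql+psycopg2://user:secret@localhost/mydb"  # psycopg2
URL_V3 = "postgresql+psycopg://user:secret@localhost/mydb"   # psycopg 3
```

Everything after the scheme is identical; only the driver behind the pool differs.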
If you aren't using `--preload` then gunicorn should load the application fresh
for each worker and you shouldn't have any issues.
If you are using preload, you have to recreate any existing connections on
fork. For SQLAlchemy I use:
def after_fork(registry):
    registry['db_engine'].pool.recreate()
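Stripped of SQLAlchemy specifics, the idea behind recreating the pool is simply: throw away every connection the worker inherited and let it build fresh ones of its own. A toy illustration, with no real database or SQLAlchemy API involved:

```python
class ToyPool:
    """Minimal stand-in for a connection pool (not SQLAlchemy's API)."""

    def __init__(self, creator, size=2):
        self._creator = creator
        self._size = size
        self.connections = [creator() for _ in range(size)]

    def recreate(self):
        # Forget the inherited connections; build fresh ones for this process.
        self.connections = [self._creator() for _ in range(self._size)]


pool = ToyPool(object)
inherited = list(pool.connections)   # what a forked worker would have inherited
pool.recreate()
fresh = pool.connections             # none of these are shared with the parent
```

A real `after_fork` hook does exactly this, just against the engine's actual pool.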
Hello list,
this page seems to describe perfectly a problem I've stumbled on:
https://stackoverflow.com/questions/64995178/decryption-failed-or-bad-record-mac-in-multiprocessing
Namely, if you deploy with Gunicorn a Pyramid + PostgreSQL app based on the
standard cookiecutter, you will run into problems, because the connection to
the DB can't be shared between the processes, so each process needs to have
its own connection to the DB.
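The underlying mechanism is easy to demonstrate without any database: after a fork, both processes hold the very same open socket, so their writes interleave on one stream. A small sketch (Unix-only, since it uses `os.fork`):

```python
import os
import socket

# One stream socket stands in for a DB connection's TCP socket.
write_end, read_end = socket.socketpair()

pid = os.fork()
if pid == 0:
    # The child inherits the SAME open socket as the parent --
    # exactly what happens to a DB connection opened before the fork.
    write_end.sendall(b"child")
    os._exit(0)

os.waitpid(pid, 0)            # let the child write first
write_end.sendall(b"parent")

buf = b""
while len(buf) < len(b"childparent"):
    buf += read_end.recv(1024)
# buf now contains both processes' bytes on one stream. Over TLS, this
# kind of interleaving corrupts the record layer, which is what produces
# "decryption failed or bad record MAC".
```

With a plain socket the bytes merely interleave; with an encrypted connection the shared TLS state makes the session unusable, hence the error in the Stack Overflow question.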