syun64 commented on PR #13265:
URL: https://github.com/apache/airflow/pull/13265#issuecomment-1487202191

   Hi @kaxil, I'm running an Airflow cluster on v2.5.0 with the CeleryExecutor and 
SQLAlchemy 1.4.4, and I ran into the same error noted in this PR.
   ```
   Traceback (most recent call last):
   sqlalchemy/engine/base.py", line 1057, in _rollback_impl
        self.engine.dialect.do_rollback(self.connection)
   sqlalchemy/engine/default.py", line 683, in do_rollback
        dbapi_connection.rollback()
   psycopg2.DatabaseError: error with status PGRES_TUPLES_OK and no message 
from the libpq
   ```
   
   It seems to have happened when making this call:
   ```
   airflow/jobs/scheduler_job.py", line 889, in _run_scheduler_loop
        num_finished_events = self._process_executor_events(session=session)
   ```
   
   The 
[link](https://docs.sqlalchemy.org/en/14/core/pooling.html#using-connection-pools-with-multiprocessing-or-os-fork)
 you noted in the PR description recommends calling the `dispose` 
function with `close=False` ([the default is 
`close=True`](https://docs.sqlalchemy.org/en/14/core/connections.html#sqlalchemy.engine.Engine.dispose))
 to ensure that the new process does not touch any of the parent process’ 
connections and instead starts with fresh ones.
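   For illustration, here is a minimal sketch of the pattern those pooling docs 
describe (assuming a SQLAlchemy version where `Engine.dispose()` accepts the 
`close` parameter, i.e. 1.4.33+; the SQLite URL is just a stand-in for the 
metadata DB engine):
   ```python
   import os
   from sqlalchemy import create_engine, text

   # Stand-in engine; in Airflow this would be the metadata DB engine
   # that the scheduler creates before forking worker processes.
   engine = create_engine("sqlite://")

   pid = os.fork()
   if pid == 0:
       # Child process: discard the pool inherited from the parent WITHOUT
       # closing the parent's live connections (close=False). The child's
       # pool then opens brand-new connections lazily on first use.
       engine.dispose(close=False)
       with engine.connect() as conn:
           conn.execute(text("SELECT 1"))
       os._exit(0)
   else:
       os.wait()
   ```
   With the default `close=True`, the child would actively close connection 
objects it shares with the parent, which is exactly the cross-process 
interference the docs warn about.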
   
   Was there a reason we opted to leave the `close` option out of the 
function call, or is this something we could add to cover more edge cases 
where the CeleryExecutor goes into a bad state when forking with a 
PostgreSQL connection?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@airflow.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
