Are those errors appearing in the logs only, without blocking requests? Those 
are cleanouts of pooled connections for which the rollback() method must be 
called. If these pooled connections aren't gracefully closed in your greenlets, 
they are left to the garbage collector thread, and the rollback happens in that 
thread. The message says it's happening in the event loop callback, but in that 
case rollback() should not be a blocking call, since it's been patched.
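
For reference, here is a minimal sketch of the patching order that makes 
psycopg2 calls such as rollback() cooperative under gevent (a sketch only; 
where exactly this runs in your application is an assumption):

    # run before anything imports psycopg2 or creates an Engine
    from gevent import monkey
    monkey.patch_all()          # patch sockets, threads, etc.

    from psycogreen.gevent import patch_psycopg
    patch_psycopg()             # install the gevent wait callback for psycopg2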

I think the overall issue is that you have a _finalize_fairy with a rollback 
happening at all. I could see that happening if a greenlet were interrupted, 
but otherwise we would expect your code to gracefully close all Sessions / 
Connections.
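
As a rough sketch of what "gracefully closes" means here (handle_request() and 
work() are hypothetical names, not your code): each greenlet should return its 
connection to the pool explicitly, rather than letting the pooled connection 
fall to the garbage collector.

    from sqlalchemy.orm import Session

    def handle_request(engine, work):
        # hypothetical per-request work running inside a greenlet
        session = Session(bind=engine)
        try:
            work(session)
            session.commit()
        except Exception:
            session.rollback()
            raise
        finally:
            # closing here returns the connection to the pool from this
            # greenlet, so _finalize_fairy never has to roll back later
            # from the garbage collector
            session.close()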

On Wed, Jun 3, 2020, at 1:32 AM, Tony Cosentini wrote:
> Hi all,
> 
> Apologies for the long-winded question. I'm currently working on an 
> application that uses SQLAlchemy along with gevent + psycogreen + gunicorn. I 
> recently added a pool_recycle of 120 to one of our engines:
> engine = create_engine(
>     db_url,
>     pool_size=default_manager.database_pool_size(),
>     max_overflow=default_manager.database_pool_overflow(),
>     pool_recycle=120,
>     connect_args={
>         "options": "-c statement_timeout=25000",
>         "application_name": application_name,
>     },
> )
> 
> With this new pool_recycle setting, we started occasionally seeing these 
> Gevent BlockingSwitchOutErrors:
> BlockingSwitchOutError: Impossible to call blocking function in the event loop callback
>   File "sqlalchemy/pool/base.py", line 680, in _finalize_fairy
>     fairy._reset(pool)
>   File "sqlalchemy/pool/base.py", line 867, in _reset
>     pool._dialect.do_rollback(self)
>   File "sqlalchemy/engine/default.py", line 502, in do_rollback
>     dbapi_connection.rollback()
>   File "newrelic/hooks/database_dbapi2.py", line 81, in rollback
>     return self.__wrapped__.rollback()
>   File "psycogreen/gevent.py", line 36, in gevent_wait_callback
>     wait_read(conn.fileno(), timeout=timeout)
>   File "src/gevent/_hub_primitives.py", line 353, in gevent._gevent_c_hub_primitives.wait_read
>   File "src/gevent/_hub_primitives.py", line 367, in gevent._gevent_c_hub_primitives.wait_read
>   File "src/gevent/_hub_primitives.py", line 350, in gevent._gevent_c_hub_primitives.wait_on_watcher
>   File "src/gevent/_hub_primitives.py", line 304, in gevent._gevent_c_hub_primitives._primitive_wait
>   File "src/gevent/_hub_primitives.py", line 46, in gevent._gevent_c_hub_primitives.WaitOperationsGreenlet.wait
>   File "src/gevent/_hub_primitives.py", line 46, in gevent._gevent_c_hub_primitives.WaitOperationsGreenlet.wait
>   File "src/gevent/_hub_primitives.py", line 55, in gevent._gevent_c_hub_primitives.WaitOperationsGreenlet.wait
>   File "src/gevent/_waiter.py", line 151, in gevent._gevent_c_waiter.Waiter.get
>   File "src/gevent/_greenlet_primitives.py", line 61, in gevent._gevent_c_greenlet_primitives.SwitchOutGreenletWithLoop.switch
>   File "src/gevent/_greenlet_primitives.py", line 61, in gevent._gevent_c_greenlet_primitives.SwitchOutGreenletWithLoop.switch
>   File "src/gevent/_greenlet_primitives.py", line 64, in gevent._gevent_c_greenlet_primitives.SwitchOutGreenletWithLoop.switch
>   File "src/gevent/_greenlet_primitives.py", line 67, in gevent._gevent_c_greenlet_primitives.SwitchOutGreenletWithLoop.switch_out
>   File "src/gevent/_greenlet_primitives.py", line 68, in gevent._gevent_c_greenlet_primitives.SwitchOutGreenletWithLoop.switch_out
> 
> Has anyone ever seen anything similar? Unfortunately I'm unable to reproduce 
> this so it's quite baffling to me. Additionally it only happens on our 
> production environment (we also have a staging environment with the exact 
> same setup, but with much, much less traffic). My only hunch is that it might 
> be performance-related, as it only happens occasionally on production.
> 
> I did a quick audit of our monkey patching code and everything looks correct 
> - it's happening before anything else and I confirmed that the modules are 
> all patched. Additionally, we have some other processes that run without 
> gevent and they don't seem to have any issue with pool_recycle (although they 
> don't see the throughput this production server does).
> 
> Anyway, sorry for the vague question, just curious if anyone has ever seen 
> anything similar,
> Tony
> 
> Gevent version: 1.4.0 and 20.5.2
> SQLAlchemy version: 1.3.3
> Gunicorn version: 20.0.4
> 
