Maximilian Hils <python-b...@maximilianhils.com> added the comment:

asvetlov: Sorry if I expressed myself poorly, but I do think this is a valid 
bug. It's unfortunately hard to provide a better repro (I tried), but we hit 
this regularly when mitmproxy accepts connections under heavy load. We're just 
calling `asyncio.start_server(handler, "127.0.0.1", 8080)` in mitmproxy and we 
never interact with the underlying socket object.
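For reference, the setup is essentially just this (the handler shown here is a
placeholder, not mitmproxy's actual code):

    import asyncio

    async def handler(reader, writer):
        # placeholder handler: mitmproxy's real handler does the proxying work
        await reader.read(4096)
        writer.close()
        await writer.wait_closed()

    async def main():
        server = await asyncio.start_server(handler, "127.0.0.1", 8080)
        async with server:
            await server.serve_forever()

    # on Windows 3.8+ this runs on the ProactorEventLoop by default
    asyncio.run(main())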

Here are some observations that are true for all crashes:

- The socket fileno is -1 when it crashes.
- `_call_connection_lost` is called by `_ProactorBasePipeTransport.close`, 
which in turn is called by `_ProactorBasePipeTransport.__del__` [1].
- There are no previous calls to `_call_connection_lost`.
- Windows only, loopback connections in our case.
- Wireshark shows that client and server are first happily exchanging packets. 
At some point the client sends a FIN, which the Python server ACKs immediately. 
A few seconds later the Python server sends a FIN back.


An obvious fix, without understanding the root cause, would be to check the 
socket's fileno in 
https://github.com/python/cpython/blob/d929aa70e2a324ea48fed221c3257f929be05115/Lib/asyncio/proactor_events.py#L161.
I'm not familiar enough with the proactor implementation to assess whether 
that is a good idea. Sorry for not being able to provide more details.
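To make that concrete, here is a rough sketch of the guard I have in mind 
(paraphrased from memory, not an exact copy of the CPython source; the rest of 
the cleanup in that method is elided):

    # Lib/asyncio/proactor_events.py -- _ProactorBasePipeTransport (sketch)
    # `socket` is already imported at module level in proactor_events.py.

    def _call_connection_lost(self, exc):
        try:
            self._protocol.connection_lost(exc)
        finally:
            # Only shut the socket down if it is still open: a socket that has
            # already been closed or detached reports fileno() == -1, and
            # calling shutdown() on it raises OSError on Windows.
            if hasattr(self._sock, 'shutdown') and self._sock.fileno() != -1:
                self._sock.shutdown(socket.SHUT_RDWR)
            self._sock.close()
            self._sock = None
            # ... remaining cleanup elided ...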


[1] 
https://github.com/python/cpython/blob/d929aa70e2a324ea48fed221c3257f929be05115/Lib/asyncio/proactor_events.py#L102-L116

----------

_______________________________________
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue43253>
_______________________________________