Sophia Wisdom <[email protected]> added the comment:
While not calling executor.shutdown() may leave some resources in use, that
overhead should be small and bounded. Requiring callers to repeatedly call
executor.shutdown() and then instantiate a new ThreadPoolExecutor in order to
run an asyncio program does not seem like a good API to me.
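To illustrate the pattern I mean, here is a rough sketch (my own; the structure
is an assumption, not stdlib guidance) of what callers would have to do if a
fresh executor plus an explicit shutdown were required around every run:
```
import asyncio
from concurrent.futures import ThreadPoolExecutor

async def main(executor):
    loop = asyncio.get_running_loop()
    await loop.run_in_executor(executor, print, "hello")

# The pattern I find awkward: a fresh executor per asyncio.run(),
# shut down explicitly after each run.
for _ in range(3):
    executor = ThreadPoolExecutor()
    asyncio.run(main(executor))
    executor.shutdown(wait=True)
```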
You mention there appear to be both an event loop leak and a futures leak -- I
think I have a good test case for the futures leak that doesn't use threads at
all. It seems to leak each `future._result` somehow, even though its __del__ is
called.
```
import asyncio
from concurrent.futures import Executor, Future
import gc

result_gcs = 0
suture_gcs = 0

class ResultHolder:
    def __init__(self, mem_size):
        self.mem = list(range(mem_size))  # so we can see the leak

    def __del__(self):
        global result_gcs
        result_gcs += 1

class Suture(Future):
    def __del__(self):
        global suture_gcs
        suture_gcs += 1

class SimpleExecutor(Executor):
    def submit(self, fn):
        # Return an already-completed future; no threads involved.
        future = Suture()
        future.set_result(ResultHolder(1000))
        return future

async def function():
    loop = asyncio.get_running_loop()
    for i in range(10000):
        # The returned asyncio.Future is deliberately discarded.
        loop.run_in_executor(SimpleExecutor(), lambda x: x)

def run():
    asyncio.run(function())
    gc.collect()  # clear any collectable garbage before counting
    print(suture_gcs, result_gcs)
```
Process memory before run(): ~10MB
```
> run()
10000 10000
```
Process memory after run(): ~100MB
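To see where that retained memory lives, the call could be bracketed with
tracemalloc (a diagnostic sketch of my own, not part of the original test):
```
import tracemalloc

tracemalloc.start()
run()
snapshot = tracemalloc.take_snapshot()
# Show the top allocation sites still holding memory after run() returns.
for stat in snapshot.statistics("lineno")[:5]:
    print(stat)
```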
Both result_gcs and suture_gcs are 10000 every time. My best guess for why this
happens (for me it doesn't seem to happen without loop.run_in_executor) is the
conversion from a concurrent.futures.Future to an asyncio.Future, which installs
callbacks to propagate the result -- but that doesn't quite make sense either,
because the result itself has __del__ called on it and yet somehow the memory
isn't freed!
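For context, the conversion I'm referring to is roughly this (a simplified
sketch of what loop.run_in_executor does in CPython; the real logic lives in
asyncio.base_events and asyncio.futures):
```
import asyncio

def run_in_executor_sketch(loop, executor, func, *args):
    # Submit to the executor, getting a concurrent.futures.Future...
    concurrent_future = executor.submit(func, *args)
    # ...then chain it to an asyncio.Future via callbacks that copy
    # the result/exception across when the executor future completes.
    return asyncio.wrap_future(concurrent_future, loop=loop)
```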
----------
_______________________________________
Python tracker <[email protected]>
<https://bugs.python.org/issue41699>
_______________________________________