New submission from Vojtěch Boček <[email protected]>:
By default, asyncio's default executor spawns as many as os.cpu_count() * 5
threads to run blocking I/O on. On beefy machines (e.g. Kubernetes servers)
with, say, 56 cores, this results in very high memory usage.
This is amplified by the fact that `concurrent.futures.ThreadPoolExecutor`
threads are never killed, and idle threads are not re-used until `max_workers`
threads have been spawned.
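For context, here is a minimal sketch that reproduces the thread growth (the
function names are illustrative; it assumes the Python 3.7 behaviour described
above, i.e. the implicit executor allows os.cpu_count() * 5 workers and does
not re-use idle threads until the pool is full):

    import asyncio
    import os
    import threading
    import time


    def blocking_io():
        # Stand-in for any blocking call dispatched through the default executor.
        time.sleep(0.1)


    async def main():
        loop = asyncio.get_event_loop()
        print("default max_workers =", os.cpu_count() * 5)
        # Even though the work is strictly sequential, each submission starts
        # a new worker thread until the pool reaches max_workers; the earlier,
        # now-idle threads are not re-used and never exit.
        for _ in range(20):
            await loop.run_in_executor(None, blocking_io)
            print("threads alive:", threading.active_count())


    asyncio.run(main())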
Workaround:
loop.set_default_executor(concurrent.futures.ThreadPoolExecutor(max_workers=8))
This is still not ideal: the program might not need all max_workers threads,
yet they are spawned anyway because idle threads are not re-used until the
pool is full.
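For completeness, the workaround applied at startup (max_workers=8 is an
arbitrary cap picked for illustration):

    import asyncio
    import concurrent.futures


    async def main():
        loop = asyncio.get_running_loop()
        # Blocking calls now go through the bounded pool instead of the
        # cpu_count() * 5 default.
        await loop.run_in_executor(None, print, "runs in the bounded pool")


    loop = asyncio.get_event_loop()
    loop.set_default_executor(
        concurrent.futures.ThreadPoolExecutor(max_workers=8))
    loop.run_until_complete(main())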
I hit this issue when running an asyncio program in Kubernetes: it created
260 idle threads and then ran out of memory.
I think the default max_workers should be capped at some maximum value, and
ThreadPoolExecutor should not spawn new threads unless necessary.
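For illustration only, a bounded default could look something like this (the
formula below is just an example, not what asyncio or ThreadPoolExecutor do
in 3.7):

    import concurrent.futures
    import os

    # Hypothetical bounded default: grows with the core count but never
    # exceeds 32 worker threads.
    max_workers = min(32, (os.cpu_count() or 1) + 4)
    executor = concurrent.futures.ThreadPoolExecutor(max_workers=max_workers)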
----------
components: asyncio
messages: 330101
nosy: Vojtěch Boček, asvetlov, yselivanov
priority: normal
severity: normal
status: open
title: asyncio uses too many threads by default
type: resource usage
versions: Python 3.7
_______________________________________
Python tracker <[email protected]>
<https://bugs.python.org/issue35279>
_______________________________________