Hi all,

For concurrent Spark jobs spawned from the driver, we use Spark's fair
scheduler pools, which each worker thread sets and unsets via the
thread-local property spark.scheduler.pool. For rather long jobs, this
typically works very well. Unfortunately, in an application with lots
of very short parallel sections, we see thousands of these pools
remaining in the Spark UI, which indicates some kind of leak. Each
worker thread cleans up its local property by setting it to null, but
not all pools are properly removed. I've checked and reproduced this
behavior with Spark 2.1 through 2.3.
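
For illustration, here is a minimal sketch of the pattern we use (the
pool names, thread count, and trivial job body are placeholders, not
our actual workload):

    import org.apache.spark.sql.SparkSession

    object PoolLeakSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("pool-leak-sketch")
          .config("spark.scheduler.mode", "FAIR")
          .getOrCreate()
        val sc = spark.sparkContext

        val threads = (1 to 1000).map { i =>
          new Thread(new Runnable {
            override def run(): Unit = {
              // Route this thread's jobs into a dedicated ad-hoc pool.
              sc.setLocalProperty("spark.scheduler.pool", s"pool-$i")
              try {
                sc.parallelize(1 to 10).count() // very short parallel section
              } finally {
                // Clear the thread-local property; the pool itself,
                // however, keeps showing up in the Spark UI.
                sc.setLocalProperty("spark.scheduler.pool", null)
              }
            }
          })
        }
        threads.foreach(_.start())
        threads.foreach(_.join())
      }
    }

As far as I can tell, pools that are not declared in fairscheduler.xml
are created on the fly with default settings the first time a job
references them, which is why so many of them accumulate here.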

Now my question: Is there a way to explicitly remove these pools,
either globally or locally while the worker thread is still alive?

Regards,
Matthias
