Thanks, Sven!
I know that I’ve messed up the memory allocation, but I’m trying not to think too much about that, because I’ve advertised it to my users as “90GB for Spark works!” and that’s how it displays in the Spark UI (which totally ignores the Python processes).
So I’ll need to deal with that at some point… especially since I’ve set the maximum Python memory usage to 4GB to work around other issues!
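For reference, here’s roughly how the two memory knobs relate on our side (a sketch in PySpark; I’m assuming the 4GB cap I mentioned is spark.python.worker.memory, and the values are just what we advertise, not a recommendation):

    from pyspark import SparkConf, SparkContext

    conf = (SparkConf()
            .setAppName("memory-settings-sketch")
            # This is the figure the Spark UI bases its per-executor number on (JVM only)
            .set("spark.executor.memory", "90g")
            # Spill threshold for aggregation in each Python worker,
            # not a hard cap on how large those processes can grow
            .set("spark.python.worker.memory", "4g"))
    sc = SparkContext(conf=conf)

As far as I understand, spark.python.worker.memory only controls when the Python workers spill during aggregation; it doesn’t bound how big those processes actually get, which is why the 90GB the UI shows ignores them entirely.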
The load issue comes in because we have a lot of background cron jobs (mostly to clean up after Spark…), and those stack up behind the high load and keep stacking until the whole thing comes crashing down. I’ll look into how to avoid that stacking, since I think one of my predecessors had a way to do it, but that’s why the high load nukes the nodes. I don’t have spark.executor.cores set; will setting it to, say, 12 limit the pyspark threads, or will it just limit the JVM threads?
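In case it helps to be concrete, this is the setting I’d be adding (a sketch; the value 12 is illustrative, and whether it actually constrains the Python side is exactly what I’m asking):

    from pyspark import SparkConf, SparkContext

    conf = (SparkConf()
            .setAppName("executor-cores-sketch")
            # Caps concurrent tasks per executor on the JVM side; whether the
            # pyspark.daemon worker count follows suit is exactly my question
            .set("spark.executor.cores", "12"))
    sc = SparkContext(conf=conf)

(The same property could also go in spark-defaults.conf or be passed at submit time with --conf spark.executor.cores=12.)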
Thanks!
Ken