My company runs Java code that uses Spark to read from, and write to, Azure
Blob storage. This code runs more or less 24x7.
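
For context, a minimal sketch of such a job, assuming the hadoop-azure
wasbs:// connector is on the classpath; the account name, container, and
paths below are made up for illustration:

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;

    public class AzureBlobRoundTrip {
        public static void main(String[] args) {
            SparkSession spark = SparkSession.builder()
                    .appName("azure-blob-round-trip")
                    .getOrCreate();

            // Account-key auth for wasbs:// (hypothetical account name;
            // requires hadoop-azure and its dependencies on the classpath).
            spark.sparkContext().hadoopConfiguration().set(
                    "fs.azure.account.key.myaccount.blob.core.windows.net",
                    System.getenv("AZURE_STORAGE_KEY"));

            // Read from one Blob container path and write back to another.
            Dataset<Row> df = spark.read().parquet(
                    "wasbs://mycontainer@myaccount.blob.core.windows.net/input/");
            df.write().mode("overwrite").parquet(
                    "wasbs://mycontainer@myaccount.blob.core.windows.net/output/");

            spark.stop();
        }
    }
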
Recently we've noticed a few failures that leave stack traces in our logs; what
they have in common are exceptions that, with minor variations, look like:
Caused by:

Hi folks,
We are experiencing slowness in the Spark history server, so we are trying to
find which config properties we can tune to fix the issue. I found that
SPARK_DAEMON_MEMORY controls the daemon's memory; is there similarly a config
property to increase the number of threads?
Thanks
Nikhil
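
For what it's worth, a minimal spark-env.sh sketch: SPARK_DAEMON_MEMORY and
SPARK_HISTORY_OPTS are the history server's documented environment variables,
and spark.history.fs.numReplayThreads is the documented property for the
number of threads used to replay event logs (the values below are
illustrative, not recommendations):

    # spark-env.sh
    export SPARK_DAEMON_MEMORY=4g
    export SPARK_HISTORY_OPTS="-Dspark.history.fs.numReplayThreads=16"
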
Had the same issue; it seems it is simply not possible:
https://github.com/apache/spark/blob/master/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/features/BasicExecutorFeatureStep.scala#L195
There's also a Jira ticket -