Hi All,

I am using Standalone Spark.

I am using dynamic allocation. Despite setting max executors, min
executors, and initial executors, my streaming job is taking all the
executors available in the cluster. Could anyone please suggest what might
be wrong here?
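
For reference, the relevant settings look roughly like this (a sketch
only; the master host, app jar, and numbers are placeholders, not my
actual values):

    # dynamic allocation bounds (placeholder values)
    spark-submit \
      --master spark://<master-host>:7077 \
      --conf spark.dynamicAllocation.enabled=true \
      --conf spark.shuffle.service.enabled=true \
      --conf spark.dynamicAllocation.minExecutors=2 \
      --conf spark.dynamicAllocation.initialExecutors=2 \
      --conf spark.dynamicAllocation.maxExecutors=10 \
      my-streaming-app.jar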

Please note the source is Kafka.

I believe this can be avoided by setting max cores per application
(spark.cores.max). But why is this happening when max executors is set,
and what are other good ways to avoid it?
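
In case it helps the discussion, the cap I have in mind would presumably
be set like this (illustrative value, not from my job):

    # hard cap on total cores this application can claim in standalone mode
    --conf spark.cores.max=8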

Thanks,
Sachit
