Github user mayuehappy commented on the issue:
https://github.com/apache/spark/pull/19233
@srowen Thanks a lot for your reply. I think maybe I didn't express it
well. Let's assume a situation like this: if we use Spark
Streaming to consume a Kafka topic with 10
GitHub user mayuehappy opened a pull request:
https://github.com/apache/spark/pull/19233
[SPARK-22008][Streaming] Spark Streaming Dynamic Allocation: automatically fix
maxNumExecutors
In Spark Streaming DRA, the metric we use to add or remove executors is the
ratio of batch processing time to the batch interval.
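For context, here is a minimal sketch of that ratio-based scaling decision. It is not the actual ExecutorAllocationManager code; the threshold defaults (0.9 for scaling up, 0.3 for scaling down, matching the usual defaults of spark.streaming.dynamicAllocation.scalingUpRatio and scalingDownRatio) and the helper names are assumptions for illustration:

```scala
// Sketch of the ratio-based decision used by Spark Streaming dynamic allocation:
// compare the average batch processing time to the batch interval and scale accordingly.
// This is an illustrative simplification, not the Spark source.
object ScalingDecisionSketch {

  def decide(
      avgBatchProcTimeMs: Long,
      batchDurationMs: Long,
      scalingUpRatio: Double = 0.9,   // assumed default of ...scalingUpRatio
      scalingDownRatio: Double = 0.3  // assumed default of ...scalingDownRatio
  ): String = {
    val ratio = avgBatchProcTimeMs.toDouble / batchDurationMs
    if (ratio >= scalingUpRatio) "request more executors"
    else if (ratio <= scalingDownRatio) "remove an executor"
    else "do nothing"
  }

  def main(args: Array[String]): Unit = {
    // e.g. batches take 9s on average with a 10s batch interval -> ratio 0.9 -> scale up
    println(decide(avgBatchProcTimeMs = 9000L, batchDurationMs = 10000L))
  }
}
```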