Github user vc60er commented on the issue:
https://github.com/apache/spark/pull/20078
Setting `spark.streaming.dynamicAllocation.minExecutors` also has the same issue:
https://issues.apache.org/jira/browse/SPARK-14788
@felixcheung
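For reference, a minimal sketch of the workaround being discussed, using the streaming dynamic allocation conf keys; the floor and ceiling values are illustrative, not recommendations:

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

// Streaming dynamic allocation is configured separately from core dynamic
// allocation; the min/max values below are illustrative only.
val conf = new SparkConf()
  .setAppName("streaming-dra-floor")
  .set("spark.streaming.dynamicAllocation.enabled", "true")
  .set("spark.streaming.dynamicAllocation.minExecutors", "4")
  .set("spark.streaming.dynamicAllocation.maxExecutors", "20")

val ssc = new StreamingContext(conf, Seconds(10))
```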
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/20078
Can one of the admins verify this patch?
---
Github user sharkdtu commented on the issue:
https://github.com/apache/spark/pull/20078
@felixcheung
Have you ever thought about the initial num-executors? It is 2 executors by
default when you run Spark on YARN. How can you make sure that these 2
executors have enough cores for both the receivers and batch processing?
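As a back-of-the-envelope illustration of this concern (the executor numbers mirror the YARN defaults mentioned above; the receiver count is a hypothetical example):

```scala
// Illustrative arithmetic only, not taken from the PR.
val numExecutors = 2   // default spark.executor.instances on YARN
val coresPerExec = 1   // default spark.executor.cores on YARN
val numReceivers = 2   // e.g. one receiver per input DStream

val totalCores      = numExecutors * coresPerExec   // = 2
val coresForBatches = totalCores - numReceivers     // = 0: receivers pin every core
```
---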
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/20078
Hmm, I didn't know that was actually changed (SPARK-13723).
But it seems to me `spark.streaming.dynamicAllocation.minExecutors` is
still a valid approach, to match the non-streaming behavior…
---
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/20078
Originally in Spark dynamic allocation, "spark.executor.instances" and the
dynamic allocation confs could not coexist: if "spark.executor.instances" was
set, dynamic allocation would not be enabled. But this was changed in SPARK-13723…
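A rough sketch of the original gating described here (illustrative only, not the exact Spark source; the helper name is made up):

```scala
import org.apache.spark.SparkConf

// Sketch of the pre-SPARK-13723 behavior described above: any explicit
// executor count silently disabled dynamic allocation. After SPARK-13723,
// an explicit count is treated as the initial number of executors instead.
def dynamicAllocationEffectivelyEnabled(conf: SparkConf): Boolean = {
  val enabled = conf.getBoolean("spark.dynamicAllocation.enabled", false)
  val explicitInstances = conf.getInt("spark.executor.instances", 0)
  enabled && explicitInstances == 0
}
```
---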
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/20078
Not speaking to this change specifically, but I've used streaming dynamic
allocation quite a bit back in the day.
In this case I think the simple fix is to set
`spark.streaming.dynamicAllocation.minExecutors`…
---
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/20078
I'm not against the fix. My concern is that we've shifted to Structured
Streaming, and this feature (streaming dynamic allocation) is seldom
used/tested, so this might not be the only issue regarding it…
---
Github user sharkdtu commented on the issue:
https://github.com/apache/spark/pull/20078
@jerryshao
If this PR can fix the bugs you mentioned, why not fix it? Otherwise, the
feature should be marked as deprecated.
---
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/20078
Sorry to chime in. This feature (streaming dynamic allocation) is obsolete
and has bugs, and users seldom enable it. Is it still worth fixing?
---
Github user sharkdtu commented on the issue:
https://github.com/apache/spark/pull/20078
@felixcheung
If you submit Spark on YARN with
`spark.streaming.dynamicAllocation.enabled=true`, then `num-executors` cannot
be set. So, at the beginning, there are only 2 (the default value) executors…
---
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/20078
Hmm, that sounds like a different problem. Why is numReceivers set to more
than spark.cores.max?
---
Github user sharkdtu commented on the issue:
https://github.com/apache/spark/pull/20078
@felixcheung
At the beginning, if numReceivers > totalExecutorCores, there are no CPU
cores left for batch processing, and `ExecutorAllocationManager` can't observe
metrics for any batches. As a result, it never gets a signal to scale up.
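To make the failure mode concrete, a hypothetical sketch of the setup being described, assuming the YARN defaults of 2 executors with 1 core each and two socket receivers (hosts and ports are placeholders):

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

val conf = new SparkConf()
  .setAppName("receiver-starvation-demo")
  .set("spark.streaming.dynamicAllocation.enabled", "true")
val ssc = new StreamingContext(conf, Seconds(10))

// Each receiver runs as a long-lived task that pins one core. With
// 2 executors x 1 core = 2 cores total, the two receivers below consume
// every core, so no batch job can run and ExecutorAllocationManager never
// sees a completed batch on which to base a scale-up decision.
val lines1 = ssc.socketTextStream("host1", 9999)
val lines2 = ssc.socketTextStream("host2", 9999)
lines1.union(lines2).count().print()

ssc.start()
ssc.awaitTermination()
```
---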