Thank you very much for your answer.
Since I don't have dependent jobs, I will continue to use this functionality.
On 05/06/2018 13:52, Saisai Shao wrote:
"dependent" I mean this batch's job relies on the previous batch's
result. So this batch should wait for the finish of previous batch, if
"dependent" I mean this batch's job relies on the previous batch's result.
So this batch should wait for the finish of previous batch, if you set "
spark.streaming.concurrentJobs" larger than 1, then the current batch could
start without waiting for the previous batch (if it is delayed), which
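For concreteness, a minimal sketch of turning the setting on; the master, app name, batch interval, and the value 4 are placeholders, and only the configuration key itself comes from this thread:

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    object ConcurrentJobsDemo extends App {
      val conf = new SparkConf()
        .setMaster("local[*]")              // placeholder deployment
        .setAppName("concurrent-jobs-demo") // placeholder name
        // Undocumented setting: let up to 4 streaming jobs run at once.
        // With any value > 1, a delayed batch no longer blocks the next.
        .set("spark.streaming.concurrentJobs", "4")

      val ssc = new StreamingContext(conf, Seconds(1))
    }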
On 05/06/2018 13:44, Saisai Shao wrote:
> You need to read the code; this is an undocumented configuration.

I'm on it right now, but Spark is a big piece of software.
> Basically this will break the ordering of Streaming jobs; AFAIK you may get unexpected results if your streaming jobs are not independent.
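For anyone following that pointer into the code: as of the Spark 2.x sources, spark.streaming.concurrentJobs is read (with a default of 1) by the streaming JobScheduler, which uses it to size the thread pool that executes each batch's jobs. The toy model below is plain Scala, not Spark source, but it shows why a pool larger than 1 lets a slow batch be overtaken:

    import java.util.concurrent.Executors

    object ConcurrentJobsToy extends App {
      // Toy model of the scheduler: jobs go into a fixed-size pool whose
      // size plays the role of spark.streaming.concurrentJobs.
      val concurrentJobs = 2
      val jobExecutor = Executors.newFixedThreadPool(concurrentJobs)

      for (batch <- 1 to 3) {
        jobExecutor.submit(new Runnable {
          def run(): Unit = {
            if (batch == 1) Thread.sleep(1000) // batch 1 is delayed...
            println(s"batch $batch finished")  // ...so batches 2 and 3 finish first
          }
        })
      }
      jobExecutor.shutdown()
    }

With concurrentJobs = 1 the output is strictly ordered; with 2 it is not, which is exactly the ordering hazard described above.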
thomas lavocat wrote on Tue, Jun 5, 2018 at 7:17 PM:
Hello,
Thanks for your answer.
On 05/06/2018 11:24, Saisai Shao wrote:
spark.streaming.concurrentJobs is a driver-side internal configuration; it controls how many streaming jobs can be submitted concurrently in one batch. Usually this should not be configured by the user, unless you're familiar with Spark Streaming internals and know the implications of this configuration.
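To make "jobs submitted in one batch" concrete: each output operation on a DStream becomes one job per batch, so the sketch below produces two jobs every batch. The master, app name, source, and operations are illustrative, not from this thread; the point is that with the default value of 1 the two jobs run one after the other, while with 2 they can overlap:

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    object TwoJobsPerBatch extends App {
      val conf = new SparkConf()
        .setMaster("local[4]")            // placeholder deployment
        .setAppName("two-jobs-per-batch") // placeholder name
        .set("spark.streaming.concurrentJobs", "2")
      val ssc = new StreamingContext(conf, Seconds(5))

      val lines = ssc.socketTextStream("localhost", 9999) // placeholder source

      // Two independent output operations => two jobs in every batch.
      lines.count().print()
      lines.filter(_.contains("ERROR")).print()

      ssc.start()
      ssc.awaitTermination()
    }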
Hi everyone,
I'm wondering whether the property spark.streaming.concurrentJobs should reflect the total number of possible concurrent tasks on the whole cluster, or the number of concurrent tasks on a single compute node.
Thanks for your help.
Thomas