Essentially correct. The latency to start a Spark Job is nowhere close to
2-4 seconds under typical conditions; what takes that long is starting a new
Spark Application. Creating a new Application for every piece of work,
instead of running multiple Jobs within one long-lived Application, is not
going to yield acceptable interactive or real-time performance, nor is that
an execution model Spark is ever likely to support in order to meet
low-latency requirements. As such, reducing Application startup time (as
opposed to Job startup time) is not a priority.
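
For illustration, a minimal sketch of that pattern in Scala (the object and
app names below are placeholders, not anything from this thread): the
Application and its SparkSession are created once, and each action
afterwards runs as a Job on the already-running executors, so the per-Job
cost is scheduling rather than startup.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.sum

object LongLivedAppSketch {
  def main(args: Array[String]): Unit = {
    // Starting the Application (driver plus executors) is the expensive part,
    // and it happens exactly once here.
    val spark = SparkSession.builder()
      .appName("long-lived-app-sketch")
      .getOrCreate()
    import spark.implicits._

    val data = spark.range(0, 1000000L).cache()

    // Each action below triggers a separate Spark Job on the same running
    // Application; per-Job overhead is scheduling, not Application startup.
    val evenCount = data.filter($"id" % 2 === 0).count()      // Job 1
    val idSum     = data.agg(sum($"id")).first().getLong(0)   // Job 2

    println(s"even ids = $evenCount, sum of ids = $idSum")

    spark.stop() // Application teardown, paid once at the end
  }
}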

On Fri, Jul 6, 2018 at 4:06 PM Timothy Chen <tnac...@gmail.com> wrote:

> I know there have been some community efforts presented at Spark Summits
> before, mostly around reusing the same Spark context for multiple “jobs”.
>
> I don’t think reducing Spark job startup time is a community priority,
> AFAIK.
>
> Tim
> On Fri, Jul 6, 2018 at 7:12 PM Tien Dat <tphan....@gmail.com> wrote:
>
>> Dear Timothy,
>>
>> It works like a charm now.
>>
>> BTW (don't judge me if I am too greedy :-)), the latency to start a Spark
>> job is around 2-4 seconds, unless there is some awesome optimization in
>> Spark that I am not aware of. Do you know if the Spark community is
>> working on reducing this latency?
>>
>> Best
>>
>>
>>
>> --
>> Sent from: http://apache-spark-user-list.1001560.n3.nabble.com/
>>
