Are the issues related to WAL-based KafkaReliableReceivers, or to any
receiver in general?

Any insights will be helpful.
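
For reference, here is a minimal sketch of the WAL-based receiver setup I
am asking about (the master, batch interval, and checkpoint path are
placeholder values, not from this thread):

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    val conf = new SparkConf()
      .setAppName("wal-receiver-example") // placeholder app name
      .setMaster("local[2]")              // placeholder master
      // Enable the write-ahead log for receiver-based input streams
      .set("spark.streaming.receiver.writeAheadLog.enable", "true")

    val ssc = new StreamingContext(conf, Seconds(10))
    // The WAL also needs a reliable checkpoint directory (e.g. on HDFS)
    ssc.checkpoint("hdfs:///tmp/streaming-checkpoint")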
On 16 Dec 2015 05:44, "Tathagata Das" <t...@databricks.com> wrote:

> Just to be clear: spark.streaming.concurrentJobs is NOT officially
> supported. There are issues with fault tolerance and data loss if it is
> set to more than 1.
>
>
>
> On Tue, Dec 15, 2015 at 9:19 AM, Mukesh Jha <me.mukesh....@gmail.com>
> wrote:
>
>> Try setting spark.streaming.concurrentJobs to the number of
>> concurrent jobs you want to run.
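>>
>> A minimal sketch of how that setting would be applied, assuming you
>> build your own SparkConf (the app name and the value 2 are illustrative
>> only; note the caveat above about data loss when this is above 1):
>>
>>   import org.apache.spark.SparkConf
>>
>>   val conf = new SparkConf()
>>     .setAppName("concurrent-jobs-example") // placeholder app name
>>     // Undocumented knob: number of streaming jobs run concurrently
>>     .set("spark.streaming.concurrentJobs", "2") // illustrative value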
>> On 15 Dec 2015 17:35, "ikmal" <ikmal.s...@gmail.com> wrote:
>>
>>> The best practice is to keep the batch processing time lower than the
>>> batch interval. I'm sure your application is suffering from a constantly
>>> increasing scheduling delay.
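>>>
>>> As a sketch: with a 10-second batch interval, each batch must finish in
>>> under 10 seconds or the scheduling delay keeps growing (the interval
>>> here is an example value, not from your application):
>>>
>>>   import org.apache.spark.SparkConf
>>>   import org.apache.spark.streaming.{Seconds, StreamingContext}
>>>
>>>   val conf = new SparkConf().setAppName("batch-interval-example")
>>>   // Batches are created every 10 seconds; keep processing time below
>>>   // this to avoid a growing backlog of queued batches.
>>>   val ssc = new StreamingContext(conf, Seconds(10))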
>>>
>>>
>>>
>
