[ 
https://issues.apache.org/jira/browse/SPARK-7942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14575483#comment-14575483
 ] 

Tathagata Das commented on SPARK-7942:
--------------------------------------

At the very least, we should route an exception to the StreamingContext 
(so that awaitTermination throws that exception) when ALL of the receivers 
have exited. That is strictly better than the current behavior.

> Receiver's life cycle is inconsistent with streaming job.
> ---------------------------------------------------------
>
>                 Key: SPARK-7942
>                 URL: https://issues.apache.org/jira/browse/SPARK-7942
>             Project: Spark
>          Issue Type: Bug
>          Components: Streaming
>    Affects Versions: 1.4.0
>            Reporter: SaintBacchus
>
> Streaming treats the receiver as a common Spark job, so if an error 
> occurs in the receiver's logic (after 4 retries by default), streaming 
> will no longer receive any data, but the streaming job keeps running. 
> A typical scenario: we set 
> `spark.streaming.receiver.writeAheadLog.enable` to true in order to use 
> the `ReliableKafkaReceiver`, but do not set the checkpoint directory. The 
> receiver is then soon shut down, while the streaming job stays alive.
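The scenario in the description can be sketched roughly as below. This is an illustrative setup fragment, not a definitive reproduction: it assumes the Spark Streaming 1.4.0-era APIs, and the application name and comments are mine.

```scala
// Hypothetical sketch of the failure scenario described above.
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object WalWithoutCheckpoint {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("WalWithoutCheckpoint")
      // Enable the write-ahead log; with the Kafka receiver this causes
      // the ReliableKafkaReceiver to be used.
      .set("spark.streaming.receiver.writeAheadLog.enable", "true")

    val ssc = new StreamingContext(conf, Seconds(1))
    // Note: ssc.checkpoint(...) is deliberately NOT called here. The
    // reliable receiver needs the checkpoint directory for its WAL, so the
    // receiver fails and, after the default retries, shuts down -- while
    // the driver and the streaming job keep running.

    // ... create a Kafka input stream and output operations here ...

    ssc.start()
    // Today this blocks indefinitely even after all receivers have died;
    // the comment above proposes that it should instead throw once ALL
    // receivers have exited.
    ssc.awaitTermination()
  }
}
```

The snippet requires a running Spark deployment and a Kafka source, so it is a configuration sketch rather than a standalone program.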



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
