[jira] [Commented] (SPARK-7942) Receiver's life cycle is inconsistent with streaming job.

2015-09-11 Thread Tathagata Das (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-7942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14741800#comment-14741800
 ] 

Tathagata Das commented on SPARK-7942:
--

This has been resolved in Spark 1.5.0, where we reimplemented receiver 
scheduling to keep relaunching failed receivers instead of being limited by 
the maximum number of task retries.

> Receiver's life cycle is inconsistent with streaming job.
> -
>
> Key: SPARK-7942
> URL: https://issues.apache.org/jira/browse/SPARK-7942
> Project: Spark
>  Issue Type: Bug
>  Components: Streaming
>Affects Versions: 1.4.0
>Reporter: SaintBacchus
>
> Streaming treats the receiver as an ordinary Spark job, so if an error 
> occurs in the receiver's logic (after the default of 4 task retries), 
> streaming will no longer receive any data even though the streaming job 
> keeps running. 
> A typical scenario: we set `spark.streaming.receiver.writeAheadLog.enable` 
> to true in order to use the `ReliableKafkaReceiver`, but do not set the 
> checkpoint directory. The receiver is soon shut down, yet the streaming 
> application stays alive.
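
For illustration, a minimal sketch of the misconfiguration described above, 
assuming Spark 1.4-era APIs; the application name, batch interval, and socket 
source are placeholders:

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object WalWithoutCheckpoint {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("wal-without-checkpoint") // placeholder name
      // Enabling the write-ahead log makes every receiver persist received
      // blocks to log files, which requires a checkpoint directory.
      .set("spark.streaming.receiver.writeAheadLog.enable", "true")

    val ssc = new StreamingContext(conf, Seconds(5))
    // The bug scenario: ssc.checkpoint(...) is never called, so the
    // receiver fails at startup, exhausts its task retries, and the
    // streaming job keeps running with no input.
    // ssc.checkpoint("hdfs:///tmp/streaming-checkpoint")  // the missing call

    val lines = ssc.socketTextStream("localhost", 9999) // any receiver-based source
    lines.print()

    ssc.start()
    ssc.awaitTermination() // blocks forever; the receiver failure is not surfaced
  }
}
```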






[jira] [Commented] (SPARK-7942) Receiver's life cycle is inconsistent with streaming job.

2015-06-05 Thread Tathagata Das (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-7942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14575483#comment-14575483
 ] 

Tathagata Das commented on SPARK-7942:
--

At the very least, we should route an exception to the StreamingContext 
(so that awaitTermination throws that exception) when ALL of the receivers 
have exited. That is strictly better than the current behavior.
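
Until such routing exists inside Spark, a user-level approximation is to 
track receiver lifecycle events with a StreamingListener and stop the 
context once every started receiver has stopped. A sketch, with the class 
name and the stop-from-another-thread pattern as assumptions rather than a 
committed design:

```scala
import java.util.concurrent.atomic.AtomicInteger
import org.apache.spark.streaming.StreamingContext
import org.apache.spark.streaming.scheduler.{StreamingListener,
  StreamingListenerReceiverStarted, StreamingListenerReceiverStopped}

// Stops the StreamingContext once all receivers that ever started have
// stopped, so the driver does not keep running with no input.
class AllReceiversDownListener(ssc: StreamingContext) extends StreamingListener {
  private val active = new AtomicInteger(0)

  override def onReceiverStarted(e: StreamingListenerReceiverStarted): Unit =
    active.incrementAndGet()

  override def onReceiverStopped(e: StreamingListenerReceiverStopped): Unit =
    if (active.decrementAndGet() == 0) {
      // Stop from a separate thread: calling stop() on the listener
      // event thread risks a deadlock.
      new Thread("stop-on-all-receivers-down") {
        override def run(): Unit = ssc.stop(stopSparkContext = false)
      }.start()
    }
}
```

Registered via `ssc.addStreamingListener(new AllReceiversDownListener(ssc))` 
before `ssc.start()`. Note this makes `awaitTermination` return normally; 
making it rethrow the receiver's actual error, as proposed above, needs 
changes inside Spark itself.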




[jira] [Commented] (SPARK-7942) Receiver's life cycle is inconsistent with streaming job.

2015-06-03 Thread SaintBacchus (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-7942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14572212#comment-14572212
 ] 

SaintBacchus commented on SPARK-7942:
-

[~tdas] I'm not clear on how to handle the case where only some of the 
receivers have gone down. Should we shut down the StreamingContext, or 
ignore the failure and leave the surviving receivers running?




[jira] [Commented] (SPARK-7942) Receiver's life cycle is inconsistent with streaming job.

2015-05-30 Thread Tathagata Das (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-7942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14566222#comment-14566222
 ] 

Tathagata Das commented on SPARK-7942:
--

That is a very good idea. In fact, please update the JIRA title to describe 
that feature. If receivers were started and all of them have shut down, then 
stop the StreamingContext and throw an error so that ssc.awaitTermination 
exits. This would be a good feature to add.
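
From the application side, the proposed behavior would look roughly like the 
following spark-shell-style fragment; the exception type and message are 
illustrative assumptions, not a committed API:

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

val conf = new SparkConf().setAppName("receiver-failure-demo") // placeholder
val ssc = new StreamingContext(conf, Seconds(5))
// ... input streams and output operations ...

try {
  ssc.start()
  ssc.awaitTermination() // would rethrow once all receivers have shut down
} catch {
  case e: Exception =>
    // Illustrative handling; today this catch block is never reached
    // when receivers silently die.
    System.err.println(s"Streaming stopped with error: ${e.getMessage}")
}
```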




[jira] [Commented] (SPARK-7942) Receiver's life cycle is inconsistent with streaming job.

2015-05-29 Thread SaintBacchus (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-7942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14564411#comment-14564411
 ] 

SaintBacchus commented on SPARK-7942:
-

[~tdas] Should we add logic to shut down the StreamingContext if the 
receiver has gone down?




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org