[ https://issues.apache.org/jira/browse/SPARK-10320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14724038#comment-14724038 ]

Sudarshan Kadambi commented on SPARK-10320:
-------------------------------------------

Good questions, Cody.

When adding a topic after the streaming context has started, we should at a 
minimum be able to start consumption from the beginning or end of each topic 
partition. When a topic is removed from the subscription, no offsets should be 
retained; if it is added back later, it is no different from a brand-new topic 
and the same options (beginning, end, or a specific offset) are available.
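To make those semantics concrete, here is a rough Scala sketch of what a 
dynamic-subscription API might look like. None of these names 
(DynamicKafkaStream, StartPosition, subscribe, unsubscribe) exist in Spark 
today; they are purely illustrative:

{code}
// Purely hypothetical sketch of the requested behavior; not a Spark API.
sealed trait StartPosition
case object Beginning extends StartPosition                          // earliest available offset
case object End extends StartPosition                                // latest offset
case class AtOffsets(offsets: Map[Int, Long]) extends StartPosition  // partition -> offset

trait DynamicKafkaStream {
  // Begin consuming a newly added topic from the requested position.
  def subscribe(topic: String, start: StartPosition): Unit

  // Stop consuming; any saved offsets for the topic are discarded,
  // so a later subscribe() treats it like a brand-new topic.
  def unsubscribe(topic: String): Unit
}
{code}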

When the driver restarts, consumption for all existing topics should resume 
from the saved offsets by default, but jobs should have the flexibility to 
choose a different consumption point (beginning, end, or a specific offset). 
If you restart the job and specify a new offset, that is where consumption 
should start, in effect overriding any saved offsets.
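The direct stream already supports this kind of override at (re)start time via 
the fromOffsets argument to KafkaUtils.createDirectStream. A sketch, assuming 
ssc and kafkaParams are already defined, with illustrative topic names and 
offset values:

{code}
import kafka.common.TopicAndPartition
import kafka.message.MessageAndMetadata
import kafka.serializer.StringDecoder
import org.apache.spark.streaming.kafka.KafkaUtils

// Explicit starting offsets take precedence over anything previously saved.
val fromOffsets = Map(
  TopicAndPartition("events", 0) -> 0L,    // re-read partition 0 from the start
  TopicAndPartition("events", 1) -> 5000L  // resume partition 1 at a chosen offset
)

val stream = KafkaUtils.createDirectStream[
    String, String, StringDecoder, StringDecoder, (String, String)](
  ssc,
  kafkaParams,
  fromOffsets,
  (mmd: MessageAndMetadata[String, String]) => (mmd.key, mmd.message)
)
{code}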

Topics can be repartitioned in Kafka today (the partition count can be 
increased), so we need to handle partition count changes even in the absence 
of dynamic topic registration in Spark Streaming. How is this handled today? 
I'd expect the same solution to carry over.

In our case, the topic changes happen on the same thread of execution that 
provided the initial topic list before starting the streaming context. I'm not 
sure of the implications of doing it instead in the onBatchCompleted handler.
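For reference, the batch-completed hook being discussed would look roughly 
like the following; updateSubscriptions() is a hypothetical placeholder for 
whatever dynamic topic change mechanism is adopted:

{code}
import org.apache.spark.streaming.scheduler.{
  StreamingListener, StreamingListenerBatchCompleted}

// A StreamingListener fires on the listener bus, not on the thread that
// built the initial topic list, which is the difference in question.
ssc.addStreamingListener(new StreamingListener {
  override def onBatchCompleted(batch: StreamingListenerBatchCompleted): Unit = {
    // updateSubscriptions()  // hypothetical dynamic topic change
  }
})
{code}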

> Support new topic subscriptions without requiring restart of the streaming 
> context
> ----------------------------------------------------------------------------------
>
>                 Key: SPARK-10320
>                 URL: https://issues.apache.org/jira/browse/SPARK-10320
>             Project: Spark
>          Issue Type: New Feature
>          Components: Streaming
>            Reporter: Sudarshan Kadambi
>
> Spark Streaming lacks the ability to subscribe to new topics or unsubscribe 
> from current ones once the streaming context has been started. Restarting the 
> streaming context increases the latency of update handling.
> Consider a streaming application subscribed to n topics. Let's say one of the 
> topics is no longer needed in streaming analytics and hence should be 
> dropped. We could do this by stopping the streaming context, removing that 
> topic from the topic list, and restarting the streaming context. Since with 
> some DStreams, such as DirectKafkaStream, the per-partition offsets are 
> maintained by Spark, we should be able to resume uninterrupted (I think?) 
> from where we left off with a minor delay. However, in instances where 
> expensive state initialization (from an external datastore) may be needed for 
> datasets published to all topics before streaming updates can be applied, it 
> is more convenient to subscribe and unsubscribe with only the incremental 
> changes to the topic list. Without such a feature, updates go unprocessed for 
> longer than they need to, thus affecting QoS.
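The restart-based workaround described in the issue looks roughly like the 
sketch below, assuming sc, kafkaParams, and an updatedTopics set (the old 
topic list minus the dropped topic) are in scope:

{code}
import kafka.serializer.StringDecoder
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

// Stop only the streaming context, keeping the SparkContext alive.
ssc.stop(stopSparkContext = false, stopGracefully = true)

// A stopped StreamingContext cannot be restarted; build a new one with the
// reduced topic set. Any expensive per-dataset state must be reinitialized.
val newSsc = new StreamingContext(sc, Seconds(5))
val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
  newSsc, kafkaParams, updatedTopics)
newSsc.start()
{code}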


