That version of createDirectStream doesn't handle partition changes; the
set of partitions it reads is fixed by the fromOffsets map you pass in.
You can work around it by stopping the job and starting it again with the
new partitions added to fromOffsets.
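
Here's roughly what that restart looks like against the 1.5.0 API. This is
a sketch only -- the topic name, broker address, and savedOffsets store are
placeholders for whatever your app actually uses:

import java.util.HashMap;
import java.util.Map;

import kafka.common.TopicAndPartition;
import kafka.message.MessageAndMetadata;
import kafka.serializer.StringDecoder;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka.KafkaUtils;

public class RestartWithNewPartitions {
  public static void main(String[] args) throws Exception {
    SparkConf conf = new SparkConf().setAppName("restart-with-new-partitions");
    JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(5));

    Map<String, String> kafkaParams = new HashMap<>();
    kafkaParams.put("metadata.broker.list", "broker1:9092"); // placeholder

    // Wherever you persist offsets between runs -- placeholder here.
    Map<Integer, Long> savedOffsets = new HashMap<>();

    // Rebuild fromOffsets to cover all 20 partitions. Partitions that
    // existed before resume from their saved offsets; brand-new ones have
    // no saved offset, so start them at 0 (or the earliest offset the
    // broker still retains).
    Map<TopicAndPartition, Long> fromOffsets = new HashMap<>();
    for (int partition = 0; partition < 20; partition++) {
      Long saved = savedOffsets.get(partition);
      fromOffsets.put(new TopicAndPartition("mytopic", partition),
          saved != null ? saved : 0L);
    }

    Function<MessageAndMetadata<String, String>, String> messageHandler =
        mmd -> mmd.message();

    JavaInputDStream<String> stream = KafkaUtils.createDirectStream(
        jssc,
        String.class, String.class,
        StringDecoder.class, StringDecoder.class,
        String.class,
        kafkaParams,
        fromOffsets,
        messageHandler);

    stream.print();
    jssc.start();
    jssc.awaitTermination();
  }
}

The key point is that fromOffsets has to enumerate every partition you want
read; the 1.x direct stream never goes back to the broker to discover new
ones.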

The Spark 2.0 consumer for Kafka 0.10 (the spark-streaming-kafka-0-10
module) should handle partition changes via SubscribePattern.
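
A minimal sketch of that 0-10 API, if you can upgrade (the topic pattern,
group id, and broker address below are made-up examples):

import java.util.HashMap;
import java.util.Map;
import java.util.regex.Pattern;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka010.ConsumerStrategies;
import org.apache.spark.streaming.kafka010.KafkaUtils;
import org.apache.spark.streaming.kafka010.LocationStrategies;

public class SubscribePatternExample {
  public static void main(String[] args) throws Exception {
    SparkConf conf = new SparkConf().setAppName("subscribe-pattern-example");
    JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(5));

    // The new consumer takes Map<String, Object>, not Map<String, String>.
    Map<String, Object> kafkaParams = new HashMap<>();
    kafkaParams.put("bootstrap.servers", "broker1:9092"); // placeholder
    kafkaParams.put("key.deserializer", StringDeserializer.class);
    kafkaParams.put("value.deserializer", StringDeserializer.class);
    kafkaParams.put("group.id", "my-group");              // placeholder
    kafkaParams.put("auto.offset.reset", "latest");
    // How quickly new partitions are noticed is governed by the consumer's
    // metadata refresh interval (metadata.max.age.ms).

    // SubscribePattern delegates topic/partition discovery to the
    // underlying Kafka 0.10 consumer, which re-checks metadata
    // periodically, so partitions added later are picked up without
    // restarting the job.
    JavaInputDStream<ConsumerRecord<String, String>> stream =
        KafkaUtils.createDirectStream(
            jssc,
            LocationStrategies.PreferConsistent(),
            ConsumerStrategies.<String, String>SubscribePattern(
                Pattern.compile("mytopic.*"), kafkaParams));

    stream.foreachRDD(rdd -> System.out.println("batch size: " + rdd.count()));

    jssc.start();
    jssc.awaitTermination();
  }
}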

On Tue, Sep 13, 2016 at 7:13 PM, vinay gupta
<vingup2...@yahoo.com.invalid> wrote:
> Hi we are using the following version of KafkaUtils.createDirectStream()
> from spark 1.5.0
>
> createDirectStream(JavaStreamingContext jssc,
>                    Class<K> keyClass,
>                    Class<V> valueClass,
>                    Class<KD> keyDecoderClass,
>                    Class<VD> valueDecoderClass,
>                    Class<R> recordClass,
>                    java.util.Map<String,String> kafkaParams,
>                    java.util.Map<kafka.common.TopicAndPartition,Long> fromOffsets,
>                    Function<kafka.message.MessageAndMetadata<K,V>,R> messageHandler)
>
>
> While the streaming app was running, the Kafka topic was expanded
> from 10 partitions to 20.
>
> The problem is that the running app doesn't pick up the 10 new
> partitions. We have to stop the app, feed the fromOffsets map the new
> partitions, and restart.
>
> Is there any way to get this done automatically? Curious to know if
> you ran into the same problem and what your solution/workaround was.
>
> Thanks
> -Vinay
>
>
