[ https://issues.apache.org/jira/browse/SPARK-12177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15074265#comment-15074265 ]

Nikita Tarasenko commented on SPARK-12177:
------------------------------------------

I'm not against keeping only the Direct approach and dropping the Receiver-based 
approach from the new implementation. The only thing that bothers me is the 
frequent creation of a new KafkaConsumer instance in KafkaCluster (the 
withConsumer method); I don't know how efficient that is. From that point of 
view the Receiver-based approach is simpler. But with ReliableKafkaReceiver we 
have a problem with multithreaded access to the KafkaConsumer (because it is 
not thread-safe): one thread from MessageHandler calling poll, and a second 
from GeneratedBlockHandler committing offsets.
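
To make the trade-off concrete, here is a rough Scala sketch of both access 
patterns; everything besides the Kafka consumer API itself (object and method 
names, signatures) is illustrative, not the actual Spark code:

{code:scala}
import java.util.{Collections, Properties}
import org.apache.kafka.clients.consumer.{ConsumerRecords, KafkaConsumer, OffsetAndMetadata}
import org.apache.kafka.common.TopicPartition

object ConsumerAccessSketch {

  // Loan pattern in the style of KafkaCluster.withConsumer: a fresh consumer
  // per call means no instance is ever shared between threads, but pays the
  // cost of constructing and closing a consumer (including connection setup)
  // on every call.
  def withConsumer[K, V, T](props: Properties)(body: KafkaConsumer[K, V] => T): T = {
    val consumer = new KafkaConsumer[K, V](props)
    try body(consumer) finally consumer.close()
  }

  // Receiver-style alternative: one consumer shared by the polling thread
  // (MessageHandler) and the offset-committing thread (GeneratedBlockHandler),
  // with every call serialized through one lock, since KafkaConsumer is not
  // thread-safe. Note a long poll holds the lock and delays commits, so the
  // poll timeout would have to stay short.
  final class LockedConsumer[K, V](props: Properties) {
    private val consumer = new KafkaConsumer[K, V](props)
    private val lock = new Object

    def poll(timeoutMs: Long): ConsumerRecords[K, V] =
      lock.synchronized(consumer.poll(timeoutMs))

    def commit(tp: TopicPartition, offset: Long): Unit = lock.synchronized {
      consumer.commitSync(Collections.singletonMap(tp, new OffsetAndMetadata(offset)))
    }

    def close(): Unit = lock.synchronized(consumer.close())
  }
}
{code}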
And if we keep only the Direct approach, do you mean that we would extend the 
original KafkaUtils with new functions for the new DirectIS/KafkaRDD classes, 
but implement them in a separate module with the kafka09 classes?
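
For concreteness, a hypothetical sketch of that layout: the kafka09 module 
would expose its own KafkaUtils object, while the original 
org.apache.spark.streaming.kafka.KafkaUtils stays untouched, so existing 
applications keep working. The package name follows the pull request; the 
method signature below is only illustrative, not a committed API:

{code:scala}
package org.apache.spark.streaming.kafka.v09

import org.apache.spark.streaming.StreamingContext
import org.apache.spark.streaming.dstream.InputDStream

// Hypothetical entry point for the new module, living next to (not replacing)
// the original KafkaUtils.
object KafkaUtils {
  // Direct-style stream backed by the Kafka 0.9 consumer API; the signature
  // is illustrative only.
  def createDirectStream[K, V](
      ssc: StreamingContext,
      kafkaParams: Map[String, String],
      topics: Set[String]): InputDStream[(K, V)] = ???
}
{code}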

> Update KafkaDStreams to new Kafka 0.9 Consumer API
> --------------------------------------------------
>
>                 Key: SPARK-12177
>                 URL: https://issues.apache.org/jira/browse/SPARK-12177
>             Project: Spark
>          Issue Type: Improvement
>          Components: Streaming
>    Affects Versions: 1.6.0
>            Reporter: Nikita Tarasenko
>              Labels: consumer, kafka
>
> Kafka 0.9 has already been released and it introduces a new consumer API that 
> is not compatible with the old one, so I added the new consumer API. I made 
> separate classes in the package org.apache.spark.streaming.kafka.v09 with the 
> changed API. I didn't remove the old classes, for backward compatibility: 
> users will not need to change their old Spark applications when they upgrade 
> to a new Spark version.
> Please review my changes.


