[ 
https://issues.apache.org/jira/browse/SPARK-12177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15189325#comment-15189325
 ] 

Cody Koeninger commented on SPARK-12177:
----------------------------------------

Clearly K and V are serializable somehow, because they were in a byte array
in a Kafka message. So it's not really far-fetched to expect a thin wrapper
around K and V to also be serializable, and it would be convenient from a
Spark perspective.
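To make the point concrete, here is a minimal sketch of such a thin wrapper, assuming you copy only the fields you care about out of a ConsumerRecord before caching or collecting. The class name RecordEnvelope and its fields are illustrative, not part of any Spark or Kafka API; the round-trip in main just demonstrates that the wrapper survives Java serialization, which a raw 0.9 ConsumerRecord would not.

```java
import java.io.*;

// Hypothetical thin wrapper: copy the interesting fields out of a
// ConsumerRecord into a Serializable holder before caching/collecting.
public class RecordEnvelope<K extends Serializable, V extends Serializable>
        implements Serializable {
    private static final long serialVersionUID = 1L;
    public final String topic;
    public final int partition;
    public final long offset;
    public final K key;
    public final V value;

    public RecordEnvelope(String topic, int partition, long offset, K key, V value) {
        this.topic = topic;
        this.partition = partition;
        this.offset = offset;
        this.key = key;
        this.value = value;
    }

    // Round-trip through Java serialization to show the wrapper itself
    // serializes cleanly (the raw record class carries no Serializable marker).
    public static void main(String[] args) throws Exception {
        RecordEnvelope<String, String> in =
            new RecordEnvelope<>("events", 0, 42L, "k", "v");
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(in);
        }
        try (ObjectInputStream ois = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()))) {
            RecordEnvelope<?, ?> out = (RecordEnvelope<?, ?>) ois.readObject();
            System.out.println(out.topic + ":" + out.offset + " "
                + out.key + "=" + out.value);
        }
    }
}
```

In Spark terms this is just `records.map(r -> new RecordEnvelope<>(...))` before any `cache()` or `collect()`, at the cost of duplicating the topic name per record, as noted below.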

That being said, I agree with you that the Kafka project isn't likely to go
for it. It's also probably better for Spark users if they don't blindly
cache or collect consumer records, because it's a lot of wasted space (e.g.
the topic name repeated per record). But people are going to be surprised
the first time they try it and it doesn't work, so I wanted to mention it.



> Update KafkaDStreams to new Kafka 0.9 Consumer API
> --------------------------------------------------
>
>                 Key: SPARK-12177
>                 URL: https://issues.apache.org/jira/browse/SPARK-12177
>             Project: Spark
>          Issue Type: Improvement
>          Components: Streaming
>    Affects Versions: 1.6.0
>            Reporter: Nikita Tarasenko
>              Labels: consumer, kafka
>
> Kafka 0.9 has already been released, and it introduces a new consumer API 
> that is not compatible with the old one. So I added the new consumer API. I 
> made separate classes in the package org.apache.spark.streaming.kafka.v09 
> with the changed API. I didn't remove the old classes, for backward 
> compatibility: users will not need to change their old Spark applications 
> when they upgrade to the new Spark version.
> Please review my changes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
