The 0.9 release still has the old consumer, as Jay mentioned, but this
specific release is a little unusual in that it also provides a completely
new consumer client.

> Based on what I understand, users of Kafka need to upgrade their brokers to
> Kafka 0.9.x first, before they upgrade their clients to Kafka 0.9.x.



> However, that presents a problem to other projects that integrate with
> Kafka (Spark, Flume, Storm, etc.).


This is true, and we faced a similar issue at LinkedIn - there are scenarios
where it is useful/necessary to allow the client to be upgraded before the
broker. This improvement (KIP-35)
<https://cwiki.apache.org/confluence/display/KAFKA/KIP-35+-+Retrieving+protocol+version>
can help with that, although if users want to leverage newer server-side
features they would obviously need to upgrade the brokers.
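
To make that concrete, here is a rough sketch, in Java, of the negotiation
KIP-35 is meant to enable. Everything here is hypothetical - the KIP is
still a proposal, and none of these class or method names are real Kafka
client APIs - it only illustrates a client clamping itself to the protocol
versions a broker advertises.

// Hypothetical illustration of KIP-35-style version negotiation. These
// names are made up for this sketch; they are not real Kafka client APIs.
public class VersionProbe {

    // Assumed shape of a KIP-35 response entry: for one API key, the range
    // of protocol versions the broker supports.
    static final class ApiVersionRange {
        final short apiKey, minVersion, maxVersion;

        ApiVersionRange(short apiKey, short minVersion, short maxVersion) {
            this.apiKey = apiKey;
            this.minVersion = minVersion;
            this.maxVersion = maxVersion;
        }
    }

    // Pick the highest version of an API that both sides understand, or
    // fail fast if the broker has dropped every version this client knows.
    static short negotiate(ApiVersionRange broker, short clientMaxVersion) {
        if (clientMaxVersion < broker.minVersion)
            throw new IllegalStateException(
                "Broker no longer speaks any version this client supports");
        return (short) Math.min(clientMaxVersion, broker.maxVersion);
    }

    public static void main(String[] args) {
        // Suppose the broker advertises Fetch (API key 1) v0..v2 and this
        // client was built to speak up to v3: we settle on v2.
        ApiVersionRange fetch = new ApiVersionRange((short) 1, (short) 0, (short) 2);
        System.out.println("use version " + negotiate(fetch, (short) 3));
    }
}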

Thanks,

Joel

On Fri, Feb 26, 2016 at 4:22 PM, Mark Grover <m...@apache.org> wrote:

> Thanks Jay. Yeah, if we were able to use the old consumer API from 0.9
> clients to work with 0.8 brokers, that would have been super helpful here.
> I am just trying to avoid a scenario where Spark cares about new features
> from every new major release of Kafka (which is a good thing) but ends up
> having to keep multiple profiles/artifacts for it - one for 0.8.x, one for
> 0.9.x, and another once 0.10.x gets released.
>
> So, anything that the Kafka community can do to alleviate the situation
> down the road would be great. Thanks again!
>
> On Fri, Feb 26, 2016 at 11:36 AM, Jay Kreps <j...@confluent.io> wrote:
>
> > Hey, yeah, we'd really like to make this work well for you guys.
> >
> > I think there are actually maybe two questions here:
> > 1. How should this work in steady state?
> > 2. Given that there was a major reworking of the Kafka consumer Java
> > library for 0.9, how does that impact things right now?
> > (http://www.confluent.io/blog/tutorial-getting-started-with-the-new-apache-kafka-0.9-consumer-client)
> >
> > Quick recap of how we do compatibility, just so everyone is on the same
> > page:
> > 1. The protocol is versioned and the cluster supports multiple versions.
> > 2. As we evolve Kafka we always continue to support older versions of the
> > protocol and hence older clients continue to work with newer Kafka
> > versions (a small sketch of this follows the list).
> > 3. In general we don't try to have the clients support older versions of
> > Kafka since, after all, the whole point of the new client is to add
> > features which often require those features to be in the broker.
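
As a quick illustration of point 2 - a minimal sketch, with a made-up
broker address and topic - code compiled against the 0.8.2 client jar runs
unchanged against an upgraded 0.9 broker:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class OldClientNewBroker {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholder address for a broker already upgraded to 0.9.
        props.put("bootstrap.servers", "broker09.example.com:9092");
        props.put("key.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");

        // Nothing here is version-aware: the 0.8.2 client simply sends the
        // request versions it knows, and the 0.9 broker still accepts them.
        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        producer.send(new ProducerRecord<>("test-topic", "key", "value"));
        producer.close();
    }
}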
> >
> > So I think in steady state the answer is to choose a conservative version
> > to build against and it's on us to keep that working as Kafka evolves. As
> > always there will be some tradeoff between using the newest features and
> > being compatible with old stuff.
> >
> > But that steady-state question ignores the fact that we did a complete
> > rewrite of the consumer in 0.9. The old consumer is still there, supported,
> > and still works as before, but the new consumer is the path forward and
> > what we are adding features to. At some point you will want to migrate to
> > this new API, which will be a non-trivial change to your code.
> >
> > This API has several advantages for you guys: (1) it supports security,
> > (2) it allows low-level control over partition assignment and offsets
> > without the crazy fiddliness of the old "simple consumer" API, (3) it no
> > longer directly accesses ZK, and (4) it has no Scala dependency and no
> > dependency on Kafka core. I think all four of these should be desirable
> > for Spark et al.
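
To illustrate advantage (2): a minimal sketch against the new 0.9 consumer,
with placeholder broker address, topic, and starting offset. assign() and
seek() give direct control over partitions and offsets, with no ZooKeeper
access and none of the old SimpleConsumer leader-lookup boilerplate.

import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class LowLevelRead {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("key.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        try {
            TopicPartition tp = new TopicPartition("test-topic", 0);
            consumer.assign(Arrays.asList(tp)); // manual assignment: no group coordination
            consumer.seek(tp, 42L);             // start exactly where the caller chooses
            ConsumerRecords<String, String> records = consumer.poll(1000);
            for (ConsumerRecord<String, String> record : records)
                System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
        } finally {
            consumer.close();
        }
    }
}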
> >
> > One thing we could discuss is the possibility of doing forwards and
> > backwards compatibility in the clients. I'm not sure this would actually
> > make things better; that would probably depend on the details of how it
> > worked.
> >
> > -Jay
> >
> >
> > On Fri, Feb 26, 2016 at 9:46 AM, Mark Grover <m...@apache.org> wrote:
> >
> > > Hi Kafka devs,
> > > I come to you with a dilemma and a request.
> > >
> > > Based on what I understand, users of Kafka need to upgrade their
> > > brokers to Kafka 0.9.x first, before they upgrade their clients to
> > > Kafka 0.9.x.
> > >
> > > However, that presents a problem to other projects that integrate with
> > > Kafka (Spark, Flume, Storm, etc.). From here on, I will speak for
> > > Spark + Kafka, since that's the one I am most familiar with.
> > >
> > > In the light of compatibility (or the lack thereof) between 0.8.x and
> > > 0.9.x, Spark is faced with a problem of what version(s) of Kafka to be
> > > compatible with, and has 2 options (discussed in this PR
> > > <https://github.com/apache/spark/pull/11143>):
> > > 1. We either upgrade to Kafka 0.9, dropping support for 0.8. Storm and
> > > Flume are already on this path.
> > > 2. We introduce complexity in our code to support both 0.8 and 0.9 for
> > > the entire duration of our next major release (Apache Spark 2.x) - see
> > > the sketch below for what that might look like.
> > >
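
For what it's worth, option 2's complexity roughly means hiding the two
client APIs behind a version-neutral facade, with one implementation per
Kafka line shipped in separate profiles/artifacts. A sketch (not Spark's
actual code; all names are illustrative):

// Sketch only - not Spark's actual code; names are illustrative. Every
// consuming code path goes through a version-neutral facade, and the build
// profile decides which Kafka client implementation backs it.
import java.util.Collections;
import java.util.List;

interface VersionedConsumer {
    List<byte[]> fetch(String topic, int partition, long fromOffset);
}

public class ConsumerShim {
    // In reality each branch would live in its own artifact so that the
    // matching Kafka client jar is on the classpath; here both are stubs.
    static VersionedConsumer forVersion(String kafkaVersion) {
        if (kafkaVersion.startsWith("0.8"))
            return (topic, partition, off) -> Collections.emptyList(); // would wrap kafka.consumer.*
        return (topic, partition, off) -> Collections.emptyList();     // would wrap the new KafkaConsumer
    }

    public static void main(String[] args) {
        VersionedConsumer consumer = forVersion("0.9.0");
        System.out.println(consumer.fetch("test-topic", 0, 0L).size());
    }
}
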
> > > I'd love to hear your thoughts on which option you recommend.
> > >
> > > Long term, I'd really appreciate it if Kafka could do something that
> > > doesn't make Spark have to support two, or even more, versions of
> > > Kafka. And if there is something that I, personally, and the Spark
> > > project can do in your next release candidate phase to make things
> > > easier, please do let us know.
> > >
> > > Thanks!
> > > Mark
> > >
> >
>
