Hi Jun,

I understand the point about the SASL tokens being similar to the SSL
handshake in a way. However, is there any SASL library that handles the
network communication for these tokens? I couldn't find any, and without
that there isn't much benefit in deviating from Kafka's protocol (we
basically save the space taken by the request header). It's worth
mentioning that we are already adding the message size before the opaque
bytes provided by the library, so one could say we are already extending
the protocol.

If we leave versioning aside, adding the standard Kafka request header to
those messages may also help from a debugging perspective, as they would
then include the client id and correlation id along with the message.
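
To make the framing comparison concrete, here is a rough sketch (not actual
client code; the layout follows the standard Kafka request header, and every
class and method name below is made up):

    import java.nio.ByteBuffer;
    import java.nio.charset.StandardCharsets;

    public class SaslFramingSketch {

        // Today: the opaque SASL token is sent with only a 4-byte size prefix.
        static ByteBuffer frameBare(byte[] saslToken) {
            ByteBuffer buf = ByteBuffer.allocate(4 + saslToken.length);
            buf.putInt(saslToken.length); // message size
            buf.put(saslToken);           // opaque bytes from the SASL library
            buf.flip();
            return buf;
        }

        // With a standard Kafka request header: the same token preceded by api key,
        // api version, correlation id and client id (the extra space mentioned above).
        static ByteBuffer frameWithHeader(byte[] saslToken, short apiKey,
                                          short apiVersion, int correlationId,
                                          String clientId) {
            byte[] id = clientId.getBytes(StandardCharsets.UTF_8);
            int size = 2 + 2 + 4 + 2 + id.length + saslToken.length;
            ByteBuffer buf = ByteBuffer.allocate(4 + size);
            buf.putInt(size);             // message size
            buf.putShort(apiKey);         // api key
            buf.putShort(apiVersion);     // api version
            buf.putInt(correlationId);    // correlation id (handy when debugging)
            buf.putShort((short) id.length);
            buf.put(id);                  // client id (handy when debugging)
            buf.put(saslToken);           // opaque bytes from the SASL library
            buf.flip();
            return buf;
        }
    }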

Ismael

On Tue, Apr 12, 2016 at 2:13 AM, Jun Rao <j...@confluent.io> wrote:

> Magnus,
>
> That sounds reasonable. To reduce the changes on the server side, I'd
> suggest the following minor tweaks to the proposal.
>
> 1. Continue supporting the separate SASL and SASL_SSL ports.
>
> On SASL port, we support the new sequence
>     ApiVersionRequest (optional), SaslHandshakeRequest, SASL tokens,
>     regular requests
>
> On SASL_SSL port, we support the new sequence
>     SSL handshake bytes, ApiVersionRequest (optional), SaslHandshakeRequest,
>     SASL tokens, regular requests
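>
> As a rough illustration of the two sequences above (a sketch only, not
> actual client or broker code; all names below are made up):
>
>     import java.util.Arrays;
>     import java.util.List;
>
>     public class HandshakeOrderSketch {
>         enum Step { SSL_HANDSHAKE, API_VERSION_REQUEST, SASL_HANDSHAKE_REQUEST,
>                     SASL_TOKENS, REGULAR_REQUESTS }
>
>         // SASL port: ApiVersionRequest (optional), SaslHandshakeRequest,
>         // SASL tokens, then regular requests.
>         static List<Step> saslPort() {
>             return Arrays.asList(Step.API_VERSION_REQUEST, Step.SASL_HANDSHAKE_REQUEST,
>                                  Step.SASL_TOKENS, Step.REGULAR_REQUESTS);
>         }
>
>         // SASL_SSL port: the same, preceded by the SSL handshake bytes.
>         static List<Step> saslSslPort() {
>             return Arrays.asList(Step.SSL_HANDSHAKE, Step.API_VERSION_REQUEST,
>                                  Step.SASL_HANDSHAKE_REQUEST, Step.SASL_TOKENS,
>                                  Step.REGULAR_REQUESTS);
>         }
>     }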
>
> 2. We don't wrap SASL tokens in the Kafka protocol. Similar to your argument
> about the SSL handshake, those SASL tokens are generated by the SASL library
> and Kafka doesn't really control their versioning. Kafka only controls the
> selection of the SASL mechanism, which will be versioned in
> SaslHandshakeRequest.
>
> Does that work for you?
>
> Thanks,
>
> Jun
>
>
> On Mon, Apr 11, 2016 at 11:15 AM, Magnus Edenhill <mag...@edenhill.se>
> wrote:
>
> > Hey Jun, see inline
> >
> > 2016-04-11 19:19 GMT+02:00 Jun Rao <j...@confluent.io>:
> >
> > > Hi, Magnus,
> > >
> > > Let me understand your proposal in more detail, just from the client's
> > > perspective. My understanding of your proposal is the following.
> > >
> > > On plaintext port, the client will send the following bytes in order.
> > >     ApiVersionRequest, SaslHandshakeRequest, SASL tokens (if SASL is
> > > enabled), regular requests
> > >
> > > On SSL port, the client will send the following bytes in order.
> > >     SSL handshake bytes, ApiVersionRequest, SaslHandshakeRequest, SASL
> > > tokens (if SASL is enabled), regular requests
> > >
> >
> >
> > Yup!
> > "SASL tokens" is a series of proper Kafka protocol SaslHandshakeRequests
> > until
> > the handshake is done.
> >
> >
> >
> > >
> > > Is that right? Since we can use either SSL or SASL for authentication,
> > > it's weird that in one case, we require ApiVersionRequest to happen before
> > > authentication and in another case we require the reverse.
> > >
> >
> > Since SSL/TLS is standardised and taken care of for us by the SSL
> > libraries, it doesn't make sense to reimplement that on top of Kafka, so
> > it isn't really comparable.
> > But for SASL there is no standardised handshake protocol, so we must
> > either conceive one from scratch or use the protocol that we already have
> > (Kafka).
> > For the initial SASL implementation in 0.9 the first option was chosen,
> > and while it required a new protocol implementation in supporting clients
> > and the broker, it served its purpose. But not for long: it already needs
> > to evolve, and this gives us a golden[1] opportunity to make the
> > implementation reusable, evolvable, less complex and in line with all our
> > other protocol work, by using the protocol stack of Kafka which the
> > broker and all clients already have in place.
> >
> > Not taking this chance and instead diverging the custom SASL handshake
> > protocol even further from Kafka seems to me a weird choice.
> >
> > The current KIP-43 proposal does not have a clear compatibility story; it
> > doesn't seem to be possible to upgrade clients before brokers. While this
> > might be okay for the Java client, the KIP-35 discussion has hopefully
> > proven that this assumption can't be made for the entire ecosystem.
> >
> > Let me be clear that there isn't anything technically wrong with the
> > KIP-43 proposal (well, except for the hack to check byte[0] for 0x60
> > perhaps), but I'm worried the proposal will eventually lead to
> > reimplementing Api Versioning, KIP-35, etc., in the custom SASL handshake.
> > That would just be redundant: there is no technical reason for doing so,
> > and it'll just make protocol semantics and implementations more complex.
> >
> >
> > Regards,
> > Magnus
> >
> > [1]: Timing is good for this change since only two clients, Java and C,
> > currently support the existing SASL handshake.
> >
> >
> > >
> > > Thanks,
> > >
> > > Jun
> > >
> > > On Mon, Apr 11, 2016 at 12:20 AM, Magnus Edenhill <mag...@edenhill.se>
> > > wrote:
> > >
> > > > 2016-04-11 3:01 GMT+02:00 Jun Rao <j...@confluent.io>:
> > > >
> > > > > Thinking about ApiVersionRequest a bit more. There are quite a few
> > > > > things special about it. In the ideal case, (1) its version should
> > > > > never change;
> > > > >
> > > >
> > > > The only thing we know about the future is that we don't know
> > > > anything; we can't think of every possible future use case. That's why
> > > > we need to be able to evolve interfaces as requirements and use cases
> > > > change. This is the gist of KIP-35, and hampering KIP-35 itself, by not
> > > > letting it also evolve, does not seem right to me at all.
> > > >
> > > >
> > > >
> > > >
> > > > > (2) it needs to be done before authentication (either SSL/SASL); (3)
> > > > > it is required to be issued at the beginning of each connection but
> > > > > never needs to be issued again on the same connection. So, instead of
> > > > > modeling it as a regular request, it seems a cleaner approach is to
> > > > > just bake that into the initial connection handshake even before the
> > > > > authentication layer. So the sequencing in a connection will be api
> > > > > discovery, authentication, followed by regular requests. I am not sure
> > > > > we can still add this in a backward compatible way now (e.g., not sure
> > > > > what the initial bytes from an SSL connection will look like). Even if
> > > > > we can do this in a backward compatible way, it's probably a
> > > > > non-trivial amount of work though.
> > > > >
> > > >
> > > > I have the luxury of not knowing the broker internals, so I can only
> > > > discuss this on a conceptual design level.
> > > >
> > > > In its simplest form each API request type has a NeedsAuth flag, and
> > > > the broker protocol request layer simply checks if the current session
> > > > is authenticated before processing the request: if not, the session is
> > > > closed and an error is logged.
> > > > The only two API requests that don't have the NeedsAuth flag would be
> > > > SaslHandshakeRequest and ApiVersionRequest; the latter could also use
> > > > filtering to only return the same two requests in ApiVersionResponse
> > > > before the client is authenticated (so as not to "leak" information).
> > > > If authentication is not configured on the broker, all sessions are
> > > > deemed authenticated by default.
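> > > >
> > > > A minimal sketch of what I have in mind (illustrative only; none of
> > > > these names exist in the actual broker):
> > > >
> > > >     // Sketch: gate requests on a per-API NeedsAuth flag.
> > > >     public class AuthGateSketch {
> > > >
> > > >         enum ApiKey {
> > > >             API_VERSION(false),     // allowed before authentication
> > > >             SASL_HANDSHAKE(false),  // allowed before authentication
> > > >             METADATA(true),
> > > >             PRODUCE(true),
> > > >             FETCH(true);
> > > >
> > > >             final boolean needsAuth;
> > > >             ApiKey(boolean needsAuth) { this.needsAuth = needsAuth; }
> > > >         }
> > > >
> > > >         static class Session {
> > > >             boolean authenticated;
> > > >             // No authentication configured => every session counts as authenticated.
> > > >             Session(boolean authConfigured) { this.authenticated = !authConfigured; }
> > > >         }
> > > >
> > > >         /** True if the request may be processed; false => close session, log error. */
> > > >         static boolean allow(Session session, ApiKey api) {
> > > >             return !api.needsAuth || session.authenticated;
> > > >         }
> > > >     }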
> > > >
> > > >
> > > > Re backwards compatibility:
> > > > My suggestion is to keep the current special SASL handshake protocol
> > > > on the SASL_PLAIN/SASL_SSL endpoints, but use the new in-band Kafka
> > > > SaslHandshakeRequest API on the PLAIN/SSL endpoints.
> > > > This way the broker is backwards compatible with older clients that
> > > > only support the special SASL protocol, and newer clients are also
> > > > backwards compatible with older brokers that only support the special
> > > > SASL protocol.
> > > > Newer clients connecting to new brokers will be configured to use
> > > > non-SASL ports and use the in-band Kafka SaslHandshakeRequest to
> > > > authenticate.
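> > > >
> > > > For example, the broker could expose both styles side by side on
> > > > separate listeners (hypothetical config; the host and ports are made
> > > > up, and I'm using Kafka's PLAINTEXT/SASL_PLAINTEXT security protocol
> > > > names):
> > > >
> > > >     # Old clients: special SASL handshake on the SASL_* listeners.
> > > >     # New clients: in-band SaslHandshakeRequest on PLAINTEXT/SSL.
> > > >     listeners=PLAINTEXT://broker1:9092,SSL://broker1:9093,SASL_PLAINTEXT://broker1:9094,SASL_SSL://broker1:9095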
> > > >
> > > > Using the existing standard Kafka protocol and the new future-proof
> > > > functionality of ApiVersionRequest allows the in-band authentication
> > > > mechanisms and semantics to naturally evolve over time without breaking
> > > > existing clients.
> > > >
> > > >
> > > >
> > > >
> > > >
> > > > >
> > > > > We started KIP-35 with letting a client know if a version is
> > > > > supported by the broker. It's now evolved into letting a client
> > > > > implement multiple versions of the protocol and dynamically pick a
> > > > > version supported by the broker. The former is likely solvable
> > > > > without ApiVersionRequest. How important is the latter? What if the C
> > > > > client just follows the Java client model by implementing one version
> > > > > of the protocol per C client release (which seems easier to implement)?
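> > > > >
> > > > > (For concreteness, the dynamic pick itself would be something like
> > > > > the sketch below; this is illustrative only, not code from any actual
> > > > > client, and all names are made up.)
> > > > >
> > > > >     // Pick the highest version both the client and the broker support.
> > > > >     public class VersionPickSketch {
> > > > >         static int chooseVersion(int clientMin, int clientMax,
> > > > >                                  int brokerMin, int brokerMax) {
> > > > >             int high = Math.min(clientMax, brokerMax);
> > > > >             int low = Math.max(clientMin, brokerMin);
> > > > >             return high >= low ? high : -1;   // -1: no common version
> > > > >         }
> > > > >     }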
> > > > >
> > > >
> > > > We've discussed this at length and it is not an option for librdkafka,
> > > > kafka-python, and probably other clients as well, for usability/UX and
> > > > maintenance reasons.
> > > > (There's even discussion of making the Java client more version
> > > > agnostic!)
> > > >
> > > >
> > > >
> > > >
> > > >
> > > > >
> > > > > Thanks,
> > > > >
> > > > > Jun
> > > > >
> > > > >
> > > > > On Fri, Apr 8, 2016 at 4:20 PM, Jun Rao <j...@confluent.io> wrote:
> > > > >
> > > > > > Magnus,
> > > > > >
> > > > > > A while back, we had another proposal for the broker to just send
> > > > > > the correlation id and an empty payload if it receives an
> > > > > > unsupported version of the request. I didn't see that in the
> > > > > > rejected section. It seems simpler than the current proposal where
> > > > > > the client has to issue an ApiVersionRequest on every connection.
> > > > > > Could you document the reason why we rejected it?
> > > > > >
> > > > > > Thanks,
> > > > > >
> > > > > > Jun
> > > > > >
> > > > > > On Tue, Apr 5, 2016 at 1:47 PM, Ashish Singh <asi...@cloudera.com>
> > > > > > wrote:
> > > > > >
> > > > > >> On Fri, Apr 1, 2016 at 1:32 AM, Ismael Juma <ism...@juma.me.uk>
> > > > > >> wrote:
> > > > > >>
> > > > > >> > Two more things:
> > > > > >> >
> > > > > >> > 3. We talk about backporting of new request versions to stable
> > > > > >> > branches in the KIP. In practice, we can't do that until the Java
> > > > > >> > client is changed so that it doesn't blindly use the latest
> > > > > >> > protocol version. Otherwise, if new request versions were added to
> > > > > >> > 0.9.0.2, the client would break when talking to a 0.9.0.1 broker
> > > > > >> > (given Jason's proposal, it would fail a bit more gracefully, but
> > > > > >> > that's still not good enough for a stable branch). It may be worth
> > > > > >> > making this clear in the KIP (yes, it is a bit orthogonal and
> > > > > >> > doesn't prevent the KIP from being adopted, but good to avoid
> > > > > >> > confusion).
> > > > > >> >
> > > > > >> Good point. Adding this note and also adding a note that Kafka has
> > > > > >> not backported an api version so far.
> > > > > >>
> > > > > >> >
> > > > > >> > 4. The paragraph below is a bit confusing. It starts talking
> > > > > >> > about 0.9.0 and trunk and then switches to 0.9.1. Is that
> > > > > >> > intentional?
> > > > > >> >
> > > > > >> Yes.
> > > > > >>
> > > > > >> >
> > > > > >> > "Deprecation of a protocol version will be done by marking a
> > > > protocol
> > > > > >> > version as deprecated in protocol documentation. Documentation
> > > shall
> > > > > >> also
> > > > > >> > be used to indicate a protocol version that must not be used,
> or
> > > for
> > > > > any
> > > > > >> > such information.For instance, say 0.9.0 had protocol versions
> > [0]
> > > > for
> > > > > >> api
> > > > > >> > key 1. On trunk, version 1 of the api key was added. Users
> > running
> > > > off
> > > > > >> > trunk started using version 1 of the api and found out a major
> > > bug.
> > > > To
> > > > > >> > rectify that version 2 of the api is added to trunk. For some
> > > > reason,
> > > > > >> it is
> > > > > >> > now deemed important to have version 2 of the api in 0.9.1 as
> > > well.
> > > > To
> > > > > >> do
> > > > > >> > so, version 1 and version 2 both of the api will be backported
> > to
> > > > the
> > > > > >> 0.9.1
> > > > > >> > branch. 0.9.1 broker will return 0 as min supported version
> for
> > > the
> > > > > api
> > > > > >> and
> > > > > >> > 2 for the max supported version for the api. However, the
> > version
> > > 1
> > > > > >> should
> > > > > >> > be clearly marked as deprecated on its documentation. It will
> be
> > > > > >> client's
> > > > > >> > responsibility to make sure they are not using any such
> > deprecated
> > > > > >> version
> > > > > >> > to the best knowledge of the client at the time of development
> > (or
> > > > > >> > alternatively by configuration)."
> > > > > >> >
> > > > > >> > Ismael
> > > > > >> >
> > > > > >> >
> > > > > >> >
> > > > > >> > On Fri, Apr 1, 2016 at 9:22 AM, Ismael Juma <ism...@juma.me.uk>
> > > > > >> > wrote:
> > > > > >> >
> > > > > >> > > A couple of questions:
> > > > > >> > >
> > > > > >> > > 1. The KIP says "Specific version may be deprecated through
> > > > > >> > > protocol documentation but must still be supported (although it
> > > > > >> > > is fair to return an error code if the specific API supports
> > > > > >> > > it).". It may be worth expanding this a little more. For
> > > > > >> > > example, what does it mean to support the API? I guess this
> > > > > >> > > means that the broker must not disconnect the client and the
> > > > > >> > > broker must return a valid protocol response. Given that it says
> > > > > >> > > that it is "fair" (I would probably replace "fair" with "valid")
> > > > > >> > > to return an error code if the specific API supports it, it
> > > > > >> > > sounds like we are saying that we don't have to maintain the
> > > > > >> > > semantic behaviour (i.e. we could _always_ return an error for a
> > > > > >> > > deprecated API?). Is this true?
> > > > > >> > >
> > > > > >> > > 2. ApiVersionQueryRequest seems a bit verbose, why not
> > > > > >> > > ApiVersionRequest?
> > > > > >> > >
> > > > > >> > > Ismael
> > > > > >> > >
> > > > > >> >
> > > > > >>
> > > > > >>
> > > > > >>
> > > > > >> --
> > > > > >>
> > > > > >> Regards,
> > > > > >> Ashish
> > > > > >>
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
