Hi Manikumar,

I've looked over the KIP and had a quick look at the code in the PR as
well. In principle I think this could help Peter along, depending on how
pluggable some of the components are. Since Peter wants to generate tokens
not in Kafka but in an external system, the entire token-generation part
in Kafka would simply not be used, which I think would be fine. To
validate externally generated tokens there would need to be an option to
substitute, for example, the TokenCache with a custom implementation
and/or to replace the method of authenticating a delegation token with a
custom class.
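
To make that a bit more concrete, something along these lines is what I
have in mind - purely a hypothetical sketch on my side, nothing like this
exists in the KIP or the PR today:

import java.util.Map;

import org.apache.kafka.common.errors.AuthenticationException;
import org.apache.kafka.common.security.auth.KafkaPrincipal;

/**
 * Hypothetical plug-in point: validate a delegation token that was issued
 * by an external system instead of looking it up in Kafka's own TokenCache.
 */
public interface ExternalTokenValidator {

    /** Called once with the broker configuration. */
    void configure(Map<String, ?> configs);

    /**
     * Check the token's signature and expiry and return the principal it
     * represents, or throw if the token is not valid.
     */
    KafkaPrincipal validate(String tokenId, byte[] tokenHmac)
            throws AuthenticationException;
}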

Apologies for asking questions I could look up in the code myself, but at
first glance I haven't seen any indication of this token system being
extensible. Do you plan to allow extending it to different external token
providers? OAuth comes to mind as a fairly widespread candidate that could
probably be implemented fairly easily.

Kind regards,
Sönke

On Fri, Oct 27, 2017 at 11:17 AM, Manikumar <manikumar.re...@gmail.com>
wrote:

> Hi,
>
> We have an accepted KIP for adding delegation token support for Kafka:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-48+Delegation+token+support+for+Kafka
>
> Currently the PR is under review. Maybe this can be used as a starting
> point for your requirement.
>
> https://github.com/apache/kafka/pull/3616
>
>
>
> On Fri, Oct 27, 2017 at 2:34 PM, Sönke Liebau <
> soenke.lie...@opencore.com.invalid> wrote:
>
> > Hi Peter,
> >
> > thanks for the explanation, it all makes sense now :)
> >
> > I can't say that I immediately see an easy way forward though, to be
> > honest. The big issue, I think, is getting the token to Kafka (and
> > hopefully there is an easy way that I simply don't know of and someone
> > will correct me) - implementing a custom principal builder and
> > authorizer should be almost trivial.
> >
> > If transmitting the token as part of the SSL certificate or a Kerberos
> > ticket is out, though, the air gets a bit thin if you don't want to
> > maintain your own fork of Kafka. The only potential solution that I can
> > come up with is to piggyback on SASL and provide your own LoginModule
> > in Kafka's JAAS file. If you use the SASL_SSL endpoint, certificate
> > checking should still have occurred before the SASL handshake is
> > initialized, so you have authenticated the user at that point. You
> > could then use that handshake to transmit your token, have your custom
> > principal builder extract the topics from it, and have your custom
> > authorizer authorize based on the extracted topic names.
> > A word of caution though: this is based on a few minutes of looking at
> > the code and my dangerous half-knowledge of SASL, so there are any
> > number of things that could make this impossible, either with SASL or
> > in the Kafka codebase itself. It might be a direction to explore though.
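> >
> > To illustrate what I mean (a very rough sketch, assuming the new
> > KafkaPrincipalBuilder interface from KIP-189 and glossing over how
> > exactly the token travels inside the SASL exchange):
> >
> > import javax.security.sasl.SaslServer;
> >
> > import org.apache.kafka.common.security.auth.AuthenticationContext;
> > import org.apache.kafka.common.security.auth.KafkaPrincipal;
> > import org.apache.kafka.common.security.auth.KafkaPrincipalBuilder;
> > import org.apache.kafka.common.security.auth.SaslAuthenticationContext;
> >
> > // Very rough sketch: assume the login module has put the (already
> > // signature-checked) token into the SASL authorization id, e.g.
> > // "alice|read:topic1,read:topic2".
> > public class TokenPrincipalBuilder implements KafkaPrincipalBuilder {
> >
> >     @Override
> >     public KafkaPrincipal build(AuthenticationContext context) {
> >         if (context instanceof SaslAuthenticationContext) {
> >             SaslServer server = ((SaslAuthenticationContext) context).server();
> >             // keep the full "user|claims" string as the principal name so
> >             // that a custom authorizer can get at the claims later and
> >             // authorize against them instead of against ACLs
> >             return new KafkaPrincipal(KafkaPrincipal.USER_TYPE,
> >                     server.getAuthorizationID());
> >         }
> >         throw new IllegalStateException("Expected a SASL_SSL connection");
> >     }
> > }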
> >
> > Hopefully that makes sense and is targeted at least in the vicinity of
> > what you are looking for?
> >
> > Kind regards,
> > Sönke
> >
> > On Fri, Oct 27, 2017 at 9:33 AM, Postmann, P. (Peter) <
> > peter.postm...@ing.com.invalid> wrote:
> >
> > > Hi Sönke,
> > >
> > > Thanks for your feedback, sorry that I didn't give you the whole
> > > picture in the first place:
> > >
> > > We are using an architecture which tries to avoid fetching or pulling
> > > anything from a 3rd party at runtime. Therefore we use self-contained
> > > tokens and client-side load balancing in a microservice-like
> > > architecture.
> > >
> > > In this architecture we have two tokens:
> > > - the manifest, which enables services to provide APIs
> > > - the peer token, which enables services to call APIs
> > >
> > > API providers publish their APIs in a portal. API consumers subscribe
> > > to those APIs. The portal generates a manifest for the provider and a
> > > peer token for the consumer. Both tokens contain a list of endpoints
> > > and are signed by the portal. The tokens are valid for a certain
> > > amount of time.
> > >
> > > Furthermore we use a Service Registry to discover those services. The
> > > flow works as follows:
> > >
> > > - A service instance registers itself (endpoint --> IP:Port) at the
> > > Service Discovery (SD) using the manifest.
> > > - A client queries SD for instances of a specific endpoint and
> > > receives a list of IP:Port combinations.
> > > - The client connects to the service and provides its peer token.
> > >
> > > The client and the service use mutual TLS for authentication and the
> > > peer token for authorization. The token is signed so its integrity
> > > can be checked, and it is linked to the CN of the client certificate
> > > to check its validity (and prevent forwarding of the token).
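> > >
> > > In (very simplified) code, the check a service performs looks roughly
> > > like this - field names and the signature algorithm are only
> > > illustrative, not our actual format:
> > >
> > > import java.security.PublicKey;
> > > import java.security.Signature;
> > > import java.time.Instant;
> > >
> > > // Simplified illustration of the peer token check on the service side.
> > > public class PeerTokenCheck {
> > >     public static boolean isValid(byte[] payload, byte[] signature,
> > >                                   Instant expiresAt, String boundCn,
> > >                                   String clientCertCn, PublicKey portalKey)
> > >             throws Exception {
> > >         Signature sig = Signature.getInstance("SHA256withRSA");
> > >         sig.initVerify(portalKey);
> > >         sig.update(payload);
> > >         return sig.verify(signature)                  // signed by the portal?
> > >                 && Instant.now().isBefore(expiresAt)  // not expired?
> > >                 && boundCn.equals(clientCertCn);      // bound to this client?
> > >     }
> > > }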
> > >
> > > The benefit is that we do not have any critical runtime dependencies:
> > > SD results can be cached and the tokens are valid for a certain
> > > amount of time. A client can perform client-side load balancing and
> > > call a service even if the SD or the portal is currently unavailable.
> > > Furthermore we avoid bottlenecks like load balancers.
> > >
> > > If you have 20 minutes, our Principal Architect for API Design gave a
> > > talk about this: https://www.youtube.com/watch?v=Yke6Vut2Shc
> > >
> > > We want to use the same mechanism with Kafka:
> > > - Connect via Mutual TLS
> > > - Provide a peer token which contains method:topics
> > >
> > > I understood that it should be possible to get the details from the
> > > certificate, but we also need the token. The combination of
> > > certificate + token is needed to prevent impersonation of APIs (one
> > > could easily forward a token if it wasn't bound to a certificate).
> > >
> > > I agree with the revocation part, but we are only looking at internal
> > > clients. The process would be to revoke access in the portal, which
> > > prevents new instances from connecting to the service. If we really
> > > encounter malicious behaviour, the preferred way is to shut down the
> > > misbehaving client.
> > >
> > > Kind Regards,
> > > Peter
> > >
> > >
> > > -----Original Message-----
> > > From: Sönke Liebau [mailto:soenke.lie...@opencore.com.INVALID]
> > > Sent: Thursday, October 26, 2017 14:59
> > > To: dev@kafka.apache.org
> > > Subject: Re: Use self contained tokens instead of ACL
> > >
> > > Hi Peter,
> > >
> > > I am not entirely sure that I understand what you mean when you say
> > > "at application level" to be honest, but I do understand that you
> > > want to forego 3rd party tools. However, this would mean that you
> > > have to implement some form of trust between your portal, which
> > > issues the tokens, and Kafka, which checks the tokens. I am not sure
> > > that I'd recommend rolling your own solution here; authentication has
> > > a lot of pitfalls that can turn around to bite you.
> > > The same thing I proposed with Kerberos could of course be done via
> > > certificates. If your user requests access to a topic in the portal,
> > > he uploads a certificate signing request with the topic name encoded
> > > in some field (lots of extensions to pick from). The portal signs the
> > > request and returns it to the user. When the user now connects to
> > > Kafka he does so using this certificate, and it should be fairly easy
> > > for you to extend the KafkaPrincipalBuilder class and extract the
> > > list of these topics from the certificate. Then you'd also need to
> > > extend SimpleAclAuthorizer to check for these topics in the principal
> > > name and allow access if the topic is present in the certificate.
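> > >
> > > Glossing over how the topics would actually be encoded, a rough sketch
> > > of the principal builder side could look like this (the extension OID
> > > is made up, and decoding the DER value into topic names is left out):
> > >
> > > import java.security.cert.X509Certificate;
> > >
> > > import org.apache.kafka.common.security.auth.AuthenticationContext;
> > > import org.apache.kafka.common.security.auth.KafkaPrincipal;
> > > import org.apache.kafka.common.security.auth.KafkaPrincipalBuilder;
> > > import org.apache.kafka.common.security.auth.SslAuthenticationContext;
> > >
> > > public class CertTopicsPrincipalBuilder implements KafkaPrincipalBuilder {
> > >
> > >     // made-up private OID under which the portal would store the topics
> > >     private static final String TOPICS_OID = "1.3.6.1.4.1.99999.1";
> > >
> > >     @Override
> > >     public KafkaPrincipal build(AuthenticationContext context) {
> > >         try {
> > >             SslAuthenticationContext ssl = (SslAuthenticationContext) context;
> > >             X509Certificate cert =
> > >                     (X509Certificate) ssl.session().getPeerCertificates()[0];
> > >             // raw DER value of the extension; a custom authorizer would
> > >             // check the requested topic against the decoded list instead
> > >             // of looking up ACLs
> > >             byte[] topicsExtension = cert.getExtensionValue(TOPICS_OID);
> > >             return new KafkaPrincipal(KafkaPrincipal.USER_TYPE,
> > >                     cert.getSubjectX500Principal().getName());
> > >         } catch (Exception e) {
> > >             throw new RuntimeException("Could not read client certificate", e);
> > >         }
> > >     }
> > > }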
> > >
> > > I am unsure of the benefits of this system over, for example, having
> > > the portal create ACLs in Kafka when the user is granted access to a
> > > topic. One scenario that comes to mind is dynamically spinning up new
> > > clusters: if these new clusters all trust the same CA they would
> > > accept these tokens right away, with no need to create ACLs in the
> > > fresh clusters. But this could also be solved by having a central
> > > repository of ACLs that are applied to all clusters regularly.
> > > A drawback of this system is that you'd need to revoke certificates
> > > if you want to withdraw access to a topic, which is significantly
> > > harder than just deleting an ACL.
> > >
> > > Anyway, not sure if this helps you at all, just some random musings -
> > > if you explain your specific needs a bit more we can discuss further.
> > >
> > > Kind regards,
> > > Sönke
> > >
> > > On Wed, Oct 25, 2017 at 1:10 PM, Postmann, P. (Peter) <
> > > peter.postm...@ing.com.invalid> wrote:
> > >
> > > > Hi Sönke,
> > > >
> > > > Thanks for the fast reply. We don't want to use Kerberos since we
> > > > want to do the authorization at application level and without
> > > > involving a 3rd party at runtime.
> > > >
> > > > -----Original Message-----
> > > > From: Sönke Liebau [mailto:soenke.lie...@opencore.com.INVALID]
> > > > Sent: Wednesday, October 25, 2017 12:37
> > > > To: dev@kafka.apache.org
> > > > Subject: Re: Use self contained tokens instead of ACL
> > > >
> > > > The concept you describe sounds similar to what Microsoft calls
> > > > "claims based authorization".
> > > >
> > > > At a high level I should think that using Kerberos as a vehicle to
> > > > transport the information would be the way to go, as it is
> > > > established and already supported by Kafka. I believe tickets have
> > > > a field that can be used for authorization information, so if
> > > > information about the topics that a user has access to were encoded
> > > > in this field you could probably extend Kafka to extract that
> > > > information and use it instead of ACLs.
> > > >
> > > > I am not well versed in what exactly Microsoft does and how you can
> > > > control the granting side of things, but I do believe that AD server
> > > > has support for something along those lines already.
> > > >
> > > > The upside of this would be that you don't have to implement anything
> > > > around security, trust, encryption, etc. because everything is
> > > > provided by Kerberos.
> > > >
> > > > Not much information in here I am afraid, but maybe a useful
> > > > direction for future research.
> > > >
> > > > Kind regards,
> > > > Sönke
> > > >
> > > > On Wed, Oct 25, 2017 at 11:55 AM, Postmann, P. (Peter) <
> > > > peter.postm...@ing.com.invalid> wrote:
> > > >
> > > > > Hi everyone,
> > > > >
> > > > > I'm working on a concept to use Kafka with self-contained tokens
> > > > > (instead of ACL).
> > > > >
> > > > > The idea:
> > > > >
> > > > > - A client requests access to a certain topic (in some kind of
> > > > > portal)
> > > > >
> > > > > - The owner of the topic approves the request (in some kind of
> > > > > portal)
> > > > >
> > > > > - The client receives a signed token which contains the topic (in
> > > > > some kind of portal)
> > > > >
> > > > > - The client sends the token when it connects to Kafka
> > > > >
> > > > > - Kafka validates the token and grants access
> > > > >
> > > > > Token format:
> > > > >
> > > > > - List of topics and methods
> > > > >   o E.g. read /topic1
> > > > >
> > > > > - Expiry date
> > > > >
> > > > > - Signature
> > > > >
> > > > > Implementation idea:
> > > > >
> > > > > - Create a custom authorization class which checks the signature
> > > > > (rough sketch of the check below)
> > > > >
> > > > > - Implement the possibility to send arbitrary data (key->value)
> > > > > along with the request when the client connects to the cluster
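> > > > >
> > > > > The sketch of the authorization decision itself, once the token
> > > > > has been validated and parsed into method -> topics claims (names
> > > > > are just illustrative, this is not Kafka's real Authorizer
> > > > > interface):
> > > > >
> > > > > import java.util.Map;
> > > > > import java.util.Set;
> > > > >
> > > > > // e.g. claims = { "read" -> {"topic1"} }
> > > > > public class TokenClaimsCheck {
> > > > >     public static boolean allowed(Map<String, Set<String>> claims,
> > > > >                                   String operation, String topic) {
> > > > >         Set<String> topics = claims.get(operation.toLowerCase());
> > > > >         return topics != null && topics.contains(topic);
> > > > >     }
> > > > > }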
> > > > >
> > > > > I'm looking forward to feedback on this approach and would be
> > > > > happy if you could give me a pointer on where to start with the
> > > > > implementation (or if there already is a way to send arbitrary
> > > > > data to a custom Authorizer).
> > > > >
> > > > > Kind Regards,
> > > > > Peter
> > > > >
> > > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > >
> > >
> > >
> > >
> > >
> >
> >
> >
> >
>



-- 
Sönke Liebau
Partner
Tel. +49 179 7940878
OpenCore GmbH & Co. KG - Thomas-Mann-Straße 8 - 22880 Wedel - Germany
