Re: Configuring Kerberos behind an ELB

2018-06-22 Thread Tyler Monahan
Martin,

I think I tried that already. I set up a user in AD and assigned the shared
SPN record for the ELB to that user. I then added that user to the keytab
file on the Kafka servers and had the brokers use the SPN for the
user with the DNS record. That worked fine for authenticating through the ELB,
but it broke inter-broker communication. Since Kafka talks to the other nodes
directly and doesn't go through the ELB, the direct
connection fails because it doesn't have valid credentials, as mentioned. I don't
know a solution for the inter-broker communication unless there is some way I can
have it use different credentials for inter-broker traffic than it does for
incoming consumer/producer connections.
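For what it's worth, newer broker versions may be able to split exactly this: each listener can carry its own Kerberos JAAS settings, so inter-broker traffic could use the per-host principal while the client-facing listener presents the shared ELB principal. A rough server.properties sketch, with all hostnames, ports, keytab paths, and the realm invented for illustration (worth checking against the docs for your Kafka version):

```
# sketch only -- names and paths are hypothetical
listeners=CLIENT://0.0.0.0:9093,INTERNAL://0.0.0.0:9094
advertised.listeners=CLIENT://elb.example.com:9093,INTERNAL://broker1.example.com:9094
listener.security.protocol.map=CLIENT:SASL_PLAINTEXT,INTERNAL:SASL_PLAINTEXT
inter.broker.listener.name=INTERNAL
sasl.enabled.mechanisms=GSSAPI
sasl.mechanism.inter.broker.protocol=GSSAPI
sasl.kerberos.service.name=kafka

# shared ELB principal for the client-facing listener
listener.name.client.gssapi.sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \
    useKeyTab=true storeKey=true \
    keyTab="/etc/security/kafka-elb.keytab" \
    principal="kafka/elb.example.com@EXAMPLE.COM";

# per-host principal for the inter-broker listener
listener.name.internal.gssapi.sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \
    useKeyTab=true storeKey=true \
    keyTab="/etc/security/kafka-broker1.keytab" \
    principal="kafka/broker1.example.com@EXAMPLE.COM";
```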

Tyler Monahan

On Fri, Jun 22, 2018 at 10:27 AM, Martin Gainty  wrote:


Re: Configuring Kerberos behind an ELB

2018-06-22 Thread Martin Gainty
it appears you want:
common-principal name with common-key distributed to all subdomains

Use only one common Service Principal Name:


One of the solutions is to create a new Service Principal in the KDC for
the name HTTP/all.ipa@ipa.dom,
then generate a keytab and
distribute it (the keytab) to all servers.
The servers will use no other key, and they will identify
themselves with the common name,
so if a client tries to contact them using their individual name,
then authentication will fail,
as the KDC will not have a principal for the other names
and the services themselves are not configured to use their hostname, only the
common name.
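With MIT Kerberos tooling the create-and-distribute steps above might look roughly like this (principal name, realm, and paths are made up for illustration; in an Active Directory environment the equivalent would be done with setspn/ktpass):

```shell
# create one shared service principal for the load-balancer name
kadmin.local -q "addprinc -randkey kafka/elb.example.com@EXAMPLE.COM"

# export its key to a keytab
kadmin.local -q "ktadd -k /etc/security/kafka-elb.keytab kafka/elb.example.com@EXAMPLE.COM"

# copy the same keytab to every server behind the load balancer
scp /etc/security/kafka-elb.keytab broker1.example.com:/etc/security/
```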

assuming you generated common SPN in the KDC
assuming you generated keytab and distributed to all subdomains
does this not work for you?
M-






Re: Configuring Kerberos behind an ELB

2018-06-22 Thread Tyler Monahan
Ben,

Yes. I want to be able to provide consumers/producers with a single address
they can use to connect to the cluster. Having it behind an ELB lets us
scale up and replace nodes without needing to mess with consumer/producer
configurations. I have considered setting up individual DNS records for
each broker and feeding in a list of instances to connect to, but this is
not as flexible as using an ELB and does not match our general strategy for
infrastructure. If at all possible I would like to get Kafka working behind
an ELB with Kerberos.

Tyler Monahan

On Fri, Jun 22, 2018 at 9:44 AM, Ben Wood  wrote:


Re: Configuring Kerberos behind an ELB

2018-06-22 Thread Ben Wood
Hey Tyler,

What is your end goal? To have a single publicly / internally available
address to be able to provide to consumers / producers to connect to the
Kerberized Kafka?

On Fri, Jun 22, 2018 at 9:20 AM, Tyler Monahan  wrote:




-- 
Ben Wood
Software Engineer - Data Agility
Mesosphere


Re: Configuring Kerberos behind an ELB

2018-06-22 Thread Tyler Monahan
Martin,

I have read that Stack Overflow post, but it doesn't help with my specific
problem. An ELB works just fine if I am not using Kerberos. The issue
started happening when I added Kerberos auth to the cluster. The auth has
to happen before the metadata request, so it never gets to the point where
it is bypassing the load balancer. Because I am connecting with the load
balancer's DNS record, I don't have a valid SPN on the brokers for the load
balancer's DNS record. This blog post details the problem and has some
workarounds for Kerberos with a load balancer, but I haven't been able to get
any of them to work with Kafka, because in my setup it gets traffic through
an ELB but also talks to the other brokers directly.
https://ssimo.org/blog/id_019.html

Tyler Monahan

On Fri, Jun 22, 2018 at 5:36 AM, Martin Gainty  wrote:



Re: Configuring Kerberos behind an ELB

2018-06-22 Thread Martin Gainty
MG>quoting stackoverflow below

"You can use an ELB as the bootstrap.servers,
The ELB will be used for the initial metadata request the client makes to 
figure out which topic partitions are on which brokers,
but after (the initial metadata request)
the brokers still need to be directly accessible to the client.
that it'll use the hostname of the server (or advertised.listeners setting if 
you need to customize it,
which, e.g. might be necessary on EC2 instances to get the public IP of a 
server).
If you were trying to use an ELB to make a Kafka cluster publicly available,
you'd need to make sure the advertised.listeners for each broker also makes it 
publicly accessible. "
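As a sketch of that split (hostnames and ports are placeholders, not from the thread):

```
# client side: bootstrap via the load balancer
bootstrap.servers=elb.example.com:9093

# broker side, set per broker: advertise the broker's own resolvable name,
# since clients connect to it directly after the metadata request
listeners=SASL_PLAINTEXT://0.0.0.0:9093
advertised.listeners=SASL_PLAINTEXT://broker1.example.com:9093
```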

MG> for the initial metadata request you will see the elb
MG> after the metadata request, when topic partition locations are determined,
MG> the elb drops out and the client will talk directly to the broker
MG> use a healthcheck algorithm to determine the server/port assigned to each
broker from /brokers/ids/$id
MG> echo dump | nc localhost 2181 | grep brokers
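For context, the broker registration stored under that znode is a small JSON document; a minimal sketch of pulling the advertised endpoint out of it (the payload below is illustrative, and the exact fields vary by Kafka version):

```python
import json

# illustrative /brokers/ids/<id> payload (fields vary by Kafka version)
payload = ('{"host":"broker1.example.com","port":9093,'
           '"endpoints":["SASL_PLAINTEXT://broker1.example.com:9093"]}')

info = json.loads(payload)
# the endpoint a client will be told to connect to after the metadata request
print(info["endpoints"][0])
```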


https://stackoverflow.com/questions/38666795/does-kafka-support-elb-in-front-of-broker-cluster



does this help?

Martin
__





From: Tyler Monahan 
Sent: Thursday, June 21, 2018 6:17 PM
To: users@kafka.apache.org
Subject: Configuring Kerberos behind an ELB

Hello,

I have set up Kafka with Kerberos successfully; however, if I try to reach
Kafka through an ELB, the Kerberos authentication fails. The Kafka brokers
are each using their unique hostname for Kerberos, and when going through an
ELB the consumer/producer only sees the ELB's DNS record, which doesn't have
Kerberos set up for it, causing auth to fail. If I try to set up a service
principal name for that DNS record, I can only associate it with one of the
brokers behind the ELB, so the other ones fail.

I have tried setting up a service account and having the Kafka brokers use
that, which works when a consumer/producer is talking to the instances
through the ELB; however, inter-broker communication, which is also over
Kerberos, fails at that point because it goes directly to the other
nodes instead of through the ELB. I am not sure where to go from here, as
there doesn't appear to be a way to configure the inter-broker
communication to work differently than the incoming consumer communication,
short of getting rid of Kerberos.

Any advice would be greatly appreciated.

Tyler Monahan