Ben,

Yes. I want to be able to provide consumers/producers with a single address
they can use to connect to the cluster. Having it behind an ELB lets us
scale up and replace nodes without needing to mess with consumer/producer
configurations. I have considered setting up individual DNS records for
each broker and feeding in a list of instances to connect to, but this is
not as flexible as using an ELB and does not match our general strategy for
infrastructure. If at all possible, I would like to get Kafka working
behind an ELB with Kerberos.
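
For reference, Kafka does support splitting inter-broker and client traffic
onto separate listeners, which is one of the knobs a setup like this usually
leans on. The fragment below is only a hypothetical sketch (the listener
names, ports, and hostnames are placeholders, not a tested configuration) of
what that could look like in server.properties:

```properties
# Hypothetical sketch -- listener names, ports, and hostnames are
# placeholders. INTERNAL carries inter-broker traffic on each broker's
# real hostname (matching its per-host SPN); EXTERNAL advertises the
# ELB's DNS name to clients.
listeners=INTERNAL://0.0.0.0:9092,EXTERNAL://0.0.0.0:9094
advertised.listeners=INTERNAL://broker1.internal.example.com:9092,EXTERNAL://kafka-elb.example.com:9094
listener.security.protocol.map=INTERNAL:SASL_PLAINTEXT,EXTERNAL:SASL_PLAINTEXT
inter.broker.listener.name=INTERNAL
sasl.kerberos.service.name=kafka
```

On its own this doesn't solve the SPN problem: a client connecting to
kafka-elb.example.com still needs a principal for that name, which is the
issue discussed below.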

Tyler Monahan

On Fri, Jun 22, 2018 at 9:44 AM, Ben Wood <bw...@mesosphere.io> wrote:

> Hey Tyler,
>
> What is your end goal? To have a single publicly / internally available
> address to be able to provide to consumers / producers to connect to the
> Kerberized Kafka?
>
> On Fri, Jun 22, 2018 at 9:20 AM, Tyler Monahan <tjmonah...@gmail.com>
> wrote:
>
>> Martin,
>>
>> I have read that Stack Overflow post, but it doesn't help with my
>> specific problem. An ELB works just fine if I am not using Kerberos. The
>> issue started happening when I added Kerberos auth to the cluster. The
>> auth has to happen before the metadata request, so it never gets to the
>> point where it is bypassing the load balancer. Because I am connecting
>> with the load balancer's DNS record, I don't have a valid SPN on the
>> brokers for the load balancer's DNS record. This blog post details the
>> problem and has some workarounds for Kerberos with a load balancer, but
>> I haven't been able to get any of them to work with Kafka, because my
>> setup gets traffic through an ELB but also talks to the other brokers
>> directly.
>> https://ssimo.org/blog/id_019.html
>>
>> Tyler Monahan
>>
>> On Fri, Jun 22, 2018 at 5:36 AM, Martin Gainty <mgai...@hotmail.com>
>> wrote:
>>
>> > MG>quoting stackoverflow below
>> >
>> > "You can use an ELB as the bootstrap.servers. The ELB will be used
>> > for the initial metadata request the client makes to figure out which
>> > topic partitions are on which brokers, but after the initial metadata
>> > request the brokers still need to be directly accessible to the
>> > client. It'll use the hostname of the server (or the
>> > advertised.listeners setting if you need to customize it, which,
>> > e.g., might be necessary on EC2 instances to get the public IP of a
>> > server). If you were trying to use an ELB to make a Kafka cluster
>> > publicly available, you'd need to make sure the advertised.listeners
>> > for each broker also makes it publicly accessible."
>> >
>> > MG> for the initial metadata request you will see the ELB
>> > MG> after the metadata request, once the topic partition locations
>> > MG> are determined, the ELB drops out and the client talks directly
>> > MG> to the broker
>> > MG> use a healthcheck to determine the server/port assigned to each
>> > MG> broker from /brokers/ids/$id
>> > MG> echo dump | nc localhost 2181 | grep brokers
>> >
>> >
>> > https://stackoverflow.com/questions/38666795/does-kafka-support-elb-in-front-of-broker-cluster
>> >
>> > does this help?
>> >
>> > Martin
>> > ------------------------------
>> > From: Tyler Monahan <tjmonah...@gmail.com>
>> > Sent: Thursday, June 21, 2018 6:17 PM
>> > To: users@kafka.apache.org
>> > Subject: Configuring Kerberos behind an ELB
>> >
>> > Hello,
>> >
>> > I have set up Kafka with Kerberos successfully; however, if I try to
>> > reach Kafka through an ELB, the Kerberos authentication fails. The
>> > Kafka brokers each use their unique hostname for Kerberos, and when
>> > going through an ELB the consumer/producer only sees the ELB's DNS
>> > record, which doesn't have Kerberos set up for it, causing auth to
>> > fail. If I try to set up a service principal name (SPN) for that DNS
>> > record, I can only associate it with one of the brokers behind the
>> > ELB, so the other ones fail.
>> >
>> > I have tried setting up a service account and having the Kafka
>> > brokers use that, which works when a consumer/producer is talking to
>> > the instances through the ELB; however, inter-broker communication,
>> > which is also over Kerberos, fails at that point because it goes
>> > directly to the other nodes instead of through the ELB. I am not sure
>> > where to go from here, as there doesn't appear to be a way to
>> > configure the inter-broker communication to work differently than the
>> > incoming consumer communication, short of getting rid of Kerberos.
>> >
>> > Any advice would be greatly appreciated.
>> >
>> > Tyler Monahan
>> >
>>
>
>
>
> --
> Ben Wood
> Software Engineer - Data Agility
> Mesosphere
>
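
Martin's `echo dump | nc localhost 2181` tip works because each broker
registers itself in ZooKeeper under /brokers/ids/<id> as a small JSON blob.
As a minimal sketch, assuming a registration in roughly the shape Kafka
uses (the hostname and port below are made up), the advertised address can
be pulled out like this:

```python
import json

# Example broker registration JSON in the shape Kafka stores under
# /brokers/ids/<id> in ZooKeeper (the values here are made up).
registration = '''
{
  "host": "broker1.internal.example.com",
  "port": 9092,
  "endpoints": ["SASL_PLAINTEXT://broker1.internal.example.com:9092"]
}
'''

def broker_address(blob):
    """Return (host, port) from a broker registration JSON string."""
    data = json.loads(blob)
    return data["host"], data["port"]

host, port = broker_address(registration)
print(host, port)  # -> broker1.internal.example.com 9092
```

This is the address the client uses after the initial metadata request,
which is why the ELB drops out of the picture at that point.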
