Tyler Monahan created KAFKA-7416:
------------------------------------
Summary: kerberos credentials not being refreshed
Key: KAFKA-7416
URL: https://issues.apache.org/jira/browse/KAFKA-7416
Project: Kafka
Issue Type: Bug
Components: security
Affects Versions: 1.1.0
Environment: Ubuntu 14, AWS
Reporter: Tyler Monahan
My setup uses Kerberos for auth between consumers/producers/brokers in AWS.
When an instance goes down in AWS, a new one spins up to replace it, reusing
the old Kerberos DNS name and Kafka id. I am running into an issue where the
consumers/producers/brokers cache the credentials for the old server and keep
using them to log in to the new server, which fails because the new server has
a different Kerberos key. I have not found a way to make Kafka clear out the
cached login credentials so it can log in to the new node.
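For reference, the Kerberos login entry in my JAAS config looks roughly like
the sketch below; the keytab path and realm are placeholders rather than my
exact values.
{code:java}
// Rough sketch of the broker's Kerberos JAAS entry in this setup.
// The keytab path and realm are placeholders, not the actual values in use.
KafkaServer {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/etc/security/keytabs/kafka.service.keytab"
    principal="kafka/int-kafka-a-1.int.skytouch.io@EXAMPLE.COM";
};
{code}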
I had hoped I could update the JAAS config to use credentials that are not
stored in the JVM, but it seems storeKey=true is required for this setup to
work, so I can't do that. My other hope was to modify /etc/krb5.conf to set a
low ticket lifetime, but Kafka does not seem to honor that. If there were some
way to configure Java to expire the stored credentials periodically, that
might work.
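The kind of /etc/krb5.conf change I mean is roughly the following; the realm
and lifetime values are placeholders, not my exact settings.
{code}
# Rough sketch of the [libdefaults] ticket-lifetime settings I tried;
# the realm and lifetimes here are placeholder values.
[libdefaults]
    default_realm = EXAMPLE.COM
    ticket_lifetime = 1h
    renew_lifetime = 4h
{code}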
This is the error I initially get from the Kafka controller when a node dies,
a new one comes up, and the controller tries to connect to it. Restarting the
Kafka brokers makes the error go away.
{code:java}
[RequestSendThread controllerId=3] Controller 3's connection to broker int-kafka-a-1.int.skytouch.io:9092 (id: 1 rack: null) was unsuccessful (kafka.controller.RequestSendThread)
org.apache.kafka.common.errors.SaslAuthenticationException: Authentication failed due to invalid credentials with SASL mechanism GSSAPI
{code}