I am trying to use "ssl.keystore.certificate.chain" and "ssl.keystore.key" in my brokers' configuration so that I can use dynamic reconfiguration with short-lived TLS certificates. No luck so far.

I have been unable to find a complete example anywhere.

My current configuration is this (I am using KRaft and Kafka 3.6.0):

Hidden data marked with *****
"""
# The role of this server. Setting this puts us in KRaft mode
process.roles=broker

# The node id associated with this instance's roles
node.id=3

# The connect string for the controller quorum
controller.quorum.voters=*****

############################# Socket Server Settings #############################

# The address the socket server listens on. If not configured, the host name will be equal to the value of
# java.net.InetAddress.getCanonicalHostName(), with PLAINTEXT listener name, and port 9092.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
listeners=BROKER://:9092
advertised.listeners=BROKER://*****:9092

# Name of listener used for communication between brokers.
inter.broker.listener.name=broker

# A comma-separated list of the names of the listeners used by the controller.
# This is required if running in KRaft mode. On a node with `process.roles=broker`, only the first listed listener will be used by the broker.
controller.listener.names=CONTROLLER

# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
listener.security.protocol.map=CONTROLLER:SASL_SSL,BROKER:SASL_SSL,PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

sasl.mechanism.inter.broker.protocol=SCRAM-SHA-256
# KIP-631
sasl.mechanism.controller.protocol=SCRAM-SHA-256
sasl.enabled.mechanisms=SCRAM-SHA-256

ssl.truststore.type=PEM
ssl.truststore.location=/home/*****

password.encoder.secret=*****

ssl.keystore.type=PEM
ssl.keystore.certificate.chain=-----BEGIN CERTIFICATE----- \
***** \
-----END CERTIFICATE-----
ssl.keystore.key=-----BEGIN RSA PRIVATE KEY----- \
***** \
-----END RSA PRIVATE KEY-----


listener.name.controller.scram-sha-256.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="3" password="*****";

listener.name.broker.scram-sha-256.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="3" password="*****";
"""

Trying this configuration, I get an exception:

"""
org.apache.kafka.common.KafkaException: org.apache.kafka.common.errors.InvalidConfigurationException: Invalid PEM keystore configs
[...]
Caused by: org.apache.kafka.common.errors.InvalidConfigurationException: Invalid PEM keystore configs
Caused by: org.apache.kafka.common.errors.InvalidConfigurationException: Private key could not be loaded
Caused by: java.security.spec.InvalidKeySpecException: Could not create RSA private key
[...]
"""

I have tried quite a few variations, with no success.
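One thing I have not tried yet (so this is only a guess on my side): the key above is in PKCS#1 format ("BEGIN RSA PRIVATE KEY"). If Kafka's PEM support expects a PKCS#8 key, a conversion along these lines might be needed (file names are placeholders):

"""
# Untested: convert the PKCS#1 key to unencrypted PKCS#8 before pasting it
# into ssl.keystore.key (input/output file names are just placeholders).
openssl pkcs8 -topk8 -nocrypt -in broker-key-pkcs1.pem -out broker-key-pkcs8.pem
"""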

Using "ssl.keystore.location" and PKCS#12, it works, but can not be dynamically update (well, maybe changing the local p12 file and reloading, I have not tried it)

Anybody out there have a working configuration to share?

Moreover, I am not quite sure how dynamic reconfiguration would work in this case. I guess the local static configuration will be overridden by a remote configuration coming from the quorum metadata. But how is that metadata fetched if the "old" static certificate has expired? Maybe it is loaded from the local metadata copy that each broker keeps in "__cluster_metadata-0"? Or from somewhere else locally, private to that broker? And how is that data protected by "password.encoder.secret"? Remember that I am using KRaft.
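For what it is worth, I assume the dynamic overrides currently stored for a broker can at least be inspected with something like this (again, untested on my side; the host and admin.properties are placeholders):

"""
# Untested sketch: list the per-broker dynamic overrides stored in the cluster metadata.
bin/kafka-configs.sh --bootstrap-server *****:9092 \
  --command-config admin.properties \
  --entity-type brokers --entity-name 3 --describe
"""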

If the dynamic reconfiguration overrides the static configuration, how could a mistake be fixed if the broker cannot join the cluster because its certificate has expired? For example, suppose dynamic reconfiguration didn't work and the certificate that was deployed has expired. If that (expired) certificate overrides the static certificate in server.properties, how can the broker be recovered without destroying and recreating it?
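I imagine the normal way back would be to delete the dynamic override so that the static value from server.properties applies again, roughly as sketched below, but that seems to require the broker (or at least the cluster) to be reachable in the first place:

"""
# Untested sketch: drop the dynamic override so the static server.properties value applies again.
bin/kafka-configs.sh --bootstrap-server *****:9092 \
  --command-config admin.properties \
  --entity-type brokers --entity-name 3 --alter \
  --delete-config 'listener.name.broker.ssl.keystore.location'
"""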

I have read KIP-226 and KIP-651. I don't know how KRaft changes the scenarios documented there.

Thanks for your time and expertise.

PS: This is a brand new Kafka 3.6.0 cluster. Nothing legacy to worry about.

--
Jesús Cea Avión                         _/_/      _/_/_/        _/_/_/
j...@jcea.es - https://www.jcea.es/    _/_/    _/_/  _/_/    _/_/  _/_/
Twitter: @jcea                        _/_/    _/_/          _/_/_/_/_/
jabber / xmpp:j...@jabber.org  _/_/  _/_/    _/_/          _/_/  _/_/
"Things are not so easy"      _/_/  _/_/    _/_/  _/_/    _/_/  _/_/
"My name is Dump, Core Dump"   _/_/_/        _/_/_/      _/_/  _/_/
"El amor es poner tu felicidad en la felicidad de otro" - Leibniz
