This is an automated email from the ASF dual-hosted git repository.
mimaison pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/kafka.git
The following commit(s) were added to refs/heads/trunk by this push:
new 3710add2a7f KAFKA-18012: Update the Scram configuration section for KRaft (#17844)
3710add2a7f is described below
commit 3710add2a7f8048f001c0f6a255b0bb209d35830
Author: PoAn Yang <[email protected]>
AuthorDate: Wed Nov 27 18:37:24 2024 +0800
KAFKA-18012: Update the Scram configuration section for KRaft (#17844)
Reviewers: Mickael Maison <[email protected]>
---
docs/security.html | 40 ++++++++++++++++++++--------------------
1 file changed, 20 insertions(+), 20 deletions(-)
diff --git a/docs/security.html b/docs/security.html
index 069ac42c106..7b08458a864 100644
--- a/docs/security.html
+++ b/docs/security.html
@@ -814,27 +814,27 @@ sasl.mechanism=PLAIN</code></pre></li>
Kafka supports <a href="https://tools.ietf.org/html/rfc7677">SCRAM-SHA-256</a> and SCRAM-SHA-512 which
can be used with TLS to perform secure authentication. Under the default implementation of <code>principal.builder.class</code>, the username is used as the authenticated
<code>Principal</code> for configuration of ACLs etc. The default SCRAM implementation in Kafka
- stores SCRAM credentials in Zookeeper and is suitable for use in Kafka installations where Zookeeper
- is on a private network. Refer to <a href="#security_sasl_scram_security">Security Considerations</a>
+ stores SCRAM credentials in the metadata log. Refer to <a href="#security_sasl_scram_security">Security Considerations</a>
for more details.</p>
<ol>
<li><h5 class="anchor-heading"><a id="security_sasl_scram_credentials" class="anchor-link"></a><a href="#security_sasl_scram_credentials">Creating SCRAM Credentials</a></h5>
- <p>The SCRAM implementation in Kafka uses Zookeeper as credential store. Credentials can be created in
- Zookeeper using <code>kafka-configs.sh</code>. For each SCRAM mechanism enabled, credentials must be created
+ <p>The SCRAM implementation in Kafka uses the metadata log as the credential store. Credentials can be created in
+ the metadata log using <code>kafka-storage.sh</code> or <code>kafka-configs.sh</code>. For each SCRAM mechanism enabled, credentials must be created
by adding a config with the mechanism name. Credentials for inter-broker communication must be created
- before Kafka brokers are started. Client credentials may be created and updated dynamically and updated
- credentials will be used to authenticate new connections.</p>
- <p>Create SCRAM credentials for user <i>alice</i> with password <i>alice-secret</i>:
- <pre><code class="language-bash">$ bin/kafka-configs.sh --zookeeper localhost:2182 --zk-tls-config-file zk_tls_config.properties --alter --add-config 'SCRAM-SHA-256=[iterations=8192,password=alice-secret],SCRAM-SHA-512=[password=alice-secret]' --entity-type users --entity-name alice</code></pre>
- <p>The default iteration count of 4096 is used if iterations are not specified. A random salt is created
- and the SCRAM identity consisting of salt, iterations, StoredKey and ServerKey are stored in Zookeeper.
+ before Kafka brokers are started; <code>kafka-storage.sh</code> can format storage with these initial credentials.
+ Client credentials may be created and updated dynamically; updated credentials are used to authenticate new connections.
+ <code>kafka-configs.sh</code> can be used to create and update credentials after Kafka brokers are started.</p>
+ <p>Create initial SCRAM credentials for user <i>admin</i> with password <i>admin-secret</i>:
+ <pre><code class="language-bash">$ bin/kafka-storage.sh format -t $(bin/kafka-storage.sh random-uuid) -c config/kraft/server.properties --add-scram 'SCRAM-SHA-256=[name="admin",password="admin-secret"]'</code></pre>
+ <p>Create SCRAM credentials for user <i>alice</i> with password <i>alice-secret</i> (refer to <a href="#security_sasl_scram_clientconfig">Configuring Kafka Clients</a> for client configuration):
+ <pre><code class="language-bash">$ bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter --add-config 'SCRAM-SHA-256=[iterations=8192,password=alice-secret]' --entity-type users --entity-name alice --command-config client.properties</code></pre>
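The contents of the <code>client.properties</code> file passed via <i>--command-config</i> are not shown in this patch; a minimal sketch, assuming the <i>admin</i> credentials created above and a SASL_PLAINTEXT listener on the broker (adjust protocol and credentials to your deployment):

```properties
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
  username="admin" \
  password="admin-secret";
```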
+ <p>The default iteration count of 4096 is used if iterations are not specified, and a random salt is created if one is not provided.
+ The SCRAM identity, consisting of salt, iterations, StoredKey and ServerKey, is stored in the metadata log.
See <a href="https://tools.ietf.org/html/rfc5802">RFC 5802</a> for details on SCRAM identity and the individual fields.
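For reference, the StoredKey and ServerKey fields mentioned above are derived from the password as specified in RFC 5802; a minimal Python sketch of the SCRAM-SHA-256 derivation (the salt and password here are illustrative, Kafka generates a random salt):

```python
import hashlib
import hmac

def scram_sha256_identity(password: str, salt: bytes, iterations: int):
    """Derive the stored SCRAM-SHA-256 identity fields per RFC 5802."""
    # SaltedPassword := Hi(password, salt, i), i.e. PBKDF2 with HMAC-SHA-256
    salted = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)
    # ClientKey := HMAC(SaltedPassword, "Client Key"); StoredKey := H(ClientKey)
    client_key = hmac.new(salted, b"Client Key", hashlib.sha256).digest()
    stored_key = hashlib.sha256(client_key).digest()
    # ServerKey := HMAC(SaltedPassword, "Server Key")
    server_key = hmac.new(salted, b"Server Key", hashlib.sha256).digest()
    return stored_key, server_key

# The credential store never needs the plaintext password: salt, iteration
# count, StoredKey and ServerKey suffice to verify client proofs.
stored_key, server_key = scram_sha256_identity("alice-secret", b"example-salt", 4096)
```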
- <p>The following examples also require a user <i>admin</i> for inter-broker communication which can be created using:
- <pre><code class="language-bash">$ bin/kafka-configs.sh --zookeeper localhost:2182 --zk-tls-config-file zk_tls_config.properties --alter --add-config 'SCRAM-SHA-256=[password=admin-secret],SCRAM-SHA-512=[password=admin-secret]' --entity-type users --entity-name admin</code></pre>
<p>Existing credentials may be listed using the <i>--describe</i> option:
- <pre><code class="language-bash">$ bin/kafka-configs.sh --zookeeper localhost:2182 --zk-tls-config-file zk_tls_config.properties --describe --entity-type users --entity-name alice</code></pre>
+ <pre><code class="language-bash">$ bin/kafka-configs.sh --bootstrap-server localhost:9092 --describe --entity-type users --entity-name alice --command-config client.properties</code></pre>
<p>Credentials may be deleted for one or more SCRAM mechanisms using the <i>--alter --delete-config</i> option:
- <pre><code class="language-bash">$ bin/kafka-configs.sh --zookeeper localhost:2182 --zk-tls-config-file zk_tls_config.properties --alter --delete-config 'SCRAM-SHA-512' --entity-type users --entity-name alice</code></pre>
+ <pre><code class="language-bash">$ bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter --delete-config 'SCRAM-SHA-256' --entity-type users --entity-name alice --command-config client.properties</code></pre>
</li>
<li><h5 class="anchor-heading"><a id="security_sasl_scram_brokerconfig" class="anchor-link"></a><a href="#security_sasl_scram_brokerconfig">Configuring Kafka Brokers</a></h5>
<ol>
@@ -882,17 +882,17 @@ sasl.mechanism=SCRAM-SHA-256 (or SCRAM-SHA-512)</code></pre></li>
</li>
<li><h5><a id="security_sasl_scram_security" href="#security_sasl_scram_security">Security Considerations for SASL/SCRAM</a></h5>
<ul>
- <li>The default implementation of SASL/SCRAM in Kafka stores SCRAM credentials in Zookeeper. This
- is suitable for production use in installations where Zookeeper is secure and on a private network.</li>
+ <li>The default implementation of SASL/SCRAM in Kafka stores SCRAM credentials in the metadata log. This
+ is suitable for production use in installations where KRaft controllers are secure and on a private network.</li>
<li>Kafka supports only the strong hash functions SHA-256 and SHA-512 with a minimum iteration count
of 4096. Strong hash functions combined with strong passwords and high iteration counts protect
- against brute force attacks if Zookeeper security is compromised.</li>
+ against brute force attacks if KRaft controller security is compromised.</li>
<li>SCRAM should be used only with TLS-encryption to prevent interception of SCRAM exchanges. This
- protects against dictionary or brute force attacks and against impersonation if Zookeeper is compromised.</li>
+ protects against dictionary or brute force attacks and against impersonation if KRaft controller security is compromised.</li>
<li>From Kafka version 2.0 onwards, the default SASL/SCRAM credential store may be overridden using custom callback handlers
- by configuring <code>sasl.server.callback.handler.class</code> in installations where Zookeeper is not secure.</li>
+ by configuring <code>sasl.server.callback.handler.class</code> in installations where KRaft controllers are not secure.</li>
<li>For more details on security considerations, refer to
- <a href="https://tools.ietf.org/html/rfc5802#section-9">RFC 5802</a>.
+ <a href="https://tools.ietf.org/html/rfc5802#section-9">RFC 5802</a>.</li>
</ul>
</li>
</ol>