This is an automated email from the ASF dual-hosted git repository.

davidarthur pushed a commit to branch 3.3
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/3.3 by this push:
     new 6174f95d61c MINOR: update configuration.html with KRaft details (#12678)
6174f95d61c is described below

commit 6174f95d61c54fb228fc23e5111e2941f3947cad
Author: David Arthur <mum...@gmail.com>
AuthorDate: Mon Sep 26 10:16:12 2022 -0400

    MINOR: update configuration.html with KRaft details (#12678)
---
 core/src/main/scala/kafka/server/KafkaConfig.scala |  1 +
 docs/configuration.html                            | 38 ++++++++++++++++------
 2 files changed, 29 insertions(+), 10 deletions(-)

diff --git a/core/src/main/scala/kafka/server/KafkaConfig.scala b/core/src/main/scala/kafka/server/KafkaConfig.scala
index 8f4806185b9..497a904c2c5 100755
--- a/core/src/main/scala/kafka/server/KafkaConfig.scala
+++ b/core/src/main/scala/kafka/server/KafkaConfig.scala
@@ -712,6 +712,7 @@ object KafkaConfig {
   val BrokerHeartbeatIntervalMsDoc = "The length of time in milliseconds between broker heartbeats. Used when running in KRaft mode."
   val BrokerSessionTimeoutMsDoc = "The length of time in milliseconds that a broker lease lasts if no heartbeats are made. Used when running in KRaft mode."
   val NodeIdDoc = "The node ID associated with the roles this process is playing when `process.roles` is non-empty. " +
+    "Every node in a KRaft cluster must have a unique `node.id`; this includes broker and controller nodes. " +
     "This is required configuration when running in KRaft mode."
   val MetadataLogDirDoc = "This configuration determines where we put the metadata log for clusters in KRaft mode. " +
     "If it is not set, the metadata log is placed in the first log directory from log.dirs."
diff --git a/docs/configuration.html b/docs/configuration.html
index ceb671ca750..c2f342f2ee1 100644
--- a/docs/configuration.html
+++ b/docs/configuration.html
@@ -20,13 +20,22 @@
 
  <h3 class="anchor-heading"><a id="brokerconfigs" class="anchor-link"></a><a href="#brokerconfigs">3.1 Broker Configs</a></h3>
 
-  The essential configurations are the following:
+  For ZooKeeper clusters, brokers must have the following configuration:
   <ul>
-      <li><code>broker.id</code>
-      <li><code>log.dirs</code>
-      <li><code>zookeeper.connect</code>
+    <li><code>broker.id</code></li>
+    <li><code>log.dirs</code></li>
+    <li><code>zookeeper.connect</code></li>
   </ul>
 
+  For KRaft clusters, brokers and controllers must have the following configurations:
+  <ul>
+    <li><code>node.id</code></li>
+    <li><code>log.dirs</code></li>
+    <li><code>process.roles</code></li>
+  </ul>
+
+  On KRaft brokers, if <code>broker.id</code> is set, it must be equal to <code>node.id</code>.
+
  Topic-level configurations and defaults are discussed in more detail <a href="#topicconfigs">below</a>.
 
   <!--#include virtual="generated/kafka_config.html" -->
@@ -62,13 +71,16 @@
  All configs that are configurable at cluster level may also be configured at per-broker level (e.g. for testing).
  If a config value is defined at different levels, the following order of precedence is used:
   <ul>
-  <li>Dynamic per-broker config stored in ZooKeeper</li>
-  <li>Dynamic cluster-wide default config stored in ZooKeeper</li>
-  <li>Static broker config from <code>server.properties</code></li>
+  <li>Dynamic per-broker configs</li>
+  <li>Dynamic cluster-wide default configs</li>
+  <li>Static broker configs from <code>server.properties</code></li>
   <li>Kafka default, see <a href="#brokerconfigs">broker configs</a></li>
   </ul>
 
-  <h5>Updating Password Configs Dynamically</h5>
+  Dynamic configs are stored in Kafka as cluster metadata. In ZooKeeper mode, dynamic configs are stored in ZooKeeper.
+  In KRaft mode, dynamic configs are stored as records in the metadata log.
+
+  <h5>Updating Password Configs Dynamically (ZooKeeper-only)</h5>
  <p>Password config values that are dynamically updated are encrypted before storing in ZooKeeper. The broker config
  <code>password.encoder.secret</code> must be configured in <code>server.properties</code> to enable dynamic update
   of password configs. The secret may be different on different brokers.</p>
@@ -159,12 +171,18 @@
 
  From Kafka version 2.0.0 onwards, unclean leader election is automatically enabled by the controller when the config
   <code>unclean.leader.election.enable</code> is dynamically updated.
-  In Kafka version 1.1.x, changes to <code>unclean.leader.election.enable</code> take effect only when a new controller is elected.
-  Controller re-election may be forced by running:
+  In Kafka version 1.1.x, changes to <code>unclean.leader.election.enable</code> take effect only when a new controller
+  is elected.
+
+  In ZooKeeper mode, a Controller re-election may be forced by removing the Controller's ZNode. This is
+  done using the <code>zookeeper-shell.sh</code> utility included in the "bin" directory.
 
  <pre class="line-numbers"><code class="language-bash">&gt; bin/zookeeper-shell.sh localhost
   rmr /controller</code></pre>
 
+  In KRaft mode, the way to force a Controller re-election is to terminate the active controller node. Since KRaft
+  controllers do not host partitions, they are generally very quick to restart.
+
   <h5>Updating Log Cleaner Configs</h5>
  Log cleaner configs may be updated dynamically at cluster-default level used by all brokers. The changes take effect
  on the next iteration of log cleaning. One or more of these configs may be updated:
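
The KRaft requirements this commit documents (<code>node.id</code>, <code>log.dirs</code>, <code>process.roles</code>) can be sketched as a minimal server.properties. This is an illustrative fragment only; the ports, paths, and quorum-voter values below are assumptions, not part of the commit:

```properties
# Hypothetical minimal combined-mode KRaft node (illustrative values)
process.roles=broker,controller
# node.id must be unique across all brokers and controllers in the cluster
node.id=1
# metadata log is placed in the first of log.dirs unless metadata.log.dir is set
log.dirs=/tmp/kraft-combined-logs
controller.quorum.voters=1@localhost:9093
listeners=PLAINTEXT://localhost:9092,CONTROLLER://localhost:9093
controller.listener.names=CONTROLLER
```

A broker-only node would set process.roles=broker and point controller.quorum.voters at the separate controller nodes; per the diff, broker.id may still be set on a KRaft broker but must then match node.id.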
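
The dynamic-config precedence described above (per-broker over cluster-wide default over static server.properties) can be exercised with the kafka-configs.sh tool shipped in the "bin" directory. These commands require a running cluster; the broker address and values are assumptions for illustration:

```shell
# Set a dynamic cluster-wide default (applies to all brokers)
bin/kafka-configs.sh --bootstrap-server localhost:9092 \
  --entity-type brokers --entity-default \
  --alter --add-config log.cleaner.threads=2

# Override it on broker 0; the per-broker value takes precedence
bin/kafka-configs.sh --bootstrap-server localhost:9092 \
  --entity-type brokers --entity-name 0 \
  --alter --add-config log.cleaner.threads=4
```

In ZooKeeper mode these updates land in ZooKeeper; in KRaft mode they are written as records in the metadata log, as the updated docs note.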
