This is an automated email from the ASF dual-hosted git repository.
frankvicky pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/kafka-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new e3712c8b0 Revert some changes that were unexpectedly overwritten (#695)
e3712c8b0 is described below
commit e3712c8b0c1f4ddf9657e372bead770c9db1fe8f
Author: TengYao Chi <[email protected]>
AuthorDate: Wed Jun 18 07:35:26 2025 +0800
Revert some changes that were unexpectedly overwritten (#695)
---
39/generated/kafka_config.html | 6 +++---
39/ops.html | 40 ++++++++++++++++++++++++----------------
2 files changed, 27 insertions(+), 19 deletions(-)
diff --git a/39/generated/kafka_config.html b/39/generated/kafka_config.html
index 3c9ecc7c7..b44d6aed8 100644
--- a/39/generated/kafka_config.html
+++ b/39/generated/kafka_config.html
@@ -67,7 +67,7 @@
</li>
<li>
<h4><a id="control.plane.listener.name"></a><a
id="brokerconfigs_control.plane.listener.name"
href="#brokerconfigs_control.plane.listener.name">control.plane.listener.name</a></h4>
-<p>Name of listener used for communication between controller and brokers. A
broker will use the <code>control.plane.listener.name</code> to locate the
endpoint in listeners list, to listen for connections from the controller. For
example, if a broker's config is:<br><code>listeners =
INTERNAL://192.1.1.8:9092, EXTERNAL://10.1.1.5:9093,
CONTROLLER://192.1.1.8:9094listener.security.protocol.map = INTERNAL:PLAINTEXT,
EXTERNAL:SSL, CONTROLLER:SSLcontrol.plane.listener.name = CONTROLLER</cod [...]
+<p>Name of listener used for communication between controller and brokers. A
broker will use the <code>control.plane.listener.name</code> to locate the
endpoint in listeners list, to listen for connections from the controller. For
example, if a broker's config is:<br><code>listeners =
INTERNAL://192.1.1.8:9092, EXTERNAL://10.1.1.5:9093,
CONTROLLER://192.1.1.8:9094 listener.security.protocol.map =
INTERNAL:PLAINTEXT, EXTERNAL:SSL, CONTROLLER:SSL control.plane.listener.name =
CONTROLLER</c [...]
<table><tbody>
<tr><th>Type:</th><td>string</td></tr>
<tr><th>Default:</th><td>null</td></tr>
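For readability, the corrected inline example in the hunk above (the + line restores the whitespace that had been lost between the properties) corresponds to this broker configuration, restated with one property per line; the values are taken directly from the documentation text:

    listeners = INTERNAL://192.1.1.8:9092, EXTERNAL://10.1.1.5:9093, CONTROLLER://192.1.1.8:9094
    listener.security.protocol.map = INTERNAL:PLAINTEXT, EXTERNAL:SSL, CONTROLLER:SSL
    control.plane.listener.name = CONTROLLER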
@@ -1321,11 +1321,11 @@
</li>
<li>
<h4><a id="group.coordinator.rebalance.protocols"></a><a
id="brokerconfigs_group.coordinator.rebalance.protocols"
href="#brokerconfigs_group.coordinator.rebalance.protocols">group.coordinator.rebalance.protocols</a></h4>
-<p>The list of enabled rebalance protocols. Supported protocols:
consumer,classic,share,unknown. The consumer rebalance protocol is in early
access and therefore must not be used in production.</p>
+<p>The list of enabled rebalance protocols. The consumer and share rebalance
protocols are in early access and therefore must not be used in production.</p>
<table><tbody>
<tr><th>Type:</th><td>list</td></tr>
<tr><th>Default:</th><td>classic</td></tr>
-<tr><th>Valid Values:</th><td>[consumer, classic, share, unknown]</td></tr>
+<tr><th>Valid Values:</th><td>[consumer, classic, share]</td></tr>
<tr><th>Importance:</th><td>medium</td></tr>
<tr><th>Update Mode:</th><td>read-only</td></tr>
</tbody></table>
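The corrected entry above also tightens the valid values to [consumer, classic, share]. As a minimal sketch (not part of this commit) of how the setting might appear in a broker's server.properties when opting into the early-access consumer protocol alongside the default:

    # server.properties -- sketch; group.coordinator.rebalance.protocols is
    # read-only, so changing it requires a broker restart
    group.coordinator.rebalance.protocols=classic,consumer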
diff --git a/39/ops.html b/39/ops.html
index 94dc19b4e..646d1c3ca 100644
--- a/39/ops.html
+++ b/39/ops.html
@@ -3802,22 +3802,22 @@ controller.listener.names=CONTROLLER</code></pre>
<h5 class="anchor-heading"><a id="kraft_storage_standalone"
class="anchor-link"></a><a href="#kraft_storage_standalone">Bootstrap a
Standalone Controller</a></h5>
The recommended method for creating a new KRaft controller cluster is to
bootstrap it with one voter and dynamically <a href="#kraft_reconfig_add">add
the rest of the controllers</a>. Bootstrapping the first controller can be done
with the following CLI command:
- <pre><code class="language-bash">$ bin/kafka-storage format --cluster-id
<cluster-id> --standalone --config controller.properties</code></pre>
+ <pre><code class="language-bash">$ bin/kafka-storage.sh format --cluster-id
<cluster-id> --standalone --config
./config/kraft/controller.properties</code></pre>
This command will 1) create a meta.properties file in metadata.log.dir with
a randomly generated directory.id, 2) create a snapshot at
00000000000000000000-0000000000.checkpoint with the necessary control records
(KRaftVersionRecord and VotersRecord) to make this Kafka node the only voter
for the quorum.
<h5 class="anchor-heading"><a id="kraft_storage_voters"
class="anchor-link"></a><a href="#kraft_storage_voters">Bootstrap with Multiple
Controllers</a></h5>
The KRaft cluster metadata partition can also be bootstrapped with more than
one voter. This can be done by using the --initial-controllers flag:
- <pre><code class="language-bash">cluster-id=$(kafka-storage random-uuid)
-controller-0-uuid=$(kafka-storage random-uuid)
-controller-1-uuid=$(kafka-storage random-uuid)
-controller-2-uuid=$(kafka-storage random-uuid)
+ <pre><code class="language-bash">cluster-id=$(bin/kafka-storage.sh
random-uuid)
+controller-0-uuid=$(bin/kafka-storage.sh random-uuid)
+controller-1-uuid=$(bin/kafka-storage.sh random-uuid)
+controller-2-uuid=$(bin/kafka-storage.sh random-uuid)
# In each controller execute
-kafka-storage format --cluster-id ${cluster-id} \
+bin/kafka-storage.sh format --cluster-id ${cluster-id} \
--initial-controllers
"0@controller-0:1234:${controller-0-uuid},1@controller-1:1234:${controller-1-uuid},2@controller-2:1234:${controller-2-uuid}"
\
- --config controller.properties</code></pre>
+ --config config/kraft/controller.properties</code></pre>
This command is similar to the standalone version but the snapshot at
00000000000000000000-0000000000.checkpoint will instead contain a VotersRecord
that includes information for all of the controllers specified in
--initial-controllers. It is important that the value of this flag is the same
in all of the controllers with the same cluster id.
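As a quick sanity check after bootstrapping with either --standalone or --initial-controllers, the voter set can be inspected with the kafka-metadata-quorum tool that later hunks in this diff also touch; the localhost:9093 controller endpoint below is only an assumption:

    # Sketch: confirm the formatted controllers form the expected voter set
    $ bin/kafka-metadata-quorum.sh --bootstrap-controller localhost:9093 describe --status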
@@ -3826,7 +3826,7 @@ In the replica description
0@controller-0:1234:3Db5QLSqSZieL3rJBUUegA, 0 is the
<h5 class="anchor-heading"><a id="kraft_storage_observers"
class="anchor-link"></a><a href="#kraft_storage_observers">Formatting Brokers
and New Controllers</a></h5>
When provisioning new broker and controller nodes that we want to add to an
existing Kafka cluster, use the <code>kafka-storage.sh format</code> command
with the --no-initial-controllers flag.
- <pre><code class="language-bash">$ bin/kafka-storage.sh format --cluster-id
<cluster-id> --config server.properties
--no-initial-controllers</code></pre>
+ <pre><code class="language-bash">$ bin/kafka-storage.sh format --cluster-id
<cluster-id> --config config/kraft/server.properties
--no-initial-controllers</code></pre>
<h4 class="anchor-heading"><a id="kraft_reconfig" class="anchor-link"></a><a
href="#kraft_reconfig">Controller membership changes</a></h4>
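Tying off the subsection above: once a broker has been formatted with --no-initial-controllers, it is started in the usual way. A minimal sketch, reusing the config path from the format example and the kafka-server-start.sh script that appears later in this diff:

    # Sketch: start the broker that was just formatted
    $ bin/kafka-server-start.sh config/kraft/server.properties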
@@ -3879,7 +3879,7 @@ Feature: metadata.version SupportedMinVersion:
3.0-IV1 SupportedMaxVers
this flag when formatting brokers -- only when formatting controllers.)<p>
<pre><code class="language-bash">
- $ bin/kafka-storage.sh format -t KAFKA_CLUSTER_ID --feature kraft.version=1
-c controller_static.properties
+ $ bin/kafka-storage.sh format -t KAFKA_CLUSTER_ID --feature kraft.version=1
-c config/kraft/controller.properties
Cannot set kraft.version to 1 unless KIP-853 configuration is present. Try
removing the --feature flag for kraft.version.
</code></pre><p>
@@ -3892,10 +3892,10 @@ Feature: metadata.version SupportedMinVersion:
3.0-IV1 SupportedMaxVers
After starting the controller, the replication to the new controller can be
monitored using the <code>kafka-metadata-quorum describe --replication</code>
command. Once the new controller has caught up to the active controller, it can
be added to the cluster using the <code>kafka-metadata-quorum
add-controller</code> command.
When using broker endpoints use the --bootstrap-server flag:
- <pre><code class="language-bash">$ bin/kafka-metadata-quorum
--command-config controller.properties --bootstrap-server localhost:9092
add-controller</code></pre>
+ <pre><code class="language-bash">$ bin/kafka-metadata-quorum.sh
--command-config config/kraft/controller.properties --bootstrap-server
localhost:9092 add-controller</code></pre>
When using controller endpoints use the --bootstrap-controller flag:
- <pre><code class="language-bash">$ bin/kafka-metadata-quorum
--command-config controller.properties --bootstrap-controller localhost:9092
add-controller</code></pre>
+ <pre><code class="language-bash">$ bin/kafka-metadata-quorum.sh
--command-config config/kraft/controller.properties --bootstrap-controller
localhost:9092 add-controller</code></pre>
<h5 class="anchor-heading"><a id="kraft_reconfig_remove"
class="anchor-link"></a><a href="#kraft_reconfig_remove">Remove
Controller</a></h5>
If the dynamic controller cluster already exists, it can be shrunk using the
<code>bin/kafka-metadata-quorum.sh remove-controller</code> command. Until
KIP-996: Pre-vote has been implemented and released, it is recommended to
shutdown the controller that will be removed before running the
remove-controller command.
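For completeness alongside the add-controller examples above, a sketch of the shrink path described in that last paragraph; the controller id and directory id are placeholders (the directory id reuses the sample value from the earlier hunk), and the exact option names follow the KIP-853 tooling and should be treated as an assumption:

    # Sketch: shut down the departing controller first, then remove it from the quorum
    $ bin/kafka-metadata-quorum.sh --bootstrap-server localhost:9092 remove-controller \
        --controller-id 3 --controller-directory-id 3Db5QLSqSZieL3rJBUUegA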
@@ -4076,7 +4076,12 @@ inter.broker.listener.name=PLAINTEXT
# Other configs ...</code></pre>
- <p>The new standalone controller in the example configuration above should
be formatted using the <code>kafka-storage format
--standalone</code>command.</p>
+ <p>
+ Follow these steps to format and start a new standalone controller:
+ </p>
+ <pre><code class="language-bash"># Save the previously retrieved cluster ID
from ZooKeeper in a variable called zk-cluster-id
+$ bin/kafka-storage.sh format --standalone -t <zk-cluster-id> -c
config/kraft/controller.properties
+$ bin/kafka-server-start.sh config/kraft/controller.properties</code></pre>
<p><em>Note: The KRaft cluster <code>node.id</code> values must be different
from any existing ZK broker <code>broker.id</code>.
In KRaft-mode, the brokers and controllers share the same Node ID
namespace.</em></p>
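The new snippet above assumes the cluster ID has already been read out of ZooKeeper. One way that lookup is commonly done, as a sketch (the localhost:2181 endpoint is an assumption):

    # Sketch: read the existing cluster ID from ZooKeeper before formatting
    $ bin/zookeeper-shell.sh localhost:2181 get /cluster/id
    # the "id" field of the returned JSON is the value to pass as <zk-cluster-id>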
@@ -4251,9 +4256,9 @@ listeners=CONTROLLER://:9093
Deprovision the KRaft controller quorum.
</li>
<li>
- Using <code>zookeeper-shell.sh</code>, run <code>rmr
/controller</code> so that one
+ Using <code>zookeeper-shell.sh</code>, run <code>delete
/controller</code> so that one
of the brokers can become the new old-style controller.
Additionally, run
- <code>get /migration</code> followed by <code>rmr
/migration</code> to clear the
+ <code>get /migration</code> followed by <code>delete
/migration</code> to clear the
migration state from ZooKeeper. This will allow you to
re-attempt the migration
in the future. The data read from "/migration" can be useful for
debugging.
</li>
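A compact sketch of the ZooKeeper clean-up described in that list item, run from a zookeeper-shell session (the localhost:2181 endpoint is an assumption):

    $ bin/zookeeper-shell.sh localhost:2181
    delete /controller
    get /migration        # keep the output somewhere for debugging
    delete /migration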
@@ -4291,8 +4296,11 @@ listeners=CONTROLLER://:9093
Deprovision the KRaft controller quorum.
</li>
<li>
- Using <code>zookeeper-shell.sh</code>, run <code>rmr
/controller</code> so that one
- of the brokers can become the new old-style controller.
+ Using <code>zookeeper-shell.sh</code>, run <code>delete
/controller</code> so that one
+ of the brokers can become the new old-style controller.
Additionally, run
+ <code>get /migration</code> followed by <code>delete
/migration</code> to clear the
+ migration state from ZooKeeper. This will allow you to
re-attempt the migration
+ in the future. The data read from "/migration" can be useful for
debugging.
</li>
<li>
On each broker, remove the
<code>zookeeper.metadata.migration.enable</code>,