This is an automated email from the ASF dual-hosted git repository.

chia7712 pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/trunk by this push:
     new 13d9a199f25 KAFKA-18011 Remove ZooKeeper sections from the docs 
(#17813)
13d9a199f25 is described below

commit 13d9a199f253dbd920bfce8f273aa8f82b0ad738
Author: Mickael Maison <[email protected]>
AuthorDate: Mon Nov 25 18:58:48 2024 +0100

    KAFKA-18011 Remove ZooKeeper sections from the docs (#17813)
    
    Reviewers: Chia-Ping Tsai <[email protected]>
---
 docs/configuration.html      |  44 +---
 docs/design.html             |  23 +--
 docs/implementation.html     |  39 ----
 docs/migration.html          |  34 ---
 docs/ops.html                | 482 ++-----------------------------------------
 docs/quickstart.html         |  31 +--
 docs/security.html           | 218 +------------------
 docs/streams/quickstart.html |  28 +--
 docs/toc.html                |  37 +---
 9 files changed, 48 insertions(+), 888 deletions(-)

diff --git a/docs/configuration.html b/docs/configuration.html
index cd12dd3ea9a..1f43995bd11 100644
--- a/docs/configuration.html
+++ b/docs/configuration.html
@@ -22,9 +22,10 @@
 
   The essential configurations are the following:
   <ul>
-      <li><code>broker.id</code>
+      <li><code>node.id</code>
       <li><code>log.dirs</code>
-      <li><code>zookeeper.connect</code>
+      <li><code>process.roles</code>
+      <li><code>controller.quorum.bootstrap.servers</code>
   </ul>
 
   Topic-level configurations and defaults are discussed in more detail <a 
href="#topicconfigs">below</a>.
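[Editor's note] As a hedged illustration of the four essential configs listed in the hunk above, a minimal combined-mode KRaft `server.properties` might look like the following. The ports, paths, and listener names are placeholder values for this sketch, not defaults taken from this patch:

```properties
# Minimal KRaft combined-mode configuration (illustrative values only)
process.roles=broker,controller
node.id=1
controller.quorum.bootstrap.servers=localhost:9093
listeners=PLAINTEXT://localhost:9092,CONTROLLER://localhost:9093
controller.listener.names=CONTROLLER
log.dirs=/tmp/kraft-combined-logs
```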
@@ -62,39 +63,12 @@
   All configs that are configurable at cluster level may also be configured at 
per-broker level (e.g. for testing).
   If a config value is defined at different levels, the following order of 
precedence is used:
   <ul>
-  <li>Dynamic per-broker config stored in ZooKeeper</li>
-  <li>Dynamic cluster-wide default config stored in ZooKeeper</li>
+  <li>Dynamic per-broker config stored in the metadata log</li>
+  <li>Dynamic cluster-wide default config stored in the metadata log</li>
   <li>Static broker config from <code>server.properties</code></li>
   <li>Kafka default, see <a href="#brokerconfigs">broker configs</a></li>
   </ul>
 
-  <h5>Updating Password Configs Dynamically</h5>
-  <p>Password config values that are dynamically updated are encrypted before 
storing in ZooKeeper. The broker config
-  <code>password.encoder.secret</code> must be configured in 
<code>server.properties</code> to enable dynamic update
-  of password configs. The secret may be different on different brokers.</p>
-  <p>The secret used for password encoding may be rotated with a rolling 
restart of brokers. The old secret used for encoding
-  passwords currently in ZooKeeper must be provided in the static broker 
config <code>password.encoder.old.secret</code> and
-  the new secret must be provided in <code>password.encoder.secret</code>. All 
dynamic password configs stored in ZooKeeper
-  will be re-encoded with the new secret when the broker starts up.</p>
-  <p>In Kafka 1.1.x, all dynamically updated password configs must be provided 
in every alter request when updating configs
-  using <code>kafka-configs.sh</code> even if the password config is not being 
altered. This constraint will be removed in
-  a future release.</p>
-
-  <h5>Updating Password Configs in ZooKeeper Before Starting Brokers</h5>
-
-  From Kafka 2.0.0 onwards, <code>kafka-configs.sh</code> enables dynamic 
broker configs to be updated using ZooKeeper before
-  starting brokers for bootstrapping. This enables all password configs to be 
stored in encrypted form, avoiding the need for
-  clear passwords in <code>server.properties</code>. The broker config 
<code>password.encoder.secret</code> must also be specified
-  if any password configs are included in the alter command. Additional 
encryption parameters may also be specified. Password
-  encoder configs will not be persisted in ZooKeeper. For example, to store 
SSL key password for listener <code>INTERNAL</code>
-  on broker 0:
-
-  <pre><code class="language-bash">$ bin/kafka-configs.sh --zookeeper 
localhost:2182 --zk-tls-config-file zk_tls_config.properties --entity-type 
brokers --entity-name 0 --alter --add-config
-    
'listener.name.internal.ssl.key.password=key-password,password.encoder.secret=secret,password.encoder.iterations=8192'</code></pre>
-
-  The configuration <code>listener.name.internal.ssl.key.password</code> will 
be persisted in ZooKeeper in encrypted
-  form using the provided encoder configs. The encoder secret and iterations 
are not persisted in ZooKeeper.
-
   <h5>Updating SSL Keystore of an Existing Listener</h5>
   Brokers may be configured with SSL keystores with short validity periods to 
reduce the risk of compromised certificates.
   Keystores may be updated dynamically without restarting the broker. The 
config name must be prefixed with the listener prefix
@@ -157,14 +131,6 @@
     <li><code>log.message.timestamp.difference.max.ms</code></li>
   </ul>
 
-  From Kafka version 2.0.0 onwards, unclean leader election is automatically 
enabled by the controller when the config
-  <code>unclean.leader.election.enable</code> is dynamically updated.
-  In Kafka version 1.1.x, changes to 
<code>unclean.leader.election.enable</code> take effect only when a new 
controller is elected.
-  Controller re-election may be forced by running:
-
-  <pre><code class="language-bash">$ bin/zookeeper-shell.sh localhost
-  rmr /controller</code></pre>
-
   <h5>Updating Log Cleaner Configs</h5>
   Log cleaner configs may be updated dynamically at cluster-default level used 
by all brokers. The changes take effect
   on the next iteration of log cleaning. One or more of these configs may be 
updated:
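[Editor's note] The removed examples above altered dynamic broker configs through ZooKeeper (`--zookeeper`). A sketch of the equivalent KRaft-era workflow goes through the brokers themselves via `--bootstrap-server`; the host, port, and config value below are placeholders, and the command requires a running cluster:

```bash
# Illustrative only: alter a dynamic per-broker config through the broker
$ bin/kafka-configs.sh --bootstrap-server localhost:9092 \
    --entity-type brokers --entity-name 0 --alter \
    --add-config 'listener.name.internal.ssl.key.password=key-password'
```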
diff --git a/docs/design.html b/docs/design.html
index f775552f4cf..a671e8285e4 100644
--- a/docs/design.html
+++ b/docs/design.html
@@ -334,11 +334,6 @@
     sending periodic heartbeats to the controller. If the controller fails to 
receive a heartbeat before the timeout configured by 
     <code>broker.session.timeout.ms</code> expires, then the node is 
considered offline.
     <p>
-    For clusters using Zookeeper, liveness is determined indirectly through 
the existence of an ephemeral node which is created by the broker on
-    initialization of its Zookeeper session. If the broker loses its session 
after failing to send heartbeats to Zookeeper before expiration of
-    <code>zookeeper.session.timeout.ms</code>, then the node gets deleted. The 
controller would then notice the node deletion through a Zookeeper watch
-    and mark the broker offline.
-    <p>
     We refer to nodes satisfying these two conditions as being "in sync" to 
avoid the vagueness of "alive" or "failed". The leader keeps track of the set 
of "in sync" replicas,
     which is known as the ISR. If either of these conditions fail to be 
satisfied, then the broker will be removed from the ISR. For example,
     if a follower dies, then the controller will notice the failure through 
the loss of its session, and will remove the broker from the ISR.
@@ -624,7 +619,7 @@
     <p>
         Quota configuration may be defined for (user, client-id), user and 
client-id groups. It is possible to override the default quota at any of the 
quota levels that needs a higher (or even lower) quota.
         The mechanism is similar to the per-topic log config overrides.
-        User and (user, client-id) quota overrides are written to ZooKeeper 
under <i><b>/config/users</b></i> and client-id quota overrides are written 
under <i><b>/config/clients</b></i>.
+        User and (user, client-id) quota overrides are written to the metadata 
log.
         These overrides are read by all brokers and are effective immediately. 
This lets us change quotas without having to do a rolling restart of the entire 
cluster. See <a href="#quotas">here</a> for details.
         Default quotas for each group may also be updated dynamically using 
the same mechanism.
     </p>
@@ -632,14 +627,14 @@
         The order of precedence for quota configuration is:
     </p>
         <ol>
-            <li>/config/users/&lt;user&gt;/clients/&lt;client-id&gt;</li>
-            <li>/config/users/&lt;user&gt;/clients/&lt;default&gt;</li>
-            <li>/config/users/&lt;user&gt;</li>
-            <li>/config/users/&lt;default&gt;/clients/&lt;client-id&gt;</li>
-            <li>/config/users/&lt;default&gt;/clients/&lt;default&gt;</li>
-            <li>/config/users/&lt;default&gt;</li>
-            <li>/config/clients/&lt;client-id&gt;</li>
-            <li>/config/clients/&lt;default&gt;</li>
+            <li>matching user and client-id quotas</li>
+            <li>matching user and default client-id quotas</li>
+            <li>matching user quota</li>
+            <li>default user and matching client-id quotas</li>
+            <li>default user and default client-id quotas</li>
+            <li>default user quota</li>
+            <li>matching client-id quota</li>
+            <li>default client-id quota</li>
         </ol>
     <h4 class="anchor-heading"><a id="design_quotasbandwidth" 
class="anchor-link"></a><a href="#design_quotasbandwidth">Network Bandwidth 
Quotas</a></h4>
     <p>
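[Editor's note] The rewritten quota precedence list in the design.html hunk above can be sketched as a simple ordered lookup. This is an illustrative model of the eight levels, not Kafka's implementation, and all names here are hypothetical:

```python
DEFAULT = "<default>"

def resolve_quota(quotas, user, client_id):
    """Return the first matching quota, walking the documented precedence.

    `quotas` maps (user_key, client_key) -> quota; client_key is None for
    user-only entries and user_key is None for client-id-only entries.
    """
    lookup_order = [
        (user, client_id),     # 1. matching user and client-id
        (user, DEFAULT),       # 2. matching user, default client-id
        (user, None),          # 3. matching user only
        (DEFAULT, client_id),  # 4. default user, matching client-id
        (DEFAULT, DEFAULT),    # 5. default user, default client-id
        (DEFAULT, None),       # 6. default user only
        (None, client_id),     # 7. matching client-id only
        (None, DEFAULT),       # 8. default client-id
    ]
    for key in lookup_order:
        if key in quotas:
            return quotas[key]
    return None
```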
diff --git a/docs/implementation.html b/docs/implementation.html
index 93c9aa60c4c..25a7f60b18f 100644
--- a/docs/implementation.html
+++ b/docs/implementation.html
@@ -255,45 +255,6 @@ messageSetSend n</code></pre>
         CoordinatorLoadInProgressException and the consumer may retry the 
OffsetFetchRequest after backing off.
     </p>
 
-    <h4 class="anchor-heading"><a id="impl_zookeeper" 
class="anchor-link"></a><a href="#impl_zookeeper">ZooKeeper Directories</a></h4>
-    <p>
-    The following gives the ZooKeeper structures and algorithms used for 
co-ordination between consumers and brokers.
-    </p>
-
-    <h4 class="anchor-heading"><a id="impl_zknotation" 
class="anchor-link"></a><a href="#impl_zknotation">Notation</a></h4>
-    <p>
-    When an element in a path is denoted <code>[xyz]</code>, that means that 
the value of xyz is not fixed and there is in fact a ZooKeeper znode for each 
possible value of xyz. For example <code>/topics/[topic]</code> would be a 
directory named /topics containing a sub-directory for each topic name. 
Numerical ranges are also given such as <code>[0...5]</code> to indicate the 
subdirectories 0, 1, 2, 3, 4. An arrow <code>-></code> is used to indicate the 
contents of a znode. For example < [...]
-    </p>
-
-    <h4 class="anchor-heading"><a id="impl_zkbroker" 
class="anchor-link"></a><a href="#impl_zkbroker">Broker Node Registry</a></h4>
-    <pre><code class="language-json">/brokers/ids/[0...N] --> 
{"jmx_port":...,"timestamp":...,"endpoints":[...],"host":...,"version":...,"port":...}
 (ephemeral node)</code></pre>
-    <p>
-    This is a list of all present broker nodes, each of which provides a 
unique logical broker id which identifies it to consumers (which must be given 
as part of its configuration). On startup, a broker node registers itself by 
creating a znode with the logical broker id under /brokers/ids. The purpose of 
the logical broker id is to allow a broker to be moved to a different physical 
machine without affecting consumers. An attempt to register a broker id that is 
already in use (say becau [...]
-    </p>
-    <p>
-    Since the broker registers itself in ZooKeeper using ephemeral znodes, 
this registration is dynamic and will disappear if the broker is shutdown or 
dies (thus notifying consumers it is no longer available).
-    </p>
-    <h4 class="anchor-heading"><a id="impl_zktopic" class="anchor-link"></a><a 
href="#impl_zktopic">Broker Topic Registry</a></h4>
-    <pre><code 
class="language-json">/brokers/topics/[topic]/partitions/[0...N]/state --> 
{"controller_epoch":...,"leader":...,"version":...,"leader_epoch":...,"isr":[...]}
 (ephemeral node)</code></pre>
-
-    <p>
-    Each broker registers itself under the topics it maintains and stores the 
number of partitions for that topic.
-    </p>
-
-    <h4 class="anchor-heading"><a id="impl_clusterid" 
class="anchor-link"></a><a href="#impl_clusterid">Cluster Id</a></h4>
-
-    <p>
-        The cluster id is a unique and immutable identifier assigned to a 
Kafka cluster. The cluster id can have a maximum of 22 characters and the 
allowed characters are defined by the regular expression [a-zA-Z0-9_\-]+, which 
corresponds to the characters used by the URL-safe Base64 variant with no 
padding. Conceptually, it is auto-generated when a cluster is started for the 
first time.
-    </p>
-    <p>
-        Implementation-wise, it is generated when a broker with version 0.10.1 
or later is successfully started for the first time. The broker tries to get 
the cluster id from the <code>/cluster/id</code> znode during startup. If the 
znode does not exist, the broker generates a new cluster id and creates the 
znode with this cluster id.
-    </p>
-
-    <h4 class="anchor-heading"><a id="impl_brokerregistration" 
class="anchor-link"></a><a href="#impl_brokerregistration">Broker node 
registration</a></h4>
-
-    <p>
-    The broker nodes are basically independent, so they only publish 
information about what they have. When a broker joins, it registers itself 
under the broker node registry directory and writes information about its host 
name and port. The broker also register the list of existing topics and their 
logical partitions in the broker topic registry. New topics are registered 
dynamically when they are created on the broker.
-    </p>
 </script>
 
 <div class="p-implementation"></div>
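[Editor's note] The removed Cluster Id section described a 22-character identifier matching `[a-zA-Z0-9_\-]+`, i.e. unpadded URL-safe Base64 of 16 random bytes. That shape can be sketched in a few lines; this is an illustration of the documented format, not Kafka's code (KRaft clusters obtain an equivalent id from `kafka-storage.sh random-uuid`):

```python
import base64
import re
import uuid

def random_cluster_id() -> str:
    # 16 random bytes encode to 24 URL-safe Base64 chars ending in "==";
    # stripping the padding leaves the documented 22-character id.
    raw = uuid.uuid4().bytes
    return base64.urlsafe_b64encode(raw).decode("ascii").rstrip("=")

# The character set documented for cluster ids.
CLUSTER_ID_PATTERN = re.compile(r"^[a-zA-Z0-9_\-]+$")
```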
diff --git a/docs/migration.html b/docs/migration.html
deleted file mode 100644
index 95fc87ffaca..00000000000
--- a/docs/migration.html
+++ /dev/null
@@ -1,34 +0,0 @@
-<!--
- Licensed to the Apache Software Foundation (ASF) under one or more
- contributor license agreements.  See the NOTICE file distributed with
- this work for additional information regarding copyright ownership.
- The ASF licenses this file to You under the Apache License, Version 2.0
- (the "License"); you may not use this file except in compliance with
- the License.  You may obtain a copy of the License at
-
-    http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
--->
-
-<!--#include virtual="../includes/_header.htm" -->
-<h2 class="anchor-heading"><a id="migration" class="anchor-link"></a><a 
href="#migration">Migrating from 0.7.x to 0.8</a></h2>
-
-0.8 is our first (and hopefully last) release with a non-backwards-compatible 
wire protocol, ZooKeeper     layout, and on-disk data format. This was a chance 
for us to clean up a lot of cruft and start fresh. This means performing a 
no-downtime upgrade is more painful than normal&mdash;you cannot just swap in 
the new code in-place.
-
-<h3 class="anchor-heading"><a id="migration_steps" class="anchor-link"></a><a 
href="#migration_steps">Migration Steps</a></h3>
-
-<ol>
-    <li>Setup a new cluster running 0.8.
-    <li>Use the 0.7 to 0.8 <a href="tools.html">migration tool</a> to mirror 
data from the 0.7 cluster into the 0.8 cluster.
-    <li>When the 0.8 cluster is fully caught up, redeploy all data 
<i>consumers</i> running the 0.8 client and reading from the 0.8 cluster.
-    <li>Finally migrate all 0.7 producers to 0.8 client publishing data to the 
0.8 cluster.
-    <li>Decommission the 0.7 cluster.
-    <li>Drink.
-</ol>
-
-<!--#include virtual="../includes/_footer.htm" -->
diff --git a/docs/ops.html b/docs/ops.html
index fb3d8aff912..305947f3c6a 100644
--- a/docs/ops.html
+++ b/docs/ops.html
@@ -508,7 +508,7 @@ Configs for user-principal 'user1', client-id 'clientA' are 
producer_byte_rate=1
   <p>
   Kafka naturally batches data in both the producer and consumer so it can 
achieve high-throughput even over a high-latency connection. To allow this 
though it may be necessary to increase the TCP socket buffer sizes for the 
producer, consumer, and broker using the <code>socket.send.buffer.bytes</code> 
and <code>socket.receive.buffer.bytes</code> configurations. The appropriate 
way to set this is documented <a 
href="https://en.wikipedia.org/wiki/Bandwidth-delay_product";>here</a>.
   <p>
-  It is generally <i>not</i> advisable to run a <i>single</i> Kafka cluster 
that spans multiple datacenters over a high-latency link. This will incur very 
high replication latency both for Kafka writes and ZooKeeper writes, and 
neither Kafka nor ZooKeeper will remain available in all locations if the 
network between locations is unavailable.
+  It is generally <i>not</i> advisable to run a <i>single</i> Kafka cluster 
that spans multiple datacenters over a high-latency link. This will incur very 
high replication latency for Kafka writes, and Kafka will not remain available 
in all locations if the network between locations is unavailable.
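[Editor's note] The socket-buffer sizing linked above is the bandwidth-delay product, which amounts to one multiplication. A sketch with hypothetical link numbers (the function name is mine, not Kafka's):

```python
def socket_buffer_bytes(bandwidth_bits_per_s: float, rtt_ms: float) -> int:
    # Bandwidth-delay product: bytes in flight = bandwidth (bytes/s) * RTT (s).
    return int(bandwidth_bits_per_s / 8 * rtt_ms / 1000)

# e.g. a 1 Gbit/s cross-datacenter link with 100 ms round-trip time
# suggests socket buffers around 12.5 MB.
```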
 
   <h3 class="anchor-heading"><a id="georeplication" class="anchor-link"></a><a 
href="#georeplication">6.3 Geo-Replication (Cross-Cluster Data 
Mirroring)</a></h3>
 
@@ -1149,8 +1149,8 @@ Security settings for Kafka fall into three main 
categories, which are similar t
   </p>
 
   <ol>
-    <li><strong>Encryption</strong> of data transferred between Kafka brokers 
and Kafka clients, between brokers, between brokers and ZooKeeper nodes, and 
between brokers and other, optional tools.</li>
-    <li><strong>Authentication</strong> of connections from Kafka clients and 
applications to Kafka brokers, as well as connections from Kafka brokers to 
ZooKeeper nodes.</li>
+    <li><strong>Encryption</strong> of data transferred between Kafka brokers 
and Kafka clients, between brokers, and between brokers and other optional 
tools.</li>
+    <li><strong>Authentication</strong> of connections from Kafka clients and 
applications to Kafka brokers, as well as connections between Kafka 
brokers.</li>
     <li><strong>Authorization</strong> of client operations such as creating, 
deleting, and altering the configuration of topics; writing events to or 
reading events from a topic; creating and deleting ACLs. Administrators can 
also define custom policies to put in place additional restrictions, such as a 
<code>CreateTopicPolicy</code> and <code>AlterConfigPolicy</code> (see <a 
href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-108%3A+Create+Topic+Policy";>KIP-108</a>
 and the sett [...]
   </ol>
 
@@ -1213,41 +1213,7 @@ $ bin/kafka-acls.sh \
     <strong>Data contracts:</strong> You may need to define data contracts 
between the producers and the consumers of data in a cluster, using event 
schemas. This ensures that events written to Kafka can always be read properly 
again, and prevents malformed or corrupt events being written. The best way to 
achieve this is to deploy a so-called schema registry alongside the cluster. 
(Kafka does not include a schema registry, but there are third-party 
implementations available.) A schema re [...]
   </p>
 
-
-  <h3 class="anchor-heading"><a id="config" class="anchor-link"></a><a 
href="#config">6.5 Kafka Configuration</a></h3>
-
-  <h4 class="anchor-heading"><a id="clientconfig" class="anchor-link"></a><a 
href="#clientconfig">Important Client Configurations</a></h4>
-
-  The most important producer configurations are:
-  <ul>
-      <li>acks</li>
-      <li>compression</li>
-      <li>batch size</li>
-  </ul>
-  The most important consumer configuration is the fetch size.
-  <p>
-  All configurations are documented in the <a 
href="#configuration">configuration</a> section.
-  <p>
-  <h4 class="anchor-heading"><a id="prodconfig" class="anchor-link"></a><a 
href="#prodconfig">A Production Server Config</a></h4>
-  Here is an example production server configuration:
-  <pre><code class="language-text"># ZooKeeper
-zookeeper.connect=[list of ZooKeeper servers]
-
-# Log configuration
-num.partitions=8
-default.replication.factor=3
-log.dir=[List of directories. Kafka should have its own dedicated disk(s) or 
SSD(s).]
-
-# Other configurations
-broker.id=[An integer. Start with 0 and increment by 1 for each new broker.]
-listeners=[list of listeners]
-auto.create.topics.enable=false
-min.insync.replicas=2
-queued.max.requests=[number of concurrent requests]</code></pre>
-
-  Our client configuration varies a fair amount between different use cases.
-
-  <h3 class="anchor-heading"><a id="java" class="anchor-link"></a><a 
href="#java">6.6 Java Version</a></h3>
+  <h3 class="anchor-heading"><a id="java" class="anchor-link"></a><a 
href="#java">6.5 Java Version</a></h3>
 
   Java 11, Java 17, Java 21 and Java 23 are supported.
   <p>
@@ -1274,7 +1240,7 @@ queued.max.requests=[number of concurrent 
requests]</code></pre>
 
   All of the brokers in that cluster have a 90% GC pause time of about 21ms 
with less than 1 young GC per second.
 
-  <h3 class="anchor-heading"><a id="hwandos" class="anchor-link"></a><a 
href="#hwandos">6.7 Hardware and OS</a></h3>
+  <h3 class="anchor-heading"><a id="hwandos" class="anchor-link"></a><a 
href="#hwandos">6.6 Hardware and OS</a></h3>
   We are using dual quad-core Intel Xeon machines with 24GB of memory.
   <p>
   You need sufficient memory to buffer active readers and writers. You can do 
a back-of-the-envelope estimate of memory needs by assuming you want to be able 
to buffer for 30 seconds and compute your memory need as write_throughput*30.
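[Editor's note] The back-of-the-envelope estimate above is a single product; a sketch with an assumed throughput figure:

```python
def buffer_memory_bytes(write_throughput_bytes_per_s: float,
                        window_s: int = 30) -> int:
    # The docs' rule of thumb: memory ~= write throughput * 30 seconds.
    return int(write_throughput_bytes_per_s * window_s)

# e.g. 50 MB/s of writes -> ~1.5 GB of buffer memory
```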
@@ -1380,7 +1346,7 @@ NodeId  LogEndOffset    Lag     LastFetchTimestamp      
LastCaughtUpTimestamp
 
   <pre><code class="language-bash">$ bin/kafka-server-start.sh 
server_properties</code></pre>
 
-  <h3 class="anchor-heading"><a id="monitoring" class="anchor-link"></a><a 
href="#monitoring">6.8 Monitoring</a></h3>
+  <h3 class="anchor-heading"><a id="monitoring" class="anchor-link"></a><a 
href="#monitoring">6.7 Monitoring</a></h3>
 
   Kafka uses Yammer Metrics for metrics reporting in the server. The Java 
clients use Kafka Metrics, a built-in metrics registry that minimizes 
transitive dependencies pulled into client applications. Both expose metrics 
via JMX and can be configured to report stats using pluggable stats reporters 
to hook up to your monitoring system.
   <p>
@@ -1723,17 +1689,6 @@ NodeId  LogEndOffset    Lag     LastFetchTimestamp      
LastCaughtUpTimestamp
         <td>exempt-throttle-time indicates the percentage of time spent in 
broker network and I/O threads to process requests
             that are exempt from throttling.</td>
       </tr>
-      <tr>
-        <td>ZooKeeper client request latency</td>
-        
<td>kafka.server:type=ZooKeeperClientMetrics,name=ZooKeeperRequestLatencyMs</td>
-        <td>Latency in milliseconds for ZooKeeper requests from broker.</td>
-      </tr>
-      <tr>
-        <td>ZooKeeper connection status</td>
-        <td>kafka.server:type=SessionExpireListener,name=SessionState</td>
-        <td>Connection status of broker's ZooKeeper session which may be one of
-            
Disconnected|SyncConnected|AuthFailed|ConnectedReadOnly|SaslAuthenticated|Expired.</td>
-      </tr>
       <tr>
         <td>Max time to load group metadata</td>
         
<td>kafka.server:type=group-coordinator-metrics,name=partition-load-time-max</td>
@@ -2093,24 +2048,6 @@ These metrics are reported on both Controllers and 
Brokers in a KRaft Cluster
     <td>The number of active brokers as observed by this Controller.</td>
     <td>kafka.controller:type=KafkaController,name=ActiveBrokerCount</td>
   </tr>
-  <tr>
-    <td>Migrating ZK Broker Count</td>
-    <td>The number of brokers registered with the Controller that haven't yet 
migrated to KRaft mode.</td>
-    <td>kafka.controller:type=KafkaController,name=MigratingZkBrokerCount</td>
-  </tr>
-  <tr>
-    <td>ZK Migrating State</td>
-    <td>
-      <ul style="list-style: none; padding-left: 0; margin: 0;">
-        <li>0 - NONE, cluster created in KRaft mode;</li>
-        <li>4 - ZK, Migration has not started, controller is a ZK 
controller;</li>
-        <li>2 - PRE_MIGRATION, the KRaft Controller is waiting for all ZK 
brokers to register in migration mode;</li>
-        <li>1 - MIGRATION, ZK metadata has been migrated, but some broker is 
still running in ZK mode;</li>
-        <li>3 - POST_MIGRATION, the cluster migration is complete;</li>
-      </ul>
-    </td>
-    <td>kafka.controller:type=KafkaController,name=ZkMigrationState</td>
-  </tr>
   <tr>
     <td>Global Topic Count</td>
     <td>The number of global topics as observed by this Controller.</td>
@@ -2157,22 +2094,6 @@ These metrics are reported on both Controllers and 
Brokers in a KRaft Cluster
     For active Controllers the value of this lag is always zero.</td>
     <td>kafka.controller:type=KafkaController,name=LastAppliedRecordLagMs</td>
   </tr>
-  <tr>
-    <td>ZooKeeper Write Behind Lag</td>
-    <td>The amount of lag in records that ZooKeeper is behind relative to the 
highest committed record in the metadata log.
-    This metric will only be reported by the active KRaft controller.</td>
-    <td>kafka.controller:type=KafkaController,name=ZkWriteBehindLag</td>
-  </tr>
-  <tr>
-    <td>ZooKeeper Metadata Snapshot Write Time</td>
-    <td>The number of milliseconds the KRaft controller took reconciling a 
snapshot into ZooKeeper.</td>
-    <td>kafka.controller:type=KafkaController,name=ZkWriteSnapshotTimeMs</td>
-  </tr>
-  <tr>
-    <td>ZooKeeper Metadata Delta Write Time</td>
-    <td>The number of milliseconds the KRaft controller took writing a delta 
into ZK.</td>
-    <td>kafka.controller:type=KafkaController,name=ZkWriteDeltaTimeMs</td>
-  </tr>
   <tr>
     <td>Timed-out Broker Heartbeat Count</td>
     <td>The number of broker heartbeats that timed out on this controller 
since the process was started. Note that only
@@ -3708,37 +3629,7 @@ customized state stores; for built-in state stores, 
currently we have:
 
   On the client side, we recommend monitoring the message/byte rate (global 
and per topic), request rate/size/time, and on the consumer side, max lag in 
messages among all partitions and min fetch request rate. For a consumer to 
keep up, max lag needs to be less than a threshold and min fetch rate needs to 
be larger than 0.
 
-  <h3 class="anchor-heading"><a id="zk" class="anchor-link"></a><a 
href="#zk">6.9 ZooKeeper</a></h3>
-
-  <h4 class="anchor-heading"><a id="zkversion" class="anchor-link"></a><a 
href="#zkversion">Stable version</a></h4>
-  The current stable branch is 3.8. Kafka is regularly updated to include the 
latest release in the 3.8 series.
-
-  <h4 class="anchor-heading"><a id="zk_depr" class="anchor-link"></a><a 
href="#zk_depr">ZooKeeper Deprecation</a></h4>
-  <p>With the release of Apache Kafka 3.5, Zookeeper is now marked deprecated. 
Removal of ZooKeeper is planned in the next major release of Apache Kafka 
(version 4.0),
-     which is scheduled to happen no sooner than April 2024. During the 
deprecation phase, ZooKeeper is still supported for metadata management of 
Kafka clusters,
-     but it is not recommended for new deployments. There is a small subset of 
features that remain to be implemented in KRaft
-     see <a href="#kraft_missing">current missing features</a> for more 
information.</p>
-
-    <h5 class="anchor-heading"><a id="zk_depr_migration" 
class="anchor-link"></a><a href="#zk_drep_migration">Migration</a></h5>
-    <p>Users are recommended to begin planning for migration to KRaft and also 
begin testing to provide any feedback. Refer to <a 
href="#kraft_zk_migration">ZooKeeper to KRaft Migration</a> for details on how 
to perform a live migration from ZooKeeper to KRaft and current limitations.</p>
-
-    <h5 class="anchor-heading"><a id="zk_depr_3xsupport" 
class="anchor-link"></a><a href="#zk_depr_3xsupport">3.x and ZooKeeper 
Support</a></h5>
-    <p>The final 3.x minor release, that supports ZooKeeper mode, will receive 
critical bug fixes and security fixes for 12 months after its release.</p>
-
-<h4 class="anchor-heading"><a id="zkops" class="anchor-link"></a><a 
href="#zkops">Operationalizing ZooKeeper</a></h4>
-  Operationally, we do the following for a healthy ZooKeeper installation:
-  <ul>
-    <li>Redundancy in the physical/hardware/network layout: try not to put 
them all in the same rack, decent (but don't go nuts) hardware, try to keep 
redundant power and network paths, etc. A typical ZooKeeper ensemble has 5 or 7 
servers, which tolerates 2 and 3 servers down, respectively. If you have a 
small deployment, then using 3 servers is acceptable, but keep in mind that 
you'll only be able to tolerate 1 server down in this case. </li>
-    <li>I/O segregation: if you do a lot of write type traffic you'll almost 
definitely want the transaction logs on a dedicated disk group. Writes to the 
transaction log are synchronous (but batched for performance), and 
consequently, concurrent writes can significantly affect performance. ZooKeeper 
snapshots can be one such a source of concurrent writes, and ideally should be 
written on a disk group separate from the transaction log. Snapshots are 
written to disk asynchronously, so it  [...]
-    <li>Application segregation: Unless you really understand the application 
patterns of other apps that you want to install on the same box, it can be a 
good idea to run ZooKeeper in isolation (though this can be a balancing act 
with the capabilities of the hardware).</li>
-    <li>Use care with virtualization: It can work, depending on your cluster 
layout and read/write patterns and SLAs, but the tiny overheads introduced by 
the virtualization layer can add up and throw off ZooKeeper, as it can be very 
time sensitive</li>
-    <li>ZooKeeper configuration: It's java, make sure you give it 'enough' 
heap space (We usually run them with 3-5G, but that's mostly due to the data 
set size we have here). Unfortunately we don't have a good formula for it, but 
keep in mind that allowing for more ZooKeeper state means that snapshots can 
become large, and large snapshots affect recovery time. In fact, if the 
snapshot becomes too large (a few gigabytes), then you may need to increase the 
initLimit parameter to give enou [...]
-    <li>Monitoring: Both JMX and the 4 letter words (4lw) commands are very 
useful, they do overlap in some cases (and in those cases we prefer the 4 
letter commands, they seem more predictable, or at the very least, they work 
better with the LI monitoring infrastructure)</li>
-    <li>Don't overbuild the cluster: large clusters, especially in a write 
heavy usage pattern, means a lot of intracluster communication (quorums on the 
writes and subsequent cluster member updates), but don't underbuild it (and 
risk swamping the cluster). Having more servers adds to your read capacity.</li>
-  </ul>
-  Overall, we try to keep the ZooKeeper system as small as will handle the 
load (plus standard growth capacity planning) and as simple as possible. We try 
not to do anything fancy with the configuration or application layout as 
compared to the official release as well as keep it as self contained as 
possible. For these reasons, we tend to skip the OS packaged versions, since it 
has a tendency to try to put things in the OS standard hierarchy, which can be 
'messy', for want of a better wa [...]
-
-  <h3 class="anchor-heading"><a id="kraft" class="anchor-link"></a><a 
href="#kraft">6.10 KRaft</a></h3>
+  <h3 class="anchor-heading"><a id="kraft" class="anchor-link"></a><a 
href="#kraft">6.8 KRaft</a></h3>
 
   <h4 class="anchor-heading"><a id="kraft_config" class="anchor-link"></a><a 
href="#kraft_config">Configuration</a></h4>
 
@@ -3750,7 +3641,6 @@ customized state stores; for built-in state stores, 
currently we have:
     <li>If <code>process.roles</code> is set to <code>broker</code>, the 
server acts as a broker.</li>
     <li>If <code>process.roles</code> is set to <code>controller</code>, the 
server acts as a controller.</li>
     <li>If <code>process.roles</code> is set to 
<code>broker,controller</code>, the server acts as both a broker and a 
controller.</li>
-    <li>If <code>process.roles</code> is not set at all, it is assumed to be 
in ZooKeeper mode.</li>
   </ul>
 
   <p>Kafka servers that act as both brokers and controllers are referred to as 
"combined" servers. Combined servers are simpler to operate for small use cases 
like a development environment. The key disadvantage is that the controller 
will be less isolated from the rest of the system. For example, it is not 
possible to roll or scale the controllers separately from the brokers in 
combined mode. Combined mode is not recommended in critical deployment 
environments.</p>
@@ -3758,7 +3648,7 @@ customized state stores; for built-in state stores, 
currently we have:
 
   <h5 class="anchor-heading"><a id="kraft_voter" class="anchor-link"></a><a 
href="#kraft_voter">Controllers</a></h5>
 
-  <p>In KRaft mode, specific Kafka servers are selected to be controllers 
(unlike the ZooKeeper-based mode, where any server can become the Controller). 
The servers selected to be controllers will participate in the metadata quorum. 
Each controller is either an active or a hot standby for the current active 
controller.</p>
+  <p>In KRaft mode, specific Kafka servers are selected to be controllers. The 
servers selected to be controllers will participate in the metadata quorum. 
Each controller is either an active or a hot standby for the current active 
controller.</p>
 
   <p>A Kafka admin will typically select 3 or 5 servers for this role, 
depending on factors like cost and the number of concurrent failures your 
system should withstand without availability impact. A majority of the 
controllers must be alive in order to maintain availability. With 3 
controllers, the cluster can tolerate 1 controller failure; with 5 controllers, 
the cluster can tolerate 2 controller failures.</p>
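The majority rule above can be stated numerically: a quorum of n controllers remains available while a majority (at least n/2 + 1, rounded down) is alive, so it tolerates floor((n - 1) / 2) failures. A small illustrative sketch (not part of Kafka itself):

```python
def tolerated_failures(controllers: int) -> int:
    """Number of controller failures a Raft-style quorum of the given
    size can survive: a majority must stay alive, so up to
    (n - 1) // 2 nodes may fail."""
    return (controllers - 1) // 2

# The two sizings mentioned above: 3 tolerates 1 failure, 5 tolerates 2.
for n in (3, 5):
    print(f"{n} controllers tolerate {tolerated_failures(n)} failure(s)")
```

This is why even-sized quorums are rarely chosen: 4 controllers tolerate only 1 failure, the same as 3, while adding coordination cost.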
 
@@ -3960,358 +3850,10 @@ foo
 
   <h4 class="anchor-heading"><a id="kraft_zk_migration" 
class="anchor-link"></a><a href="#kraft_zk_migration">ZooKeeper to KRaft 
Migration</a></h4>
 
-  <h3>Terminology</h3>
-  <ul>
-    <li>Brokers that are in <b>ZK mode</b> store their metadata in Apache 
ZooKepeer. This is the old mode of handling metadata.</li>
-    <li>Brokers that are in <b>KRaft mode</b> store their metadata in a KRaft 
quorum. This is the new and improved mode of handling metadata.</li>
-    <li><b>Migration</b> is the process of moving cluster metadata from 
ZooKeeper into a KRaft quorum.</li>
-  </ul>
-
-  <h3>Migration Phases</h3>
-  In general, the migration process passes through several phases.
-
-  <ul>
-    <li>In the <b>initial phase</b>, all the brokers are in ZK mode, and there 
is a ZK-based controller.</li>
-    <li>During the <b>initial metadata load</b>, a KRaft quorum loads the 
metadata from ZooKeeper,</li>
-    <li>In <b>hybrid phase</b>, some brokers are in ZK mode, but there is a 
KRaft controller.</li>
-    <li>In <b>dual-write phase</b>, all brokers are KRaft, but the KRaft 
controller is continuing to write to ZK.</li>
-    <li>When the migration has been <b>finalized</b>, we no longer write 
metadata to ZooKeeper.</li>
-  </ul>
-
-  <h3>Limitations</h3>
-  <ul>
-    <li>While a cluster is being migrated from ZK mode to KRaft mode, we do 
not support changing the <i>metadata
-      version</i> (also known as the <i>inter.broker.protocol.version</i>.) 
Please do not attempt to do this during
-      a migration, or you may break the cluster.</li>
-    <li>After the migration has been finalized, it is not possible to revert 
back to ZooKeeper mode.</li>
-    <li>
-      During the migration, if a ZK broker is running with multiple log 
directories,
-      any directory failure will cause the broker to shutdown.
-      Brokers with broken log directories will only be able to migrate to 
KRaft once the directories are repaired.
-      For further details refer to <a 
href="https://issues.apache.org/jira/browse/KAFKA-16431";>KAFKA-16431</a>.
-    </li>
-    <li><a href="#kraft_missing">As noted above</a>, some features are not 
fully implemented in KRaft mode. If you are
-      using one of those features, you will not be able to migrate to KRaft 
yet.</li>
-  </ul>
-
-  <h3>Preparing for migration</h3>
-  <p>
-    Before beginning the migration, the Kafka brokers must be upgraded to 
software version {{fullDotVersion}} and have the
-    "inter.broker.protocol.version" configuration set to "{{dotVersion}}".
-  </p>
-
-  <p>
-    It is recommended to enable TRACE level logging for the migration 
components while the migration is active. This can
-    be done by adding the following log4j configuration to each KRaft 
controller's "log4j.properties" file.
-  </p>
-
-  <pre><code 
class="language-text">log4j.logger.org.apache.kafka.metadata.migration=TRACE</code></pre>
-
-  <p>
-    It is generally useful to enable DEBUG logging on the KRaft controllers 
and the ZK brokers during the migration.
-  </p>
-
-  <h3>Provisioning the KRaft controller quorum</h3>
-  <p>
-    Two things are needed before the migration can begin. First, the brokers 
must be configured to support the migration and second,
-    a KRaft controller quorum must be deployed. The KRaft controllers should 
be provisioned with the same cluster ID as
-    the existing Kafka cluster. This can be found by examining one of the 
"meta.properties" files in the data directories
-    of the brokers, or by running the following command.
-  </p>
-
-  <pre><code class="language-bash">$ bin/zookeeper-shell.sh localhost:2181 get 
/cluster/id</code></pre>
-
-  <p>
-    The KRaft controller quorum should also be provisioned with the latest 
<code>metadata.version</code>.
-    This is done automatically when you format the node with the 
kafka-storage.sh tool.
-    For further instructions on KRaft deployment, please refer to <a 
href="#kraft_config">the above documentation</a>.
-  </p>
-
-  <p>
-    In addition to the standard KRaft configuration, the KRaft controllers 
will need to enable support for the migration
-    as well as provide ZooKeeper connection configuration.
-  </p>
-
-  <p>
-    Here is a sample config for a KRaft controller that is ready for migration:
-  </p>
-  <pre><code class="language-text"># Sample KRaft cluster 
controller.properties listening on 9093
-process.roles=controller
-node.id=3000
-controller.quorum.bootstrap.servers=localhost:9093
-controller.listener.names=CONTROLLER
-listeners=CONTROLLER://:9093
-
-# Enable the migration
-zookeeper.metadata.migration.enable=true
-
-# ZooKeeper client configuration
-zookeeper.connect=localhost:2181
-
-# The inter broker listener in brokers to allow KRaft controller send RPCs to 
brokers
-inter.broker.listener.name=PLAINTEXT
-
-# Other configs ...</code></pre>
-
-  <p>The new standalone controller in the example configuration above should 
be formatted using the <code>bin/kafka-storage.sh format 
--standalone</code>command.</p>
-
-  <p><em>Note: The KRaft cluster <code>node.id</code> values must be different 
from any existing ZK broker <code>broker.id</code>.
-  In KRaft-mode, the brokers and controllers share the same Node ID 
namespace.</em></p>
-
-  <h3>Enter Migration Mode on the Brokers</h3>
-  <p>
-    Once the KRaft controller quorum has been started, the brokers will need 
to be reconfigured and restarted. Brokers
-    may be restarted in a rolling fashion to avoid impacting cluster 
availability. Each broker requires the
-    following configuration to communicate with the KRaft controllers and to 
enable the migration.
-  </p>
-
-  <ul>
-    <li><a href="#brokerconfigs_broker.id">broker.id</a>: Ensure 
<code>broker.id</code> is set to a non-negative integer even if 
<code>broker.id.generation.enable</code> is enabled (default is enabled). 
Additionally, ensure <code>broker.id</code> does not exceed 
<code>reserved.broker.max.id</code> to avoid failure.</li>
-    <li><a 
href="#brokerconfigs_controller.quorum.bootstrap.servers">controller.quorum.bootstrap.servers</a></li>
-    <li><a 
href="#brokerconfigs_controller.listener.names">controller.listener.names</a></li>
-    <li>The controller.listener.name should also be added to <a 
href="#brokerconfigs_listener.security.protocol.map">listener.security.property.map</a></li>
-    <li><a 
href="#brokerconfigs_zookeeper.metadata.migration.enable">zookeeper.metadata.migration.enable</a></li>
-  </ul>
-
-  <p>Here is a sample config for a broker that is ready for migration:</p>
-
-  <pre><code class="language-text"># Sample ZK broker server.properties 
listening on 9092
-broker.id=0
-listeners=PLAINTEXT://:9092
-advertised.listeners=PLAINTEXT://localhost:9092
-listener.security.protocol.map=PLAINTEXT:PLAINTEXT,CONTROLLER:PLAINTEXT
-
-# Set the IBP
-inter.broker.protocol.version={{dotVersion}}
-
-# Enable the migration
-zookeeper.metadata.migration.enable=true
-
-# ZooKeeper client configuration
-zookeeper.connect=localhost:2181
-
-# KRaft controller quorum configuration
-controller.quorum.bootstrap.servers=localhost:9093
-controller.listener.names=CONTROLLER</code></pre>
-
-  <p>
-    <em>Note: Once the final ZK broker has been restarted with the necessary 
configuration, the migration will automatically begin.</em>
-    When the migration is complete, an INFO level log can be observed on the 
active controller:
-  </p>
-
-  <pre>Completed migration of metadata from Zookeeper to KRaft</pre>
-
-  <h3>Migrating brokers to KRaft</h3>
-  <p>
-    Once the KRaft controller completes the metadata migration, the brokers 
will still be running
-    in ZooKeeper mode. While the KRaft controller is in migration mode, it 
will continue sending
-    controller RPCs to the ZooKeeper mode brokers. This includes RPCs like 
UpdateMetadata and
-    LeaderAndIsr.
-  </p>
-
-  <p>
-    To migrate the brokers to KRaft, they simply need to be reconfigured as 
KRaft brokers and restarted. Using the above
-    broker configuration as an example, we would replace the 
<code>broker.id</code> with <code>node.id</code> and add
-    <code>process.roles=broker</code>. It is important that the broker 
maintain the same Broker/Node ID when it is restarted.
-    The zookeeper configurations should be removed at this point.
-  </p>
-
-  <p>
-    If your broker has authorization configured via the 
<code>authorizer.class.name</code> property
-    using <code>kafka.security.authorizer.AclAuthorizer</code>, this is also 
the time to change it
-    to use 
<code>org.apache.kafka.metadata.authorizer.StandardAuthorizer</code> instead.
-  </p>
-
-  <pre><code class="language-text"># Sample KRaft broker server.properties 
listening on 9092
-process.roles=broker
-node.id=0
-listeners=PLAINTEXT://:9092
-advertised.listeners=PLAINTEXT://localhost:9092
-listener.security.protocol.map=PLAINTEXT:PLAINTEXT,CONTROLLER:PLAINTEXT
-
-# Don't set the IBP, KRaft uses "metadata.version" feature flag
-# inter.broker.protocol.version={{dotVersion}}
-
-# Remove the migration enabled flag
-# zookeeper.metadata.migration.enable=true
-
-# Remove ZooKeeper client configuration
-# zookeeper.connect=localhost:2181
-
-# Keep the KRaft controller quorum configuration
-controller.quorum.bootstrap.servers=localhost:9093
-controller.listener.names=CONTROLLER</code></pre>
-
-  <p>
-    Each broker is restarted with a KRaft configuration until the entire 
cluster is running in KRaft mode.
-  </p>
-
-  <h3>Finalizing the migration</h3>
-  <p>
-    Once all brokers have been restarted in KRaft mode, the last step to 
finalize the migration is to take the
-    KRaft controllers out of migration mode. This is done by removing the 
"zookeeper.metadata.migration.enable"
-    property from each of their configs and restarting them one at a time.
-  </p>
-  <p>
-    Once the migration has been finalized, you can safely deprovision your 
ZooKeeper cluster, assuming you are
-    not using it for anything else. After this point, it is no longer possible 
to revert to ZooKeeper mode.
-  </p>
-
-  <pre><code class="language-text"># Sample KRaft cluster 
controller.properties listening on 9093
-process.roles=controller
-node.id=3000
-controller.quorum.bootstrap.servers=localhost:9093
-controller.listener.names=CONTROLLER
-listeners=CONTROLLER://:9093
-
-# Disable the migration
-# zookeeper.metadata.migration.enable=true
-
-# Remove ZooKeeper client configuration
-# zookeeper.connect=localhost:2181
-
-# Other configs ...</code></pre>
-
-  <h3>Reverting to ZooKeeper mode During the Migration</h3>
-  <p>
-    While the cluster is still in migration mode, it is possible to revert to 
ZooKeeper mode.  The process
-    to follow depends on how far the migration has progressed. In order to 
find out how to revert,
-    select the <b>final</b> migration step that you have <b>completed</b> in 
this table.
-  </p>
-  <p>
-    Note that the directions given here assume that each step was fully 
completed, and they were
-    done in order. So, for example, we assume that if "Enter Migration Mode on 
the Brokers" was
-    completed, "Provisioning the KRaft controller quorum" was also fully 
completed previously.
-  </p>
-  <p>
-    If you did not fully complete any step, back out whatever you have done 
and then follow revert
-    directions for the last fully completed step.
-  </p>
-
-  <table class="data-table">
-      <tbody>
-      <tr>
-        <th>Final Migration Section Completed</th>
-        <th>Directions for Reverting</th>
-        <th>Notes</th>
-      </tr>
-      <tr>
-        <td>Preparing for migration</td>
-        <td>
-          The preparation section does not involve leaving ZooKeeper mode. So 
there is nothing to do in the
-          case of a revert.
-        </td>
-        <td>
-        </td>
-      </tr>
-      <tr>
-        <td>Provisioning the KRaft controller quorum</td>
-        <td>
-          <ul>
-            <li>
-              Deprovision the KRaft controller quorum.
-            </li>
-            <li>
-              Then you are done.
-            </li>
-          </ul>
-        </td>
-        <td>
-        </td>
-      </tr>
-      <tr>
-        <td>Enter Migration Mode on the brokers</td>
-        <td>
-          <ul>
-            <li>
-              Deprovision the KRaft controller quorum.
-            </li>
-            <li>
-              Using <code>zookeeper-shell.sh</code>, run <code>rmr 
/controller</code> so that one
-              of the brokers can become the new old-style controller. 
Additionally, run
-              <code>get /migration</code> followed by <code>rmr 
/migration</code> to clear the
-              migration state from ZooKeeper. This will allow you to 
re-attempt the migration
-              in the future. The data read from "/migration" can be useful for 
debugging.
-            </li>
-            <li>
-              On each broker, remove the 
<code>zookeeper.metadata.migration.enable</code>,
-              <code>controller.listener.names</code>, and 
<code>controller.quorum.bootstrap.servers</code>
-              configurations, and replace <code>node.id</code> with 
<code>broker.id</code>.
-              Then perform a rolling restart of all brokers.
-            </li>
-            <li>
-              Then you are done.
-            </li>
-          </ul>
-        </td>
-        <td>
-          It is important to perform the <code>zookeeper-shell.sh</code> step 
<b>quickly</b>, to minimize the amount of
-          time that the cluster lacks a controller. Until the <code> 
/controller</code> znode is deleted,
-          you can also ignore any errors in the broker log about failing to 
connect to the Kraft controller.
-          Those error logs should disappear after second roll to pure 
zookeeper mode.
-        </td>
-      </tr>
-      <tr>
-        <td>Migrating brokers to KRaft</td>
-        <td>
-          <ul>
-            <li>
-              On each broker, remove the <code>process.roles</code> 
configuration,
-              replace the <code>node.id</code> with <code>broker.id</code> and
-              restore the <code>zookeeper.connect</code> configuration to its 
previous value.
-              If your cluster requires other ZooKeeper configurations for 
brokers, such as
-              <code>zookeeper.ssl.protocol</code>, re-add those configurations 
as well.
-              Then perform a rolling restart of all brokers.
-            </li>
-            <li>
-              Deprovision the KRaft controller quorum.
-            </li>
-            <li>
-              Using <code>zookeeper-shell.sh</code>, run <code>rmr 
/controller</code> so that one
-              of the brokers can become the new old-style controller.
-            </li>
-            <li>
-              On each broker, remove the 
<code>zookeeper.metadata.migration.enable</code>,
-              <code>controller.listener.names</code>, and 
<code>controller.quorum.bootstrap.servers</code>
-              configurations.
-              Then perform a second rolling restart of all brokers.
-            </li>
-            <li>
-              Then you are done.
-            </li>
-          </ul>
-        </td>
-        <td>
-          <ul>
-            <li>
-              It is important to perform the <code>zookeeper-shell.sh</code> 
step <b>quickly</b>, to minimize the amount of
-              time that the cluster lacks a controller. Until the <code> 
/controller</code> znode is deleted,
-              you can also ignore any errors in the broker log about failing 
to connect to the Kraft controller.
-              Those error logs should disappear after second roll to pure 
zookeeper mode.
-            </li>
-            <li>
-              Make sure that on the first cluster roll, 
<code>zookeeper.metadata.migration.enable</code> remains set to
-              <code>true</code>. <b>Do not set it to false until the second 
cluster roll.</b>
-            </li>
-          </ul>
-        </td>
-      </tr>
-      <tr>
-        <td>Finalizing the migration</td>
-        <td>
-          If you have finalized the ZK migration, then you cannot revert.
-        </td>
-        <td>
-          Some users prefer to wait for a week or two before finalizing the 
migration. While this
-          requires you to keep the ZooKeeper cluster running for a while 
longer, it may be helpful
-          in validating KRaft mode in your cluster.
-        </td>
-      </tr>
-    </tbody>
- </table>
-
+  <p>To migrate from ZooKeeper to KRaft, you must use a bridge release. The last bridge release is Kafka 3.9.
+    See the <a href="/39/documentation/#kraft_zk_migration">ZooKeeper to KRaft Migration steps</a> in the 3.9 documentation.</p>
 
-<h3 class="anchor-heading"><a id="tiered_storage" class="anchor-link"></a><a 
href="#kraft">6.11 Tiered Storage</a></h3>
+<h3 class="anchor-heading"><a id="tiered_storage" class="anchor-link"></a><a 
href="#kraft">6.9 Tiered Storage</a></h3>
 
 <h4 class="anchor-heading"><a id="tiered_storage_overview" 
class="anchor-link"></a><a href="#tiered_storage_overview">Tiered Storage 
Overview</a></h4>
 
@@ -4376,7 +3918,7 @@ $ ./gradlew clean :storage:testJar</code></pre>
<p>After the build completes successfully, there should be a `kafka-storage-x.x.x-test.jar`
file under `storage/build/libs`.
Next, set the following configurations on the broker side to enable the tiered storage feature.</p>
 
-<pre><code class="language-text"># Sample Zookeeper/Kraft broker 
server.properties listening on PLAINTEXT://:9092
+<pre><code class="language-text"># Sample KRaft broker server.properties 
listening on PLAINTEXT://:9092
 remote.log.storage.system.enable=true
 
 # Setting the listener for the clients in RemoteLogMetadataManager to talk to 
the brokers.
diff --git a/docs/quickstart.html b/docs/quickstart.html
index 64a7a23c6b9..1ded73e2256 100644
--- a/docs/quickstart.html
+++ b/docs/quickstart.html
@@ -42,15 +42,11 @@ $ cd kafka_{{scalaVersion}}-{{fullDotVersion}}</code></pre>
             <a href="#quickstart_startserver">Step 2: Start the Kafka 
environment</a>
         </h4>
 
-        <p class="note">NOTE: Your local environment must have Java 11+ 
installed.</p>
+        <p class="note">NOTE: Your local environment must have Java 17+ 
installed.</p>
 
-        <p>Apache Kafka can be started using KRaft or ZooKeeper. To get 
started with either configuration follow one of the sections below but not 
both.</p>
+        <p>Kafka can be run using the local scripts and downloaded files, or using the Docker image.</p>
 
-        <h5>Kafka with KRaft</h5>
-
-        <p>Kafka can be run using KRaft mode using local scripts and 
downloaded files or the docker image. Follow one of the sections below but not 
both to start the kafka server.</p>
-
-        <h6>Using downloaded files</h6>
+        <h5>Using downloaded files</h5>
 
         <p>Generate a Cluster UUID</p>
         <pre><code class="language-bash">$ 
KAFKA_CLUSTER_ID="$(bin/kafka-storage.sh random-uuid)"</code></pre>
@@ -63,7 +59,7 @@ $ cd kafka_{{scalaVersion}}-{{fullDotVersion}}</code></pre>
 
         <p>Once the Kafka server has successfully launched, you will have a 
basic Kafka environment running and ready to use.</p>
 
-        <h6>Using JVM Based Apache Kafka Docker Image</h6>
+        <h5>Using JVM Based Apache Kafka Docker Image</h5>
 
         <p> Get the Docker image:</p>
         <pre><code class="language-bash">$ docker pull 
apache/kafka:{{fullDotVersion}}</code></pre>
@@ -71,7 +67,7 @@ $ cd kafka_{{scalaVersion}}-{{fullDotVersion}}</code></pre>
         <p> Start the Kafka Docker container: </p>
         <pre><code class="language-bash">$ docker run -p 9092:9092 
apache/kafka:{{fullDotVersion}}</code></pre>
 
-        <h6>Using GraalVM Based Native Apache Kafka Docker Image</h6>
+        <h5>Using GraalVM Based Native Apache Kafka Docker Image</h5>
 
         <p>Get the Docker image:</p>
         <pre><code class="language-bash">$ docker pull 
apache/kafka-native:{{fullDotVersion}}</code></pre>
@@ -79,18 +75,6 @@ $ cd kafka_{{scalaVersion}}-{{fullDotVersion}}</code></pre>
         <p>Start the Kafka Docker container:</p>
         <pre><code class="language-bash">$ docker run -p 9092:9092 
apache/kafka-native:{{fullDotVersion}}</code></pre>
 
-        <h5>Kafka with ZooKeeper</h5>
-
-        <p>Run the following commands in order to start all services in the 
correct order:</p>
-        <pre><code class="language-bash"># Start the ZooKeeper service
-$ bin/zookeeper-server-start.sh config/zookeeper.properties</code></pre>
-
-        <p>Open another terminal session and run:</p>
-        <pre><code class="language-bash"># Start the Kafka broker service
-$ bin/kafka-server-start.sh config/server.properties</code></pre>
-
-        <p>Once all services have successfully launched, you will have a basic 
Kafka environment running and ready to use.</p>
-
     </div>
 
     <div class="quickstart-step">
@@ -320,9 +304,6 @@ wordCounts.toStream().to("output-topic", 
Produced.with(Serdes.String(), Serdes.L
             <li>
                 Stop the Kafka broker with <code>Ctrl-C</code>.
             </li>
-            <li>
-                Lastly, if the Kafka with ZooKeeper section was followed, stop 
the ZooKeeper server with <code>Ctrl-C</code>.
-            </li>
         </ol>
 
         <p>
@@ -330,7 +311,7 @@ wordCounts.toStream().to("output-topic", 
Produced.with(Serdes.String(), Serdes.L
             along the way, run the command:
         </p>
 
-        <pre><code class="language-bash">$ rm -rf /tmp/kafka-logs 
/tmp/zookeeper /tmp/kraft-combined-logs</code></pre>
+        <pre><code class="language-bash">$ rm -rf /tmp/kafka-logs 
/tmp/kraft-combined-logs</code></pre>
 
     </div>
 
diff --git a/docs/security.html b/docs/security.html
index f9e7a5ba69a..89c818c2e1e 100644
--- a/docs/security.html
+++ b/docs/security.html
@@ -17,7 +17,7 @@
 
 <script id="security-template" type="text/x-handlebars-template">
     <h3 class="anchor-heading"><a id="security_overview" 
class="anchor-link"></a><a href="#security_overview">7.1 Security 
Overview</a></h3>
-    In release 0.9.0.0, the Kafka community added a number of features that, 
used either separately or together, increases security in a Kafka cluster. The 
following security measures are currently supported:
+    The following security measures are currently supported:
     <ol>
         <li>Authentication of connections to brokers from clients (producers 
and consumers), other brokers and tools, using either SSL or SASL. Kafka 
supports the following SASL mechanisms:
             <ul>
@@ -26,7 +26,6 @@
                 <li>SASL/SCRAM-SHA-256 and SASL/SCRAM-SHA-512 - starting at 
version 0.10.2.0</li>
                 <li>SASL/OAUTHBEARER - starting at version 2.0</li>
             </ul></li>
-        <li>Authentication of connections from brokers to ZooKeeper</li>
         <li>Encryption of data transferred between brokers and clients, 
between brokers, or between brokers and tools using SSL (Note that there is a 
performance degradation when SSL is enabled, the magnitude of which depends on 
the CPU type and the JVM implementation.)</li>
         <li>Authorization of read / write operations by clients</li>
         <li>Authorization is pluggable and integration with external 
authorization services is supported</li>
@@ -94,15 +93,6 @@
       by the security protocol defined by 
<code>security.inter.broker.protocol</code>, which
       defaults to <code>PLAINTEXT</code>.</p>
     
-    <p>For legacy clusters which rely on Zookeeper to store cluster metadata, 
it is possible to
-      declare a separate listener to be used for metadata propagation from the 
active controller
-      to the brokers. This is defined by 
<code>control.plane.listener.name</code>. The active controller
-      will use this listener when it needs to push metadata updates to the 
brokers in the cluster.
-      The benefit of using a control plane listener is that it uses a separate 
processing thread,
-      which makes it less likely for application traffic to impede timely 
propagation of metadata changes
-      (such as partition leader and ISR updates). Note that the default value 
is null, which
-      means that the controller will use the same listener defined by 
<code>inter.broker.listener</code></p>
-    
     <p>In a KRaft cluster, a broker is any server which has the 
<code>broker</code> role enabled
       in <code>process.roles</code> and a controller is any server which has 
the <code>controller</code>
       role enabled. Listener configuration depends on the role. The listener 
defined by
@@ -542,19 +532,6 @@ $ bin/kafka-console-consumer.sh --bootstrap-server 
localhost:9093 --topic test -
                     SASL, the section name may be prefixed with the listener 
name in lower-case
                     followed by a period, e.g. 
<code>sasl_ssl.KafkaServer</code>.</p>
 
-                    <p><code>Client</code> section is used to authenticate a 
SASL connection with
-                    zookeeper. It also allows the brokers to set SASL ACL on 
zookeeper
-                    nodes which locks these nodes down so that only the 
brokers can
-                    modify it. It is necessary to have the same principal name 
across all
-                    brokers. If you want to use a section name other than 
Client, set the
-                    system property <code>zookeeper.sasl.clientconfig</code> 
to the appropriate
-                    name (<i>e.g.</i>, 
<code>-Dzookeeper.sasl.clientconfig=ZkClient</code>).</p>
-
-                    <p>ZooKeeper uses "zookeeper" as the service name by 
default. If you
-                    want to change this, set the system property
-                    <code>zookeeper.sasl.client.username</code> to the 
appropriate name
-                    (<i>e.g.</i>, 
<code>-Dzookeeper.sasl.client.username=zk</code>).</p>
-
                     <p>Brokers may also configure JAAS using the broker 
configuration property <code>sasl.jaas.config</code>.
                     The property name must be prefixed with the listener 
prefix including the SASL mechanism,
                     i.e. 
<code>listener.name.{listenerName}.{saslMechanism}.sasl.jaas.config</code>. 
Only one
@@ -576,7 +553,6 @@ 
listener.name.sasl_ssl.plain.sasl.jaas.config=org.apache.kafka.common.security.p
                         <li><code>{listenerName}.KafkaServer</code> section of 
static JAAS configuration</li>
                         <li><code>KafkaServer</code> section of static JAAS 
configuration</li>
                     </ul>
-                    Note that ZooKeeper JAAS config may only be configured 
using static JAAS configuration.
 
                     <p>See <a 
href="#security_sasl_kerberos_brokerconfig">GSSAPI (Kerberos)</a>,
                         <a href="#security_sasl_plain_brokerconfig">PLAIN</a>,
@@ -708,19 +684,10 @@ $ sudo /usr/sbin/kadmin.local -q "ktadd -k 
/etc/security/keytabs/{keytabname}.ke
     storeKey=true
     keyTab="/etc/security/keytabs/kafka_server.keytab"
     principal="kafka/[email protected]";
-};
-
-// Zookeeper client authentication
-Client {
-    com.sun.security.auth.module.Krb5LoginModule required
-    useKeyTab=true
-    storeKey=true
-    keyTab="/etc/security/keytabs/kafka_server.keytab"
-    principal="kafka/[email protected]";
 };</code></pre>
 
                             <code>KafkaServer</code> section in the JAAS file 
tells the broker which principal to use and the location of the keytab where 
this principal is stored. It
-                            allows the broker to login using the keytab 
specified in this section. See <a href="#security_jaas_broker">notes</a> for 
more details on Zookeeper SASL configuration.
+                            allows the broker to login using the keytab 
specified in this section.
                         </li>
                         <li>Pass the JAAS and optionally the krb5 file 
locations as JVM parameters to each Kafka broker (see <a 
href="https://docs.oracle.com/javase/8/docs/technotes/guides/security/jgss/tutorials/KerberosReq.html";>here</a>
 for more details):
                             <pre><code 
class="language-bash">-Djava.security.krb5.conf=/etc/kafka/krb5.conf
@@ -1127,7 +1094,7 @@ sasl.mechanism=OAUTHBEARER</code></pre></li>
                     <ul>
                         <li>The default implementation of SASL/OAUTHBEARER in 
Kafka creates and validates <a 
href="https://tools.ietf.org/html/rfc7515#appendix-A.5";>Unsecured JSON Web 
Tokens</a>.
                             This is suitable only for non-production use.</li>
-                        <li>OAUTHBEARER should be used in production 
enviromnments only with TLS-encryption to prevent interception of tokens.</li>
+                        <li>OAUTHBEARER should be used in production 
environments only with TLS-encryption to prevent interception of tokens.</li>
                         <li>The default unsecured SASL/OAUTHBEARER 
implementation may be overridden (and must be overridden in production 
environments)
                             using custom login and SASL Server callback 
handlers as described above.</li>
                         <li>For more details on OAuth 2 security 
considerations in general, refer to <a 
href="https://tools.ietf.org/html/rfc6749#section-10";>RFC 6749, Section 
10</a>.</li>
@@ -1195,14 +1162,12 @@ sasl.mechanism.inter.broker.protocol=GSSAPI (or one of 
the other enabled mechani
                 <li><h5 class="anchor-heading"><a 
id="security_token_management" class="anchor-link"></a><a 
href="#security_token_management">Token Management</a></h5>
                     <p> A secret is used to generate and verify delegation 
tokens. This is supplied using config
                         option <code>delegation.token.secret.key</code>. The 
same secret key must be configured across all the brokers.
-                        If using Kafka with KRaft the controllers must also be 
configured with the secret using the same config option.
+                        The controllers must also be configured with the 
secret using the same config option.
                         If the secret is not set or set to empty string, 
delegation token authentication and API operations will fail.</p>
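The secret configuration described above amounts to a single identical line in the server.properties of every broker and controller. A hypothetical fragment (the key name is from the text above; the value is a placeholder, not a real secret):

```text
delegation.token.secret.key=<same-secret-on-every-broker-and-controller>
```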
 
-                    <p>When using Kafka with Zookeeper, the token details are 
stored in Zookeeper and delegation tokens are suitable
-                        for use in Kafka installations where Zookeeper is on a 
private network. When using Kafka with KRaft, the token
-                        details are stored with the other metadata on the 
controller nodes and delegation tokens are suitable for use
-                        when the controllers are on a private network or when 
all commnications between brokers and controllers is
-                        encrypted.  Currently, this secret is stored as plain 
text in the server.properties config file.
+                    <p>The token details are stored with the other metadata on 
the controller nodes and delegation tokens are suitable for use
+                        when the controllers are on a private network or when 
all communication between brokers and controllers is
+                        encrypted. Currently, this secret is stored as plain 
text in the server.properties config file.
                         We intend to make these configurable in a future Kafka 
release.</p>
 
                     <p>A token has a current life, and a maximum renewable 
life. By default, tokens must be renewed once every 24 hours
@@ -1210,7 +1175,7 @@ sasl.mechanism.inter.broker.protocol=GSSAPI (or one of 
the other enabled mechani
                         and <code>delegation.token.max.lifetime.ms</code> 
config options.</p>
 
                     <p>Tokens can also be cancelled explicitly.  If a token is 
not renewed by the token’s expiration time or if token
-                        is beyond the max life time, it will be deleted from 
all broker caches as well as from zookeeper.</p>
+                        is beyond its maximum lifetime, it will be deleted from 
all broker caches.</p>
                 </li>
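The renewal rules above can be sketched numerically. This is a minimal illustration, not Kafka code: the 24-hour renewal interval is stated in the text, and the 7-day maximum lifetime is assumed here as the usual default for <code>delegation.token.max.lifetime.ms</code>.

```python
# Sketch of delegation-token lifetime arithmetic, mirroring the rules
# described above: a renewal extends the expiry, but never past the
# token's maximum lifetime; past that point the token is deleted.
DAY_MS = 24 * 60 * 60 * 1000
EXPIRY_TIME_MS = DAY_MS           # delegation.token.expiry.time.ms (24h, per the text)
MAX_LIFETIME_MS = 7 * DAY_MS      # delegation.token.max.lifetime.ms (assumed default)

def expiry_after_renewal(issue_ts_ms, renew_ts_ms):
    """Return the new expiry timestamp in ms, or None if the token is past max life."""
    max_ts = issue_ts_ms + MAX_LIFETIME_MS
    if renew_ts_ms > max_ts:
        return None  # beyond maximum lifetime: deleted from broker caches, not renewed
    return min(renew_ts_ms + EXPIRY_TIME_MS, max_ts)

# Renewing on day 6 is capped at the 7-day maximum lifetime.
print(expiry_after_renewal(0, 6 * DAY_MS))  # → 604800000 (the 7-day cap)
```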
 
                 <li><h5 class="anchor-heading"><a 
id="security_sasl_create_tokens" class="anchor-link"></a><a 
href="#security_sasl_create_tokens">Creating Delegation Tokens</a></h5>
@@ -1300,8 +1265,6 @@ sasl.mechanism.inter.broker.protocol=GSSAPI (or one of 
the other enabled mechani
     <br>All of this implies that Kafka must understand how to serialize and 
deserialize the client principal. The authentication framework allows for 
customized principals by overriding the <code>principal.builder.class</code> 
configuration.
     In order for customized principals to work with KRaft, the configured 
class must implement 
<code>org.apache.kafka.common.security.auth.KafkaPrincipalSerde</code> so that 
Kafka knows how to serialize and deserialize the principals.
     The default implementation 
<code>org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder</code>
 uses the Kafka RPC format defined in the source code: 
<code>clients/src/main/resources/common/message/DefaultPrincipalData.json</code>.
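To illustrate why a principal serde is needed, here is a conceptual round-trip of a principal (type plus name) through bytes, as a broker must do when forwarding a request so the receiving node can reconstruct the caller's identity. This is an illustration only, not the Kafka API: the real mechanism is the Java <code>KafkaPrincipalSerde</code> interface and the RPC schema in <code>DefaultPrincipalData.json</code>; JSON is used here purely for readability.

```python
import json

# Conceptual sketch only: serialize a principal's type and name to bytes
# and recover them unchanged on the other side. Kafka's actual wire format
# is the RPC schema in DefaultPrincipalData.json, not JSON.
def serialize_principal(principal_type, name):
    return json.dumps({"principalType": principal_type, "name": name}).encode("utf-8")

def deserialize_principal(data):
    obj = json.loads(data.decode("utf-8"))
    return obj["principalType"], obj["name"]

raw = serialize_principal("User", "alice")
print(deserialize_principal(raw))  # → ('User', 'alice')
```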
-
-    For more detail about request forwarding in KRaft, see <a 
href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-590%3A+Redirect+Zookeeper+Mutation+Protocols+to+The+Controller";>KIP-590</a>
     
     <h5 class="anchor-heading"><a id="security_authz_ssl" 
class="anchor-link"></a><a href="#security_authz_ssl">Customizing SSL User 
Name</a></h5>
 
@@ -1561,16 +1524,6 @@ 
RULE:[n:string](regexp)s/pattern/replacement/g/U</code></pre>
             <pre><code class="language-bash">$ bin/kafka-acls.sh 
--bootstrap-server localhost:9092 --add --allow-principal User:Bob --consumer 
--topic Test-topic --group Group-1 </code></pre>
             Note that for consumer option we must also specify the consumer 
group.
             In order to remove a principal from producer or consumer role we 
just need to pass --remove option. </li>
-
-        <li><b>Admin API based acl management</b><br>
-            Users having Alter permission on ClusterResource can use Admin API 
for ACL management. kafka-acls.sh script supports AdminClient API to manage 
ACLs without interacting with zookeeper/authorizer directly.
-            All the above examples can be executed by using 
<b>--bootstrap-server</b> option. For example:
-
-            <pre><code class="language-bash">$ bin/kafka-acls.sh 
--bootstrap-server localhost:9092 --command-config 
/tmp/adminclient-configs.conf --add --allow-principal User:Bob --producer 
--topic Test-topic
-$ bin/kafka-acls.sh --bootstrap-server localhost:9092 --command-config 
/tmp/adminclient-configs.conf --add --allow-principal User:Bob --consumer 
--topic Test-topic --group Group-1
-$ bin/kafka-acls.sh --bootstrap-server localhost:9092 --command-config 
/tmp/adminclient-configs.conf --list --topic Test-topic
-$ bin/kafka-acls.sh --bootstrap-server localhost:9092 --command-config 
/tmp/adminclient-configs.conf --add --allow-principal User:tokenRequester 
--operation CreateTokens --user-principal "owner1"</code></pre></li>
-
     </ul>
 
     <h4 class="anchor-heading"><a id="security_authz_primitives" 
class="anchor-link"></a><a href="#security_authz_primitives">Authorization 
Primitives</a></h4>
@@ -1633,7 +1586,7 @@ $ bin/kafka-acls.sh --bootstrap-server localhost:9092 
--command-config /tmp/admi
             <td>PRODUCE (0)</td>
             <td>Write</td>
             <td>TransactionalId</td>
-            <td>An transactional producer which has its transactional.id set 
requires this privilege.</td>
+            <td>A transactional producer which has its transactional.id set 
requires this privilege.</td>
         </tr>
         <tr>
             <td>PRODUCE (0)</td>
@@ -2373,157 +2326,4 @@ security.inter.broker.protocol=SSL</code></pre>
     <pre><code 
class="language-text">listeners=SSL://broker1:9092,SASL_SSL://broker1:9093
 security.inter.broker.protocol=SSL</code></pre>
 
-    ZooKeeper can be secured independently of the Kafka cluster. The steps for 
doing this are covered in section <a href="#zk_authz_migration">7.7.2</a>.
-
-
-    <h3 class="anchor-heading"><a id="zk_authz" class="anchor-link"></a><a 
href="#zk_authz">7.7 ZooKeeper Authentication</a></h3>
-    ZooKeeper supports mutual TLS (mTLS) authentication beginning with the 
3.5.x versions.
-    Kafka supports authenticating to ZooKeeper with SASL and mTLS -- either 
individually or both together --
-    beginning with version 2.5. See
-    <a 
href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-515%3A+Enable+ZK+client+to+use+the+new+TLS+supported+authentication";>KIP-515:
 Enable ZK client to use the new TLS supported authentication</a>
-    for more details.
-    <p>When using mTLS alone, every broker and any CLI tools (such as the <a 
href="#zk_authz_migration">ZooKeeper Security Migration Tool</a>)
-        should identify itself with the same Distinguished Name (DN) because 
it is the DN that is ACL'ed.
-        This can be changed as described below, but it involves writing and 
deploying a custom ZooKeeper authentication provider.
-        Generally each certificate should have the same DN but a different 
Subject Alternative Name (SAN)
-        so that hostname verification of the brokers and any CLI tools by 
ZooKeeper will succeed.
-    </p>
-    <p>
-        When using SASL authentication to ZooKeeper together with mTLS, both 
the SASL identity and
-        either the DN that created the znode (i.e. the creating broker's 
certificate)
-        or the DN of the Security Migration Tool (if migration was performed 
after the znode was created)
-        will be ACL'ed, and all brokers and CLI tools will be authorized even 
if they all use different DNs
-        because they will all use the same ACL'ed SASL identity.
-        It is only when  using mTLS authentication alone that all the DNs must 
match (and SANs become critical --
-        again, in the absence of writing and deploying a custom ZooKeeper 
authentication provider as described below).
-    </p>
-    <p>
-        Use the broker properties file to set TLS configs for brokers as 
described below.
-    </p>
-    <p>
-        Use the <code>--zk-tls-config-file &lt;file&gt;</code> option to set 
TLS configs in the Zookeeper Security Migration Tool.
-        The <code>kafka-acls.sh</code> and <code>kafka-configs.sh</code> CLI 
tools also support the <code>--zk-tls-config-file &lt;file&gt;</code> option.
-    </p>
-    <p>
-        Use the <code>-zk-tls-config-file &lt;file&gt;</code> option (note the 
single-dash rather than double-dash)
-        to set TLS configs for the <code>zookeeper-shell.sh</code> CLI tool.
-    </p>
-    <h4 class="anchor-heading"><a id="zk_authz_new" class="anchor-link"></a><a 
href="#zk_authz_new">7.7.1 New clusters</a></h4>
-    <h5 class="anchor-heading"><a id="zk_authz_new_sasl" 
class="anchor-link"></a><a href="#zk_authz_new_sasl">7.7.1.1 ZooKeeper SASL 
Authentication</a></h5>
-    To enable ZooKeeper SASL authentication on brokers, there are two 
necessary steps:
-    <ol>
-        <li> Create a JAAS login file and set the appropriate system property 
to point to it as described above</li>
-        <li> Set the configuration property <code>zookeeper.set.acl</code> in 
each broker to true</li>
-    </ol>
-
-    The metadata stored in ZooKeeper for the Kafka cluster is world-readable, 
but can only be modified by the brokers. The rationale behind this decision is 
that the data stored in ZooKeeper is not sensitive, but inappropriate 
manipulation of that data can cause cluster disruption. We also recommend 
limiting the access to ZooKeeper via network segmentation (only brokers and 
some admin tools need access to ZooKeeper).
-
-    <h5 class="anchor-heading"><a id="zk_authz_new_mtls" 
class="anchor-link"></a><a href="#zk_authz_new_mtls">7.7.1.2 ZooKeeper Mutual 
TLS Authentication</a></h5>
-    ZooKeeper mTLS authentication can be enabled with or without SASL 
authentication.  As mentioned above,
-    when using mTLS alone, every broker and any CLI tools (such as the <a 
href="#zk_authz_migration">ZooKeeper Security Migration Tool</a>)
-    must generally identify itself with the same Distinguished Name (DN) 
because it is the DN that is ACL'ed, which means
-    each certificate should have an appropriate Subject Alternative Name (SAN) 
so that
-    hostname verification of the brokers and any CLI tool by ZooKeeper will 
succeed.
-    <p>
-        It is possible to use something other than the DN for the identity of 
mTLS clients by writing a class that
-        extends 
<code>org.apache.zookeeper.server.auth.X509AuthenticationProvider</code> and 
overrides the method
-        <code>protected String getClientId(X509Certificate clientCert)</code>.
-        Choose a scheme name and set <code>authProvider.[scheme]</code> in 
ZooKeeper to be the fully-qualified class name
-        of the custom implementation; then set 
<code>ssl.authProvider=[scheme]</code> to use it.
-    </p>
-    Here is a sample (partial) ZooKeeper configuration for enabling TLS 
authentication.
-    These configurations are described in the
-    <a 
href="https://zookeeper.apache.org/doc/r3.5.7/zookeeperAdmin.html#sc_authOptions";>ZooKeeper
 Admin Guide</a>.
-    <pre><code class="language-text">secureClientPort=2182
-serverCnxnFactory=org.apache.zookeeper.server.NettyServerCnxnFactory
-authProvider.x509=org.apache.zookeeper.server.auth.X509AuthenticationProvider
-ssl.keyStore.location=/path/to/zk/keystore.jks
-ssl.keyStore.password=zk-ks-passwd
-ssl.trustStore.location=/path/to/zk/truststore.jks
-ssl.trustStore.password=zk-ts-passwd</code></pre>
-    <strong>IMPORTANT</strong>: ZooKeeper does not support setting the key 
password in the ZooKeeper server keystore
-    to a value different from the keystore password itself.
-    Be sure to set the key password to be the same as the keystore password.
-
-    <p>Here is a sample (partial) Kafka Broker configuration for connecting to 
ZooKeeper with mTLS authentication.
-        These configurations are described above in <a 
href="#brokerconfigs">Broker Configs</a>.
-    </p>
-    <pre><code class="language-text"># connect to the ZooKeeper port 
configured for TLS
-zookeeper.connect=zk1:2182,zk2:2182,zk3:2182
-# required to use TLS to ZooKeeper (default is false)
-zookeeper.ssl.client.enable=true
-# required to use TLS to ZooKeeper
-zookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty
-# define key/trust stores to use TLS to ZooKeeper; ignored unless 
zookeeper.ssl.client.enable=true
-zookeeper.ssl.keystore.location=/path/to/kafka/keystore.jks
-zookeeper.ssl.keystore.password=kafka-ks-passwd
-zookeeper.ssl.truststore.location=/path/to/kafka/truststore.jks
-zookeeper.ssl.truststore.password=kafka-ts-passwd
-# tell broker to create ACLs on znodes
-zookeeper.set.acl=true</code></pre>
-    <strong>IMPORTANT</strong>: ZooKeeper does not support setting the key 
password in the ZooKeeper client (i.e. broker) keystore
-    to a value different from the keystore password itself.
-    Be sure to set the key password to be the same as the keystore password.
-
-    <h4 class="anchor-heading"><a id="zk_authz_migration" 
class="anchor-link"></a><a href="#zk_authz_migration">7.7.2 Migrating 
clusters</a></h4>
-    If you are running a version of Kafka that does not support security or 
simply with security disabled, and you want to make the cluster secure, then 
you need to execute the following steps to enable ZooKeeper authentication with 
minimal disruption to your operations:
-    <ol>
-        <li>Enable SASL and/or mTLS authentication on ZooKeeper.  If enabling 
mTLS, you would now have both a non-TLS port and a TLS port, like this:
-            <pre><code class="language-text">clientPort=2181
-secureClientPort=2182
-serverCnxnFactory=org.apache.zookeeper.server.NettyServerCnxnFactory
-authProvider.x509=org.apache.zookeeper.server.auth.X509AuthenticationProvider
-ssl.keyStore.location=/path/to/zk/keystore.jks
-ssl.keyStore.password=zk-ks-passwd
-ssl.trustStore.location=/path/to/zk/truststore.jks
-ssl.trustStore.password=zk-ts-passwd</code></pre>
-        </li>
-        <li>Perform a rolling restart of brokers setting the JAAS login file 
and/or defining ZooKeeper mutual TLS configurations (including connecting to 
the TLS-enabled ZooKeeper port) as required, which enables brokers to 
authenticate to ZooKeeper. At the end of the rolling restart, brokers are able 
to manipulate znodes with strict ACLs, but they will not create znodes with 
those ACLs</li>
-        <li>If you enabled mTLS, disable the non-TLS port in ZooKeeper</li>
-        <li>Perform a second rolling restart of brokers, this time setting the 
configuration parameter <code>zookeeper.set.acl</code> to true, which enables 
the use of secure ACLs when creating znodes</li>
-        <li>Execute the ZkSecurityMigrator tool. To execute the tool, there is 
this script: <code>bin/zookeeper-security-migration.sh</code> with 
<code>zookeeper.acl</code> set to secure. This tool traverses the corresponding 
sub-trees changing the ACLs of the znodes. Use the <code>--zk-tls-config-file 
&lt;file&gt;</code> option if you enable mTLS.</li>
-    </ol>
-    <p>It is also possible to turn off authentication in a secure cluster. To 
do it, follow these steps:</p>
-    <ol>
-        <li>Perform a rolling restart of brokers setting the JAAS login file 
and/or defining ZooKeeper mutual TLS configurations, which enables brokers to 
authenticate, but setting <code>zookeeper.set.acl</code> to false. At the end 
of the rolling restart, brokers stop creating znodes with secure ACLs, but are 
still able to authenticate and manipulate all znodes</li>
-        <li>Execute the ZkSecurityMigrator tool. To execute the tool, run this 
script <code>bin/zookeeper-security-migration.sh</code> with 
<code>zookeeper.acl</code> set to unsecure. This tool traverses the 
corresponding sub-trees changing the ACLs of the znodes. Use the 
<code>--zk-tls-config-file &lt;file&gt;</code> option if you need to set TLS 
configuration.</li>
-        <li>If you are disabling mTLS, enable the non-TLS port in 
ZooKeeper</li>
-        <li>Perform a second rolling restart of brokers, this time omitting 
the system property that sets the JAAS login file and/or removing ZooKeeper 
mutual TLS configuration (including connecting to the non-TLS-enabled ZooKeeper 
port) as required</li>
-        <li>If you are disabling mTLS, disable the TLS port in ZooKeeper</li>
-    </ol>
-    Here is an example of how to run the migration tool:
-    <pre><code class="language-bash">$ bin/zookeeper-security-migration.sh 
--zookeeper.acl=secure --zookeeper.connect=localhost:2181</code></pre>
-    <p>Run this to see the full list of parameters:</p>
-    <pre><code class="language-bash">$ bin/zookeeper-security-migration.sh 
--help</code></pre>
-    <h4 class="anchor-heading"><a id="zk_authz_ensemble" 
class="anchor-link"></a><a href="#zk_authz_ensemble">7.7.3 Migrating the 
ZooKeeper ensemble</a></h4>
-    It is also necessary to enable SASL and/or mTLS authentication on the 
ZooKeeper ensemble. To do it, we need to perform a rolling restart of the 
server and set a few properties. See above for mTLS information.  Please refer 
to the ZooKeeper documentation for more detail:
-    <ol>
-        <li><a 
href="https://zookeeper.apache.org/doc/r3.5.7/zookeeperProgrammers.html#sc_ZooKeeperAccessControl";>Apache
 ZooKeeper documentation</a></li>
-        <li><a 
href="https://cwiki.apache.org/confluence/display/ZOOKEEPER/Zookeeper+and+SASL";>Apache
 ZooKeeper wiki</a></li>
-    </ol>
-    <h4 class="anchor-heading"><a id="zk_authz_quorum" 
class="anchor-link"></a><a href="#zk_authz_quorum">7.7.4 ZooKeeper Quorum 
Mutual TLS Authentication</a></h4>
-    It is possible to enable mTLS authentication between the ZooKeeper servers 
themselves.
-    Please refer to the <a 
href="https://zookeeper.apache.org/doc/r3.5.7/zookeeperAdmin.html#Quorum+TLS";>ZooKeeper
 documentation</a> for more detail.
-
-    <h3 class="anchor-heading"><a id="zk_encryption" 
class="anchor-link"></a><a href="#zk_encryption">7.8 ZooKeeper 
Encryption</a></h3>
-    ZooKeeper connections that use mutual TLS are encrypted.
-    Beginning with ZooKeeper version 3.5.7 (the version shipped with Kafka 
version 2.5) ZooKeeper supports a sever-side config
-    <code>ssl.clientAuth</code> (case-insensitively: 
<code>want</code>/<code>need</code>/<code>none</code> are the valid options, 
the default is <code>need</code>),
-    and setting this value to <code>none</code> in ZooKeeper allows clients to 
connect via a TLS-encrypted connection
-    without presenting their own certificate.  Here is a sample (partial) 
Kafka Broker configuration for connecting to ZooKeeper with just TLS encryption.
-    These configurations are described above in <a 
href="#brokerconfigs">Broker Configs</a>.
-    <pre><code class="language-text"># connect to the ZooKeeper port 
configured for TLS
-zookeeper.connect=zk1:2182,zk2:2182,zk3:2182
-# required to use TLS to ZooKeeper (default is false)
-zookeeper.ssl.client.enable=true
-# required to use TLS to ZooKeeper
-zookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty
-# define trust stores to use TLS to ZooKeeper; ignored unless 
zookeeper.ssl.client.enable=true
-# no need to set keystore information assuming ssl.clientAuth=none on ZooKeeper
-zookeeper.ssl.truststore.location=/path/to/kafka/truststore.jks
-zookeeper.ssl.truststore.password=kafka-ts-passwd
-# tell broker to create ACLs on znodes (if using SASL authentication, 
otherwise do not set this)
-zookeeper.set.acl=true</code></pre>
-</script>
-
 <div class="p-security"></div>
diff --git a/docs/streams/quickstart.html b/docs/streams/quickstart.html
index b1b55e93fca..2895e4673f3 100644
--- a/docs/streams/quickstart.html
+++ b/docs/streams/quickstart.html
@@ -33,7 +33,7 @@
         </div>
     </div>
 <p>
-  This tutorial assumes you are starting fresh and have no existing Kafka or 
ZooKeeper data. However, if you have already started Kafka, feel free to skip 
the first two steps.
+  This tutorial assumes you are starting fresh and have no existing Kafka 
data. However, if you have already started Kafka, feel free to skip the first 
two steps.
 </p>
 
   <p>
@@ -96,30 +96,6 @@ $ cd kafka_{{scalaVersion}}-{{fullDotVersion}}</code></pre>
 
 <h4><a id="quickstart_streams_startserver" 
href="#quickstart_streams_startserver">Step 2: Start the Kafka server</a></h4>
 
-<p>
-  Apache Kafka can be started using ZooKeeper or KRaft. To get started with 
either configuration follow one of the sections below but not both.
-</p>
-
-<h5>
-  Kafka with ZooKeeper
-</h5>
-
-<p>
-  Run the following commands in order to start all services in the correct 
order:
-</p>
-
-<pre><code class="language-bash">$ bin/zookeeper-server-start.sh 
config/zookeeper.properties</code></pre>
-
-<p>
-  Open another terminal session and run:
-</p>
-
-<pre><code class="language-bash">$ bin/kafka-server-start.sh 
config/server.properties</code></pre>
-
-<h5>
-  Kafka with KRaft
-</h5>
-
 <p>
   Generate a Cluster UUID
 </p>
@@ -325,7 +301,7 @@ Looking beyond the scope of this concrete example, what 
Kafka Streams is doing h
 
 <h4><a id="quickstart_streams_stop" href="#quickstart_streams_stop">Step 6: 
Teardown the application</a></h4>
 
-<p>You can now stop the console consumer, the console producer, the Wordcount 
application, the Kafka broker and the ZooKeeper server (if one was started) in 
order via <b>Ctrl-C</b>.</p>
+<p>You can now stop the console consumer, the console producer, the Wordcount 
application, and the Kafka broker in order via <b>Ctrl-C</b>.</p>
 
  <div class="pagination">
         <a href="/{{version}}/documentation/streams" class="pagination__btn 
pagination__btn__prev">Previous</a>
diff --git a/docs/toc.html b/docs/toc.html
index af670a8de7e..1751962118f 100644
--- a/docs/toc.html
+++ b/docs/toc.html
@@ -131,14 +131,8 @@
                         <li><a href="#multitenancy-more">Further 
considerations</a>
                     </ul>
                 
-                <li><a href="#config">6.5 Important Configs</a>
-                    <ul>
-                        <li><a href="#clientconfig">Important Client 
Configs</a>
-                        <li><a href="#prodconfig">A Production Server 
Configs</a>
-                    </ul>
-                
-                <li><a href="#java">6.6 Java Version</a>
-                <li><a href="#hwandos">6.7 Hardware and OS</a>
+                <li><a href="#java">6.5 Java Version</a>
+                <li><a href="#hwandos">6.6 Hardware and OS</a>
                     <ul>
                         <li><a href="#os">OS</a>
                         <li><a href="#diskandfs">Disks and Filesystems</a>
@@ -148,7 +142,7 @@
                         <li><a href="#replace_disk">Replace KRaft Controller 
Disk</a>
                     </ul>
                 
-                <li><a href="#monitoring">6.8 Monitoring</a>
+                <li><a href="#monitoring">6.7 Monitoring</a>
                     <ul>
                         <li><a href="#remote_jmx">Security Considerations for 
Remote Monitoring using JMX</a>
                         <li><a href="#tiered_storage_monitoring">Tiered 
Storage Monitoring</a>
@@ -162,14 +156,7 @@
                         <li><a href="#others_monitoring">Others</a>
                     </ul>
                 
-                <li><a href="#zk">6.9 ZooKeeper</a>
-                    <ul>
-                        <li><a href="#zkversion">Stable Version</a>
-                        <li><a href="#zk_depr">ZooKeeper Deprecation</a>
-                        <li><a href="#zkops">Operationalization</a>
-                    </ul>
-                
-                <li><a href="#kraft">6.10 KRaft</a>
+                <li><a href="#kraft">6.8 KRaft</a>
                     <ul>
                         <li><a href="#kraft_config">Configuration</a>
                         <li><a href="#kraft_storage">Storage Tool</a>
@@ -179,7 +166,7 @@
                         <li><a href="#kraft_zk_migration">ZooKeeper to KRaft 
Migration</a>
                     </ul>
                 
-                <li><a href="#tiered_storage">6.11 Tiered Storage</a>
+                <li><a href="#tiered_storage">6.9 Tiered Storage</a>
                     <ul>
                         <li><a href="#tiered_storage_overview">Tiered Storage 
Overview</a>
                         <li><a href="#tiered_storage_config">Configuration</a>
@@ -197,20 +184,6 @@
                 <li><a href="#security_sasl">7.4 Authentication using SASL</a>
                 <li><a href="#security_authz">7.5 Authorization and ACLs</a>
                 <li><a href="#security_rolling_upgrade">7.6 Incorporating 
Security Features in a Running Cluster</a>
-                <li><a href="#zk_authz">7.7 ZooKeeper Authentication</a>
-                    <ul>
-                        <li><a href="#zk_authz_new">New Clusters</a>
-                            <ul>
-                                <li><a href="#zk_authz_new_sasl">ZooKeeper 
SASL Authentication</a>
-                                <li><a href="#zk_authz_new_mtls">ZooKeeper 
Mutual TLS Authentication</a>
-                            </ul>
-                        
-                        <li><a href="#zk_authz_migration">Migrating 
Clusters</a>
-                        <li><a href="#zk_authz_ensemble">Migrating the 
ZooKeeper Ensemble</a>
-                        <li><a href="#zk_authz_quorum">ZooKeeper Quorum Mutual 
TLS Authentication</a>
-                    </ul>
-                
-                <li><a href="#zk_encryption">7.8 ZooKeeper Encryption</a>
             </ul>
         
         <li><a href="#connect">8. Kafka Connect</a>

