This is an automated email from the ASF dual-hosted git repository.

jsancio pushed a commit to branch 3.3
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/3.3 by this push:
     new 408a17ad1f KAFKA-14188; Getting started for Kafka with KRaft (#12604)
408a17ad1f is described below

commit 408a17ad1f2fa468638509d159127547e36aa439
Author: José Armando García Sancio <jsan...@users.noreply.github.com>
AuthorDate: Thu Sep 8 16:22:09 2022 -0700

    KAFKA-14188; Getting started for Kafka with KRaft (#12604)
    
    Update the quickstart HTML pages for Kafka and Kafka Streams to include how to quickly start and
    experiment with a Kafka cluster using KRaft in addition to ZooKeeper.
    
    Reviewers: Colin Patrick McCabe <cmcc...@apache.org>, Chase Thomas <forl...@users.noreply.github.com>, Luke Chen <show...@gmail.com>
---
 docs/quickstart.html         | 39 +++++++++++++++++++++++++++++++---
 docs/streams/quickstart.html | 50 +++++++++++++++++++++++++++++++++-----------
 2 files changed, 74 insertions(+), 15 deletions(-)

diff --git a/docs/quickstart.html b/docs/quickstart.html
index a86ec56008..a382ca8c9e 100644
--- a/docs/quickstart.html
+++ b/docs/quickstart.html
@@ -46,12 +46,19 @@ $ cd kafka_{{scalaVersion}}-{{fullDotVersion}}</code></pre>
             NOTE: Your local environment must have Java 8+ installed.
         </p>
 
+        <p>
+            Apache Kafka can be started using ZooKeeper or KRaft. To get started with either configuration, follow one of the sections below but not both.
+        </p>
+
+        <h5>
+            Kafka with ZooKeeper
+        </h5>
+
         <p>
             Run the following commands in order to start all services in the correct order:
         </p>
 
         <pre class="line-numbers"><code class="language-bash"># Start the ZooKeeper service
-# Note: Soon, ZooKeeper will no longer be required by Apache Kafka.
 $ bin/zookeeper-server-start.sh config/zookeeper.properties</code></pre>
 
         <p>
@@ -64,6 +71,32 @@ $ bin/kafka-server-start.sh config/server.properties</code></pre>
         <p>
             Once all services have successfully launched, you will have a basic Kafka environment running and ready to use.
         </p>
+
+        <h5>
+            Kafka with KRaft
+        </h5>
+
+        <p>
+            Generate a Cluster UUID
+        </p>
+
+        <pre class="line-numbers"><code class="language-bash">$ KAFKA_CLUSTER_ID="$(bin/kafka-storage.sh random-uuid)"</code></pre>
+
+        <p>
+            Format Log Directories
+        </p>
+
+        <pre class="line-numbers"><code class="language-bash">$ bin/kafka-storage.sh format -t $KAFKA_CLUSTER_ID -c config/kraft/server.properties</code></pre>
+
+        <p>
+            Start the Kafka Server
+        </p>
+
+        <pre class="line-numbers"><code class="language-bash">$ bin/kafka-server-start.sh config/kraft/server.properties</code></pre>
+
+        <p>
+            Once the Kafka server has successfully launched, you will have a basic Kafka environment running and ready to use.
+        </p>
     </div>
 
     <div class="quickstart-step">
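
As a quick sanity check once the KRaft broker from the steps above is running, listing topics should return without error (a minimal sketch, assuming the default listener on localhost:9092 from config/kraft/server.properties):

$ bin/kafka-topics.sh --bootstrap-server localhost:9092 --list
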
@@ -303,7 +336,7 @@ wordCounts.toStream().to("output-topic", Produced.with(Serdes.String(), Serdes.L
                 Stop the Kafka broker with <code>Ctrl-C</code>.
             </li>
             <li>
-                Lastly, stop the ZooKeeper server with <code>Ctrl-C</code>.
+                Lastly, if the Kafka with ZooKeeper section was followed, stop the ZooKeeper server with <code>Ctrl-C</code>.
             </li>
         </ol>
 
@@ -312,7 +345,7 @@ wordCounts.toStream().to("output-topic", Produced.with(Serdes.String(), Serdes.L
             along the way, run the command:
         </p>
 
-        <pre class="line-numbers"><code class="language-bash">$ rm -rf /tmp/kafka-logs /tmp/zookeeper</code></pre>
+        <pre class="line-numbers"><code class="language-bash">$ rm -rf /tmp/kafka-logs /tmp/zookeeper /tmp/kraft-combined-logs</code></pre>
 
     </div>
 
diff --git a/docs/streams/quickstart.html b/docs/streams/quickstart.html
index 2cc48ef089..5bb106d77a 100644
--- a/docs/streams/quickstart.html
+++ b/docs/streams/quickstart.html
@@ -33,8 +33,7 @@
         </div>
     </div>
 <p>
-  This tutorial assumes you are starting fresh and have no existing Kafka or ZooKeeper data. However, if you have already started Kafka and
-  ZooKeeper, feel free to skip the first two steps.
+  This tutorial assumes you are starting fresh and have no existing Kafka or ZooKeeper data. However, if you have already started Kafka, feel free to skip the first two steps.
 </p>
 
   <p>
@@ -98,19 +97,46 @@ Note that there are multiple downloadable Scala versions and we choose to use th
 <h4><a id="quickstart_streams_startserver" href="#quickstart_streams_startserver">Step 2: Start the Kafka server</a></h4>
 
 <p>
-Kafka uses <a href="https://zookeeper.apache.org/">ZooKeeper</a> so you need to first start a ZooKeeper server if you don't already have one. You can use the convenience script packaged with kafka to get a quick-and-dirty single-node ZooKeeper instance.
+  Apache Kafka can be started using ZooKeeper or KRaft. To get started with either configuration, follow one of the sections below but not both.
 </p>
 
-<pre class="line-numbers"><code class="language-bash">&gt; bin/zookeeper-server-start.sh config/zookeeper.properties
-[2013-04-22 15:01:37,495] INFO Reading configuration from: config/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
-...</code></pre>
+<h5>
+  Kafka with ZooKeeper
+</h5>
 
-<p>Now start the Kafka server:</p>
-<pre class="line-numbers"><code class="language-bash">&gt; bin/kafka-server-start.sh config/server.properties
-[2013-04-22 15:01:47,028] INFO Verifying properties (kafka.utils.VerifiableProperties)
-[2013-04-22 15:01:47,051] INFO Property socket.send.buffer.bytes is overridden to 1048576 (kafka.utils.VerifiableProperties)
-...</code></pre>
+<p>
+  Run the following commands in order to start all services in the correct order:
+</p>
+
+<pre class="line-numbers"><code class="language-bash">&gt; bin/zookeeper-server-start.sh config/zookeeper.properties</code></pre>
+
+<p>
+  Open another terminal session and run:
+</p>
+
+<pre class="line-numbers"><code class="language-bash">&gt; bin/kafka-server-start.sh config/server.properties</code></pre>
+
+<h5>
+  Kafka with KRaft
+</h5>
+
+<p>
+  Generate a Cluster UUID
+</p>
+
+<pre class="line-numbers"><code class="language-bash">&gt; KAFKA_CLUSTER_ID="$(bin/kafka-storage.sh random-uuid)"</code></pre>
+
+<p>
+  Format Log Directories
+</p>
+
+<pre class="line-numbers"><code class="language-bash">&gt; bin/kafka-storage.sh format -t $KAFKA_CLUSTER_ID -c config/kraft/server.properties</code></pre>
+
+<p>
+  Start the Kafka Server
+</p>
 
+<pre class="line-numbers"><code class="language-bash">&gt; bin/kafka-server-start.sh config/kraft/server.properties</code></pre>
 
 <h4><a id="quickstart_streams_prepare" href="#quickstart_streams_prepare">Step 3: Prepare input topic and start Kafka producer</a></h4>
 
@@ -304,7 +330,7 @@ Looking beyond the scope of this concrete example, what Kafka Streams is doing h
 
 <h4><a id="quickstart_streams_stop" href="#quickstart_streams_stop">Step 6: Teardown the application</a></h4>
 
-<p>You can now stop the console consumer, the console producer, the Wordcount application, the Kafka broker and the ZooKeeper server in order via <b>Ctrl-C</b>.</p>
+<p>You can now stop the console consumer, the console producer, the Wordcount application, the Kafka broker and the ZooKeeper server (if one was started) in order via <b>Ctrl-C</b>.</p>
 
  <div class="pagination">
        <a href="/{{version}}/documentation/streams" class="pagination__btn pagination__btn__prev">Previous</a>
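
For the Format Log Directories step shown above, one way to confirm that kafka-storage.sh wrote the cluster metadata (a quick check, assuming the default log directory /tmp/kraft-combined-logs from config/kraft/server.properties) is to look at the generated meta.properties file:

$ cat /tmp/kraft-combined-logs/meta.properties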
