This is an automated email from the ASF dual-hosted git repository.
bbejeck pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/kafka-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new fbfdc49 MINOR: remove quickstart-*.html (#313)
fbfdc49 is described below
commit fbfdc4985f97693ee0ea6da3ccc02d3490811497
Author: Matthias J. Sax <[email protected]>
AuthorDate: Thu Dec 10 13:35:10 2020 -0800
MINOR: remove quickstart-*.html (#313)
Both quickstart-docker.html and quickstart-zookeeper.html were added during
the web-page redesign but were never finished. quickstart-docker.html is
dangling and is removed in this PR. quickstart-zookeeper.html is used as the
actual quickstart, so we rename it back to quickstart.html.
Cf apache/kafka#9721 for AK repo.
Also apache/kafka#9722
There are some differences between the old quickstart.html and
quickstart-zookeeper.html, and I am not sure which one is actually the
correct quickstart.
Call for review @scott-confluent @guozhangwang @miguno
\cc @mimaison @bbejeck (to make sure we don't re-publish it during the release)
Reviewers: Guozhang Wang <[email protected]>, Scott Predmore
<[email protected]>, Bill Bejeck <[email protected]>
---
25/documentation.html | 2 +-
25/quickstart-docker.html | 204 ----------------------
25/{quickstart-zookeeper.html => quickstart.html} | 0
26/documentation.html | 2 +-
26/quickstart-docker.html | 204 ----------------------
26/{quickstart-zookeeper.html => quickstart.html} | 0
27/documentation.html | 2 +-
27/quickstart-docker.html | 204 ----------------------
27/{quickstart-zookeeper.html => quickstart.html} | 0
quickstart-docker.html | 35 ----
quickstart.html | 2 +-
11 files changed, 4 insertions(+), 651 deletions(-)
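Context for the documentation.html hunks: each page pulls the quickstart in via an Apache server-side include, so only the include target needs to change. A toy illustration of how such a directive is expanded follows; awk stands in for the web server here, and only the directive syntax is taken from the diff:

```shell
# Toy expansion of an SSI directive like the one this commit retargets.
# In production the web server performs this substitution, not a script.
set -e
tmp=$(mktemp -d); cd "$tmp"
printf '<p>quickstart body</p>\n' > quickstart.html
cat > documentation.html <<'EOF'
<h3 class="anchor-heading"><a href="#quickstart">1.3 Quick Start</a></h3>
<!--#include virtual="quickstart.html" -->
EOF
# Replace the include line with the contents of the referenced file.
awk '/<!--#include virtual="quickstart.html" -->/ {
         while ((getline line < "quickstart.html") > 0) print line; next
     } { print }' documentation.html > resolved.html
cat resolved.html
```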
diff --git a/25/documentation.html b/25/documentation.html
index f525cd5..43b19b0 100644
--- a/25/documentation.html
+++ b/25/documentation.html
@@ -62,7 +62,7 @@
<a href="#quickstart">1.3 Quick Start
</a></h3>
- <!--#include virtual="quickstart-zookeeper.html" -->
+ <!--#include virtual="quickstart.html" -->
<h3 class="anchor-heading">
<a class="anchor-link" id="ecosystem" href="#ecosystem"></a>
diff --git a/25/quickstart-docker.html b/25/quickstart-docker.html
deleted file mode 100644
index d8816ba..0000000
--- a/25/quickstart-docker.html
+++ /dev/null
@@ -1,204 +0,0 @@
-<!--
- Licensed to the Apache Software Foundation (ASF) under one or more
- contributor license agreements. See the NOTICE file distributed with
- this work for additional information regarding copyright ownership.
- The ASF licenses this file to You under the Apache License, Version 2.0
- (the "License"); you may not use this file except in compliance with
- the License. You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
--->
-
-<script><!--#include virtual="js/templateData.js" --></script>
-
-<script id="quickstart-docker-template" type="text/x-handlebars-template">
-<div class="quickstart-step">
-<h4 class="anchor-heading">
- <a class="anchor-link" id="step-1-get-kafka" href="#step-1-get-kafka"></a>
- <a href="#step-1-get-kafka">Step 1: Get Kafka</a>
-</h4>
-
-<p>
-    This docker-compose file will run everything for you via <a href="https://www.docker.com/" rel="nofollow">Docker</a>.
-    Copy and paste it into a file named <code>docker-compose.yml</code> on your local filesystem.
-</p>
-<pre class="line-numbers"><code class="language-bash">---
- version: '2'
-
- services:
- broker:
- image: apache-kafka/broker:2.5.0
- hostname: kafka-broker
- container_name: kafka-broker
-
- # ...rest omitted...</code></pre>
-</div>
-
-<div class="quickstart-step">
-<h4 class="anchor-heading">
-    <a class="anchor-link" id="step-2-start-kafka" href="#step-2-start-kafka"></a>
-    <a href="#step-2-start-kafka">Step 2: Start the Kafka environment</a>
-</h4>
-
-<p>
-    From the directory containing the <code>docker-compose.yml</code> file created in the previous step, run this
-    command in order to start all services in the correct order:
-</p>
-<pre class="line-numbers"><code class="language-bash">$ docker-compose up</code></pre>
-<p>
-    Once all services have successfully launched, you will have a basic Kafka environment running and ready to use.
-</p>
-</div>
-
-<div class="quickstart-step">
-<h4 class="anchor-heading">
-    <a class="anchor-link" id="step-3-create-a-topic" href="#step-3-create-a-topic"></a>
-    <a href="#step-3-create-a-topic">Step 3: Create a topic to store your events</a>
-</h4>
-<p>Kafka is a distributed <em>event streaming platform</em> that lets you read, write, store, and process
-<a href="/documentation/#messages" rel="nofollow"><em>events</em></a> (also called <em>records</em> or <em>messages</em> in the documentation)
-across many machines.
-Example events are payment transactions, geolocation updates from mobile phones, shipping orders, sensor measurements
-from IoT devices or medical equipment, and much more.
-These events are organized and stored in <a href="/documentation/#intro_topics" rel="nofollow"><em>topics</em></a>.
-Very simplified, a topic is similar to a folder in a filesystem, and the events are the files in that folder.</p>
-<p>So before you can write your first events, you must create a topic:</p>
-<pre class="line-numbers"><code class="language-bash">$ docker exec -it kafka-broker kafka-topics.sh --create --topic quickstart-events</code></pre>
-<p>All of Kafka's command line tools have additional options: run the <code>kafka-topics.sh</code> command without any
-arguments to display usage information.
-For example, it can also show you
-<a href="/documentation/#intro_topics" rel="nofollow">details such as the partition count</a> of the new topic:</p>
-<pre class="line-numbers"><code class="language-bash">$ docker exec -it kafka-broker kafka-topics.sh --describe --topic quickstart-events
-  Topic:quickstart-events PartitionCount:1 ReplicationFactor:1 Configs:
-    Topic: quickstart-events Partition: 0 Leader: 0 Replicas: 0 Isr: 0</code></pre>
-</div>
-
-<div class="quickstart-step">
-<h4 class="anchor-heading">
-    <a class="anchor-link" id="step-4-write-events" href="#step-4-write-events"></a>
-    <a href="#step-4-write-events">Step 4: Write some events into the topic</a>
-</h4>
-<p>A Kafka client communicates with the Kafka brokers via the network for writing (or reading) events.
-Once received, the brokers will store the events in a durable and fault-tolerant manner for as long as you
-need—even forever.</p>
-<p>Run the console producer client to write a few events into your topic.
-By default, each line you enter will result in a separate event being written to the topic.</p>
-<pre class="line-numbers"><code class="language-bash">$ docker exec -it kafka-broker kafka-console-producer.sh --topic quickstart-events
-This is my first event
-This is my second event</code></pre>
-<p>You can stop the producer client with <code>Ctrl-C</code> at any time.</p>
-</div>
-
-<div class="quickstart-step">
-<h4 class="anchor-heading">
-    <a class="anchor-link" id="step-5-read-the-events" href="#step-5-read-the-events"></a>
-    <a href="#step-5-read-the-events">Step 5: Read the events</a>
-</h4>
-<p>Open another terminal session and run the console consumer client to read the events you just created:</p>
-<pre class="line-numbers"><code class="language-bash">$ docker exec -it kafka-broker kafka-console-consumer.sh --topic quickstart-events --from-beginning
-This is my first event
-This is my second event</code></pre>
-<p>You can stop the consumer client with <code>Ctrl-C</code> at any time.</p>
-<p>Feel free to experiment: for example, switch back to your producer terminal (previous step) to write
-additional events, and see how the events immediately show up in your consumer terminal.</p>
-<p>Because events are durably stored in Kafka, they can be read as many times and by as many consumers as you want.
-You can easily verify this by opening yet another terminal session and re-running the previous command.</p>
-
-</div>
-
-<div class="quickstart-step">
-<h4 class="anchor-heading">
-    <a class="anchor-link" id="step-5-read-the-events" href="#step-5-read-the-events"></a>
-    <a href="#step-5-read-the-events">Step 6: Import/export your data as streams of events with Kafka Connect</a>
-</h4>
-<p>You probably have lots of data in existing systems like relational databases or traditional messaging systems, along
-with many applications that already use these systems.
-<a href="/documentation/#connect" rel="nofollow">Kafka Connect</a> allows you to continuously ingest data from external
-systems into Kafka, and vice versa. It is thus
-very easy to integrate existing systems with Kafka. To make this process even easier, there are hundreds of such
-connectors readily available.</p>
-<p>Take a look at the <a href="/documentation/#connect" rel="nofollow">Kafka Connect section</a> in the documentation to
-learn more about how to continuously import/export your data into and out of Kafka.</p>
-
-</div>
-
-<div class="quickstart-step">
-<h4 class="anchor-heading">
-    <a class="anchor-link" id="step-7-process-events" href="#step-7-process-events"></a>
-    <a href="#step-7-process-events">Step 7: Process your events with Kafka Streams</a>
-</h4>
-
-<p>Once your data is stored in Kafka as events, you can process the data with the
-<a href="/documentation/streams" rel="nofollow">Kafka Streams</a> client library for Java/Scala.
-It allows you to implement mission-critical real-time applications and microservices, where the input and/or output data
-is stored in Kafka topics. Kafka Streams combines the simplicity of writing and deploying standard Java and Scala
-applications on the client side with the benefits of Kafka's server-side cluster technology to make these applications
-highly scalable, elastic, fault-tolerant, and distributed. The library supports exactly-once processing, stateful
-operations and aggregations, windowing, joins, processing based on event-time, and much more.</p>
-<p>To give you a first taste, here's how one would implement the popular <code>WordCount</code> algorithm:</p>
-<pre class="line-numbers"><code class="language-java">KStream<String, String> textLines = builder.stream("quickstart-events");
-
-KTable<String, Long> wordCounts = textLines
-    .flatMapValues(line -> Arrays.asList(line.toLowerCase().split(" ")))
-    .groupBy((keyIgnored, word) -> word)
-    .count();
-
-wordCounts.toStream().to("output-topic", Produced.with(Serdes.String(), Serdes.Long()));</code></pre>
-<p>The <a href="/25/documentation/streams/quickstart" rel="nofollow">Kafka Streams demo</a> and the
-<a href="/25/documentation/streams/tutorial" rel="nofollow">app development tutorial</a> demonstrate how to code and run
-such a streaming application from start to finish.</p>
-
-</div>
-
-<div class="quickstart-step">
-<h4 class="anchor-heading">
- <a class="anchor-link" id="step-8-terminate" href="#step-8-terminate"></a>
- <a href="#step-8-terminate">Step 8: Terminate the Kafka environment</a>
-</h4>
-<p>Now that you have reached the end of the quickstart, feel free to tear down the Kafka environment—or continue playing around.</p>
-<p>Run the following command to tear down the environment, which also deletes any events you have created along the way:</p>
-<pre class="line-numbers"><code class="language-bash">$ docker-compose down</code></pre>
-
-</div>
-
-<div class="quickstart-step">
-<h4 class="anchor-heading">
-    <a class="anchor-link" id="quickstart_kafkacongrats" href="#quickstart_kafkacongrats"></a>
-    <a href="#quickstart_kafkacongrats">Congratulations!</a>
-  </h4>
-
-  <p>You have successfully finished the Apache Kafka quickstart.</p>
-
- <p>To learn more, we suggest the following next steps:</p>
-
- <ul>
- <li>
-            Read through the brief <a href="/intro">Introduction</a> to learn how Kafka works at a high level, its
-            main concepts, and how it compares to other technologies. To understand Kafka in more detail, head over to the
-            <a href="/documentation/">Documentation</a>.
-        </li>
-        <li>
-            Browse through the <a href="/powered-by">Use Cases</a> to learn how other users in our world-wide
-            community are getting value out of Kafka.
-        </li>
-        <!--
-        <li>
-            Learn how _Kafka compares to other technologies_ [note to design team: this new page is not yet written] you might be familiar with.
-        </li>
-        -->
-        <li>
-            Join a <a href="/events">local Kafka meetup group</a> and
-            <a href="https://kafka-summit.org/past-events/">watch talks from Kafka Summit</a>,
-            the main conference of the Kafka community.
-        </li>
-    </ul>
-</div>
-</script>
-
-<div class="p-quickstart-docker"></div>
diff --git a/25/quickstart-zookeeper.html b/25/quickstart.html
similarity index 100%
rename from 25/quickstart-zookeeper.html
rename to 25/quickstart.html
diff --git a/26/documentation.html b/26/documentation.html
index 5d4b5c5..f543e56 100644
--- a/26/documentation.html
+++ b/26/documentation.html
@@ -41,7 +41,7 @@
  <h3 class="anchor-heading"><a id="uses" class="anchor-link"></a><a href="#uses">1.2 Use Cases</a></h3>
  <!--#include virtual="uses.html" -->
  <h3 class="anchor-heading"><a id="quickstart" class="anchor-link"></a><a href="#quickstart">1.3 Quick Start</a></h3>
- <!--#include virtual="quickstart-zookeeper.html" -->
+ <!--#include virtual="quickstart.html" -->
  <h3 class="anchor-heading"><a id="ecosystem" class="anchor-link"></a><a href="#ecosystem">1.4 Ecosystem</a></h3>
  <!--#include virtual="ecosystem.html" -->
  <h3 class="anchor-heading"><a id="upgrade" class="anchor-link"></a><a href="#upgrade">1.5 Upgrading From Previous Versions</a></h3>
diff --git a/26/quickstart-docker.html b/26/quickstart-docker.html
deleted file mode 100644
index d8816ba..0000000
--- a/26/quickstart-docker.html
+++ /dev/null
@@ -1,204 +0,0 @@
-<!--
- Licensed to the Apache Software Foundation (ASF) under one or more
- contributor license agreements. See the NOTICE file distributed with
- this work for additional information regarding copyright ownership.
- The ASF licenses this file to You under the Apache License, Version 2.0
- (the "License"); you may not use this file except in compliance with
- the License. You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
--->
-
-<script><!--#include virtual="js/templateData.js" --></script>
-
-<script id="quickstart-docker-template" type="text/x-handlebars-template">
-<div class="quickstart-step">
-<h4 class="anchor-heading">
- <a class="anchor-link" id="step-1-get-kafka" href="#step-1-get-kafka"></a>
- <a href="#step-1-get-kafka">Step 1: Get Kafka</a>
-</h4>
-
-<p>
-    This docker-compose file will run everything for you via <a href="https://www.docker.com/" rel="nofollow">Docker</a>.
-    Copy and paste it into a file named <code>docker-compose.yml</code> on your local filesystem.
-</p>
-<pre class="line-numbers"><code class="language-bash">---
- version: '2'
-
- services:
- broker:
- image: apache-kafka/broker:2.5.0
- hostname: kafka-broker
- container_name: kafka-broker
-
- # ...rest omitted...</code></pre>
-</div>
-
-<div class="quickstart-step">
-<h4 class="anchor-heading">
-    <a class="anchor-link" id="step-2-start-kafka" href="#step-2-start-kafka"></a>
-    <a href="#step-2-start-kafka">Step 2: Start the Kafka environment</a>
-</h4>
-
-<p>
-    From the directory containing the <code>docker-compose.yml</code> file created in the previous step, run this
-    command in order to start all services in the correct order:
-</p>
-<pre class="line-numbers"><code class="language-bash">$ docker-compose up</code></pre>
-<p>
-    Once all services have successfully launched, you will have a basic Kafka environment running and ready to use.
-</p>
-</div>
-
-<div class="quickstart-step">
-<h4 class="anchor-heading">
-    <a class="anchor-link" id="step-3-create-a-topic" href="#step-3-create-a-topic"></a>
-    <a href="#step-3-create-a-topic">Step 3: Create a topic to store your events</a>
-</h4>
-<p>Kafka is a distributed <em>event streaming platform</em> that lets you read, write, store, and process
-<a href="/documentation/#messages" rel="nofollow"><em>events</em></a> (also called <em>records</em> or <em>messages</em> in the documentation)
-across many machines.
-Example events are payment transactions, geolocation updates from mobile phones, shipping orders, sensor measurements
-from IoT devices or medical equipment, and much more.
-These events are organized and stored in <a href="/documentation/#intro_topics" rel="nofollow"><em>topics</em></a>.
-Very simplified, a topic is similar to a folder in a filesystem, and the events are the files in that folder.</p>
-<p>So before you can write your first events, you must create a topic:</p>
-<pre class="line-numbers"><code class="language-bash">$ docker exec -it kafka-broker kafka-topics.sh --create --topic quickstart-events</code></pre>
-<p>All of Kafka's command line tools have additional options: run the <code>kafka-topics.sh</code> command without any
-arguments to display usage information.
-For example, it can also show you
-<a href="/documentation/#intro_topics" rel="nofollow">details such as the partition count</a> of the new topic:</p>
-<pre class="line-numbers"><code class="language-bash">$ docker exec -it kafka-broker kafka-topics.sh --describe --topic quickstart-events
-  Topic:quickstart-events PartitionCount:1 ReplicationFactor:1 Configs:
-    Topic: quickstart-events Partition: 0 Leader: 0 Replicas: 0 Isr: 0</code></pre>
-</div>
-
-<div class="quickstart-step">
-<h4 class="anchor-heading">
-    <a class="anchor-link" id="step-4-write-events" href="#step-4-write-events"></a>
-    <a href="#step-4-write-events">Step 4: Write some events into the topic</a>
-</h4>
-<p>A Kafka client communicates with the Kafka brokers via the network for writing (or reading) events.
-Once received, the brokers will store the events in a durable and fault-tolerant manner for as long as you
-need—even forever.</p>
-<p>Run the console producer client to write a few events into your topic.
-By default, each line you enter will result in a separate event being written to the topic.</p>
-<pre class="line-numbers"><code class="language-bash">$ docker exec -it kafka-broker kafka-console-producer.sh --topic quickstart-events
-This is my first event
-This is my second event</code></pre>
-<p>You can stop the producer client with <code>Ctrl-C</code> at any time.</p>
-</div>
-
-<div class="quickstart-step">
-<h4 class="anchor-heading">
-    <a class="anchor-link" id="step-5-read-the-events" href="#step-5-read-the-events"></a>
-    <a href="#step-5-read-the-events">Step 5: Read the events</a>
-</h4>
-<p>Open another terminal session and run the console consumer client to read the events you just created:</p>
-<pre class="line-numbers"><code class="language-bash">$ docker exec -it kafka-broker kafka-console-consumer.sh --topic quickstart-events --from-beginning
-This is my first event
-This is my second event</code></pre>
-<p>You can stop the consumer client with <code>Ctrl-C</code> at any time.</p>
-<p>Feel free to experiment: for example, switch back to your producer terminal (previous step) to write
-additional events, and see how the events immediately show up in your consumer terminal.</p>
-<p>Because events are durably stored in Kafka, they can be read as many times and by as many consumers as you want.
-You can easily verify this by opening yet another terminal session and re-running the previous command.</p>
-
-</div>
-
-<div class="quickstart-step">
-<h4 class="anchor-heading">
-    <a class="anchor-link" id="step-5-read-the-events" href="#step-5-read-the-events"></a>
-    <a href="#step-5-read-the-events">Step 6: Import/export your data as streams of events with Kafka Connect</a>
-</h4>
-<p>You probably have lots of data in existing systems like relational databases or traditional messaging systems, along
-with many applications that already use these systems.
-<a href="/documentation/#connect" rel="nofollow">Kafka Connect</a> allows you to continuously ingest data from external
-systems into Kafka, and vice versa. It is thus
-very easy to integrate existing systems with Kafka. To make this process even easier, there are hundreds of such
-connectors readily available.</p>
-<p>Take a look at the <a href="/documentation/#connect" rel="nofollow">Kafka Connect section</a> in the documentation to
-learn more about how to continuously import/export your data into and out of Kafka.</p>
-
-</div>
-
-<div class="quickstart-step">
-<h4 class="anchor-heading">
-    <a class="anchor-link" id="step-7-process-events" href="#step-7-process-events"></a>
-    <a href="#step-7-process-events">Step 7: Process your events with Kafka Streams</a>
-</h4>
-
-<p>Once your data is stored in Kafka as events, you can process the data with the
-<a href="/documentation/streams" rel="nofollow">Kafka Streams</a> client library for Java/Scala.
-It allows you to implement mission-critical real-time applications and microservices, where the input and/or output data
-is stored in Kafka topics. Kafka Streams combines the simplicity of writing and deploying standard Java and Scala
-applications on the client side with the benefits of Kafka's server-side cluster technology to make these applications
-highly scalable, elastic, fault-tolerant, and distributed. The library supports exactly-once processing, stateful
-operations and aggregations, windowing, joins, processing based on event-time, and much more.</p>
-<p>To give you a first taste, here's how one would implement the popular <code>WordCount</code> algorithm:</p>
-<pre class="line-numbers"><code class="language-java">KStream<String, String> textLines = builder.stream("quickstart-events");
-
-KTable<String, Long> wordCounts = textLines
-    .flatMapValues(line -> Arrays.asList(line.toLowerCase().split(" ")))
-    .groupBy((keyIgnored, word) -> word)
-    .count();
-
-wordCounts.toStream().to("output-topic", Produced.with(Serdes.String(), Serdes.Long()));</code></pre>
-<p>The <a href="/25/documentation/streams/quickstart" rel="nofollow">Kafka Streams demo</a> and the
-<a href="/25/documentation/streams/tutorial" rel="nofollow">app development tutorial</a> demonstrate how to code and run
-such a streaming application from start to finish.</p>
-
-</div>
-
-<div class="quickstart-step">
-<h4 class="anchor-heading">
- <a class="anchor-link" id="step-8-terminate" href="#step-8-terminate"></a>
- <a href="#step-8-terminate">Step 8: Terminate the Kafka environment</a>
-</h4>
-<p>Now that you have reached the end of the quickstart, feel free to tear down the Kafka environment—or continue playing around.</p>
-<p>Run the following command to tear down the environment, which also deletes any events you have created along the way:</p>
-<pre class="line-numbers"><code class="language-bash">$ docker-compose down</code></pre>
-
-</div>
-
-<div class="quickstart-step">
-<h4 class="anchor-heading">
-    <a class="anchor-link" id="quickstart_kafkacongrats" href="#quickstart_kafkacongrats"></a>
-    <a href="#quickstart_kafkacongrats">Congratulations!</a>
-  </h4>
-
-  <p>You have successfully finished the Apache Kafka quickstart.</p>
-
- <p>To learn more, we suggest the following next steps:</p>
-
- <ul>
- <li>
-            Read through the brief <a href="/intro">Introduction</a> to learn how Kafka works at a high level, its
-            main concepts, and how it compares to other technologies. To understand Kafka in more detail, head over to the
-            <a href="/documentation/">Documentation</a>.
-        </li>
-        <li>
-            Browse through the <a href="/powered-by">Use Cases</a> to learn how other users in our world-wide
-            community are getting value out of Kafka.
-        </li>
-        <!--
-        <li>
-            Learn how _Kafka compares to other technologies_ [note to design team: this new page is not yet written] you might be familiar with.
-        </li>
-        -->
-        <li>
-            Join a <a href="/events">local Kafka meetup group</a> and
-            <a href="https://kafka-summit.org/past-events/">watch talks from Kafka Summit</a>,
-            the main conference of the Kafka community.
-        </li>
-    </ul>
-</div>
-</script>
-
-<div class="p-quickstart-docker"></div>
diff --git a/26/quickstart-zookeeper.html b/26/quickstart.html
similarity index 100%
rename from 26/quickstart-zookeeper.html
rename to 26/quickstart.html
diff --git a/27/documentation.html b/27/documentation.html
index e96d8ba..603b972 100644
--- a/27/documentation.html
+++ b/27/documentation.html
@@ -42,7 +42,7 @@
  <h3 class="anchor-heading"><a id="uses" class="anchor-link"></a><a href="#uses">1.2 Use Cases</a></h3>
  <!--#include virtual="uses.html" -->
  <h3 class="anchor-heading"><a id="quickstart" class="anchor-link"></a><a href="#quickstart">1.3 Quick Start</a></h3>
- <!--#include virtual="quickstart-zookeeper.html" -->
+ <!--#include virtual="quickstart.html" -->
  <h3 class="anchor-heading"><a id="ecosystem" class="anchor-link"></a><a href="#ecosystem">1.4 Ecosystem</a></h3>
  <!--#include virtual="ecosystem.html" -->
  <h3 class="anchor-heading"><a id="upgrade" class="anchor-link"></a><a href="#upgrade">1.5 Upgrading From Previous Versions</a></h3>
diff --git a/27/quickstart-docker.html b/27/quickstart-docker.html
deleted file mode 100644
index d8816ba..0000000
--- a/27/quickstart-docker.html
+++ /dev/null
@@ -1,204 +0,0 @@
-<!--
- Licensed to the Apache Software Foundation (ASF) under one or more
- contributor license agreements. See the NOTICE file distributed with
- this work for additional information regarding copyright ownership.
- The ASF licenses this file to You under the Apache License, Version 2.0
- (the "License"); you may not use this file except in compliance with
- the License. You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
--->
-
-<script><!--#include virtual="js/templateData.js" --></script>
-
-<script id="quickstart-docker-template" type="text/x-handlebars-template">
-<div class="quickstart-step">
-<h4 class="anchor-heading">
- <a class="anchor-link" id="step-1-get-kafka" href="#step-1-get-kafka"></a>
- <a href="#step-1-get-kafka">Step 1: Get Kafka</a>
-</h4>
-
-<p>
-    This docker-compose file will run everything for you via <a href="https://www.docker.com/" rel="nofollow">Docker</a>.
-    Copy and paste it into a file named <code>docker-compose.yml</code> on your local filesystem.
-</p>
-<pre class="line-numbers"><code class="language-bash">---
- version: '2'
-
- services:
- broker:
- image: apache-kafka/broker:2.5.0
- hostname: kafka-broker
- container_name: kafka-broker
-
- # ...rest omitted...</code></pre>
-</div>
-
-<div class="quickstart-step">
-<h4 class="anchor-heading">
-    <a class="anchor-link" id="step-2-start-kafka" href="#step-2-start-kafka"></a>
-    <a href="#step-2-start-kafka">Step 2: Start the Kafka environment</a>
-</h4>
-
-<p>
-    From the directory containing the <code>docker-compose.yml</code> file created in the previous step, run this
-    command in order to start all services in the correct order:
-</p>
-<pre class="line-numbers"><code class="language-bash">$ docker-compose up</code></pre>
-<p>
-    Once all services have successfully launched, you will have a basic Kafka environment running and ready to use.
-</p>
-</div>
-
-<div class="quickstart-step">
-<h4 class="anchor-heading">
-    <a class="anchor-link" id="step-3-create-a-topic" href="#step-3-create-a-topic"></a>
-    <a href="#step-3-create-a-topic">Step 3: Create a topic to store your events</a>
-</h4>
-<p>Kafka is a distributed <em>event streaming platform</em> that lets you read, write, store, and process
-<a href="/documentation/#messages" rel="nofollow"><em>events</em></a> (also called <em>records</em> or <em>messages</em> in the documentation)
-across many machines.
-Example events are payment transactions, geolocation updates from mobile phones, shipping orders, sensor measurements
-from IoT devices or medical equipment, and much more.
-These events are organized and stored in <a href="/documentation/#intro_topics" rel="nofollow"><em>topics</em></a>.
-Very simplified, a topic is similar to a folder in a filesystem, and the events are the files in that folder.</p>
-<p>So before you can write your first events, you must create a topic:</p>
-<pre class="line-numbers"><code class="language-bash">$ docker exec -it kafka-broker kafka-topics.sh --create --topic quickstart-events</code></pre>
-<p>All of Kafka's command line tools have additional options: run the <code>kafka-topics.sh</code> command without any
-arguments to display usage information.
-For example, it can also show you
-<a href="/documentation/#intro_topics" rel="nofollow">details such as the partition count</a> of the new topic:</p>
-<pre class="line-numbers"><code class="language-bash">$ docker exec -it kafka-broker kafka-topics.sh --describe --topic quickstart-events
-  Topic:quickstart-events PartitionCount:1 ReplicationFactor:1 Configs:
-    Topic: quickstart-events Partition: 0 Leader: 0 Replicas: 0 Isr: 0</code></pre>
-</div>
-
-<div class="quickstart-step">
-<h4 class="anchor-heading">
-    <a class="anchor-link" id="step-4-write-events" href="#step-4-write-events"></a>
-    <a href="#step-4-write-events">Step 4: Write some events into the topic</a>
-</h4>
-<p>A Kafka client communicates with the Kafka brokers via the network for writing (or reading) events.
-Once received, the brokers will store the events in a durable and fault-tolerant manner for as long as you
-need—even forever.</p>
-<p>Run the console producer client to write a few events into your topic.
-By default, each line you enter will result in a separate event being written to the topic.</p>
-<pre class="line-numbers"><code class="language-bash">$ docker exec -it kafka-broker kafka-console-producer.sh --topic quickstart-events
-This is my first event
-This is my second event</code></pre>
-<p>You can stop the producer client with <code>Ctrl-C</code> at any time.</p>
-</div>
-
-<div class="quickstart-step">
-<h4 class="anchor-heading">
-    <a class="anchor-link" id="step-5-read-the-events" href="#step-5-read-the-events"></a>
-    <a href="#step-5-read-the-events">Step 5: Read the events</a>
-</h4>
-<p>Open another terminal session and run the console consumer client to read the events you just created:</p>
-<pre class="line-numbers"><code class="language-bash">$ docker exec -it kafka-broker kafka-console-consumer.sh --topic quickstart-events --from-beginning
-This is my first event
-This is my second event</code></pre>
-<p>You can stop the consumer client with <code>Ctrl-C</code> at any time.</p>
-<p>Feel free to experiment: for example, switch back to your producer terminal (previous step) to write
-additional events, and see how the events immediately show up in your consumer terminal.</p>
-<p>Because events are durably stored in Kafka, they can be read as many times and by as many consumers as you want.
-You can easily verify this by opening yet another terminal session and re-running the previous command.</p>
-
-</div>
-
-<div class="quickstart-step">
-<h4 class="anchor-heading">
-    <a class="anchor-link" id="step-5-read-the-events" href="#step-5-read-the-events"></a>
-    <a href="#step-5-read-the-events">Step 6: Import/export your data as streams of events with Kafka Connect</a>
-</h4>
-<p>You probably have lots of data in existing systems like relational databases or traditional messaging systems, along
-with many applications that already use these systems.
-<a href="/documentation/#connect" rel="nofollow">Kafka Connect</a> allows you to continuously ingest data from external
-systems into Kafka, and vice versa. It is thus
-very easy to integrate existing systems with Kafka. To make this process even easier, there are hundreds of such
-connectors readily available.</p>
-<p>Take a look at the <a href="/documentation/#connect" rel="nofollow">Kafka Connect section</a> in the documentation to
-learn more about how to continuously import/export your data into and out of Kafka.</p>
-
-</div>
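To make the Connect idea concrete, a minimal sketch of a file source connector configuration for a standalone worker might look as follows. The connector name, file path, and topic are illustrative assumptions; `FileStreamSource` itself is a connector bundled with the Kafka distribution.

```properties
# connect-file-source.properties -- illustrative sketch, not part of this quickstart.
# The name, file path, and topic below are assumptions for demonstration.
name=local-file-source
connector.class=FileStreamSource
tasks.max=1
file=/tmp/test.txt
topic=quickstart-events
```

A standalone worker would be started with something like `bin/connect-standalone.sh config/connect-standalone.properties connect-file-source.properties`, after which each line appended to the input file is published as an event to the configured topic.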
-
-<div class="quickstart-step">
-<h4 class="anchor-heading">
- <a class="anchor-link" id="step-7-process-events"
href="#step-7-process-events"></a>
- <a href="#step-7-process-events">Step 7: Process your events with Kafka
Streams</a>
-</h4>
-
-<p>Once your data is stored in Kafka as events, you can process the data with
the
-<a href="/documentation/streams" rel="nofollow">Kafka Streams</a> client
library for Java/Scala.
-It allows you to implement mission-critical real-time applications and
microservices, where the input and/or output data
-is stored in Kafka topics. Kafka Streams combines the simplicity of writing
and deploying standard Java and Scala
-applications on the client side with the benefits of Kafka's server-side
cluster technology to make these applications
-highly scalable, elastic, fault-tolerant, and distributed. The library
supports exactly-once processing, stateful
-operations and aggregations, windowing, joins, processing based on event-time,
and much more.</p>
-<p>To give you a first taste, here's how one would implement the popular
<code>WordCount</code> algorithm:</p>
-<pre class="line-numbers"><code class="language-java">KStream&lt;String, String&gt; textLines = builder.stream("quickstart-events");
-
-KTable&lt;String, Long&gt; wordCounts = textLines
-            .flatMapValues(line -&gt; Arrays.asList(line.toLowerCase().split(" ")))
-            .groupBy((keyIgnored, word) -&gt; word)
-            .count();
-
-wordCounts.toStream().to("output-topic", Produced.with(Serdes.String(), Serdes.Long()));</code></pre>
-<p>The <a href="/25/documentation/streams/quickstart" rel="nofollow">Kafka
Streams demo</a> and the
-<a href="/25/documentation/streams/tutorial" rel="nofollow">app development
tutorial</a> demonstrate how to code and run
-such a streaming application from start to finish.</p>
-
-</div>
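As a side note on the WordCount snippet above: the semantics of that topology (lowercase, split, group, count) can be sketched in plain Java with no Streams API involved. The class below is purely illustrative and is not part of the quickstart or of Kafka itself.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class WordCountSketch {
    // Plain-Java sketch of what the Streams topology computes:
    // lowercase each line, split on spaces, count occurrences per word.
    static Map<String, Long> wordCount(List<String> lines) {
        return lines.stream()
                .flatMap(line -> Arrays.stream(line.toLowerCase().split(" ")))
                .collect(Collectors.groupingBy(word -> word, Collectors.counting()));
    }

    public static void main(String[] args) {
        Map<String, Long> counts =
                wordCount(Arrays.asList("all streams lead to Kafka", "hello kafka streams"));
        System.out.println(counts.get("kafka"));   // prints 2 ("Kafka" folds to "kafka")
    }
}
```

Unlike this one-shot function, the Streams topology maintains the counts incrementally and fault-tolerantly as new events arrive in the input topic.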
-
-<div class="quickstart-step">
-<h4 class="anchor-heading">
- <a class="anchor-link" id="step-8-terminate" href="#step-8-terminate"></a>
- <a href="#step-8-terminate">Step 8: Terminate the Kafka environment</a>
-</h4>
-<p>Now that you have reached the end of the quickstart, feel free to tear down the Kafka environment—or continue playing around.</p>
-<p>Run the following command to tear down the environment, which also deletes
any events you have created along the way:</p>
-<pre class="line-numbers"><code class="language-bash">$ docker-compose
down</code></pre>
-
-</div>
-
-<div class="quickstart-step">
-<h4 class="anchor-heading">
- <a class="anchor-link" id="quickstart_kafkacongrats"
href="#quickstart_kafkacongrats"></a>
- <a href="#quickstart_kafkacongrats">Congratulations!</a>
- </h4>
-
-  <p>You have successfully finished the Apache Kafka quickstart.</p>
-  <div>
-
- <p>To learn more, we suggest the following next steps:</p>
-
- <ul>
- <li>
- Read through the brief <a href="/intro">Introduction</a> to learn
how Kafka works at a high level, its
- main concepts, and how it compares to other technologies. To
understand Kafka in more detail, head over to the
- <a href="/documentation/">Documentation</a>.
- </li>
- <li>
- Browse through the <a href="/powered-by">Use Cases</a> to learn how
other users in our world-wide
- community are getting value out of Kafka.
- </li>
- <!--
- <li>
- Learn how _Kafka compares to other technologies_ [note to design
team: this new page is not yet written] you might be familiar with.
- </li>
- -->
- <li>
- Join a <a href="/events">local Kafka meetup group</a> and
- <a href="https://kafka-summit.org/past-events/">watch talks from
Kafka Summit</a>,
- the main conference of the Kafka community.
- </li>
- </ul>
-</div>
-</script>
-
-<div class="p-quickstart-docker"></div>
diff --git a/27/quickstart-zookeeper.html b/27/quickstart.html
similarity index 100%
rename from 27/quickstart-zookeeper.html
rename to 27/quickstart.html
diff --git a/quickstart-docker.html b/quickstart-docker.html
deleted file mode 100644
index 37b3159..0000000
--- a/quickstart-docker.html
+++ /dev/null
@@ -1,35 +0,0 @@
-<!--#include virtual="includes/_header.htm" -->
-<body class="page-quickstart-docker ">
-<!--#include virtual="includes/_top.htm" -->
-<div class="content">
- <!--#include virtual="includes/_nav.htm" -->
- <div class="page-header">
- <h1 class="page-header-title">Apache Kafka Quickstart<br />with Docker</h1>
- <p class="page-header-text">Interested in getting started with Kafka?
Follow the instructions in this quickstart, or watch the video below.</p>
-
- <div class="page-header-video">
- <iframe
- class="youtube-embed page-header-video-embed"
- width="480" height="270"
- src="https://www.youtube.com/embed/FKgi3n-FyNU?modestbranding=1"
- frameborder="0"
- allow="accelerometer; autoplay; encrypted-media; gyroscope;
picture-in-picture" allowfullscreen>
- </iframe>
- </div>
-
- <ul class="page-header-nav">
- <li class="page-header-nav-item">
- <a href="/quickstart" class="page-header-nav-item-anchor">Zookeeper</a>
- </li>
- <li class="page-header-nav-item">
- <a href="/quickstart-docker" class="page-header-nav-item-anchor
current">Docker</a>
- </li>
- </ul>
- </div>
-<!-- should always link to the latest release's documentation -->
- <!--#include virtual="25/quickstart-docker.html" -->
-<!--#include virtual="includes/_footer.htm" -->
-<script>
-// Show selected style on nav item
-$(function() { $('.b-nav__quickstart').addClass('selected'); });
-</script>
diff --git a/quickstart.html b/quickstart.html
index d31a6f9..726cb11 100644
--- a/quickstart.html
+++ b/quickstart.html
@@ -19,7 +19,7 @@
</div>
 <!-- should always link to the latest release's documentation -->
- <!--#include virtual="26/quickstart-zookeeper.html" -->
+ <!--#include virtual="26/quickstart.html" -->
<!--#include virtual="includes/_footer.htm" -->
<script>
// Show selected style on nav item