Repository: kafka
Updated Branches:
  refs/heads/trunk 7c6d70655 -> d2a267b11


KAFKA-3697; Clean up website documentation of client usage

This clarifies that the Java consumer/producer are now the recommended clients.

Author: Vahid Hashemian <vahidhashem...@us.ibm.com>

Reviewers: Jason Gustafson <ja...@confluent.io>

Closes #1921 from vahidhashemian/KAFKA-3697


Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/d2a267b1
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/d2a267b1
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/d2a267b1

Branch: refs/heads/trunk
Commit: d2a267b111d23d6b98f2784382095b9ae5ddf886
Parents: 7c6d706
Author: Vahid Hashemian <vahidhashem...@us.ibm.com>
Authored: Thu Sep 29 19:37:20 2016 -0700
Committer: Jason Gustafson <ja...@confluent.io>
Committed: Thu Sep 29 19:37:20 2016 -0700

----------------------------------------------------------------------
 docs/configuration.html  | 13 +++++++------
 docs/documentation.html  |  4 ++--
 docs/implementation.html |  4 ++--
 docs/ops.html            |  8 ++++----
 docs/quickstart.html     | 10 +++++-----
 docs/security.html       |  2 +-
 docs/upgrade.html        |  2 +-
 7 files changed, 22 insertions(+), 21 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/kafka/blob/d2a267b1/docs/configuration.html
----------------------------------------------------------------------
diff --git a/docs/configuration.html b/docs/configuration.html
index 5428691..35f1475 100644
--- a/docs/configuration.html
+++ b/docs/configuration.html
@@ -70,9 +70,14 @@ Below is the configuration of the Java producer:
 
 <h3><a id="consumerconfigs" href="#consumerconfigs">3.3 Consumer Configs</a></h3>
 
-We introduce both the old 0.8 consumer configs and the new consumer configs respectively below.
+In 0.9.0.0 we introduced the new Java consumer as a replacement for the older Scala-based simple and high-level consumers.
+The configs for both new and old consumers are described below.
 
-<h4><a id="oldconsumerconfigs" href="#oldconsumerconfigs">3.3.1 Old Consumer Configs</a></h4>
+<h4><a id="newconsumerconfigs" href="#newconsumerconfigs">3.3.1 New Consumer Configs</a></h4>
+Below is the configuration for the new consumer:
+<!--#include virtual="generated/consumer_config.html" -->
+
+<h4><a id="oldconsumerconfigs" href="#oldconsumerconfigs">3.3.2 Old Consumer Configs</a></h4>
 
 The essential old consumer configurations are the following:
 <ul>
@@ -239,10 +244,6 @@ The essential old consumer configurations are the following:
 
 <p>More details about consumer configuration can be found in the scala class <code>kafka.consumer.ConsumerConfig</code>.</p>
 
-<h4><a id="newconsumerconfigs" href="#newconsumerconfigs">3.3.2 New Consumer Configs</a></h4>
-Since 0.9.0.0 we have been working on a replacement for our existing simple and high-level consumers. The code is considered beta quality. Below is the configuration for the new consumer:
-<!--#include virtual="generated/consumer_config.html" -->
-
 <h3><a id="connectconfigs" href="#connectconfigs">3.4 Kafka Connect Configs</a></h3>
 Below is the configuration of the Kafka Connect framework.
 <!--#include virtual="generated/connect_config.html" -->

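As a hedged illustration of the new consumer configs described above (not part of the committed docs), here is a minimal Java sketch; the class name, broker address, group id, and option values are placeholders for a local single-broker setup, and the generated consumer_config.html table remains the authoritative list of options.

<pre>
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class NewConsumerConfigExample {
    public static void main(String[] args) {
        // Illustrative values only; see the generated config table above for the full option list.
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "example-group");           // placeholder group id
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true");
        props.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "1000");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        // The same keys can be supplied as plain strings or from a properties file;
        // the ConsumerConfig constants simply avoid typos in the config names.
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.close();
    }
}
</pre>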
http://git-wip-us.apache.org/repos/asf/kafka/blob/d2a267b1/docs/documentation.html
----------------------------------------------------------------------
diff --git a/docs/documentation.html b/docs/documentation.html
index f4f1ddc..07ffe84 100644
--- a/docs/documentation.html
+++ b/docs/documentation.html
@@ -46,8 +46,8 @@ Prior releases: <a href="/07/documentation.html">0.7.x</a>, <a href="/08/documen
             <li><a href="#producerconfigs">3.2 Producer Configs</a>
             <li><a href="#consumerconfigs">3.3 Consumer Configs</a>
                 <ul>
-                    <li><a href="#oldconsumerconfigs">3.3.1 Old Consumer Configs</a>
-                    <li><a href="#newconsumerconfigs">3.3.2 New Consumer Configs</a>
+                    <li><a href="#newconsumerconfigs">3.3.1 New Consumer Configs</a>
+                    <li><a href="#oldconsumerconfigs">3.3.2 Old Consumer Configs</a>
                 </ul>
             <li><a href="#connectconfigs">3.4 Kafka Connect Configs</a>
             <li><a href="#streamsconfigs">3.5 Kafka Streams Configs</a>

http://git-wip-us.apache.org/repos/asf/kafka/blob/d2a267b1/docs/implementation.html
----------------------------------------------------------------------
diff --git a/docs/implementation.html b/docs/implementation.html
index 91e17a6..12846fb 100644
--- a/docs/implementation.html
+++ b/docs/implementation.html
@@ -40,9 +40,9 @@ class Producer<T> {
 
 The goal is to expose all the producer functionality through a single API to the client.
 
-The new producer -
+The Kafka producer
 <ul>
-<li>can handle queueing/buffering of multiple producer requests and asynchronous dispatch of the batched data -
+<li>can handle queueing/buffering of multiple producer requests and asynchronous dispatch of the batched data:
 <p><code>kafka.producer.Producer</code> provides the ability to batch multiple produce requests (<code>producer.type=async</code>), before serializing and dispatching them to the appropriate kafka broker partition. The size of the batch can be controlled by a few config parameters. As events enter a queue, they are buffered in a queue, until either <code>queue.time</code> or <code>batch.size</code> is reached. A background thread (<code>kafka.producer.async.ProducerSendThread</code>) dequeues the batch of data and lets the <code>kafka.producer.EventHandler</code> serialize and send the data to the appropriate kafka broker partition. A custom event handler can be plugged in through the <code>event.handler</code> config parameter. At various stages of this producer queue pipeline, it is helpful to be able to inject callbacks, either for plugging in custom logging/tracing code or custom monitoring logic. This is possible by implementing the <code>kafka.producer.async.CallbackHandler</code> interface and setting <code>callback.handler</code> config parameter to that class.
 </p>
 </li>

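The paragraph above describes the old Scala producer's async batching and callback pipeline. Purely as a hedged sketch of how the equivalent knobs look on the Java producer that this commit points users toward (not the Scala API itself), the example below uses batch.size/linger.ms for batching and a per-send Callback in place of the pluggable CallbackHandler; the class name, broker address, topic, and values are placeholders.

<pre>
import java.util.Properties;

import org.apache.kafka.clients.producer.Callback;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class JavaProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker address
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        // Batching knobs in the Java producer, roughly playing the role that
        // queue.time/batch.size play in the old async Scala producer described above.
        props.put("linger.ms", "100");
        props.put("batch.size", "16384");

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        // Sends are asynchronous; the callback is the per-record analogue of the
        // old producer's pluggable callback hooks.
        producer.send(new ProducerRecord<>("test", "key", "value"), new Callback() {
            public void onCompletion(RecordMetadata metadata, Exception exception) {
                if (exception != null) {
                    exception.printStackTrace();
                } else {
                    System.out.println("Sent to partition " + metadata.partition()
                            + " at offset " + metadata.offset());
                }
            }
        });
        producer.close();
    }
}
</pre>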
http://git-wip-us.apache.org/repos/asf/kafka/blob/d2a267b1/docs/ops.html
----------------------------------------------------------------------
diff --git a/docs/ops.html b/docs/ops.html
index a59e134..7565738 100644
--- a/docs/ops.html
+++ b/docs/ops.html
@@ -143,7 +143,7 @@ my-group        my-topic                       1   0              0
 </pre>
 
 
-Note, however, after 0.9.0, the kafka.tools.ConsumerOffsetChecker tool is deprecated and you should use the kafka.admin.ConsumerGroupCommand (or the bin/kafka-consumer-groups.sh script) to manage consumer groups, including consumers created with the <a href="http://kafka.apache.org/documentation.html#newconsumerapi">new consumer API</a>.
+NOTE: Since 0.9.0.0, the kafka.tools.ConsumerOffsetChecker tool has been deprecated. You should use the kafka.admin.ConsumerGroupCommand (or the bin/kafka-consumer-groups.sh script) to manage consumer groups, including consumers created with the <a href="http://kafka.apache.org/documentation.html#newconsumerapi">new consumer API</a>.
 
 <h4><a id="basic_ops_consumer_group" href="#basic_ops_consumer_group">Managing Consumer Groups</a></h4>
 
@@ -183,7 +183,7 @@ The process of migrating data is manually initiated but fully automated. Under t
 <p>
 The partition reassignment tool can be used to move partitions across brokers. An ideal partition distribution would ensure even data load and partition sizes across all brokers. The partition reassignment tool does not have the capability to automatically study the data distribution in a Kafka cluster and move partitions around to attain an even load distribution. As such, the admin has to figure out which topics or partitions should be moved around.
 <p>
-The partition reassignment tool can run in 3 mutually exclusive modes -
+The partition reassignment tool can run in 3 mutually exclusive modes:
 <ul>
 <li>--generate: In this mode, given a list of topics and a list of brokers, the tool generates a candidate reassignment to move all partitions of the specified topics to the new brokers. This option merely provides a convenient way to generate a partition reassignment plan given a list of topics and target brokers.</li>
 <li>--execute: In this mode, the tool kicks off the reassignment of partitions based on the user provided reassignment plan. (using the --reassignment-json-file option). This can either be a custom reassignment plan hand crafted by the admin or provided by using the --generate option</li>
@@ -900,9 +900,9 @@ The following metrics are available on producer/consumer/connector instances.  F
   </tbody>
 </table>
 
-<h4><a id="new_producer_monitoring" href="#new_producer_monitoring">New producer monitoring</a></h4>
+<h4><a id="producer_monitoring" href="#producer_monitoring">Producer monitoring</a></h4>
 
-The following metrics are available on new producer instances.
+The following metrics are available on producer instances.
 
 <table class="data-table">
 <tbody><tr>

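The producer metrics in this section are normally collected over JMX. As a small, hedged sketch (the class name, broker address, and topic below are made up for illustration), the same client-side values can also be read in-process through the Java producer's metrics() method.

<pre>
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.Metric;
import org.apache.kafka.common.MetricName;

public class ProducerMetricsDump {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker address
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        producer.send(new ProducerRecord<>("test", "hello")); // generate some activity to measure

        // Dump the client-side metrics; these are the same values the JMX beans expose.
        Map<MetricName, ? extends Metric> metrics = producer.metrics();
        for (MetricName name : metrics.keySet()) {
            System.out.println(name.group() + " / " + name.name() + " = " + metrics.get(name).value());
        }
        producer.close();
    }
}
</pre>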
http://git-wip-us.apache.org/repos/asf/kafka/blob/d2a267b1/docs/quickstart.html
----------------------------------------------------------------------
diff --git a/docs/quickstart.html b/docs/quickstart.html
index 7654d5c..32d6125 100644
--- a/docs/quickstart.html
+++ b/docs/quickstart.html
@@ -78,7 +78,7 @@ Run the producer and then type a few messages into the console to send to the se
 Kafka also has a command line consumer that will dump out messages to standard output.
 
 <pre>
-&gt; <b>bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning</b>
+&gt; <b>bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning</b>
 This is a message
 This is another message
 </pre>
@@ -159,7 +159,7 @@ Let's publish a few messages to our new topic:
 </pre>
 Now let's consume these messages:
 <pre>
-&gt; <b>bin/kafka-console-consumer.sh --zookeeper localhost:2181 --from-beginning --topic my-replicated-topic</b>
+&gt; <b>bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --from-beginning --topic my-replicated-topic</b>
 ...
 my test message 1
 my test message 2
@@ -181,7 +181,7 @@ Topic:my-replicated-topic   PartitionCount:1        ReplicationFactor:3     Configs:
 </pre>
 But the messages are still be available for consumption even though the leader that took the writes originally is down:
 <pre>
-&gt; <b>bin/kafka-console-consumer.sh --zookeeper localhost:2181 --from-beginning --topic my-replicated-topic</b>
+&gt; <b>bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --from-beginning --topic my-replicated-topic</b>
 ...
 my test message 1
 my test message 2
@@ -236,7 +236,7 @@ Note that the data is being stored in the Kafka topic <pre>connect-test</pre>, s
 data in the topic (or use custom consumer code to process it):
 
 <pre>
-&gt; <b>bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic connect-test --from-beginning</b>
+&gt; <b>bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic connect-test --from-beginning</b>
 {"schema":{"type":"string","optional":false},"payload":"foo"}
 {"schema":{"type":"string","optional":false},"payload":"bar"}
 ...
@@ -333,7 +333,7 @@ We can now inspect the output of the WordCount demo application by reading from
 </p>
 
 <pre>
-&gt; <b>bin/kafka-console-consumer.sh --zookeeper localhost:2181 \</b>
+&gt; <b>bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \</b>
             <b>--topic streams-wordcount-output \</b>
             <b>--from-beginning \</b>
             <b>--formatter kafka.tools.DefaultMessageFormatter \</b>

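The quickstart reads every topic with the console consumer; as the Connect step notes, custom consumer code can be used instead. Below is a minimal, hedged sketch of such a consumer with the Java client, assuming the quickstart's local broker at localhost:9092 and its test topic; the class and group names are arbitrary.

<pre>
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class QuickstartConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // same address as --bootstrap-server above
        props.put("group.id", "quickstart-group");        // arbitrary group name
        props.put("auto.offset.reset", "earliest");       // mirrors --from-beginning
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("test")); // the quickstart's topic
        try {
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(1000);
                for (ConsumerRecord<String, String> record : records) {
                    System.out.println(record.value()); // print each message, like the console consumer
                }
            }
        } finally {
            consumer.close();
        }
    }
}
</pre>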
http://git-wip-us.apache.org/repos/asf/kafka/blob/d2a267b1/docs/security.html
----------------------------------------------------------------------
diff --git a/docs/security.html b/docs/security.html
index a00bbf6..2e77c93 100644
--- a/docs/security.html
+++ b/docs/security.html
@@ -715,7 +715,7 @@ To enable ZooKeeper authentication on brokers, there are two necessary steps:
        <li> Set the configuration property <tt>zookeeper.set.acl</tt> in each broker to true</li>
 </ol>
 
-The metadata stored in ZooKeeper is such that only brokers will be able to modify the corresponding znodes, but znodes are world readable. The rationale behind this decision is that the data stored in ZooKeeper is not sensitive, but inappropriate manipulation of znodes can cause cluster disruption. We also recommend limiting the access to ZooKeeper via network segmentation (only brokers and some admin tools need access to ZooKeeper if the new consumer and new producer are used).
+The metadata stored in ZooKeeper for the Kafka cluster is world-readable, but can only be modified by the brokers. The rationale behind this decision is that the data stored in ZooKeeper is not sensitive, but inappropriate manipulation of that data can cause cluster disruption. We also recommend limiting the access to ZooKeeper via network segmentation (only brokers and some admin tools need access to ZooKeeper if the new Java consumer and producer clients are used).
 
 <h4><a id="zk_authz_migration" href="#zk_authz_migration">7.6.2 Migrating clusters</a></h4>
 If you are running a version of Kafka that does not support security or simply with security disabled, and you want to make the cluster secure, then you need to execute the following steps to enable ZooKeeper authentication with minimal disruption to your operations:

http://git-wip-us.apache.org/repos/asf/kafka/blob/d2a267b1/docs/upgrade.html
----------------------------------------------------------------------
diff --git a/docs/upgrade.html b/docs/upgrade.html
index 1eaa355..7b16ab0 100644
--- a/docs/upgrade.html
+++ b/docs/upgrade.html
@@ -207,7 +207,7 @@ work with 0.10.0.x brokers. Therefore, 0.9.0.0 clients should be upgraded to 0.9
     <li> The default Kafka JVM performance options (KAFKA_JVM_PERFORMANCE_OPTS) have been changed in kafka-run-class.sh. </li>
     <li> The kafka-topics.sh script (kafka.admin.TopicCommand) now exits with non-zero exit code on failure. </li>
     <li> The kafka-topics.sh script (kafka.admin.TopicCommand) will now print a warning when topic names risk metric collisions due to the use of a '.' or '_' in the topic name, and error in the case of an actual collision. </li>
-    <li> The kafka-console-producer.sh script (kafka.tools.ConsoleProducer) will use the new producer instead of the old producer be default, and users have to specify 'old-producer' to use the old producer. </li>
+    <li> The kafka-console-producer.sh script (kafka.tools.ConsoleProducer) will use the Java producer instead of the old Scala producer by default, and users have to specify 'old-producer' to use the old producer. </li>
     <li> By default all command line tools will print all logging messages to stderr instead of stdout. </li>
 </ul>
 
