kafka git commit: KAFKA-3697; Clean up website documentation of client usage

2016-09-29 Thread jgus
Repository: kafka
Updated Branches:
  refs/heads/0.10.1 2a059fa68 -> 7de050ee5


KAFKA-3697; Clean up website documentation of client usage

This makes clear that the Java consumer and producer are now the recommended clients.

Author: Vahid Hashemian 

Reviewers: Jason Gustafson 

Closes #1921 from vahidhashemian/KAFKA-3697

(cherry picked from commit d2a267b111d23d6b98f2784382095b9ae5ddf886)
Signed-off-by: Jason Gustafson 


Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/7de050ee
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/7de050ee
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/7de050ee

Branch: refs/heads/0.10.1
Commit: 7de050ee50e78992706e0c7f72a80a422650c23a
Parents: 2a059fa
Author: Vahid Hashemian 
Authored: Thu Sep 29 19:37:20 2016 -0700
Committer: Jason Gustafson 
Committed: Thu Sep 29 19:37:37 2016 -0700

--
 docs/configuration.html  | 13 +++--
 docs/documentation.html  |  4 ++--
 docs/implementation.html |  4 ++--
 docs/ops.html            |  8
 docs/quickstart.html     | 10 +-
 docs/security.html       |  2 +-
 docs/upgrade.html        |  2 +-
 7 files changed, 22 insertions(+), 21 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/kafka/blob/7de050ee/docs/configuration.html
--
diff --git a/docs/configuration.html b/docs/configuration.html
index 5428691..35f1475 100644
--- a/docs/configuration.html
+++ b/docs/configuration.html
@@ -70,9 +70,14 @@ Below is the configuration of the Java producer:
 
 3.3 Consumer Configs
 
-We introduce both the old 0.8 consumer configs and the new consumer configs respectively below.
+In 0.9.0.0 we introduced the new Java consumer as a replacement for the older Scala-based simple and high-level consumers.
+The configs for both new and old consumers are described below.
 
-3.3.1 Old Consumer Configs
+3.3.1 New Consumer Configs
+Below is the configuration for the new consumer:
+
+
+3.3.2 Old Consumer Configs
 
 The essential old consumer configurations are the following:
 
@@ -239,10 +244,6 @@ The essential old consumer configurations are the following:
 
 More details about consumer configuration can be found in the scala class kafka.consumer.ConsumerConfig.
 
-3.3.2 New Consumer Configs
-Since 0.9.0.0 we have been working on a replacement for our existing simple and high-level consumers. The code is considered beta quality. Below is the configuration for the new consumer:
-
-
 3.4 Kafka Connect Configs
 Below is the configuration of the Kafka Connect framework.
 
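The configuration.html hunk above promotes the new Java consumer's config section. As a minimal sketch of what that section documents, the settings can be assembled in a plain java.util.Properties object; the key names below are standard new-consumer configs, while the broker address and group id are placeholder values, and the KafkaConsumer class mentioned in the comment is only referenced, not constructed:

```java
import java.util.Properties;

// Sketch of the core settings the "New Consumer Configs" section documents.
// The bootstrap.servers and group.id values are placeholders.
public class NewConsumerConfigSketch {
    static Properties newConsumerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // brokers used for initial discovery
        props.put("group.id", "example-group");           // consumer group for offset tracking
        props.put("key.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("enable.auto.commit", "true");          // commit offsets automatically
        return props;
    }

    public static void main(String[] args) {
        // In real use these properties would be passed to
        // org.apache.kafka.clients.consumer.KafkaConsumer.
        System.out.println(newConsumerProps().getProperty("group.id"));
    }
}
```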

http://git-wip-us.apache.org/repos/asf/kafka/blob/7de050ee/docs/documentation.html
--
diff --git a/docs/documentation.html b/docs/documentation.html
index 95d1251..e96fe16 100644
--- a/docs/documentation.html
+++ b/docs/documentation.html
@@ -49,8 +49,8 @@ Prior releases: 0.7.x, 
3.2 Producer Configs
 3.3 Consumer Configs
 
-3.3.1 Old Consumer Configs
-3.3.2 New Consumer Configs
+3.3.1 New Consumer Configs
+3.3.2 Old Consumer Configs
 
 3.4 Kafka Connect Configs
 3.5 Kafka Streams Configs

http://git-wip-us.apache.org/repos/asf/kafka/blob/7de050ee/docs/implementation.html
--
diff --git a/docs/implementation.html b/docs/implementation.html
index 91e17a6..12846fb 100644
--- a/docs/implementation.html
+++ b/docs/implementation.html
@@ -40,9 +40,9 @@ class Producer {
 
 The goal is to expose all the producer functionality through a single API to the client.
 
-The new producer -
+The Kafka producer
 
-can handle queueing/buffering of multiple producer requests and asynchronous dispatch of the batched data -
+can handle queueing/buffering of multiple producer requests and asynchronous dispatch of the batched data:
kafka.producer.Producer provides the ability to batch multiple produce requests (producer.type=async), before serializing and dispatching them to the appropriate kafka broker partition. The size of the batch can be controlled by a few config parameters. As events enter a queue, they are buffered in a queue, until either queue.time or batch.size is reached. A background thread (kafka.producer.async.ProducerSendThread) dequeues the batch of data and lets the kafka.producer.EventHandler serialize and send the data to the appropriate kafka broker partition. A custom event handler can be plugged in through the event.handler config parameter. At various stages of this producer queue pipeline, it is helpful to be able to inject callbacks, either for plugging in custom logging/tracing code or custom monitoring logic. This is possible by implementing the kafka.producer.a
kafka git commit: KAFKA-3697; Clean up website documentation of client usage

2016-09-29 Thread jgus
Repository: kafka
Updated Branches:
  refs/heads/trunk 7c6d70655 -> d2a267b11


KAFKA-3697; Clean up website documentation of client usage

This makes clear that the Java consumer and producer are now the recommended clients.

Author: Vahid Hashemian 

Reviewers: Jason Gustafson 

Closes #1921 from vahidhashemian/KAFKA-3697


Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/d2a267b1
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/d2a267b1
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/d2a267b1

Branch: refs/heads/trunk
Commit: d2a267b111d23d6b98f2784382095b9ae5ddf886
Parents: 7c6d706
Author: Vahid Hashemian 
Authored: Thu Sep 29 19:37:20 2016 -0700
Committer: Jason Gustafson 
Committed: Thu Sep 29 19:37:20 2016 -0700

--
 docs/configuration.html  | 13 +++--
 docs/documentation.html  |  4 ++--
 docs/implementation.html |  4 ++--
 docs/ops.html            |  8
 docs/quickstart.html     | 10 +-
 docs/security.html       |  2 +-
 docs/upgrade.html        |  2 +-
 7 files changed, 22 insertions(+), 21 deletions(-)
--

