Add protocol guide

Author: Grant Henke <[email protected]>

Reviewers: Gwen Shapira

Closes #9 from granthenke/protocol


Project: http://git-wip-us.apache.org/repos/asf/kafka-site/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka-site/commit/14ffd37c
Tree: http://git-wip-us.apache.org/repos/asf/kafka-site/tree/14ffd37c
Diff: http://git-wip-us.apache.org/repos/asf/kafka-site/diff/14ffd37c

Branch: refs/heads/asf-site
Commit: 14ffd37c0d2a5fa1d05bda14110f0057238bbdee
Parents: 1bc5cd9
Author: Grant Henke <[email protected]>
Authored: Thu Mar 10 11:09:17 2016 -0800
Committer: Gwen Shapira <[email protected]>
Committed: Thu Mar 10 11:09:17 2016 -0800

----------------------------------------------------------------------
 090/configuration.html               |    8 +-
 090/connect_config.html              |  108 ---
 090/consumer_config.html             |  100 ---
 090/generated/connect_config.html    |  108 +++
 090/generated/consumer_config.html   |  100 +++
 090/generated/kafka_config.html      |  270 +++++++
 090/generated/producer_config.html   |  104 +++
 090/generated/protocol_api_keys.html |   39 +
 090/generated/protocol_errors.html   |   40 +
 090/generated/protocol_messages.html | 1192 +++++++++++++++++++++++++++++
 090/kafka_config.html                |  270 -------
 090/producer_config.html             |  104 ---
 090/protocol.html                    |  182 +++++
 includes/header.html                 |    3 +-
 protocol.html                        |    2 +
 15 files changed, 2043 insertions(+), 587 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/kafka-site/blob/14ffd37c/090/configuration.html
----------------------------------------------------------------------
diff --git a/090/configuration.html b/090/configuration.html
index b633540..40aded5 100644
--- a/090/configuration.html
+++ b/090/configuration.html
@@ -28,7 +28,7 @@ The essential configurations are the following:
 
 Topic-level configurations and defaults are discussed in more detail <a 
href="#topic-config">below</a>.
 
-<!--#include virtual="kafka_config.html" -->
+<!--#include virtual="generated/kafka_config.html" -->
 
 <p>More details about broker configuration can be found in the scala class 
<code>kafka.server.KafkaConfig</code>.</p>
 
@@ -150,7 +150,7 @@ The following are the topic-level configurations. The 
server's default configura
 <h3><a id="producerconfigs" href="#producerconfigs">3.2 Producer 
Configs</a></h3>
 
 Below is the configuration of the Java producer:
-<!--#include virtual="producer_config.html" -->
+<!--#include virtual="generated/producer_config.html" -->
 
 <p>
     For those interested in the legacy Scala producer configs, information can 
be found <a 
href="http://kafka.apache.org/082/documentation.html#producerconfigs";>
@@ -330,7 +330,7 @@ The essential old consumer configurations are the following:
 
 <h4><a id="newconsumerconfigs" href="#newconsumerconfigs">3.3.2 New Consumer 
Configs</a></h4>
 Since 0.9.0.0 we have been working on a replacement for our existing simple 
and high-level consumers. The code is considered beta quality. Below is the 
configuration for the new consumer:
-<!--#include virtual="consumer_config.html" -->
+<!--#include virtual="generated/consumer_config.html" -->
 
 <h3><a id="connectconfigs" href="#connectconfigs">3.4 Kafka Connect 
Configs</a></h3>
-<!--#include virtual="connect_config.html" -->
+<!--#include virtual="generated/connect_config.html" -->

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/14ffd37c/090/connect_config.html
----------------------------------------------------------------------
diff --git a/090/connect_config.html b/090/connect_config.html
deleted file mode 100644
index b9d2a56..0000000
--- a/090/connect_config.html
+++ /dev/null
@@ -1,108 +0,0 @@
-<table class="data-table"><tbody>
-<tr>
-<th>Name</th>
-<th>Description</th>
-<th>Type</th>
-<th>Default</th>
-<th>Valid Values</th>
-<th>Importance</th>
-</tr>
-<tr>
-<td>group.id</td><td>A unique string that identifies the Connect cluster group 
this worker belongs to.</td><td>string</td><td></td><td></td><td>high</td></tr>
-<tr>
-<td>internal.key.converter</td><td>Converter class for internal key Connect 
data that implements the <code>Converter</code> interface. Used for converting 
data like offsets and 
configs.</td><td>class</td><td></td><td></td><td>high</td></tr>
-<tr>
-<td>internal.value.converter</td><td>Converter class for internal value Connect 
data that implements the <code>Converter</code> interface. Used for converting 
data like offsets and 
configs.</td><td>class</td><td></td><td></td><td>high</td></tr>
-<tr>
-<td>key.converter</td><td>Converter class for key Connect data that implements 
the <code>Converter</code> 
interface.</td><td>class</td><td></td><td></td><td>high</td></tr>
-<tr>
-<td>value.converter</td><td>Converter class for value Connect data that 
implements the <code>Converter</code> 
interface.</td><td>class</td><td></td><td></td><td>high</td></tr>
-<tr>
-<td>bootstrap.servers</td><td>A list of host/port pairs to use for 
establishing the initial connection to the Kafka cluster. The client will make 
use of all servers irrespective of which servers are specified here for 
bootstrapping&mdash;this list only impacts the initial hosts used to discover 
the full set of servers. This list should be in the form 
<code>host1:port1,host2:port2,...</code>. Since these servers are just used for 
the initial connection to discover the full cluster membership (which may 
change dynamically), this list need not contain the full set of servers (you 
may want more than one, though, in case a server is 
down).</td><td>list</td><td>[localhost:9092]</td><td></td><td>high</td></tr>
-<tr>
-<td>cluster</td><td>ID for this cluster, which is used to provide a namespace 
so multiple Kafka Connect clusters or instances may co-exist while sharing a 
single Kafka 
cluster.</td><td>string</td><td>connect</td><td></td><td>high</td></tr>
-<tr>
-<td>heartbeat.interval.ms</td><td>The expected time between heartbeats to the 
group coordinator when using Kafka's group management facilities. Heartbeats 
are used to ensure that the worker's session stays active and to facilitate 
rebalancing when new members join or leave the group. The value must be set 
lower than <code>session.timeout.ms</code>, but typically should be set no 
higher than 1/3 of that value. It can be adjusted even lower to control the 
expected time for normal 
rebalances.</td><td>int</td><td>3000</td><td></td><td>high</td></tr>
-<tr>
-<td>session.timeout.ms</td><td>The timeout used to detect failures when using 
Kafka's group management 
facilities.</td><td>int</td><td>30000</td><td></td><td>high</td></tr>
-<tr>
-<td>ssl.key.password</td><td>The password of the private key in the key store 
file. This is optional for 
client.</td><td>password</td><td>null</td><td></td><td>high</td></tr>
-<tr>
-<td>ssl.keystore.location</td><td>The location of the key store file. This is 
optional for client and can be used for two-way authentication for 
client.</td><td>string</td><td>null</td><td></td><td>high</td></tr>
-<tr>
-<td>ssl.keystore.password</td><td>The store password for the key store 
file. This is optional for client and only needed if ssl.keystore.location is 
configured. </td><td>password</td><td>null</td><td></td><td>high</td></tr>
-<tr>
-<td>ssl.truststore.location</td><td>The location of the trust store file. 
</td><td>string</td><td>null</td><td></td><td>high</td></tr>
-<tr>
-<td>ssl.truststore.password</td><td>The password for the trust store file. 
</td><td>password</td><td>null</td><td></td><td>high</td></tr>
-<tr>
-<td>connections.max.idle.ms</td><td>Close idle connections after the number of 
milliseconds specified by this 
config.</td><td>long</td><td>540000</td><td></td><td>medium</td></tr>
-<tr>
-<td>receive.buffer.bytes</td><td>The size of the TCP receive buffer 
(SO_RCVBUF) to use when reading 
data.</td><td>int</td><td>32768</td><td>[0,...]</td><td>medium</td></tr>
-<tr>
-<td>request.timeout.ms</td><td>The configuration controls the maximum amount 
of time the client will wait for the response of a request. If the response is 
not received before the timeout elapses the client will resend the request if 
necessary or fail the request if retries are 
exhausted.</td><td>int</td><td>40000</td><td>[0,...]</td><td>medium</td></tr>
-<tr>
-<td>sasl.kerberos.service.name</td><td>The Kerberos principal name that Kafka 
runs as. This can be defined either in Kafka's JAAS config or in Kafka's 
config.</td><td>string</td><td>null</td><td></td><td>medium</td></tr>
-<tr>
-<td>security.protocol</td><td>Protocol used to communicate with brokers. Valid 
values are: PLAINTEXT, SSL, SASL_PLAINTEXT, 
SASL_SSL.</td><td>string</td><td>PLAINTEXT</td><td></td><td>medium</td></tr>
-<tr>
-<td>send.buffer.bytes</td><td>The size of the TCP send buffer (SO_SNDBUF) to 
use when sending 
data.</td><td>int</td><td>131072</td><td>[0,...]</td><td>medium</td></tr>
-<tr>
-<td>ssl.enabled.protocols</td><td>The list of protocols enabled for SSL 
connections.</td><td>list</td><td>[TLSv1.2, TLSv1.1, 
TLSv1]</td><td></td><td>medium</td></tr>
-<tr>
-<td>ssl.keystore.type</td><td>The file format of the key store file. This is 
optional for 
client.</td><td>string</td><td>JKS</td><td></td><td>medium</td></tr>
-<tr>
-<td>ssl.protocol</td><td>The SSL protocol used to generate the SSLContext. 
Default setting is TLS, which is fine for most cases. Allowed values in recent 
JVMs are TLS, TLSv1.1 and TLSv1.2. SSL, SSLv2 and SSLv3 may be supported in 
older JVMs, but their usage is discouraged due to known security 
vulnerabilities.</td><td>string</td><td>TLS</td><td></td><td>medium</td></tr>
-<tr>
-<td>ssl.provider</td><td>The name of the security provider used for SSL 
connections. Default value is the default security provider of the 
JVM.</td><td>string</td><td>null</td><td></td><td>medium</td></tr>
-<tr>
-<td>ssl.truststore.type</td><td>The file format of the trust store 
file.</td><td>string</td><td>JKS</td><td></td><td>medium</td></tr>
-<tr>
-<td>worker.sync.timeout.ms</td><td>When the worker is out of sync with other 
workers and needs to resynchronize configurations, wait up to this amount of 
time before giving up, leaving the group, and waiting a backoff period before 
rejoining.</td><td>int</td><td>3000</td><td></td><td>medium</td></tr>
-<tr>
-<td>worker.unsync.backoff.ms</td><td>When the worker is out of sync with other 
workers and fails to catch up within worker.sync.timeout.ms, leave the Connect 
cluster for this long before 
rejoining.</td><td>int</td><td>300000</td><td></td><td>medium</td></tr>
-<tr>
-<td>client.id</td><td>An id string to pass to the server when making requests. 
The purpose of this is to be able to track the source of requests beyond just 
ip/port by allowing a logical application name to be included in server-side 
request logging.</td><td>string</td><td>""</td><td></td><td>low</td></tr>
-<tr>
-<td>metadata.max.age.ms</td><td>The period of time in milliseconds after which 
we force a refresh of metadata even if we haven't seen any partition leadership 
changes to proactively discover any new brokers or 
partitions.</td><td>long</td><td>300000</td><td>[0,...]</td><td>low</td></tr>
-<tr>
-<td>metric.reporters</td><td>A list of classes to use as metrics reporters. 
Implementing the <code>MetricReporter</code> interface allows plugging in 
classes that will be notified of new metric creation. The JmxReporter is always 
included to register JMX 
statistics.</td><td>list</td><td>[]</td><td></td><td>low</td></tr>
-<tr>
-<td>metrics.num.samples</td><td>The number of samples maintained to compute 
metrics.</td><td>int</td><td>2</td><td>[1,...]</td><td>low</td></tr>
-<tr>
-<td>metrics.sample.window.ms</td><td>The window of time a metrics sample is 
computed over.</td><td>long</td><td>30000</td><td>[0,...]</td><td>low</td></tr>
-<tr>
-<td>offset.flush.interval.ms</td><td>Interval at which to try committing 
offsets for tasks.</td><td>long</td><td>60000</td><td></td><td>low</td></tr>
-<tr>
-<td>offset.flush.timeout.ms</td><td>Maximum number of milliseconds to wait for 
records to flush and partition offset data to be committed to offset storage 
before cancelling the process and restoring the offset data to be committed in 
a future attempt.</td><td>long</td><td>5000</td><td></td><td>low</td></tr>
-<tr>
-<td>reconnect.backoff.ms</td><td>The amount of time to wait before attempting 
to reconnect to a given host. This avoids repeatedly connecting to a host in a 
tight loop. This backoff applies to all requests sent by the consumer to the 
broker.</td><td>long</td><td>50</td><td>[0,...]</td><td>low</td></tr>
-<tr>
-<td>rest.advertised.host.name</td><td>If this is set, this is the hostname 
that will be given out to other workers to connect 
to.</td><td>string</td><td>null</td><td></td><td>low</td></tr>
-<tr>
-<td>rest.advertised.port</td><td>If this is set, this is the port that will be 
given out to other workers to connect 
to.</td><td>int</td><td>null</td><td></td><td>low</td></tr>
-<tr>
-<td>rest.host.name</td><td>Hostname for the REST API. If this is set, it will 
only bind to this 
interface.</td><td>string</td><td>null</td><td></td><td>low</td></tr>
-<tr>
-<td>rest.port</td><td>Port for the REST API to listen 
on.</td><td>int</td><td>8083</td><td></td><td>low</td></tr>
-<tr>
-<td>retry.backoff.ms</td><td>The amount of time to wait before attempting to 
retry a failed fetch request to a given topic partition. This avoids repeated 
fetching-and-failing in a tight 
loop.</td><td>long</td><td>100</td><td>[0,...]</td><td>low</td></tr>
-<tr>
-<td>sasl.kerberos.kinit.cmd</td><td>Kerberos kinit command 
path.</td><td>string</td><td>/usr/bin/kinit</td><td></td><td>low</td></tr>
-<tr>
-<td>sasl.kerberos.min.time.before.relogin</td><td>Login thread sleep time 
between refresh 
attempts.</td><td>long</td><td>60000</td><td></td><td>low</td></tr>
-<tr>
-<td>sasl.kerberos.ticket.renew.jitter</td><td>Percentage of random jitter 
added to the renewal 
time.</td><td>double</td><td>0.05</td><td></td><td>low</td></tr>
-<tr>
-<td>sasl.kerberos.ticket.renew.window.factor</td><td>Login thread will sleep 
until the specified window factor of time from last refresh to ticket's expiry 
has been reached, at which time it will try to renew the 
ticket.</td><td>double</td><td>0.8</td><td></td><td>low</td></tr>
-<tr>
-<td>ssl.cipher.suites</td><td>A list of cipher suites. This is a named 
combination of authentication, encryption, MAC and key exchange algorithm used 
to negotiate the security settings for a network connection using TLS or SSL 
network protocol. By default all the available cipher suites are 
supported.</td><td>list</td><td>null</td><td></td><td>low</td></tr>
-<tr>
-<td>ssl.endpoint.identification.algorithm</td><td>The endpoint identification 
algorithm to validate server hostname using server certificate. 
</td><td>string</td><td>null</td><td></td><td>low</td></tr>
-<tr>
-<td>ssl.keymanager.algorithm</td><td>The algorithm used by key manager factory 
for SSL connections. Default value is the key manager factory algorithm 
configured for the Java Virtual 
Machine.</td><td>string</td><td>SunX509</td><td></td><td>low</td></tr>
-<tr>
-<td>ssl.trustmanager.algorithm</td><td>The algorithm used by trust manager 
factory for SSL connections. Default value is the trust manager factory 
algorithm configured for the Java Virtual 
Machine.</td><td>string</td><td>PKIX</td><td></td><td>low</td></tr>
-<tr>
-<td>task.shutdown.graceful.timeout.ms</td><td>Amount of time to wait for tasks 
to shutdown gracefully. This is the total amount of time, not per task. All 
tasks have shutdown triggered, then they are waited on 
sequentially.</td><td>long</td><td>5000</td><td></td><td>low</td></tr>
-</tbody></table>

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/14ffd37c/090/consumer_config.html
----------------------------------------------------------------------
diff --git a/090/consumer_config.html b/090/consumer_config.html
deleted file mode 100644
index 942dc46..0000000
--- a/090/consumer_config.html
+++ /dev/null
@@ -1,100 +0,0 @@
-<table class="data-table"><tbody>
-<tr>
-<th>Name</th>
-<th>Description</th>
-<th>Type</th>
-<th>Default</th>
-<th>Valid Values</th>
-<th>Importance</th>
-</tr>
-<tr>
-<td>bootstrap.servers</td><td>A list of host/port pairs to use for 
establishing the initial connection to the Kafka cluster. The client will make 
use of all servers irrespective of which servers are specified here for 
bootstrapping&mdash;this list only impacts the initial hosts used to discover 
the full set of servers. This list should be in the form 
<code>host1:port1,host2:port2,...</code>. Since these servers are just used for 
the initial connection to discover the full cluster membership (which may 
change dynamically), this list need not contain the full set of servers (you 
may want more than one, though, in case a server is 
down).</td><td>list</td><td></td><td></td><td>high</td></tr>
-<tr>
-<td>key.deserializer</td><td>Deserializer class for key that implements the 
<code>Deserializer</code> 
interface.</td><td>class</td><td></td><td></td><td>high</td></tr>
-<tr>
-<td>value.deserializer</td><td>Deserializer class for value that implements 
the <code>Deserializer</code> 
interface.</td><td>class</td><td></td><td></td><td>high</td></tr>
-<tr>
-<td>fetch.min.bytes</td><td>The minimum amount of data the server should 
return for a fetch request. If insufficient data is available the request will 
wait for that much data to accumulate before answering the request. The default 
setting of 1 byte means that fetch requests are answered as soon as a single 
byte of data is available or the fetch request times out waiting for data to 
arrive. Setting this to something greater than 1 will cause the server to wait 
for larger amounts of data to accumulate which can improve server throughput a 
bit at the cost of some additional 
latency.</td><td>int</td><td>1</td><td>[0,...]</td><td>high</td></tr>
-<tr>
-<td>group.id</td><td>A unique string that identifies the consumer group this 
consumer belongs to. This property is required if the consumer uses either the 
group management functionality by using <code>subscribe(topic)</code> or the 
Kafka-based offset management 
strategy.</td><td>string</td><td>""</td><td></td><td>high</td></tr>
-<tr>
-<td>heartbeat.interval.ms</td><td>The expected time between heartbeats to the 
consumer coordinator when using Kafka's group management facilities. Heartbeats 
are used to ensure that the consumer's session stays active and to facilitate 
rebalancing when new consumers join or leave the group. The value must be set 
lower than <code>session.timeout.ms</code>, but typically should be set no 
higher than 1/3 of that value. It can be adjusted even lower to control the 
expected time for normal 
rebalances.</td><td>int</td><td>3000</td><td></td><td>high</td></tr>
-<tr>
-<td>max.partition.fetch.bytes</td><td>The maximum amount of data per-partition 
the server will return. The maximum total memory used for a request will be 
<code>#partitions * max.partition.fetch.bytes</code>. This size must be at 
least as large as the maximum message size the server allows or else it is 
possible for the producer to send messages larger than the consumer can fetch. 
If that happens, the consumer can get stuck trying to fetch a large message on 
a certain 
partition.</td><td>int</td><td>1048576</td><td>[0,...]</td><td>high</td></tr>
-<tr>
-<td>session.timeout.ms</td><td>The timeout used to detect failures when using 
Kafka's group management 
facilities.</td><td>int</td><td>30000</td><td></td><td>high</td></tr>
-<tr>
-<td>ssl.key.password</td><td>The password of the private key in the key store 
file. This is optional for 
client.</td><td>password</td><td>null</td><td></td><td>high</td></tr>
-<tr>
-<td>ssl.keystore.location</td><td>The location of the key store file. This is 
optional for client and can be used for two-way authentication for 
client.</td><td>string</td><td>null</td><td></td><td>high</td></tr>
-<tr>
-<td>ssl.keystore.password</td><td>The store password for the key store 
file. This is optional for client and only needed if ssl.keystore.location is 
configured. </td><td>password</td><td>null</td><td></td><td>high</td></tr>
-<tr>
-<td>ssl.truststore.location</td><td>The location of the trust store file. 
</td><td>string</td><td>null</td><td></td><td>high</td></tr>
-<tr>
-<td>ssl.truststore.password</td><td>The password for the trust store file. 
</td><td>password</td><td>null</td><td></td><td>high</td></tr>
-<tr>
-<td>auto.offset.reset</td><td>What to do when there is no initial offset in 
Kafka or if the current offset does not exist any more on the server (e.g. 
because that data has been deleted): <ul><li>earliest: automatically reset the 
offset to the earliest offset</li><li>latest: automatically reset the offset to the 
latest offset</li><li>none: throw exception to the consumer if no previous 
offset is found for the consumer's group</li><li>anything else: throw exception 
to the consumer.</li></ul></td><td>string</td><td>latest</td><td>[latest, 
earliest, none]</td><td>medium</td></tr>
-<tr>
-<td>connections.max.idle.ms</td><td>Close idle connections after the number of 
milliseconds specified by this 
config.</td><td>long</td><td>540000</td><td></td><td>medium</td></tr>
-<tr>
-<td>enable.auto.commit</td><td>If true the consumer's offset will be 
periodically committed in the 
background.</td><td>boolean</td><td>true</td><td></td><td>medium</td></tr>
-<tr>
-<td>partition.assignment.strategy</td><td>The class name of the partition 
assignment strategy that the client will use to distribute partition ownership 
amongst consumer instances when group management is 
used</td><td>list</td><td>[org.apache.kafka.clients.consumer.RangeAssignor]</td><td></td><td>medium</td></tr>
-<tr>
-<td>receive.buffer.bytes</td><td>The size of the TCP receive buffer 
(SO_RCVBUF) to use when reading 
data.</td><td>int</td><td>32768</td><td>[0,...]</td><td>medium</td></tr>
-<tr>
-<td>request.timeout.ms</td><td>The configuration controls the maximum amount 
of time the client will wait for the response of a request. If the response is 
not received before the timeout elapses the client will resend the request if 
necessary or fail the request if retries are 
exhausted.</td><td>int</td><td>40000</td><td>[0,...]</td><td>medium</td></tr>
-<tr>
-<td>sasl.kerberos.service.name</td><td>The Kerberos principal name that Kafka 
runs as. This can be defined either in Kafka's JAAS config or in Kafka's 
config.</td><td>string</td><td>null</td><td></td><td>medium</td></tr>
-<tr>
-<td>security.protocol</td><td>Protocol used to communicate with brokers. Valid 
values are: PLAINTEXT, SSL, SASL_PLAINTEXT, 
SASL_SSL.</td><td>string</td><td>PLAINTEXT</td><td></td><td>medium</td></tr>
-<tr>
-<td>send.buffer.bytes</td><td>The size of the TCP send buffer (SO_SNDBUF) to 
use when sending 
data.</td><td>int</td><td>131072</td><td>[0,...]</td><td>medium</td></tr>
-<tr>
-<td>ssl.enabled.protocols</td><td>The list of protocols enabled for SSL 
connections.</td><td>list</td><td>[TLSv1.2, TLSv1.1, 
TLSv1]</td><td></td><td>medium</td></tr>
-<tr>
-<td>ssl.keystore.type</td><td>The file format of the key store file. This is 
optional for 
client.</td><td>string</td><td>JKS</td><td></td><td>medium</td></tr>
-<tr>
-<td>ssl.protocol</td><td>The SSL protocol used to generate the SSLContext. 
Default setting is TLS, which is fine for most cases. Allowed values in recent 
JVMs are TLS, TLSv1.1 and TLSv1.2. SSL, SSLv2 and SSLv3 may be supported in 
older JVMs, but their usage is discouraged due to known security 
vulnerabilities.</td><td>string</td><td>TLS</td><td></td><td>medium</td></tr>
-<tr>
-<td>ssl.provider</td><td>The name of the security provider used for SSL 
connections. Default value is the default security provider of the 
JVM.</td><td>string</td><td>null</td><td></td><td>medium</td></tr>
-<tr>
-<td>ssl.truststore.type</td><td>The file format of the trust store 
file.</td><td>string</td><td>JKS</td><td></td><td>medium</td></tr>
-<tr>
-<td>auto.commit.interval.ms</td><td>The frequency in milliseconds that the 
consumer offsets are auto-committed to Kafka if <code>enable.auto.commit</code> 
is set to 
<code>true</code>.</td><td>long</td><td>5000</td><td>[0,...]</td><td>low</td></tr>
-<tr>
-<td>check.crcs</td><td>Automatically check the CRC32 of the records consumed. 
This ensures no on-the-wire or on-disk corruption to the messages occurred. 
This check adds some overhead, so it may be disabled in cases seeking extreme 
performance.</td><td>boolean</td><td>true</td><td></td><td>low</td></tr>
-<tr>
-<td>client.id</td><td>An id string to pass to the server when making requests. 
The purpose of this is to be able to track the source of requests beyond just 
ip/port by allowing a logical application name to be included in server-side 
request logging.</td><td>string</td><td>""</td><td></td><td>low</td></tr>
-<tr>
-<td>fetch.max.wait.ms</td><td>The maximum amount of time the server will block 
before answering the fetch request if there isn't sufficient data to 
immediately satisfy the requirement given by 
fetch.min.bytes.</td><td>int</td><td>500</td><td>[0,...]</td><td>low</td></tr>
-<tr>
-<td>metadata.max.age.ms</td><td>The period of time in milliseconds after which 
we force a refresh of metadata even if we haven't seen any partition leadership 
changes to proactively discover any new brokers or 
partitions.</td><td>long</td><td>300000</td><td>[0,...]</td><td>low</td></tr>
-<tr>
-<td>metric.reporters</td><td>A list of classes to use as metrics reporters. 
Implementing the <code>MetricReporter</code> interface allows plugging in 
classes that will be notified of new metric creation. The JmxReporter is always 
included to register JMX 
statistics.</td><td>list</td><td>[]</td><td></td><td>low</td></tr>
-<tr>
-<td>metrics.num.samples</td><td>The number of samples maintained to compute 
metrics.</td><td>int</td><td>2</td><td>[1,...]</td><td>low</td></tr>
-<tr>
-<td>metrics.sample.window.ms</td><td>The window of time a metrics sample is 
computed over.</td><td>long</td><td>30000</td><td>[0,...]</td><td>low</td></tr>
-<tr>
-<td>reconnect.backoff.ms</td><td>The amount of time to wait before attempting 
to reconnect to a given host. This avoids repeatedly connecting to a host in a 
tight loop. This backoff applies to all requests sent by the consumer to the 
broker.</td><td>long</td><td>50</td><td>[0,...]</td><td>low</td></tr>
-<tr>
-<td>retry.backoff.ms</td><td>The amount of time to wait before attempting to 
retry a failed fetch request to a given topic partition. This avoids repeated 
fetching-and-failing in a tight 
loop.</td><td>long</td><td>100</td><td>[0,...]</td><td>low</td></tr>
-<tr>
-<td>sasl.kerberos.kinit.cmd</td><td>Kerberos kinit command 
path.</td><td>string</td><td>/usr/bin/kinit</td><td></td><td>low</td></tr>
-<tr>
-<td>sasl.kerberos.min.time.before.relogin</td><td>Login thread sleep time 
between refresh 
attempts.</td><td>long</td><td>60000</td><td></td><td>low</td></tr>
-<tr>
-<td>sasl.kerberos.ticket.renew.jitter</td><td>Percentage of random jitter 
added to the renewal 
time.</td><td>double</td><td>0.05</td><td></td><td>low</td></tr>
-<tr>
-<td>sasl.kerberos.ticket.renew.window.factor</td><td>Login thread will sleep 
until the specified window factor of time from last refresh to ticket's expiry 
has been reached, at which time it will try to renew the 
ticket.</td><td>double</td><td>0.8</td><td></td><td>low</td></tr>
-<tr>
-<td>ssl.cipher.suites</td><td>A list of cipher suites. This is a named 
combination of authentication, encryption, MAC and key exchange algorithm used 
to negotiate the security settings for a network connection using TLS or SSL 
network protocol. By default all the available cipher suites are 
supported.</td><td>list</td><td>null</td><td></td><td>low</td></tr>
-<tr>
-<td>ssl.endpoint.identification.algorithm</td><td>The endpoint identification 
algorithm to validate server hostname using server certificate. 
</td><td>string</td><td>null</td><td></td><td>low</td></tr>
-<tr>
-<td>ssl.keymanager.algorithm</td><td>The algorithm used by key manager factory 
for SSL connections. Default value is the key manager factory algorithm 
configured for the Java Virtual 
Machine.</td><td>string</td><td>SunX509</td><td></td><td>low</td></tr>
-<tr>
-<td>ssl.trustmanager.algorithm</td><td>The algorithm used by trust manager 
factory for SSL connections. Default value is the trust manager factory 
algorithm configured for the Java Virtual 
Machine.</td><td>string</td><td>PKIX</td><td></td><td>low</td></tr>
-</tbody></table>

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/14ffd37c/090/generated/connect_config.html
----------------------------------------------------------------------
diff --git a/090/generated/connect_config.html 
b/090/generated/connect_config.html
new file mode 100644
index 0000000..b9d2a56
--- /dev/null
+++ b/090/generated/connect_config.html
@@ -0,0 +1,108 @@
+<table class="data-table"><tbody>
+<tr>
+<th>Name</th>
+<th>Description</th>
+<th>Type</th>
+<th>Default</th>
+<th>Valid Values</th>
+<th>Importance</th>
+</tr>
+<tr>
+<td>group.id</td><td>A unique string that identifies the Connect cluster group 
this worker belongs to.</td><td>string</td><td></td><td></td><td>high</td></tr>
+<tr>
+<td>internal.key.converter</td><td>Converter class for internal key Connect 
data that implements the <code>Converter</code> interface. Used for converting 
data like offsets and 
configs.</td><td>class</td><td></td><td></td><td>high</td></tr>
+<tr>
+<td>internal.value.converter</td><td>Converter class for internal value Connect 
data that implements the <code>Converter</code> interface. Used for converting 
data like offsets and 
configs.</td><td>class</td><td></td><td></td><td>high</td></tr>
+<tr>
+<td>key.converter</td><td>Converter class for key Connect data that implements 
the <code>Converter</code> 
interface.</td><td>class</td><td></td><td></td><td>high</td></tr>
+<tr>
+<td>value.converter</td><td>Converter class for value Connect data that 
implements the <code>Converter</code> 
interface.</td><td>class</td><td></td><td></td><td>high</td></tr>
+<tr>
+<td>bootstrap.servers</td><td>A list of host/port pairs to use for 
establishing the initial connection to the Kafka cluster. The client will make 
use of all servers irrespective of which servers are specified here for 
bootstrapping&mdash;this list only impacts the initial hosts used to discover 
the full set of servers. This list should be in the form 
<code>host1:port1,host2:port2,...</code>. Since these servers are just used for 
the initial connection to discover the full cluster membership (which may 
change dynamically), this list need not contain the full set of servers (you 
may want more than one, though, in case a server is 
down).</td><td>list</td><td>[localhost:9092]</td><td></td><td>high</td></tr>
+<tr>
+<td>cluster</td><td>ID for this cluster, which is used to provide a namespace 
so multiple Kafka Connect clusters or instances may co-exist while sharing a 
single Kafka 
cluster.</td><td>string</td><td>connect</td><td></td><td>high</td></tr>
+<tr>
+<td>heartbeat.interval.ms</td><td>The expected time between heartbeats to the 
group coordinator when using Kafka's group management facilities. Heartbeats 
are used to ensure that the worker's session stays active and to facilitate 
rebalancing when new members join or leave the group. The value must be set 
lower than <code>session.timeout.ms</code>, but typically should be set no 
higher than 1/3 of that value. It can be adjusted even lower to control the 
expected time for normal 
rebalances.</td><td>int</td><td>3000</td><td></td><td>high</td></tr>
+<tr>
+<td>session.timeout.ms</td><td>The timeout used to detect failures when using 
Kafka's group management 
facilities.</td><td>int</td><td>30000</td><td></td><td>high</td></tr>
+<tr>
+<td>ssl.key.password</td><td>The password of the private key in the key store 
file. This is optional for 
client.</td><td>password</td><td>null</td><td></td><td>high</td></tr>
+<tr>
+<td>ssl.keystore.location</td><td>The location of the key store file. This is 
optional for client and can be used for two-way authentication for 
client.</td><td>string</td><td>null</td><td></td><td>high</td></tr>
+<tr>
+<td>ssl.keystore.password</td><td>The store password for the key store 
file. This is optional for client and only needed if ssl.keystore.location is 
configured. </td><td>password</td><td>null</td><td></td><td>high</td></tr>
+<tr>
+<td>ssl.truststore.location</td><td>The location of the trust store file. 
</td><td>string</td><td>null</td><td></td><td>high</td></tr>
+<tr>
+<td>ssl.truststore.password</td><td>The password for the trust store file. 
</td><td>password</td><td>null</td><td></td><td>high</td></tr>
+<tr>
+<td>connections.max.idle.ms</td><td>Close idle connections after the number of 
milliseconds specified by this 
config.</td><td>long</td><td>540000</td><td></td><td>medium</td></tr>
+<tr>
+<td>receive.buffer.bytes</td><td>The size of the TCP receive buffer 
(SO_RCVBUF) to use when reading 
data.</td><td>int</td><td>32768</td><td>[0,...]</td><td>medium</td></tr>
+<tr>
+<td>request.timeout.ms</td><td>The configuration controls the maximum amount 
of time the client will wait for the response of a request. If the response is 
not received before the timeout elapses the client will resend the request if 
necessary or fail the request if retries are 
exhausted.</td><td>int</td><td>40000</td><td>[0,...]</td><td>medium</td></tr>
+<tr>
+<td>sasl.kerberos.service.name</td><td>The Kerberos principal name that Kafka 
runs as. This can be defined either in Kafka's JAAS config or in Kafka's 
config.</td><td>string</td><td>null</td><td></td><td>medium</td></tr>
+<tr>
+<td>security.protocol</td><td>Protocol used to communicate with brokers. Valid 
values are: PLAINTEXT, SSL, SASL_PLAINTEXT, 
SASL_SSL.</td><td>string</td><td>PLAINTEXT</td><td></td><td>medium</td></tr>
+<tr>
+<td>send.buffer.bytes</td><td>The size of the TCP send buffer (SO_SNDBUF) to 
use when sending 
data.</td><td>int</td><td>131072</td><td>[0,...]</td><td>medium</td></tr>
+<tr>
+<td>ssl.enabled.protocols</td><td>The list of protocols enabled for SSL 
connections.</td><td>list</td><td>[TLSv1.2, TLSv1.1, 
TLSv1]</td><td></td><td>medium</td></tr>
+<tr>
+<td>ssl.keystore.type</td><td>The file format of the key store file. This is 
optional for 
client.</td><td>string</td><td>JKS</td><td></td><td>medium</td></tr>
+<tr>
+<td>ssl.protocol</td><td>The SSL protocol used to generate the SSLContext. 
Default setting is TLS, which is fine for most cases. Allowed values in recent 
JVMs are TLS, TLSv1.1 and TLSv1.2. SSL, SSLv2 and SSLv3 may be supported in 
older JVMs, but their usage is discouraged due to known security 
vulnerabilities.</td><td>string</td><td>TLS</td><td></td><td>medium</td></tr>
+<tr>
+<td>ssl.provider</td><td>The name of the security provider used for SSL 
connections. Default value is the default security provider of the 
JVM.</td><td>string</td><td>null</td><td></td><td>medium</td></tr>
+<tr>
+<td>ssl.truststore.type</td><td>The file format of the trust store 
file.</td><td>string</td><td>JKS</td><td></td><td>medium</td></tr>
+<tr>
+<td>worker.sync.timeout.ms</td><td>When the worker is out of sync with other 
workers and needs to resynchronize configurations, wait up to this amount of 
time before giving up, leaving the group, and waiting a backoff period before 
rejoining.</td><td>int</td><td>3000</td><td></td><td>medium</td></tr>
+<tr>
+<td>worker.unsync.backoff.ms</td><td>When the worker is out of sync with other 
workers and fails to catch up within worker.sync.timeout.ms, leave the Connect 
cluster for this long before 
rejoining.</td><td>int</td><td>300000</td><td></td><td>medium</td></tr>
+<tr>
+<td>client.id</td><td>An id string to pass to the server when making requests. 
The purpose of this is to be able to track the source of requests beyond just 
ip/port by allowing a logical application name to be included in server-side 
request logging.</td><td>string</td><td>""</td><td></td><td>low</td></tr>
+<tr>
+<td>metadata.max.age.ms</td><td>The period of time in milliseconds after which 
we force a refresh of metadata even if we haven't seen any partition leadership 
changes to proactively discover any new brokers or 
partitions.</td><td>long</td><td>300000</td><td>[0,...]</td><td>low</td></tr>
+<tr>
+<td>metric.reporters</td><td>A list of classes to use as metrics reporters. 
Implementing the <code>MetricReporter</code> interface allows plugging in 
classes that will be notified of new metric creation. The JmxReporter is always 
included to register JMX 
statistics.</td><td>list</td><td>[]</td><td></td><td>low</td></tr>
+<tr>
+<td>metrics.num.samples</td><td>The number of samples maintained to compute 
metrics.</td><td>int</td><td>2</td><td>[1,...]</td><td>low</td></tr>
+<tr>
+<td>metrics.sample.window.ms</td><td>The window of time a metrics sample is 
computed over.</td><td>long</td><td>30000</td><td>[0,...]</td><td>low</td></tr>
+<tr>
+<td>offset.flush.interval.ms</td><td>Interval at which to try committing 
offsets for tasks.</td><td>long</td><td>60000</td><td></td><td>low</td></tr>
+<tr>
+<td>offset.flush.timeout.ms</td><td>Maximum number of milliseconds to wait for 
records to flush and partition offset data to be committed to offset storage 
before cancelling the process and restoring the offset data to be committed in 
a future attempt.</td><td>long</td><td>5000</td><td></td><td>low</td></tr>
+<tr>
+<td>reconnect.backoff.ms</td><td>The amount of time to wait before attempting 
to reconnect to a given host. This avoids repeatedly connecting to a host in a 
tight loop. This backoff applies to all requests sent by the consumer to the 
broker.</td><td>long</td><td>50</td><td>[0,...]</td><td>low</td></tr>
+<tr>
+<td>rest.advertised.host.name</td><td>If this is set, this is the hostname 
that will be given out to other workers to connect 
to.</td><td>string</td><td>null</td><td></td><td>low</td></tr>
+<tr>
+<td>rest.advertised.port</td><td>If this is set, this is the port that will be 
given out to other workers to connect 
to.</td><td>int</td><td>null</td><td></td><td>low</td></tr>
+<tr>
+<td>rest.host.name</td><td>Hostname for the REST API. If this is set, it will 
only bind to this 
interface.</td><td>string</td><td>null</td><td></td><td>low</td></tr>
+<tr>
+<td>rest.port</td><td>Port for the REST API to listen 
on.</td><td>int</td><td>8083</td><td></td><td>low</td></tr>
+<tr>
+<td>retry.backoff.ms</td><td>The amount of time to wait before attempting to 
retry a failed fetch request to a given topic partition. This avoids repeated 
fetching-and-failing in a tight 
loop.</td><td>long</td><td>100</td><td>[0,...]</td><td>low</td></tr>
+<tr>
+<td>sasl.kerberos.kinit.cmd</td><td>Kerberos kinit command 
path.</td><td>string</td><td>/usr/bin/kinit</td><td></td><td>low</td></tr>
+<tr>
+<td>sasl.kerberos.min.time.before.relogin</td><td>Login thread sleep time 
between refresh 
attempts.</td><td>long</td><td>60000</td><td></td><td>low</td></tr>
+<tr>
+<td>sasl.kerberos.ticket.renew.jitter</td><td>Percentage of random jitter 
added to the renewal 
time.</td><td>double</td><td>0.05</td><td></td><td>low</td></tr>
+<tr>
+<td>sasl.kerberos.ticket.renew.window.factor</td><td>Login thread will sleep 
until the specified window factor of time from last refresh to ticket's expiry 
has been reached, at which time it will try to renew the 
ticket.</td><td>double</td><td>0.8</td><td></td><td>low</td></tr>
+<tr>
+<td>ssl.cipher.suites</td><td>A list of cipher suites. This is a named 
combination of authentication, encryption, MAC and key exchange algorithm used 
to negotiate the security settings for a network connection using TLS or SSL 
network protocol. By default all the available cipher suites are 
supported.</td><td>list</td><td>null</td><td></td><td>low</td></tr>
+<tr>
+<td>ssl.endpoint.identification.algorithm</td><td>The endpoint identification 
algorithm to validate server hostname using server certificate. 
</td><td>string</td><td>null</td><td></td><td>low</td></tr>
+<tr>
+<td>ssl.keymanager.algorithm</td><td>The algorithm used by key manager factory 
for SSL connections. Default value is the key manager factory algorithm 
configured for the Java Virtual 
Machine.</td><td>string</td><td>SunX509</td><td></td><td>low</td></tr>
+<tr>
+<td>ssl.trustmanager.algorithm</td><td>The algorithm used by trust manager 
factory for SSL connections. Default value is the trust manager factory 
algorithm configured for the Java Virtual 
Machine.</td><td>string</td><td>PKIX</td><td></td><td>low</td></tr>
+<tr>
+<td>task.shutdown.graceful.timeout.ms</td><td>Amount of time to wait for tasks 
to shutdown gracefully. This is the total amount of time, not per task. All 
tasks have shutdown triggered, then they are waited on 
sequentially.</td><td>long</td><td>5000</td><td></td><td>low</td></tr>
+</tbody></table>
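
For reference, a distributed Connect worker is configured with a properties
file built from the keys in the table above. A minimal sketch (all values are
placeholders, not part of this commit):

    # connect-worker.properties (sketch)
    bootstrap.servers=localhost:9092
    # Namespace/group for this Connect cluster (see group.id above).
    group.id=connect-cluster
    # Converters for connector data and for internal offsets/configs.
    key.converter=org.apache.kafka.connect.json.JsonConverter
    value.converter=org.apache.kafka.connect.json.JsonConverter
    internal.key.converter=org.apache.kafka.connect.json.JsonConverter
    internal.value.converter=org.apache.kafka.connect.json.JsonConverter
    # REST interface other workers connect to (see rest.port above).
    rest.port=8083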

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/14ffd37c/090/generated/consumer_config.html
----------------------------------------------------------------------
diff --git a/090/generated/consumer_config.html 
b/090/generated/consumer_config.html
new file mode 100644
index 0000000..942dc46
--- /dev/null
+++ b/090/generated/consumer_config.html
@@ -0,0 +1,100 @@
+<table class="data-table"><tbody>
+<tr>
+<th>Name</th>
+<th>Description</th>
+<th>Type</th>
+<th>Default</th>
+<th>Valid Values</th>
+<th>Importance</th>
+</tr>
+<tr>
+<td>bootstrap.servers</td><td>A list of host/port pairs to use for 
establishing the initial connection to the Kafka cluster. The client will make 
use of all servers irrespective of which servers are specified here for 
bootstrapping&mdash;this list only impacts the initial hosts used to discover 
the full set of servers. This list should be in the form 
<code>host1:port1,host2:port2,...</code>. Since these servers are just used for 
the initial connection to discover the full cluster membership (which may 
change dynamically), this list need not contain the full set of servers (you 
may want more than one, though, in case a server is 
down).</td><td>list</td><td></td><td></td><td>high</td></tr>
+<tr>
+<td>key.deserializer</td><td>Deserializer class for key that implements the 
<code>Deserializer</code> 
interface.</td><td>class</td><td></td><td></td><td>high</td></tr>
+<tr>
+<td>value.deserializer</td><td>Deserializer class for value that implements 
the <code>Deserializer</code> 
interface.</td><td>class</td><td></td><td></td><td>high</td></tr>
+<tr>
+<td>fetch.min.bytes</td><td>The minimum amount of data the server should 
return for a fetch request. If insufficient data is available the request will 
wait for that much data to accumulate before answering the request. The default 
setting of 1 byte means that fetch requests are answered as soon as a single 
byte of data is available or the fetch request times out waiting for data to 
arrive. Setting this to something greater than 1 will cause the server to wait 
for larger amounts of data to accumulate which can improve server throughput a 
bit at the cost of some additional 
latency.</td><td>int</td><td>1</td><td>[0,...]</td><td>high</td></tr>
+<tr>
+<td>group.id</td><td>A unique string that identifies the consumer group this 
consumer belongs to. This property is required if the consumer uses either the 
group management functionality by using <code>subscribe(topic)</code> or the 
Kafka-based offset management 
strategy.</td><td>string</td><td>""</td><td></td><td>high</td></tr>
+<tr>
+<td>heartbeat.interval.ms</td><td>The expected time between heartbeats to the 
consumer coordinator when using Kafka's group management facilities. Heartbeats 
are used to ensure that the consumer's session stays active and to facilitate 
rebalancing when new consumers join or leave the group. The value must be set 
lower than <code>session.timeout.ms</code>, but typically should be set no 
higher than 1/3 of that value. It can be adjusted even lower to control the 
expected time for normal 
rebalances.</td><td>int</td><td>3000</td><td></td><td>high</td></tr>
+<tr>
+<td>max.partition.fetch.bytes</td><td>The maximum amount of data per-partition 
the server will return. The maximum total memory used for a request will be 
<code>#partitions * max.partition.fetch.bytes</code>. This size must be at 
least as large as the maximum message size the server allows or else it is 
possible for the producer to send messages larger than the consumer can fetch. 
If that happens, the consumer can get stuck trying to fetch a large message on 
a certain 
partition.</td><td>int</td><td>1048576</td><td>[0,...]</td><td>high</td></tr>
+<tr>
+<td>session.timeout.ms</td><td>The timeout used to detect failures when using 
Kafka's group management 
facilities.</td><td>int</td><td>30000</td><td></td><td>high</td></tr>
+<tr>
+<td>ssl.key.password</td><td>The password of the private key in the key store 
file. This is optional for 
client.</td><td>password</td><td>null</td><td></td><td>high</td></tr>
+<tr>
+<td>ssl.keystore.location</td><td>The location of the key store file. This is 
optional for client and can be used for two-way authentication for 
client.</td><td>string</td><td>null</td><td></td><td>high</td></tr>
+<tr>
+<td>ssl.keystore.password</td><td>The store password for the key store 
file. This is optional for client and only needed if ssl.keystore.location is 
configured. </td><td>password</td><td>null</td><td></td><td>high</td></tr>
+<tr>
+<td>ssl.truststore.location</td><td>The location of the trust store file. 
</td><td>string</td><td>null</td><td></td><td>high</td></tr>
+<tr>
+<td>ssl.truststore.password</td><td>The password for the trust store file. 
</td><td>password</td><td>null</td><td></td><td>high</td></tr>
+<tr>
+<td>auto.offset.reset</td><td>What to do when there is no initial offset in 
Kafka or if the current offset does not exist any more on the server (e.g. 
because that data has been deleted): <ul><li>earliest: automatically reset the 
offset to the earliest offset</li><li>latest: automatically reset the offset to the 
latest offset</li><li>none: throw exception to the consumer if no previous 
offset is found for the consumer's group</li><li>anything else: throw exception 
to the consumer.</li></ul></td><td>string</td><td>latest</td><td>[latest, 
earliest, none]</td><td>medium</td></tr>
+<tr>
+<td>connections.max.idle.ms</td><td>Close idle connections after the number of 
milliseconds specified by this 
config.</td><td>long</td><td>540000</td><td></td><td>medium</td></tr>
+<tr>
+<td>enable.auto.commit</td><td>If true the consumer's offset will be 
periodically committed in the 
background.</td><td>boolean</td><td>true</td><td></td><td>medium</td></tr>
+<tr>
+<td>partition.assignment.strategy</td><td>The class name of the partition 
assignment strategy that the client will use to distribute partition ownership 
amongst consumer instances when group management is 
used</td><td>list</td><td>[org.apache.kafka.clients.consumer.RangeAssignor]</td><td></td><td>medium</td></tr>
+<tr>
+<td>receive.buffer.bytes</td><td>The size of the TCP receive buffer 
(SO_RCVBUF) to use when reading 
data.</td><td>int</td><td>32768</td><td>[0,...]</td><td>medium</td></tr>
+<tr>
+<td>request.timeout.ms</td><td>The configuration controls the maximum amount 
of time the client will wait for the response of a request. If the response is 
not received before the timeout elapses the client will resend the request if 
necessary or fail the request if retries are 
exhausted.</td><td>int</td><td>40000</td><td>[0,...]</td><td>medium</td></tr>
+<tr>
+<td>sasl.kerberos.service.name</td><td>The Kerberos principal name that Kafka 
runs as. This can be defined either in Kafka's JAAS config or in Kafka's 
config.</td><td>string</td><td>null</td><td></td><td>medium</td></tr>
+<tr>
+<td>security.protocol</td><td>Protocol used to communicate with brokers. Valid 
values are: PLAINTEXT, SSL, SASL_PLAINTEXT, 
SASL_SSL.</td><td>string</td><td>PLAINTEXT</td><td></td><td>medium</td></tr>
+<tr>
+<td>send.buffer.bytes</td><td>The size of the TCP send buffer (SO_SNDBUF) to 
use when sending 
data.</td><td>int</td><td>131072</td><td>[0,...]</td><td>medium</td></tr>
+<tr>
+<td>ssl.enabled.protocols</td><td>The list of protocols enabled for SSL 
connections.</td><td>list</td><td>[TLSv1.2, TLSv1.1, 
TLSv1]</td><td></td><td>medium</td></tr>
+<tr>
+<td>ssl.keystore.type</td><td>The file format of the key store file. This is 
optional for 
client.</td><td>string</td><td>JKS</td><td></td><td>medium</td></tr>
+<tr>
+<td>ssl.protocol</td><td>The SSL protocol used to generate the SSLContext. 
Default setting is TLS, which is fine for most cases. Allowed values in recent 
JVMs are TLS, TLSv1.1 and TLSv1.2. SSL, SSLv2 and SSLv3 may be supported in 
older JVMs, but their usage is discouraged due to known security 
vulnerabilities.</td><td>string</td><td>TLS</td><td></td><td>medium</td></tr>
+<tr>
+<td>ssl.provider</td><td>The name of the security provider used for SSL 
connections. Default value is the default security provider of the 
JVM.</td><td>string</td><td>null</td><td></td><td>medium</td></tr>
+<tr>
+<td>ssl.truststore.type</td><td>The file format of the trust store 
file.</td><td>string</td><td>JKS</td><td></td><td>medium</td></tr>
+<tr>
+<td>auto.commit.interval.ms</td><td>The frequency in milliseconds that the 
consumer offsets are auto-committed to Kafka if <code>enable.auto.commit</code> 
is set to 
<code>true</code>.</td><td>long</td><td>5000</td><td>[0,...]</td><td>low</td></tr>
+<tr>
+<td>check.crcs</td><td>Automatically check the CRC32 of the records consumed. 
This ensures no on-the-wire or on-disk corruption to the messages occurred. 
This check adds some overhead, so it may be disabled in cases seeking extreme 
performance.</td><td>boolean</td><td>true</td><td></td><td>low</td></tr>
+<tr>
+<td>client.id</td><td>An id string to pass to the server when making requests. 
The purpose of this is to be able to track the source of requests beyond just 
ip/port by allowing a logical application name to be included in server-side 
request logging.</td><td>string</td><td>""</td><td></td><td>low</td></tr>
+<tr>
+<td>fetch.max.wait.ms</td><td>The maximum amount of time the server will block 
before answering the fetch request if there isn't sufficient data to 
immediately satisfy the requirement given by 
fetch.min.bytes.</td><td>int</td><td>500</td><td>[0,...]</td><td>low</td></tr>
+<tr>
+<td>metadata.max.age.ms</td><td>The period of time in milliseconds after which 
we force a refresh of metadata even if we haven't seen any partition leadership 
changes to proactively discover any new brokers or 
partitions.</td><td>long</td><td>300000</td><td>[0,...]</td><td>low</td></tr>
+<tr>
+<td>metric.reporters</td><td>A list of classes to use as metrics reporters. 
Implementing the <code>MetricReporter</code> interface allows plugging in 
classes that will be notified of new metric creation. The JmxReporter is always 
included to register JMX 
statistics.</td><td>list</td><td>[]</td><td></td><td>low</td></tr>
+<tr>
+<td>metrics.num.samples</td><td>The number of samples maintained to compute 
metrics.</td><td>int</td><td>2</td><td>[1,...]</td><td>low</td></tr>
+<tr>
+<td>metrics.sample.window.ms</td><td>The window of time a metrics sample is 
computed over.</td><td>long</td><td>30000</td><td>[0,...]</td><td>low</td></tr>
+<tr>
+<td>reconnect.backoff.ms</td><td>The amount of time to wait before attempting 
to reconnect to a given host. This avoids repeatedly connecting to a host in a 
tight loop. This backoff applies to all requests sent by the consumer to the 
broker.</td><td>long</td><td>50</td><td>[0,...]</td><td>low</td></tr>
+<tr>
+<td>retry.backoff.ms</td><td>The amount of time to wait before attempting to 
retry a failed fetch request to a given topic partition. This avoids repeated 
fetching-and-failing in a tight 
loop.</td><td>long</td><td>100</td><td>[0,...]</td><td>low</td></tr>
+<tr>
+<td>sasl.kerberos.kinit.cmd</td><td>Kerberos kinit command 
path.</td><td>string</td><td>/usr/bin/kinit</td><td></td><td>low</td></tr>
+<tr>
+<td>sasl.kerberos.min.time.before.relogin</td><td>Login thread sleep time 
between refresh 
attempts.</td><td>long</td><td>60000</td><td></td><td>low</td></tr>
+<tr>
+<td>sasl.kerberos.ticket.renew.jitter</td><td>Percentage of random jitter 
added to the renewal 
time.</td><td>double</td><td>0.05</td><td></td><td>low</td></tr>
+<tr>
+<td>sasl.kerberos.ticket.renew.window.factor</td><td>Login thread will sleep 
until the specified window factor of time from last refresh to ticket's expiry 
has been reached, at which time it will try to renew the 
ticket.</td><td>double</td><td>0.8</td><td></td><td>low</td></tr>
+<tr>
+<td>ssl.cipher.suites</td><td>A list of cipher suites. This is a named 
combination of authentication, encryption, MAC and key exchange algorithm used 
to negotiate the security settings for a network connection using TLS or SSL 
network protocol. By default all the available cipher suites are 
supported.</td><td>list</td><td>null</td><td></td><td>low</td></tr>
+<tr>
+<td>ssl.endpoint.identification.algorithm</td><td>The endpoint identification 
algorithm to validate server hostname using server certificate. 
</td><td>string</td><td>null</td><td></td><td>low</td></tr>
+<tr>
+<td>ssl.keymanager.algorithm</td><td>The algorithm used by key manager factory 
for SSL connections. Default value is the key manager factory algorithm 
configured for the Java Virtual 
Machine.</td><td>string</td><td>SunX509</td><td></td><td>low</td></tr>
+<tr>
+<td>ssl.trustmanager.algorithm</td><td>The algorithm used by trust manager 
factory for SSL connections. Default value is the trust manager factory 
algorithm configured for the Java Virtual 
Machine.</td><td>string</td><td>PKIX</td><td></td><td>low</td></tr>
+</tbody></table>
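
To make the table above concrete, here is a minimal sketch of the new Java
consumer using several of the documented keys (the broker address, group id,
and topic are placeholders, not part of this commit):

    import java.util.Arrays;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class ConsumerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // initial contact points
            props.put("group.id", "my-group");                // consumer group id
            props.put("enable.auto.commit", "true");          // commit offsets in the background
            props.put("auto.offset.reset", "earliest");       // where to start with no committed offset
            props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

            KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
            consumer.subscribe(Arrays.asList("my-topic"));
            while (true) {
                // Fetch whatever is available, waiting at most 100 ms.
                ConsumerRecords<String, String> records = consumer.poll(100);
                for (ConsumerRecord<String, String> record : records)
                    System.out.printf("offset=%d key=%s value=%s%n",
                        record.offset(), record.key(), record.value());
            }
        }
    }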

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/14ffd37c/090/generated/kafka_config.html
----------------------------------------------------------------------
diff --git a/090/generated/kafka_config.html b/090/generated/kafka_config.html
new file mode 100644
index 0000000..efefeb9
--- /dev/null
+++ b/090/generated/kafka_config.html
@@ -0,0 +1,270 @@
+<table class="data-table"><tbody>
+<tr>
+<th>Name</th>
+<th>Description</th>
+<th>Type</th>
+<th>Default</th>
+<th>Valid Values</th>
+<th>Importance</th>
+</tr>
+<tr>
+<td>zookeeper.connect</td><td>Zookeeper host 
string</td><td>string</td><td></td><td></td><td>high</td></tr>
+<tr>
+<td>advertised.host.name</td><td>Hostname to publish to ZooKeeper for clients 
to use. In IaaS environments, this may need to be different from the interface 
to which the broker binds. If this is not set, it will use the value for 
"host.name" if configured. Otherwise it will use the value returned from 
java.net.InetAddress.getCanonicalHostName().</td><td>string</td><td>null</td><td></td><td>high</td></tr>
+<tr>
+<td>advertised.listeners</td><td>Listeners to publish to ZooKeeper for clients 
to use, if different than the listeners above. In IaaS environments, this may 
need to be different from the interface to which the broker binds. If this is 
not set, the value for "listeners" will be 
used.</td><td>string</td><td>null</td><td></td><td>high</td></tr>
+<tr>
+<td>advertised.port</td><td>The port to publish to ZooKeeper for clients to 
use. In IaaS environments, this may need to be different from the port to which 
the broker binds. If this is not set, it will publish the same port that the 
broker binds to.</td><td>int</td><td>null</td><td></td><td>high</td></tr>
+<tr>
+<td>auto.create.topics.enable</td><td>Enable auto creation of topics on the 
server</td><td>boolean</td><td>true</td><td></td><td>high</td></tr>
+<tr>
+<td>auto.leader.rebalance.enable</td><td>Enables auto leader balancing. A 
background thread checks and triggers leader balance if required at regular 
intervals</td><td>boolean</td><td>true</td><td></td><td>high</td></tr>
+<tr>
+<td>background.threads</td><td>The number of threads to use for various 
background processing 
tasks</td><td>int</td><td>10</td><td>[1,...]</td><td>high</td></tr>
+<tr>
+<td>broker.id</td><td>The broker id for this server. To avoid conflicts 
between zookeeper-generated broker ids and user-configured broker ids, 
generated broker ids start from MaxReservedBrokerId + 
1.</td><td>int</td><td>-1</td><td></td><td>high</td></tr>
+<tr>
+<td>compression.type</td><td>Specify the final compression type for a given 
topic. This configuration accepts the standard compression codecs ('gzip', 
'snappy', 'lz4'). It additionally accepts 'uncompressed', which is equivalent 
to no compression, and 'producer', which means retain the original compression 
codec set by the 
producer.</td><td>string</td><td>producer</td><td></td><td>high</td></tr>
+<tr>
+<td>delete.topic.enable</td><td>Enables topic deletion. Deleting a topic 
through the admin tool will have no effect if this config is turned 
off</td><td>boolean</td><td>false</td><td></td><td>high</td></tr>
+<tr>
+<td>host.name</td><td>Hostname of the broker. If this is set, it will only 
bind to this address. If it is not set, it will bind to all 
interfaces</td><td>string</td><td>""</td><td></td><td>high</td></tr>
+<tr>
+<td>leader.imbalance.check.interval.seconds</td><td>The frequency with which 
the partition rebalance check is triggered by the 
controller</td><td>long</td><td>300</td><td></td><td>high</td></tr>
+<tr>
+<td>leader.imbalance.per.broker.percentage</td><td>The ratio of leader 
imbalance allowed per broker. The controller would trigger a leader balance if 
it goes above this value per broker. The value is specified in 
percentage.</td><td>int</td><td>10</td><td></td><td>high</td></tr>
+<tr>
+<td>listeners</td><td>Listener List - Comma-separated list of URIs we will 
listen on and their protocols.
+ Specify hostname as 0.0.0.0 to bind to all interfaces.
+ Leave hostname empty to bind to default interface.
+ Examples of legal listener lists:
+ PLAINTEXT://myhost:9092,TRACE://:9091
+ PLAINTEXT://0.0.0.0:9092, TRACE://localhost:9093
+</td><td>string</td><td>null</td><td></td><td>high</td></tr>
+<tr>
+<td>log.dir</td><td>The directory in which the log data is kept (supplemental 
for log.dirs 
property)</td><td>string</td><td>/tmp/kafka-logs</td><td></td><td>high</td></tr>
+<tr>
+<td>log.dirs</td><td>The directories in which the log data is kept. If not 
set, the value in log.dir is 
used</td><td>string</td><td>null</td><td></td><td>high</td></tr>
+<tr>
+<td>log.flush.interval.messages</td><td>The number of messages accumulated on 
a log partition before messages are flushed to disk 
</td><td>long</td><td>9223372036854775807</td><td>[1,...]</td><td>high</td></tr>
+<tr>
+<td>log.flush.interval.ms</td><td>The maximum time in ms that a message in any 
topic is kept in memory before it is flushed to disk. If not set, the value in 
log.flush.scheduler.interval.ms is 
used</td><td>long</td><td>null</td><td></td><td>high</td></tr>
+<tr>
+<td>log.flush.offset.checkpoint.interval.ms</td><td>The frequency with which 
we update the persistent record of the last flush which acts as the log 
recovery point</td><td>int</td><td>60000</td><td>[0,...]</td><td>high</td></tr>
+<tr>
+<td>log.flush.scheduler.interval.ms</td><td>The frequency in ms that the log 
flusher checks whether any log needs to be flushed to 
disk</td><td>long</td><td>9223372036854775807</td><td></td><td>high</td></tr>
+<tr>
+<td>log.retention.bytes</td><td>The maximum size of the log before deleting 
it</td><td>long</td><td>-1</td><td></td><td>high</td></tr>
+<tr>
+<td>log.retention.hours</td><td>The number of hours to keep a log file before 
deleting it, tertiary to the log.retention.ms 
property</td><td>int</td><td>168</td><td></td><td>high</td></tr>
+<tr>
+<td>log.retention.minutes</td><td>The number of minutes to keep a log file 
before deleting it, secondary to the log.retention.ms property. If not set, 
the value in log.retention.hours is 
used</td><td>int</td><td>null</td><td></td><td>high</td></tr>
+<tr>
+<td>log.retention.ms</td><td>The number of milliseconds to keep a log file 
before deleting it. If not set, the value in log.retention.minutes is 
used</td><td>long</td><td>null</td><td></td><td>high</td></tr>
+<tr>
+<td>log.roll.hours</td><td>The maximum time before a new log segment is rolled 
out (in hours), secondary to log.roll.ms 
property</td><td>int</td><td>168</td><td>[1,...]</td><td>high</td></tr>
+<tr>
+<td>log.roll.jitter.hours</td><td>The maximum jitter to subtract from 
logRollTimeMillis (in hours), secondary to log.roll.jitter.ms 
property</td><td>int</td><td>0</td><td>[0,...]</td><td>high</td></tr>
+<tr>
+<td>log.roll.jitter.ms</td><td>The maximum jitter to subtract from 
logRollTimeMillis (in milliseconds). If not set, the value in 
log.roll.jitter.hours is 
used</td><td>long</td><td>null</td><td></td><td>high</td></tr>
+<tr>
+<td>log.roll.ms</td><td>The maximum time before a new log segment is rolled 
out (in milliseconds). If not set, the value in log.roll.hours is 
used</td><td>long</td><td>null</td><td></td><td>high</td></tr>
+<tr>
+<td>log.segment.bytes</td><td>The maximum size of a single log 
file</td><td>int</td><td>1073741824</td><td>[14,...]</td><td>high</td></tr>
+<tr>
+<td>log.segment.delete.delay.ms</td><td>The amount of time to wait before 
deleting a file from the 
filesystem</td><td>long</td><td>60000</td><td>[0,...]</td><td>high</td></tr>
+<tr>
+<td>message.max.bytes</td><td>The maximum size of a message that the server 
can receive</td><td>int</td><td>1000012</td><td>[0,...]</td><td>high</td></tr>
+<tr>
+<td>min.insync.replicas</td><td>The minimum number of replicas in the ISR 
needed to satisfy a produce request with required.acks=-1 (or 
all)</td><td>int</td><td>1</td><td>[1,...]</td><td>high</td></tr>
+<tr>
+<td>num.io.threads</td><td>The number of I/O threads that the server uses for 
carrying out network 
requests</td><td>int</td><td>8</td><td>[1,...]</td><td>high</td></tr>
+<tr>
+<td>num.network.threads</td><td>The number of network threads that the server 
uses for handling network 
requests</td><td>int</td><td>3</td><td>[1,...]</td><td>high</td></tr>
+<tr>
+<td>num.recovery.threads.per.data.dir</td><td>The number of threads per data 
directory to be used for log recovery at startup and flushing at 
shutdown</td><td>int</td><td>1</td><td>[1,...]</td><td>high</td></tr>
+<tr>
+<td>num.replica.fetchers</td><td>Number of fetcher threads used to replicate 
messages from a source broker. Increasing this value can increase the degree of 
I/O parallelism in the follower 
broker.</td><td>int</td><td>1</td><td></td><td>high</td></tr>
+<tr>
+<td>offset.metadata.max.bytes</td><td>The maximum size for a metadata entry 
associated with an offset 
commit</td><td>int</td><td>4096</td><td></td><td>high</td></tr>
+<tr>
+<td>offsets.commit.required.acks</td><td>The required acks before the commit 
can be accepted. In general, the default (-1) should not be 
overridden</td><td>short</td><td>-1</td><td></td><td>high</td></tr>
+<tr>
+<td>offsets.commit.timeout.ms</td><td>Offset commit will be delayed until all 
replicas for the offsets topic receive the commit or this timeout is reached. 
This is similar to the producer request 
timeout.</td><td>int</td><td>5000</td><td>[1,...]</td><td>high</td></tr>
+<tr>
+<td>offsets.load.buffer.size</td><td>Batch size for reading from the offsets 
segments when loading offsets into the 
cache.</td><td>int</td><td>5242880</td><td>[1,...]</td><td>high</td></tr>
+<tr>
+<td>offsets.retention.check.interval.ms</td><td>Frequency at which to check 
for stale 
offsets</td><td>long</td><td>600000</td><td>[1,...]</td><td>high</td></tr>
+<tr>
+<td>offsets.retention.minutes</td><td>Log retention window in minutes for 
offsets topic</td><td>int</td><td>1440</td><td>[1,...]</td><td>high</td></tr>
+<tr>
+<td>offsets.topic.compression.codec</td><td>Compression codec for the offsets 
topic - compression may be used to achieve "atomic" 
commits</td><td>int</td><td>0</td><td></td><td>high</td></tr>
+<tr>
+<td>offsets.topic.num.partitions</td><td>The number of partitions for the 
offset commit topic (should not change after 
deployment)</td><td>int</td><td>50</td><td>[1,...]</td><td>high</td></tr>
+<tr>
+<td>offsets.topic.replication.factor</td><td>The replication factor for the 
offsets topic (set higher to ensure availability). To ensure that the effective 
replication factor of the offsets topic is the configured value, the number of 
alive brokers has to be at least the replication factor at the time of the 
first request for the offsets topic. If not, either the offsets topic creation 
will fail or it will get a replication factor of min(alive brokers, configured 
replication 
factor)</td><td>short</td><td>3</td><td>[1,...]</td><td>high</td></tr>
+<tr>
+<td>offsets.topic.segment.bytes</td><td>The offsets topic segment bytes should 
be kept relatively small in order to facilitate faster log compaction and cache 
loads</td><td>int</td><td>104857600</td><td>[1,...]</td><td>high</td></tr>
+<tr>
+<td>port</td><td>The port to listen and accept connections 
on</td><td>int</td><td>9092</td><td></td><td>high</td></tr>
+<tr>
+<td>queued.max.requests</td><td>The number of queued requests allowed before 
blocking the network 
threads</td><td>int</td><td>500</td><td>[1,...]</td><td>high</td></tr>
+<tr>
+<td>quota.consumer.default</td><td>Any consumer distinguished by 
clientId/consumer group will get throttled if it fetches more bytes than this 
value 
per-second</td><td>long</td><td>9223372036854775807</td><td>[1,...]</td><td>high</td></tr>
+<tr>
+<td>quota.producer.default</td><td>Any producer distinguished by clientId will 
get throttled if it produces more bytes than this value 
per-second</td><td>long</td><td>9223372036854775807</td><td>[1,...]</td><td>high</td></tr>
+<tr>
+<td>replica.fetch.max.bytes</td><td>The number of bytes of messages to attempt 
to fetch</td><td>int</td><td>1048576</td><td></td><td>high</td></tr>
+<tr>
+<td>replica.fetch.min.bytes</td><td>Minimum bytes expected for each fetch 
response. If not enough bytes are available, wait up to 
replica.fetch.wait.max.ms</td><td>int</td><td>1</td><td></td><td>high</td></tr>
+<tr>
+<td>replica.fetch.wait.max.ms</td><td>Max wait time for each fetcher request 
issued by follower replicas. This value should always be less than 
replica.lag.time.max.ms to prevent frequent shrinking of the ISR for low 
throughput topics</td><td>int</td><td>500</td><td></td><td>high</td></tr>
+<tr>
+<td>replica.high.watermark.checkpoint.interval.ms</td><td>The frequency with 
which the high watermark is saved out to 
disk</td><td>long</td><td>5000</td><td></td><td>high</td></tr>
+<tr>
+<td>replica.lag.time.max.ms</td><td>If a follower hasn't sent any fetch 
requests or hasn't consumed up to the leader's log end offset for at least this 
time, the leader will remove the follower from the 
ISR</td><td>long</td><td>10000</td><td></td><td>high</td></tr>
+<tr>
+<td>replica.socket.receive.buffer.bytes</td><td>The socket receive buffer for 
network requests</td><td>int</td><td>65536</td><td></td><td>high</td></tr>
+<tr>
+<td>replica.socket.timeout.ms</td><td>The socket timeout for network requests. 
Its value should be at least 
replica.fetch.wait.max.ms</td><td>int</td><td>30000</td><td></td><td>high</td></tr>
+<tr>
+<td>request.timeout.ms</td><td>The configuration controls the maximum amount 
of time the client will wait for the response of a request. If the response is 
not received before the timeout elapses the client will resend the request if 
necessary or fail the request if retries are 
exhausted.</td><td>int</td><td>30000</td><td></td><td>high</td></tr>
+<tr>
+<td>socket.receive.buffer.bytes</td><td>The SO_RCVBUF buffer of the socket 
server sockets</td><td>int</td><td>102400</td><td></td><td>high</td></tr>
+<tr>
+<td>socket.request.max.bytes</td><td>The maximum number of bytes in a socket 
request</td><td>int</td><td>104857600</td><td>[1,...]</td><td>high</td></tr>
+<tr>
+<td>socket.send.buffer.bytes</td><td>The SO_SNDBUF buffer of the socket server 
sockets</td><td>int</td><td>102400</td><td></td><td>high</td></tr>
+<tr>
+<td>unclean.leader.election.enable</td><td>Indicates whether to enable 
replicas not in the ISR set to be elected as leader as a last resort, even 
though doing so may result in data 
loss</td><td>boolean</td><td>true</td><td></td><td>high</td></tr>
+<tr>
+<td>zookeeper.connection.timeout.ms</td><td>The max time that the client waits 
to establish a connection to zookeeper. If not set, the value in 
zookeeper.session.timeout.ms is 
used</td><td>int</td><td>null</td><td></td><td>high</td></tr>
+<tr>
+<td>zookeeper.session.timeout.ms</td><td>Zookeeper session 
timeout</td><td>int</td><td>6000</td><td></td><td>high</td></tr>
+<tr>
+<td>zookeeper.set.acl</td><td>Set client to use secure 
ACLs</td><td>boolean</td><td>false</td><td></td><td>high</td></tr>
+<tr>
+<td>broker.id.generation.enable</td><td>Enable automatic broker id generation 
on the server. When enabled, the value configured for reserved.broker.max.id 
should be 
reviewed.</td><td>boolean</td><td>true</td><td></td><td>medium</td></tr>
+<tr>
+<td>connections.max.idle.ms</td><td>Idle connection timeout: the server socket 
processor threads close connections that have been idle for longer than 
this</td><td>long</td><td>600000</td><td></td><td>medium</td></tr>
+<tr>
+<td>controlled.shutdown.enable</td><td>Enable controlled shutdown of the 
server</td><td>boolean</td><td>true</td><td></td><td>medium</td></tr>
+<tr>
+<td>controlled.shutdown.max.retries</td><td>Controlled shutdown can fail for 
multiple reasons. This determines the number of retries when such failure 
happens</td><td>int</td><td>3</td><td></td><td>medium</td></tr>
+<tr>
+<td>controlled.shutdown.retry.backoff.ms</td><td>Before each retry, the system 
needs time to recover from the state that caused the previous failure 
(controller failover, replica lag, etc.). This config determines the amount of 
time to wait before 
retrying.</td><td>long</td><td>5000</td><td></td><td>medium</td></tr>
+<tr>
+<td>controller.socket.timeout.ms</td><td>The socket timeout for 
controller-to-broker 
channels</td><td>int</td><td>30000</td><td></td><td>medium</td></tr>
+<tr>
+<td>default.replication.factor</td><td>The default replication factor for 
automatically created 
topics</td><td>int</td><td>1</td><td></td><td>medium</td></tr>
+<tr>
+<td>fetch.purgatory.purge.interval.requests</td><td>The purge interval (in 
number of requests) of the fetch request 
purgatory</td><td>int</td><td>1000</td><td></td><td>medium</td></tr>
+<tr>
+<td>group.max.session.timeout.ms</td><td>The maximum allowed session timeout 
for registered 
consumers</td><td>int</td><td>30000</td><td></td><td>medium</td></tr>
+<tr>
+<td>group.min.session.timeout.ms</td><td>The minimum allowed session timeout 
for registered 
consumers</td><td>int</td><td>6000</td><td></td><td>medium</td></tr>
+<tr>
+<td>inter.broker.protocol.version</td><td>Specify which version of the 
inter-broker protocol will be used.
+ This is typically bumped after all brokers have been upgraded to a new version.
+ Examples of valid values are: 0.8.0, 0.8.1, 0.8.1.1, 0.8.2, 0.8.2.0, 
0.8.2.1, 0.9.0.0, 0.9.0.1. Check ApiVersion for the full 
list.</td><td>string</td><td>0.9.0.X</td><td></td><td>medium</td></tr>
+<tr>
+<td>log.cleaner.backoff.ms</td><td>The amount of time to sleep when there are 
no logs to 
clean</td><td>long</td><td>15000</td><td>[0,...]</td><td>medium</td></tr>
+<tr>
+<td>log.cleaner.dedupe.buffer.size</td><td>The total memory used for log 
deduplication across all cleaner 
threads</td><td>long</td><td>134217728</td><td></td><td>medium</td></tr>
+<tr>
+<td>log.cleaner.delete.retention.ms</td><td>How long delete records are 
retained</td><td>long</td><td>86400000</td><td></td><td>medium</td></tr>
+<tr>
+<td>log.cleaner.enable</td><td>Enable the log cleaner process to run on the 
server. Should be enabled if using any topics with cleanup.policy=compact, 
including the internal offsets topic. If disabled, those topics will not be 
compacted and will continually grow in 
size.</td><td>boolean</td><td>true</td><td></td><td>medium</td></tr>
+<tr>
+<td>log.cleaner.io.buffer.load.factor</td><td>Log cleaner dedupe buffer load 
factor, i.e. the percentage of the dedupe buffer that can become full. A higher 
value will allow more log to be cleaned at once but will lead to more hash 
collisions</td><td>double</td><td>0.9</td><td></td><td>medium</td></tr>
+<tr>
+<td>log.cleaner.io.buffer.size</td><td>The total memory used for log cleaner 
I/O buffers across all cleaner 
threads</td><td>int</td><td>524288</td><td>[0,...]</td><td>medium</td></tr>
+<tr>
+<td>log.cleaner.io.max.bytes.per.second</td><td>The log cleaner will be 
throttled so that the sum of its read and write I/O will be less than this 
value on 
average</td><td>double</td><td>1.7976931348623157E308</td><td></td><td>medium</td></tr>
+<tr>
+<td>log.cleaner.min.cleanable.ratio</td><td>The minimum ratio of dirty log to 
total log for a log to be eligible for 
cleaning</td><td>double</td><td>0.5</td><td></td><td>medium</td></tr>
+<tr>
+<td>log.cleaner.threads</td><td>The number of background threads to use for 
log cleaning</td><td>int</td><td>1</td><td>[0,...]</td><td>medium</td></tr>
+<tr>
+<td>log.cleanup.policy</td><td>The default cleanup policy for segments beyond 
the retention window, must be either "delete" or 
"compact"</td><td>string</td><td>delete</td><td>[compact, 
delete]</td><td>medium</td></tr>
+<tr>
+<td>log.index.interval.bytes</td><td>The interval with which we add an entry 
to the offset 
index</td><td>int</td><td>4096</td><td>[0,...]</td><td>medium</td></tr>
+<tr>
+<td>log.index.size.max.bytes</td><td>The maximum size in bytes of the offset 
index</td><td>int</td><td>10485760</td><td>[4,...]</td><td>medium</td></tr>
+<tr>
+<td>log.preallocate</td><td>Whether to preallocate the file when creating a 
new segment. If you are using Kafka on Windows, you probably need to set this 
to true.</td><td>boolean</td><td>false</td><td></td><td>medium</td></tr>
+<tr>
+<td>log.retention.check.interval.ms</td><td>The frequency in milliseconds that 
the log cleaner checks whether any log is eligible for 
deletion</td><td>long</td><td>300000</td><td>[1,...]</td><td>medium</td></tr>
+<tr>
+<td>max.connections.per.ip</td><td>The maximum number of connections we allow 
from each IP 
address</td><td>int</td><td>2147483647</td><td>[1,...]</td><td>medium</td></tr>
+<tr>
+<td>max.connections.per.ip.overrides</td><td>Per-IP or per-hostname overrides 
to the default maximum number of 
connections</td><td>string</td><td>""</td><td></td><td>medium</td></tr>
+<tr>
+<td>num.partitions</td><td>The default number of log partitions per 
topic</td><td>int</td><td>1</td><td>[1,...]</td><td>medium</td></tr>
+<tr>
+<td>principal.builder.class</td><td>The fully qualified name of a class that 
implements the PrincipalBuilder interface, which is currently used to build the 
Principal for connections with the SSL 
SecurityProtocol.</td><td>class</td><td>org.apache.kafka.common.security.auth.DefaultPrincipalBuilder</td><td></td><td>medium</td></tr>
+<tr>
+<td>producer.purgatory.purge.interval.requests</td><td>The purge interval (in 
number of requests) of the producer request 
purgatory</td><td>int</td><td>1000</td><td></td><td>medium</td></tr>
+<tr>
+<td>replica.fetch.backoff.ms</td><td>The amount of time to sleep when a fetch 
partition error 
occurs.</td><td>int</td><td>1000</td><td>[0,...]</td><td>medium</td></tr>
+<tr>
+<td>reserved.broker.max.id</td><td>Max number that can be used for a 
broker.id</td><td>int</td><td>1000</td><td>[0,...]</td><td>medium</td></tr>
+<tr>
+<td>sasl.kerberos.kinit.cmd</td><td>Kerberos kinit command 
path.</td><td>string</td><td>/usr/bin/kinit</td><td></td><td>medium</td></tr>
+<tr>
+<td>sasl.kerberos.min.time.before.relogin</td><td>Login thread sleep time 
between refresh 
attempts.</td><td>long</td><td>60000</td><td></td><td>medium</td></tr>
+<tr>
+<td>sasl.kerberos.principal.to.local.rules</td><td>A list of rules for mapping 
from principal names to short names (typically operating system usernames). The 
rules are evaluated in order and the first rule that matches a principal name 
is used to map it to a short name. Any later rules in the list are ignored. By 
default, principal names of the form {username}/{hostname}@{REALM} are mapped 
to {username}. For more details on the format please see <a 
href="#security_authz"> security authorization and 
acls</a>.</td><td>list</td><td>[DEFAULT]</td><td></td><td>medium</td></tr>
+<tr>
+<td>sasl.kerberos.service.name</td><td>The Kerberos principal name that Kafka 
runs as. This can be defined either in Kafka's JAAS config or in Kafka's 
config.</td><td>string</td><td>null</td><td></td><td>medium</td></tr>
+<tr>
+<td>sasl.kerberos.ticket.renew.jitter</td><td>Percentage of random jitter 
added to the renewal 
time.</td><td>double</td><td>0.05</td><td></td><td>medium</td></tr>
+<tr>
+<td>sasl.kerberos.ticket.renew.window.factor</td><td>Login thread will sleep 
until the specified window factor of time from last refresh to ticket's expiry 
has been reached, at which time it will try to renew the 
ticket.</td><td>double</td><td>0.8</td><td></td><td>medium</td></tr>
+<tr>
+<td>security.inter.broker.protocol</td><td>Security protocol used to 
communicate between brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, 
SASL_SSL.</td><td>string</td><td>PLAINTEXT</td><td></td><td>medium</td></tr>
+<tr>
+<td>ssl.cipher.suites</td><td>A list of cipher suites. This is a named 
combination of authentication, encryption, MAC and key exchange algorithm used 
to negotiate the security settings for a network connection using the TLS or 
SSL network protocol. By default all the available cipher suites are 
supported.</td><td>list</td><td>null</td><td></td><td>medium</td></tr>
+<tr>
+<td>ssl.client.auth</td><td>Configures the Kafka broker to request client 
authentication. The following settings are common: <ul> 
<li><code>ssl.client.auth=required</code> If set to required, client 
authentication is required.</li> <li><code>ssl.client.auth=requested</code> 
This means client authentication is optional. Unlike required, if this option 
is set the client can choose not to provide authentication information about 
itself.</li> <li><code>ssl.client.auth=none</code> This means client 
authentication is not 
needed.</li></ul></td><td>string</td><td>none</td><td>[required, requested, 
none]</td><td>medium</td></tr>
+<tr>
+<td>ssl.enabled.protocols</td><td>The list of protocols enabled for SSL 
connections.</td><td>list</td><td>[TLSv1.2, TLSv1.1, 
TLSv1]</td><td></td><td>medium</td></tr>
+<tr>
+<td>ssl.key.password</td><td>The password of the private key in the key store 
file. This is optional for 
clients.</td><td>password</td><td>null</td><td></td><td>medium</td></tr>
+<tr>
+<td>ssl.keymanager.algorithm</td><td>The algorithm used by key manager factory 
for SSL connections. Default value is the key manager factory algorithm 
configured for the Java Virtual 
Machine.</td><td>string</td><td>SunX509</td><td></td><td>medium</td></tr>
+<tr>
+<td>ssl.keystore.location</td><td>The location of the key store file. This is 
optional for clients and can be used for two-way client 
authentication.</td><td>string</td><td>null</td><td></td><td>medium</td></tr>
+<tr>
+<td>ssl.keystore.password</td><td>The store password for the key store file. 
This is optional for clients and only needed if ssl.keystore.location is 
configured.</td><td>password</td><td>null</td><td></td><td>medium</td></tr>
+<tr>
+<td>ssl.keystore.type</td><td>The file format of the key store file. This is 
optional for 
clients.</td><td>string</td><td>JKS</td><td></td><td>medium</td></tr>
+<tr>
+<td>ssl.protocol</td><td>The SSL protocol used to generate the SSLContext. 
Default setting is TLS, which is fine for most cases. Allowed values in recent 
JVMs are TLS, TLSv1.1 and TLSv1.2. SSL, SSLv2 and SSLv3 may be supported in 
older JVMs, but their usage is discouraged due to known security 
vulnerabilities.</td><td>string</td><td>TLS</td><td></td><td>medium</td></tr>
+<tr>
+<td>ssl.provider</td><td>The name of the security provider used for SSL 
connections. Default value is the default security provider of the 
JVM.</td><td>string</td><td>null</td><td></td><td>medium</td></tr>
+<tr>
+<td>ssl.trustmanager.algorithm</td><td>The algorithm used by trust manager 
factory for SSL connections. Default value is the trust manager factory 
algorithm configured for the Java Virtual 
Machine.</td><td>string</td><td>PKIX</td><td></td><td>medium</td></tr>
+<tr>
+<td>ssl.truststore.location</td><td>The location of the trust store file. 
</td><td>string</td><td>null</td><td></td><td>medium</td></tr>
+<tr>
+<td>ssl.truststore.password</td><td>The password for the trust store file. 
</td><td>password</td><td>null</td><td></td><td>medium</td></tr>
+<tr>
+<td>ssl.truststore.type</td><td>The file format of the trust store 
file.</td><td>string</td><td>JKS</td><td></td><td>medium</td></tr>
+<tr>
+<td>authorizer.class.name</td><td>The authorizer class that should be used for 
authorization</td><td>string</td><td>""</td><td></td><td>low</td></tr>
+<tr>
+<td>metric.reporters</td><td>A list of classes to use as metrics reporters. 
Implementing the <code>MetricReporter</code> interface allows plugging in 
classes that will be notified of new metric creation. The JmxReporter is always 
included to register JMX 
statistics.</td><td>list</td><td>[]</td><td></td><td>low</td></tr>
+<tr>
+<td>metrics.num.samples</td><td>The number of samples maintained to compute 
metrics.</td><td>int</td><td>2</td><td>[1,...]</td><td>low</td></tr>
+<tr>
+<td>metrics.sample.window.ms</td><td>The window of time a metrics sample is 
computed 
over.</td><td>long</td><td>30000</td><td>[1,...]</td><td>low</td></tr>
+<tr>
+<td>quota.window.num</td><td>The number of samples to retain in 
memory</td><td>int</td><td>11</td><td>[1,...]</td><td>low</td></tr>
+<tr>
+<td>quota.window.size.seconds</td><td>The time span of each 
sample</td><td>int</td><td>1</td><td>[1,...]</td><td>low</td></tr>
+<tr>
+<td>ssl.endpoint.identification.algorithm</td><td>The endpoint identification 
algorithm used to validate the server hostname against the server 
certificate.</td><td>string</td><td>null</td><td></td><td>low</td></tr>
+<tr>
+<td>zookeeper.sync.time.ms</td><td>How far a ZK follower can be behind a ZK 
leader</td><td>int</td><td>2000</td><td></td><td>low</td></tr>
+</tbody></table>
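
Taken together, the high-importance rows above map directly onto a broker's
server.properties file. The sketch below shows one plausible combination; it
is illustrative only, with hypothetical hosts and paths, and uses only keys
that appear in the table.

    # server.properties -- illustrative sketch; hosts and paths are placeholders
    # The default broker.id of -1 auto-generates an id when
    # broker.id.generation.enable=true; a fixed id is shown here.
    broker.id=0
    # Bind to all interfaces on the default port.
    listeners=PLAINTEXT://0.0.0.0:9092
    # The default of /tmp/kafka-logs is typically unsuitable for production.
    log.dirs=/var/kafka-logs
    zookeeper.connect=zk1:2181,zk2:2181,zk3:2181
    # Defaults applied to automatically created topics.
    num.partitions=3
    default.replication.factor=3
    # With required.acks=-1 (all), two in-sync replicas must acknowledge.
    min.insync.replicas=2
    # One week, matching the default.
    log.retention.hours=168
    # Default is true in this version; disabling avoids potential data loss.
    unclean.leader.election.enable=false
    # Default is false; required for the admin tool to actually delete topics.
    delete.topic.enable=true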
