http://git-wip-us.apache.org/repos/asf/kafka-site/blob/a7c3675d/0102/generated/connect_config.html
----------------------------------------------------------------------
diff --git a/0102/generated/connect_config.html 
b/0102/generated/connect_config.html
new file mode 100644
index 0000000..f127fa2
--- /dev/null
+++ b/0102/generated/connect_config.html
@@ -0,0 +1,124 @@
+<table class="data-table"><tbody>
+<tr>
+<th>Name</th>
+<th>Description</th>
+<th>Type</th>
+<th>Default</th>
+<th>Valid Values</th>
+<th>Importance</th>
+</tr>
+<tr>
+<td>config.storage.topic</td><td>Kafka topic to store configs</td><td>string</td><td></td><td></td><td>high</td></tr>
+<tr>
+<td>group.id</td><td>A unique string that identifies the Connect cluster group 
this worker belongs to.</td><td>string</td><td></td><td></td><td>high</td></tr>
+<tr>
+<td>key.converter</td><td>Converter class used to convert between Kafka 
Connect format and the serialized form that is written to Kafka. This controls 
the format of the keys in messages written to or read from Kafka, and since 
this is independent of connectors it allows any connector to work with any 
serialization format. Examples of common formats include JSON and 
Avro.</td><td>class</td><td></td><td></td><td>high</td></tr>
+<tr>
+<td>offset.storage.topic</td><td>Kafka topic to store connector offsets in</td><td>string</td><td></td><td></td><td>high</td></tr>
+<tr>
+<td>status.storage.topic</td><td>Kafka topic to track connector and task status</td><td>string</td><td></td><td></td><td>high</td></tr>
+<tr>
+<td>value.converter</td><td>Converter class used to convert between Kafka 
Connect format and the serialized form that is written to Kafka. This controls 
the format of the values in messages written to or read from Kafka, and since 
this is independent of connectors it allows any connector to work with any 
serialization format. Examples of common formats include JSON and 
Avro.</td><td>class</td><td></td><td></td><td>high</td></tr>
+<tr>
+<td>internal.key.converter</td><td>Converter class used to convert between 
Kafka Connect format and the serialized form that is written to Kafka. This 
controls the format of the keys in messages written to or read from Kafka, and 
since this is independent of connectors it allows any connector to work with 
any serialization format. Examples of common formats include JSON and Avro. 
This setting controls the format used for internal bookkeeping data used by the 
framework, such as configs and offsets, so users can typically use any 
functioning Converter 
implementation.</td><td>class</td><td></td><td></td><td>low</td></tr>
+<tr>
+<td>internal.value.converter</td><td>Converter class used to convert between 
Kafka Connect format and the serialized form that is written to Kafka. This 
controls the format of the values in messages written to or read from Kafka, 
and since this is independent of connectors it allows any connector to work 
with any serialization format. Examples of common formats include JSON and 
Avro. This setting controls the format used for internal bookkeeping data used 
by the framework, such as configs and offsets, so users can typically use any 
functioning Converter 
implementation.</td><td>class</td><td></td><td></td><td>low</td></tr>
+<tr>
+<td>bootstrap.servers</td><td>A list of host/port pairs to use for 
establishing the initial connection to the Kafka cluster. The client will make 
use of all servers irrespective of which servers are specified here for 
bootstrapping&mdash;this list only impacts the initial hosts used to discover 
the full set of servers. This list should be in the form 
<code>host1:port1,host2:port2,...</code>. Since these servers are just used for 
the initial connection to discover the full cluster membership (which may 
change dynamically), this list need not contain the full set of servers (you 
may want more than one, though, in case a server is 
down).</td><td>list</td><td>localhost:9092</td><td></td><td>high</td></tr>
+<tr>
+<td>heartbeat.interval.ms</td><td>The expected time between heartbeats to the 
group coordinator when using Kafka's group management facilities. Heartbeats 
are used to ensure that the worker's session stays active and to facilitate 
rebalancing when new members join or leave the group. The value must be set 
lower than <code>session.timeout.ms</code>, but typically should be set no 
higher than 1/3 of that value. It can be adjusted even lower to control the 
expected time for normal 
rebalances.</td><td>int</td><td>3000</td><td></td><td>high</td></tr>
+<tr>
+<td>rebalance.timeout.ms</td><td>The maximum allowed time for each worker to 
join the group once a rebalance has begun. This is basically a limit on the 
amount of time needed for all tasks to flush any pending data and commit 
offsets. If the timeout is exceeded, then the worker will be removed from the 
group, which will cause offset commit 
failures.</td><td>int</td><td>60000</td><td></td><td>high</td></tr>
+<tr>
+<td>session.timeout.ms</td><td>The timeout used to detect worker failures. The 
worker sends periodic heartbeats to indicate its liveness to the broker. If no 
heartbeats are received by the broker before the expiration of this session 
timeout, then the broker will remove the worker from the group and initiate a 
rebalance. Note that the value must be in the allowable range as configured in 
the broker configuration by <code>group.min.session.timeout.ms</code> and 
<code>group.max.session.timeout.ms</code>.</td><td>int</td><td>10000</td><td></td><td>high</td></tr>
+<tr>
+<td>ssl.key.password</td><td>The password of the private key in the key store 
file. This is optional for 
client.</td><td>password</td><td>null</td><td></td><td>high</td></tr>
+<tr>
+<td>ssl.keystore.location</td><td>The location of the key store file. This is 
optional for client and can be used for two-way authentication for 
client.</td><td>string</td><td>null</td><td></td><td>high</td></tr>
+<tr>
+<td>ssl.keystore.password</td><td>The store password for the key store file. 
This is optional for client and only needed if ssl.keystore.location is 
configured. </td><td>password</td><td>null</td><td></td><td>high</td></tr>
+<tr>
+<td>ssl.truststore.location</td><td>The location of the trust store file. 
</td><td>string</td><td>null</td><td></td><td>high</td></tr>
+<tr>
+<td>ssl.truststore.password</td><td>The password for the trust store file. 
</td><td>password</td><td>null</td><td></td><td>high</td></tr>
+<tr>
+<td>connections.max.idle.ms</td><td>Close idle connections after the number of 
milliseconds specified by this 
config.</td><td>long</td><td>540000</td><td></td><td>medium</td></tr>
+<tr>
+<td>receive.buffer.bytes</td><td>The size of the TCP receive buffer 
(SO_RCVBUF) to use when reading data. If the value is -1, the OS default will 
be used.</td><td>int</td><td>32768</td><td>[0,...]</td><td>medium</td></tr>
+<tr>
+<td>request.timeout.ms</td><td>The configuration controls the maximum amount 
of time the client will wait for the response of a request. If the response is 
not received before the timeout elapses the client will resend the request if 
necessary or fail the request if retries are 
exhausted.</td><td>int</td><td>40000</td><td>[0,...]</td><td>medium</td></tr>
+<tr>
+<td>sasl.jaas.config</td><td>JAAS login context parameters for SASL 
connections in the format used by JAAS configuration files. JAAS configuration 
file format is described <a 
href="http://docs.oracle.com/javase/8/docs/technotes/guides/security/jgss/tutorials/LoginConfigFile.html";>here</a>.
 The format for the value is: '<loginModuleClass> <controlFlag> 
(<optionName>=<optionValue>)*;'</td><td>password</td><td>null</td><td></td><td>medium</td></tr>
+<tr>
+<td>sasl.kerberos.service.name</td><td>The Kerberos principal name that Kafka 
runs as. This can be defined either in Kafka's JAAS config or in Kafka's 
config.</td><td>string</td><td>null</td><td></td><td>medium</td></tr>
+<tr>
+<td>sasl.mechanism</td><td>SASL mechanism used for client connections. This 
may be any mechanism for which a security provider is available. GSSAPI is the 
default 
mechanism.</td><td>string</td><td>GSSAPI</td><td></td><td>medium</td></tr>
+<tr>
+<td>security.protocol</td><td>Protocol used to communicate with brokers. Valid 
values are: PLAINTEXT, SSL, SASL_PLAINTEXT, 
SASL_SSL.</td><td>string</td><td>PLAINTEXT</td><td></td><td>medium</td></tr>
+<tr>
+<td>send.buffer.bytes</td><td>The size of the TCP send buffer (SO_SNDBUF) to 
use when sending data. If the value is -1, the OS default will be 
used.</td><td>int</td><td>131072</td><td>[0,...]</td><td>medium</td></tr>
+<tr>
+<td>ssl.enabled.protocols</td><td>The list of protocols enabled for SSL 
connections.</td><td>list</td><td>TLSv1.2,TLSv1.1,TLSv1</td><td></td><td>medium</td></tr>
+<tr>
+<td>ssl.keystore.type</td><td>The file format of the key store file. This is 
optional for 
client.</td><td>string</td><td>JKS</td><td></td><td>medium</td></tr>
+<tr>
+<td>ssl.protocol</td><td>The SSL protocol used to generate the SSLContext. 
Default setting is TLS, which is fine for most cases. Allowed values in recent 
JVMs are TLS, TLSv1.1 and TLSv1.2. SSL, SSLv2 and SSLv3 may be supported in 
older JVMs, but their usage is discouraged due to known security 
vulnerabilities.</td><td>string</td><td>TLS</td><td></td><td>medium</td></tr>
+<tr>
+<td>ssl.provider</td><td>The name of the security provider used for SSL 
connections. Default value is the default security provider of the 
JVM.</td><td>string</td><td>null</td><td></td><td>medium</td></tr>
+<tr>
+<td>ssl.truststore.type</td><td>The file format of the trust store 
file.</td><td>string</td><td>JKS</td><td></td><td>medium</td></tr>
+<tr>
+<td>worker.sync.timeout.ms</td><td>When the worker is out of sync with other 
workers and needs to resynchronize configurations, wait up to this amount of 
time before giving up, leaving the group, and waiting a backoff period before 
rejoining.</td><td>int</td><td>3000</td><td></td><td>medium</td></tr>
+<tr>
+<td>worker.unsync.backoff.ms</td><td>When the worker is out of sync with other 
workers and fails to catch up within worker.sync.timeout.ms, leave the Connect 
cluster for this long before 
rejoining.</td><td>int</td><td>300000</td><td></td><td>medium</td></tr>
+<tr>
+<td>access.control.allow.methods</td><td>Sets the methods supported for cross 
origin requests by setting the Access-Control-Allow-Methods header. The default 
value of the Access-Control-Allow-Methods header allows cross origin requests 
for GET, POST and HEAD.</td><td>string</td><td>""</td><td></td><td>low</td></tr>
+<tr>
+<td>access.control.allow.origin</td><td>Value to set the 
Access-Control-Allow-Origin header for REST API requests. To enable cross 
origin access, set this to the domain of the application that should be 
permitted to access the API, or '*' to allow access from any domain. The 
default value only allows access from the domain of the REST 
API.</td><td>string</td><td>""</td><td></td><td>low</td></tr>
+<tr>
+<td>client.id</td><td>An id string to pass to the server when making requests. 
The purpose of this is to be able to track the source of requests beyond just 
ip/port by allowing a logical application name to be included in server-side 
request logging.</td><td>string</td><td>""</td><td></td><td>low</td></tr>
+<tr>
+<td>metadata.max.age.ms</td><td>The period of time in milliseconds after which 
we force a refresh of metadata even if we haven't seen any partition leadership 
changes to proactively discover any new brokers or 
partitions.</td><td>long</td><td>300000</td><td>[0,...]</td><td>low</td></tr>
+<tr>
+<td>metric.reporters</td><td>A list of classes to use as metrics reporters. 
Implementing the <code>MetricReporter</code> interface allows plugging in 
classes that will be notified of new metric creation. The JmxReporter is always 
included to register JMX 
statistics.</td><td>list</td><td>""</td><td></td><td>low</td></tr>
+<tr>
+<td>metrics.num.samples</td><td>The number of samples maintained to compute 
metrics.</td><td>int</td><td>2</td><td>[1,...]</td><td>low</td></tr>
+<tr>
+<td>metrics.sample.window.ms</td><td>The window of time a metrics sample is 
computed over.</td><td>long</td><td>30000</td><td>[0,...]</td><td>low</td></tr>
+<tr>
+<td>offset.flush.interval.ms</td><td>Interval at which to try committing 
offsets for tasks.</td><td>long</td><td>60000</td><td></td><td>low</td></tr>
+<tr>
+<td>offset.flush.timeout.ms</td><td>Maximum number of milliseconds to wait for 
records to flush and partition offset data to be committed to offset storage 
before cancelling the process and restoring the offset data to be committed in 
a future attempt.</td><td>long</td><td>5000</td><td></td><td>low</td></tr>
+<tr>
+<td>reconnect.backoff.ms</td><td>The amount of time to wait before attempting 
to reconnect to a given host. This avoids repeatedly connecting to a host in a 
tight loop. This backoff applies to all requests sent by the client to the 
broker.</td><td>long</td><td>50</td><td>[0,...]</td><td>low</td></tr>
+<tr>
+<td>rest.advertised.host.name</td><td>If this is set, this is the hostname 
that will be given out to other workers to connect 
to.</td><td>string</td><td>null</td><td></td><td>low</td></tr>
+<tr>
+<td>rest.advertised.port</td><td>If this is set, this is the port that will be 
given out to other workers to connect 
to.</td><td>int</td><td>null</td><td></td><td>low</td></tr>
+<tr>
+<td>rest.host.name</td><td>Hostname for the REST API. If this is set, it will 
only bind to this 
interface.</td><td>string</td><td>null</td><td></td><td>low</td></tr>
+<tr>
+<td>rest.port</td><td>Port for the REST API to listen 
on.</td><td>int</td><td>8083</td><td></td><td>low</td></tr>
+<tr>
+<td>retry.backoff.ms</td><td>The amount of time to wait before attempting to 
retry a failed request to a given topic partition. This avoids repeatedly 
sending requests in a tight loop under some failure 
scenarios.</td><td>long</td><td>100</td><td>[0,...]</td><td>low</td></tr>
+<tr>
+<td>sasl.kerberos.kinit.cmd</td><td>Kerberos kinit command 
path.</td><td>string</td><td>/usr/bin/kinit</td><td></td><td>low</td></tr>
+<tr>
+<td>sasl.kerberos.min.time.before.relogin</td><td>Login thread sleep time 
between refresh 
attempts.</td><td>long</td><td>60000</td><td></td><td>low</td></tr>
+<tr>
+<td>sasl.kerberos.ticket.renew.jitter</td><td>Percentage of random jitter 
added to the renewal 
time.</td><td>double</td><td>0.05</td><td></td><td>low</td></tr>
+<tr>
+<td>sasl.kerberos.ticket.renew.window.factor</td><td>Login thread will sleep 
until the specified window factor of time from last refresh to ticket's expiry 
has been reached, at which time it will try to renew the 
ticket.</td><td>double</td><td>0.8</td><td></td><td>low</td></tr>
+<tr>
+<td>ssl.cipher.suites</td><td>A list of cipher suites. This is a named 
combination of authentication, encryption, MAC and key exchange algorithm used 
to negotiate the security settings for a network connection using TLS or SSL 
network protocol. By default all the available cipher suites are 
supported.</td><td>list</td><td>null</td><td></td><td>low</td></tr>
+<tr>
+<td>ssl.endpoint.identification.algorithm</td><td>The endpoint identification 
algorithm to validate server hostname using server certificate. 
</td><td>string</td><td>null</td><td></td><td>low</td></tr>
+<tr>
+<td>ssl.keymanager.algorithm</td><td>The algorithm used by key manager factory 
for SSL connections. Default value is the key manager factory algorithm 
configured for the Java Virtual 
Machine.</td><td>string</td><td>SunX509</td><td></td><td>low</td></tr>
+<tr>
+<td>ssl.secure.random.implementation</td><td>The SecureRandom PRNG 
implementation to use for SSL cryptography operations. 
</td><td>string</td><td>null</td><td></td><td>low</td></tr>
+<tr>
+<td>ssl.trustmanager.algorithm</td><td>The algorithm used by trust manager 
factory for SSL connections. Default value is the trust manager factory 
algorithm configured for the Java Virtual 
Machine.</td><td>string</td><td>PKIX</td><td></td><td>low</td></tr>
+<tr>
+<td>task.shutdown.graceful.timeout.ms</td><td>Amount of time to wait for tasks 
to shut down gracefully. This is the total amount of time, not per task. Shutdown is triggered for all tasks, and then they are waited on sequentially.</td><td>long</td><td>5000</td><td></td><td>low</td></tr>
+</tbody></table>
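
For orientation, here is a minimal sketch of a distributed worker properties file that wires together the required settings above; the topic names, group id, and converter choice are illustrative assumptions, not defaults:

  # connect-distributed.properties (illustrative values, not defaults)
  bootstrap.servers=localhost:9092
  group.id=connect-cluster
  config.storage.topic=connect-configs
  offset.storage.topic=connect-offsets
  status.storage.topic=connect-status
  key.converter=org.apache.kafka.connect.json.JsonConverter
  value.converter=org.apache.kafka.connect.json.JsonConverter
  rest.port=8083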

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/a7c3675d/0102/generated/connect_transforms.html
----------------------------------------------------------------------
diff --git a/0102/generated/connect_transforms.html 
b/0102/generated/connect_transforms.html
new file mode 100644
index 0000000..624bec4
--- /dev/null
+++ b/0102/generated/connect_transforms.html
@@ -0,0 +1,173 @@
+<div id="org.apache.kafka.connect.transforms.InsertField">
+<h5>org.apache.kafka.connect.transforms.InsertField</h5>
+Insert field(s) using attributes from the record metadata or a configured 
static value.<p/>Use the concrete transformation type designed for the record 
key (<code>org.apache.kafka.connect.transforms.InsertField.Key</code>) or value 
(<code>org.apache.kafka.connect.transforms.InsertField.Value</code>).
+<p/>
+<table class="data-table"><tbody>
+<tr>
+<th>Name</th>
+<th>Description</th>
+<th>Type</th>
+<th>Default</th>
+<th>Valid Values</th>
+<th>Importance</th>
+</tr>
+<tr>
+<td>offset.field</td><td>Field name for Kafka offset - only applicable to sink 
connectors.<br/>Suffix with <code>!</code> to make this a required field, or 
<code>?</code> to keep it optional (the 
default).</td><td>string</td><td>null</td><td></td><td>medium</td></tr>
+<tr>
+<td>partition.field</td><td>Field name for Kafka partition. Suffix with 
<code>!</code> to make this a required field, or <code>?</code> to keep it 
optional (the 
default).</td><td>string</td><td>null</td><td></td><td>medium</td></tr>
+<tr>
+<td>static.field</td><td>Field name for static data field. Suffix with 
<code>!</code> to make this a required field, or <code>?</code> to keep it 
optional (the 
default).</td><td>string</td><td>null</td><td></td><td>medium</td></tr>
+<tr>
+<td>static.value</td><td>Static field value, if field name 
configured.</td><td>string</td><td>null</td><td></td><td>medium</td></tr>
+<tr>
+<td>timestamp.field</td><td>Field name for record timestamp. Suffix with 
<code>!</code> to make this a required field, or <code>?</code> to keep it 
optional (the 
default).</td><td>string</td><td>null</td><td></td><td>medium</td></tr>
+<tr>
+<td>topic.field</td><td>Field name for Kafka topic. Suffix with <code>!</code> 
to make this a required field, or <code>?</code> to keep it optional (the 
default).</td><td>string</td><td>null</td><td></td><td>medium</td></tr>
+</tbody></table>
+</div>
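
A sketch of how InsertField might be wired into a connector configuration via the transforms mechanism; the alias and field names are illustrative assumptions:

  # excerpt of a hypothetical sink connector config
  transforms=insertSource
  transforms.insertSource.type=org.apache.kafka.connect.transforms.InsertField$Value
  transforms.insertSource.topic.field=source_topic
  transforms.insertSource.static.field=data_origin
  transforms.insertSource.static.value=kafka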
+<div id="org.apache.kafka.connect.transforms.ReplaceField">
+<h5>org.apache.kafka.connect.transforms.ReplaceField</h5>
+Filter or rename fields.
+<p/>
+<table class="data-table"><tbody>
+<tr>
+<th>Name</th>
+<th>Description</th>
+<th>Type</th>
+<th>Default</th>
+<th>Valid Values</th>
+<th>Importance</th>
+</tr>
+<tr>
+<td>blacklist</td><td>Fields to exclude. This takes precedence over the 
whitelist.</td><td>list</td><td>""</td><td></td><td>medium</td></tr>
+<tr>
+<td>renames</td><td>Field rename 
mappings.</td><td>list</td><td>""</td><td>list of colon-delimited pairs, e.g. 
<code>foo:bar,abc:xyz</code></td><td>medium</td></tr>
+<tr>
+<td>whitelist</td><td>Fields to include. If specified, only these fields will 
be used.</td><td>list</td><td>""</td><td></td><td>medium</td></tr>
+</tbody></table>
+</div>
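
A possible ReplaceField configuration that keeps a few fields and renames one; the field names are illustrative assumptions:

  transforms=pruneAndRename
  transforms.pruneAndRename.type=org.apache.kafka.connect.transforms.ReplaceField$Value
  transforms.pruneAndRename.whitelist=id,amount,created_at
  transforms.pruneAndRename.renames=created_at:created_ts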
+<div id="org.apache.kafka.connect.transforms.MaskField">
+<h5>org.apache.kafka.connect.transforms.MaskField</h5>
+Mask specified fields with a valid null value for the field type (i.e. 0, 
false, empty string, and so on).<p/>Use the concrete transformation type 
designed for the record key 
(<code>org.apache.kafka.connect.transforms.MaskField.Key</code>) or value 
(<code>org.apache.kafka.connect.transforms.MaskField.Value</code>).
+<p/>
+<table class="data-table"><tbody>
+<tr>
+<th>Name</th>
+<th>Description</th>
+<th>Type</th>
+<th>Default</th>
+<th>Valid Values</th>
+<th>Importance</th>
+</tr>
+<tr>
+<td>fields</td><td>Names of fields to 
mask.</td><td>list</td><td></td><td>non-empty list</td><td>high</td></tr>
+</tbody></table>
+</div>
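
A MaskField sketch, assuming hypothetical field names to blank out:

  transforms=maskPii
  transforms.maskPii.type=org.apache.kafka.connect.transforms.MaskField$Value
  transforms.maskPii.fields=ssn,credit_card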
+<div id="org.apache.kafka.connect.transforms.ValueToKey">
+<h5>org.apache.kafka.connect.transforms.ValueToKey</h5>
+Replace the record key with a new key formed from a subset of fields in the 
record value.
+<p/>
+<table class="data-table"><tbody>
+<tr>
+<th>Name</th>
+<th>Description</th>
+<th>Type</th>
+<th>Default</th>
+<th>Valid Values</th>
+<th>Importance</th>
+</tr>
+<tr>
+<td>fields</td><td>Field names on the record value to extract as the record 
key.</td><td>list</td><td></td><td>non-empty list</td><td>high</td></tr>
+</tbody></table>
+</div>
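
A ValueToKey sketch that promotes an assumed user_id field from the record value into the record key:

  transforms=keyFromValue
  transforms.keyFromValue.type=org.apache.kafka.connect.transforms.ValueToKey
  transforms.keyFromValue.fields=user_id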
+<div id="org.apache.kafka.connect.transforms.HoistField">
+<h5>org.apache.kafka.connect.transforms.HoistField</h5>
+Wrap data using the specified field name in a Struct when schema present, or a 
Map in the case of schemaless data.<p/>Use the concrete transformation type 
designed for the record key 
(<code>org.apache.kafka.connect.transforms.HoistField.Key</code>) or value 
(<code>org.apache.kafka.connect.transforms.HoistField.Value</code>).
+<p/>
+<table class="data-table"><tbody>
+<tr>
+<th>Name</th>
+<th>Description</th>
+<th>Type</th>
+<th>Default</th>
+<th>Valid Values</th>
+<th>Importance</th>
+</tr>
+<tr>
+<td>field</td><td>Field name for the single field that will be created in the 
resulting Struct or 
Map.</td><td>string</td><td></td><td></td><td>medium</td></tr>
+</tbody></table>
+</div>
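
A HoistField sketch that wraps each value under a single assumed field name:

  transforms=wrapValue
  transforms.wrapValue.type=org.apache.kafka.connect.transforms.HoistField$Value
  transforms.wrapValue.field=payload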
+<div id="org.apache.kafka.connect.transforms.ExtractField">
+<h5>org.apache.kafka.connect.transforms.ExtractField</h5>
+Extract the specified field from a Struct when schema present, or a Map in the 
case of schemaless data.<p/>Use the concrete transformation type designed for 
the record key 
(<code>org.apache.kafka.connect.transforms.ExtractField.Key</code>) or value 
(<code>org.apache.kafka.connect.transforms.ExtractField.Value</code>).
+<p/>
+<table class="data-table"><tbody>
+<tr>
+<th>Name</th>
+<th>Description</th>
+<th>Type</th>
+<th>Default</th>
+<th>Valid Values</th>
+<th>Importance</th>
+</tr>
+<tr>
+<td>field</td><td>Field name to 
extract.</td><td>string</td><td></td><td></td><td>medium</td></tr>
+</tbody></table>
+</div>
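
ExtractField is often chained after ValueToKey to turn a single-field struct key into a primitive key; a sketch with an assumed user_id field:

  transforms=keyFromValue,flattenKey
  transforms.keyFromValue.type=org.apache.kafka.connect.transforms.ValueToKey
  transforms.keyFromValue.fields=user_id
  transforms.flattenKey.type=org.apache.kafka.connect.transforms.ExtractField$Key
  transforms.flattenKey.field=user_id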
+<div id="org.apache.kafka.connect.transforms.SetSchemaMetadata">
+<h5>org.apache.kafka.connect.transforms.SetSchemaMetadata</h5>
+Set the schema name, version or both on the record's key 
(<code>org.apache.kafka.connect.transforms.SetSchemaMetadata.Key</code>) or 
value 
(<code>org.apache.kafka.connect.transforms.SetSchemaMetadata.Value</code>) 
schema.
+<p/>
+<table class="data-table"><tbody>
+<tr>
+<th>Name</th>
+<th>Description</th>
+<th>Type</th>
+<th>Default</th>
+<th>Valid Values</th>
+<th>Importance</th>
+</tr>
+<tr>
+<td>schema.name</td><td>Schema name to 
set.</td><td>string</td><td>null</td><td></td><td>high</td></tr>
+<tr>
+<td>schema.version</td><td>Schema version to 
set.</td><td>int</td><td>null</td><td></td><td>high</td></tr>
+</tbody></table>
+</div>
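
A SetSchemaMetadata sketch; the schema name and version shown are arbitrary examples:

  transforms=setSchema
  transforms.setSchema.type=org.apache.kafka.connect.transforms.SetSchemaMetadata$Value
  transforms.setSchema.schema.name=com.example.Order
  transforms.setSchema.schema.version=2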
+<div id="org.apache.kafka.connect.transforms.TimestampRouter">
+<h5>org.apache.kafka.connect.transforms.TimestampRouter</h5>
+Update the record's topic field as a function of the original topic value and 
the record timestamp.<p/>This is mainly useful for sink connectors, since the 
topic field is often used to determine the equivalent entity name in the 
destination system (e.g. database table or search index name).
+<p/>
+<table class="data-table"><tbody>
+<tr>
+<th>Name</th>
+<th>Description</th>
+<th>Type</th>
+<th>Default</th>
+<th>Valid Values</th>
+<th>Importance</th>
+</tr>
+<tr>
+<td>timestamp.format</td><td>Format string for the timestamp that is 
compatible with 
<code>java.text.SimpleDateFormat</code>.</td><td>string</td><td>yyyyMMdd</td><td></td><td>high</td></tr>
+<tr>
+<td>topic.format</td><td>Format string which can contain <code>${topic}</code> 
and <code>${timestamp}</code> as placeholders for the topic and timestamp, 
respectively.</td><td>string</td><td>${topic}-${timestamp}</td><td></td><td>high</td></tr>
+</tbody></table>
+</div>
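
A TimestampRouter sketch that suffixes the topic with the record's day; the values shown simply spell out the defaults from the table above:

  transforms=routeByDay
  transforms.routeByDay.type=org.apache.kafka.connect.transforms.TimestampRouter
  transforms.routeByDay.topic.format=${topic}-${timestamp}
  transforms.routeByDay.timestamp.format=yyyyMMdd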
+<div id="org.apache.kafka.connect.transforms.RegexRouter">
+<h5>org.apache.kafka.connect.transforms.RegexRouter</h5>
+Update the record topic using the configured regular expression and 
replacement string.<p/>Under the hood, the regex is compiled to a 
<code>java.util.regex.Pattern</code>. If the pattern matches the input topic, 
<code>java.util.regex.Matcher#replaceFirst()</code> is used with the 
replacement string to obtain the new topic.
+<p/>
+<table class="data-table"><tbody>
+<tr>
+<th>Name</th>
+<th>Description</th>
+<th>Type</th>
+<th>Default</th>
+<th>Valid Values</th>
+<th>Importance</th>
+</tr>
+<tr>
+<td>regex</td><td>Regular expression to use for 
matching.</td><td>string</td><td></td><td>valid regex</td><td>high</td></tr>
+<tr>
+<td>replacement</td><td>Replacement 
string.</td><td>string</td><td></td><td></td><td>high</td></tr>
+</tbody></table>
+</div>
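
A RegexRouter sketch that strips an assumed legacy- prefix from topic names; the regex and replacement are illustrative:

  transforms=dropPrefix
  transforms.dropPrefix.type=org.apache.kafka.connect.transforms.RegexRouter
  transforms.dropPrefix.regex=legacy-(.*)
  transforms.dropPrefix.replacement=$1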

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/a7c3675d/0102/generated/consumer_config.html
----------------------------------------------------------------------
diff --git a/0102/generated/consumer_config.html 
b/0102/generated/consumer_config.html
new file mode 100644
index 0000000..6aa7e5b
--- /dev/null
+++ b/0102/generated/consumer_config.html
@@ -0,0 +1,118 @@
+<table class="data-table"><tbody>
+<tr>
+<th>Name</th>
+<th>Description</th>
+<th>Type</th>
+<th>Default</th>
+<th>Valid Values</th>
+<th>Importance</th>
+</tr>
+<tr>
+<td>bootstrap.servers</td><td>A list of host/port pairs to use for 
establishing the initial connection to the Kafka cluster. The client will make 
use of all servers irrespective of which servers are specified here for 
bootstrapping&mdash;this list only impacts the initial hosts used to discover 
the full set of servers. This list should be in the form 
<code>host1:port1,host2:port2,...</code>. Since these servers are just used for 
the initial connection to discover the full cluster membership (which may 
change dynamically), this list need not contain the full set of servers (you 
may want more than one, though, in case a server is 
down).</td><td>list</td><td></td><td></td><td>high</td></tr>
+<tr>
+<td>key.deserializer</td><td>Deserializer class for key that implements the 
<code>Deserializer</code> 
interface.</td><td>class</td><td></td><td></td><td>high</td></tr>
+<tr>
+<td>value.deserializer</td><td>Deserializer class for value that implements 
the <code>Deserializer</code> 
interface.</td><td>class</td><td></td><td></td><td>high</td></tr>
+<tr>
+<td>fetch.min.bytes</td><td>The minimum amount of data the server should 
return for a fetch request. If insufficient data is available the request will 
wait for that much data to accumulate before answering the request. The default 
setting of 1 byte means that fetch requests are answered as soon as a single 
byte of data is available or the fetch request times out waiting for data to 
arrive. Setting this to something greater than 1 will cause the server to wait 
for larger amounts of data to accumulate which can improve server throughput a 
bit at the cost of some additional 
latency.</td><td>int</td><td>1</td><td>[0,...]</td><td>high</td></tr>
+<tr>
+<td>group.id</td><td>A unique string that identifies the consumer group this 
consumer belongs to. This property is required if the consumer uses either the 
group management functionality by using <code>subscribe(topic)</code> or the 
Kafka-based offset management 
strategy.</td><td>string</td><td>""</td><td></td><td>high</td></tr>
+<tr>
+<td>heartbeat.interval.ms</td><td>The expected time between heartbeats to the 
consumer coordinator when using Kafka's group management facilities. Heartbeats 
are used to ensure that the consumer's session stays active and to facilitate 
rebalancing when new consumers join or leave the group. The value must be set 
lower than <code>session.timeout.ms</code>, but typically should be set no 
higher than 1/3 of that value. It can be adjusted even lower to control the 
expected time for normal 
rebalances.</td><td>int</td><td>3000</td><td></td><td>high</td></tr>
+<tr>
+<td>max.partition.fetch.bytes</td><td>The maximum amount of data per-partition 
the server will return. If the first message in the first non-empty partition 
of the fetch is larger than this limit, the message will still be returned to 
ensure that the consumer can make progress. The maximum message size accepted 
by the broker is defined via <code>message.max.bytes</code> (broker config) or 
<code>max.message.bytes</code> (topic config). See fetch.max.bytes for limiting 
the consumer request 
size.</td><td>int</td><td>1048576</td><td>[0,...]</td><td>high</td></tr>
+<tr>
+<td>session.timeout.ms</td><td>The timeout used to detect consumer failures 
when using Kafka's group management facility. The consumer sends periodic 
heartbeats to indicate its liveness to the broker. If no heartbeats are 
received by the broker before the expiration of this session timeout, then the 
broker will remove this consumer from the group and initiate a rebalance. Note 
that the value must be in the allowable range as configured in the broker 
configuration by <code>group.min.session.timeout.ms</code> and 
<code>group.max.session.timeout.ms</code>.</td><td>int</td><td>10000</td><td></td><td>high</td></tr>
+<tr>
+<td>ssl.key.password</td><td>The password of the private key in the key store 
file. This is optional for 
client.</td><td>password</td><td>null</td><td></td><td>high</td></tr>
+<tr>
+<td>ssl.keystore.location</td><td>The location of the key store file. This is 
optional for client and can be used for two-way authentication for 
client.</td><td>string</td><td>null</td><td></td><td>high</td></tr>
+<tr>
+<td>ssl.keystore.password</td><td>The store password for the key store file. 
This is optional for client and only needed if ssl.keystore.location is 
configured. </td><td>password</td><td>null</td><td></td><td>high</td></tr>
+<tr>
+<td>ssl.truststore.location</td><td>The location of the trust store file. 
</td><td>string</td><td>null</td><td></td><td>high</td></tr>
+<tr>
+<td>ssl.truststore.password</td><td>The password for the trust store file. 
</td><td>password</td><td>null</td><td></td><td>high</td></tr>
+<tr>
+<td>auto.offset.reset</td><td>What to do when there is no initial offset in 
Kafka or if the current offset does not exist any more on the server (e.g. 
because that data has been deleted): <ul><li>earliest: automatically reset the 
offset to the earliest offset</li><li>latest: automatically reset the offset to the 
latest offset</li><li>none: throw exception to the consumer if no previous 
offset is found for the consumer's group</li><li>anything else: throw exception 
to the consumer.</li></ul></td><td>string</td><td>latest</td><td>[latest, 
earliest, none]</td><td>medium</td></tr>
+<tr>
+<td>connections.max.idle.ms</td><td>Close idle connections after the number of 
milliseconds specified by this 
config.</td><td>long</td><td>540000</td><td></td><td>medium</td></tr>
+<tr>
+<td>enable.auto.commit</td><td>If true, the consumer's offset will be 
periodically committed in the 
background.</td><td>boolean</td><td>true</td><td></td><td>medium</td></tr>
+<tr>
+<td>exclude.internal.topics</td><td>Whether records from internal topics (such 
as offsets) should be exposed to the consumer. If set to <code>true</code> the 
only way to receive records from an internal topic is subscribing to 
it.</td><td>boolean</td><td>true</td><td></td><td>medium</td></tr>
+<tr>
+<td>fetch.max.bytes</td><td>The maximum amount of data the server should 
return for a fetch request. This is not an absolute maximum, if the first 
message in the first non-empty partition of the fetch is larger than this 
value, the message will still be returned to ensure that the consumer can make 
progress. The maximum message size accepted by the broker is defined via 
<code>message.max.bytes</code> (broker config) or 
<code>max.message.bytes</code> (topic config). Note that the consumer performs 
multiple fetches in 
parallel.</td><td>int</td><td>52428800</td><td>[0,...]</td><td>medium</td></tr>
+<tr>
+<td>max.poll.interval.ms</td><td>The maximum delay between invocations of 
poll() when using consumer group management. This places an upper bound on the 
amount of time that the consumer can be idle before fetching more records. If 
poll() is not called before expiration of this timeout, then the consumer is 
considered failed and the group will rebalance in order to reassign the 
partitions to another member. 
</td><td>int</td><td>300000</td><td>[1,...]</td><td>medium</td></tr>
+<tr>
+<td>max.poll.records</td><td>The maximum number of records returned in a 
single call to 
poll().</td><td>int</td><td>500</td><td>[1,...]</td><td>medium</td></tr>
+<tr>
+<td>partition.assignment.strategy</td><td>The class name of the partition 
assignment strategy that the client will use to distribute partition ownership 
amongst consumer instances when group management is 
used</td><td>list</td><td>class 
org.apache.kafka.clients.consumer.RangeAssignor</td><td></td><td>medium</td></tr>
+<tr>
+<td>receive.buffer.bytes</td><td>The size of the TCP receive buffer 
(SO_RCVBUF) to use when reading data. If the value is -1, the OS default will 
be used.</td><td>int</td><td>65536</td><td>[-1,...]</td><td>medium</td></tr>
+<tr>
+<td>request.timeout.ms</td><td>The configuration controls the maximum amount 
of time the client will wait for the response of a request. If the response is 
not received before the timeout elapses the client will resend the request if 
necessary or fail the request if retries are 
exhausted.</td><td>int</td><td>305000</td><td>[0,...]</td><td>medium</td></tr>
+<tr>
+<td>sasl.jaas.config</td><td>JAAS login context parameters for SASL 
connections in the format used by JAAS configuration files. JAAS configuration 
file format is described <a 
href="http://docs.oracle.com/javase/8/docs/technotes/guides/security/jgss/tutorials/LoginConfigFile.html";>here</a>.
 The format for the value is: '<loginModuleClass> <controlFlag> 
(<optionName>=<optionValue>)*;'</td><td>password</td><td>null</td><td></td><td>medium</td></tr>
+<tr>
+<td>sasl.kerberos.service.name</td><td>The Kerberos principal name that Kafka 
runs as. This can be defined either in Kafka's JAAS config or in Kafka's 
config.</td><td>string</td><td>null</td><td></td><td>medium</td></tr>
+<tr>
+<td>sasl.mechanism</td><td>SASL mechanism used for client connections. This 
may be any mechanism for which a security provider is available. GSSAPI is the 
default 
mechanism.</td><td>string</td><td>GSSAPI</td><td></td><td>medium</td></tr>
+<tr>
+<td>security.protocol</td><td>Protocol used to communicate with brokers. Valid 
values are: PLAINTEXT, SSL, SASL_PLAINTEXT, 
SASL_SSL.</td><td>string</td><td>PLAINTEXT</td><td></td><td>medium</td></tr>
+<tr>
+<td>send.buffer.bytes</td><td>The size of the TCP send buffer (SO_SNDBUF) to 
use when sending data. If the value is -1, the OS default will be 
used.</td><td>int</td><td>131072</td><td>[-1,...]</td><td>medium</td></tr>
+<tr>
+<td>ssl.enabled.protocols</td><td>The list of protocols enabled for SSL 
connections.</td><td>list</td><td>TLSv1.2,TLSv1.1,TLSv1</td><td></td><td>medium</td></tr>
+<tr>
+<td>ssl.keystore.type</td><td>The file format of the key store file. This is 
optional for 
client.</td><td>string</td><td>JKS</td><td></td><td>medium</td></tr>
+<tr>
+<td>ssl.protocol</td><td>The SSL protocol used to generate the SSLContext. 
Default setting is TLS, which is fine for most cases. Allowed values in recent 
JVMs are TLS, TLSv1.1 and TLSv1.2. SSL, SSLv2 and SSLv3 may be supported in 
older JVMs, but their usage is discouraged due to known security 
vulnerabilities.</td><td>string</td><td>TLS</td><td></td><td>medium</td></tr>
+<tr>
+<td>ssl.provider</td><td>The name of the security provider used for SSL 
connections. Default value is the default security provider of the 
JVM.</td><td>string</td><td>null</td><td></td><td>medium</td></tr>
+<tr>
+<td>ssl.truststore.type</td><td>The file format of the trust store 
file.</td><td>string</td><td>JKS</td><td></td><td>medium</td></tr>
+<tr>
+<td>auto.commit.interval.ms</td><td>The frequency in milliseconds that the 
consumer offsets are auto-committed to Kafka if <code>enable.auto.commit</code> 
is set to 
<code>true</code>.</td><td>int</td><td>5000</td><td>[0,...]</td><td>low</td></tr>
+<tr>
+<td>check.crcs</td><td>Automatically check the CRC32 of the records consumed. 
This ensures no on-the-wire or on-disk corruption to the messages occurred. 
This check adds some overhead, so it may be disabled in cases seeking extreme 
performance.</td><td>boolean</td><td>true</td><td></td><td>low</td></tr>
+<tr>
+<td>client.id</td><td>An id string to pass to the server when making requests. 
The purpose of this is to be able to track the source of requests beyond just 
ip/port by allowing a logical application name to be included in server-side 
request logging.</td><td>string</td><td>""</td><td></td><td>low</td></tr>
+<tr>
+<td>fetch.max.wait.ms</td><td>The maximum amount of time the server will block 
before answering the fetch request if there isn't sufficient data to 
immediately satisfy the requirement given by 
fetch.min.bytes.</td><td>int</td><td>500</td><td>[0,...]</td><td>low</td></tr>
+<tr>
+<td>interceptor.classes</td><td>A list of classes to use as interceptors. 
Implementing the <code>ConsumerInterceptor</code> interface allows you to 
intercept (and possibly mutate) records received by the consumer. By default, 
there are no 
interceptors.</td><td>list</td><td>null</td><td></td><td>low</td></tr>
+<tr>
+<td>metadata.max.age.ms</td><td>The period of time in milliseconds after which 
we force a refresh of metadata even if we haven't seen any partition leadership 
changes to proactively discover any new brokers or 
partitions.</td><td>long</td><td>300000</td><td>[0,...]</td><td>low</td></tr>
+<tr>
+<td>metric.reporters</td><td>A list of classes to use as metrics reporters. 
Implementing the <code>MetricReporter</code> interface allows plugging in 
classes that will be notified of new metric creation. The JmxReporter is always 
included to register JMX 
statistics.</td><td>list</td><td>""</td><td></td><td>low</td></tr>
+<tr>
+<td>metrics.num.samples</td><td>The number of samples maintained to compute 
metrics.</td><td>int</td><td>2</td><td>[1,...]</td><td>low</td></tr>
+<tr>
+<td>metrics.recording.level</td><td>The highest recording level for 
metrics.</td><td>string</td><td>INFO</td><td>[INFO, DEBUG]</td><td>low</td></tr>
+<tr>
+<td>metrics.sample.window.ms</td><td>The window of time a metrics sample is 
computed over.</td><td>long</td><td>30000</td><td>[0,...]</td><td>low</td></tr>
+<tr>
+<td>reconnect.backoff.ms</td><td>The amount of time to wait before attempting 
to reconnect to a given host. This avoids repeatedly connecting to a host in a 
tight loop. This backoff applies to all requests sent by the consumer to the 
broker.</td><td>long</td><td>50</td><td>[0,...]</td><td>low</td></tr>
+<tr>
+<td>retry.backoff.ms</td><td>The amount of time to wait before attempting to 
retry a failed request to a given topic partition. This avoids repeatedly 
sending requests in a tight loop under some failure 
scenarios.</td><td>long</td><td>100</td><td>[0,...]</td><td>low</td></tr>
+<tr>
+<td>sasl.kerberos.kinit.cmd</td><td>Kerberos kinit command 
path.</td><td>string</td><td>/usr/bin/kinit</td><td></td><td>low</td></tr>
+<tr>
+<td>sasl.kerberos.min.time.before.relogin</td><td>Login thread sleep time 
between refresh 
attempts.</td><td>long</td><td>60000</td><td></td><td>low</td></tr>
+<tr>
+<td>sasl.kerberos.ticket.renew.jitter</td><td>Percentage of random jitter 
added to the renewal 
time.</td><td>double</td><td>0.05</td><td></td><td>low</td></tr>
+<tr>
+<td>sasl.kerberos.ticket.renew.window.factor</td><td>Login thread will sleep 
until the specified window factor of time from last refresh to ticket's expiry 
has been reached, at which time it will try to renew the 
ticket.</td><td>double</td><td>0.8</td><td></td><td>low</td></tr>
+<tr>
+<td>ssl.cipher.suites</td><td>A list of cipher suites. This is a named 
combination of authentication, encryption, MAC and key exchange algorithm used 
to negotiate the security settings for a network connection using TLS or SSL 
network protocol. By default all the available cipher suites are 
supported.</td><td>list</td><td>null</td><td></td><td>low</td></tr>
+<tr>
+<td>ssl.endpoint.identification.algorithm</td><td>The endpoint identification 
algorithm to validate server hostname using server certificate. 
</td><td>string</td><td>null</td><td></td><td>low</td></tr>
+<tr>
+<td>ssl.keymanager.algorithm</td><td>The algorithm used by key manager factory 
for SSL connections. Default value is the key manager factory algorithm 
configured for the Java Virtual 
Machine.</td><td>string</td><td>SunX509</td><td></td><td>low</td></tr>
+<tr>
+<td>ssl.secure.random.implementation</td><td>The SecureRandom PRNG 
implementation to use for SSL cryptography operations. 
</td><td>string</td><td>null</td><td></td><td>low</td></tr>
+<tr>
+<td>ssl.trustmanager.algorithm</td><td>The algorithm used by trust manager 
factory for SSL connections. Default value is the trust manager factory 
algorithm configured for the Java Virtual 
Machine.</td><td>string</td><td>PKIX</td><td></td><td>low</td></tr>
+</tbody></table>
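
For reference, a minimal consumer properties sketch combining several of the settings above; the group id, deserializers, and offset-handling choices are illustrative assumptions:

  # consumer.properties (illustrative values, not defaults)
  bootstrap.servers=localhost:9092
  group.id=order-processors
  key.deserializer=org.apache.kafka.common.serialization.StringDeserializer
  value.deserializer=org.apache.kafka.common.serialization.StringDeserializer
  enable.auto.commit=false
  auto.offset.reset=earliest
  max.poll.records=100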

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/a7c3675d/0102/generated/kafka_config.html
----------------------------------------------------------------------
diff --git a/0102/generated/kafka_config.html b/0102/generated/kafka_config.html
new file mode 100644
index 0000000..4eb8569
--- /dev/null
+++ b/0102/generated/kafka_config.html
@@ -0,0 +1,304 @@
+<table class="data-table"><tbody>
+<tr>
+<th>Name</th>
+<th>Description</th>
+<th>Type</th>
+<th>Default</th>
+<th>Valid Values</th>
+<th>Importance</th>
+</tr>
+<tr>
+<td>zookeeper.connect</td><td>Zookeeper host 
string</td><td>string</td><td></td><td></td><td>high</td></tr>
+<tr>
+<td>advertised.host.name</td><td>DEPRECATED: only used when 
`advertised.listeners` or `listeners` are not set. Use `advertised.listeners` 
instead. 
+Hostname to publish to ZooKeeper for clients to use. In IaaS environments, 
this may need to be different from the interface to which the broker binds. If 
this is not set, it will use the value for `host.name` if configured. Otherwise 
it will use the value returned from 
java.net.InetAddress.getCanonicalHostName().</td><td>string</td><td>null</td><td></td><td>high</td></tr>
+<tr>
+<td>advertised.listeners</td><td>Listeners to publish to ZooKeeper for clients 
to use, if different than the listeners above. In IaaS environments, this may 
need to be different from the interface to which the broker binds. If this is 
not set, the value for `listeners` will be 
used.</td><td>string</td><td>null</td><td></td><td>high</td></tr>
+<tr>
+<td>advertised.port</td><td>DEPRECATED: only used when `advertised.listeners` 
or `listeners` are not set. Use `advertised.listeners` instead. 
+The port to publish to ZooKeeper for clients to use. In IaaS environments, 
this may need to be different from the port to which the broker binds. If this 
is not set, it will publish the same port that the broker binds 
to.</td><td>int</td><td>null</td><td></td><td>high</td></tr>
+<tr>
+<td>auto.create.topics.enable</td><td>Enable auto creation of topic on the 
server</td><td>boolean</td><td>true</td><td></td><td>high</td></tr>
+<tr>
+<td>auto.leader.rebalance.enable</td><td>Enables auto leader balancing. A 
background thread checks and triggers leader balance if required at regular 
intervals</td><td>boolean</td><td>true</td><td></td><td>high</td></tr>
+<tr>
+<td>background.threads</td><td>The number of threads to use for various 
background processing 
tasks</td><td>int</td><td>10</td><td>[1,...]</td><td>high</td></tr>
+<tr>
+<td>broker.id</td><td>The broker id for this server. If unset, a unique broker id will be generated. To avoid conflicts between ZooKeeper-generated broker ids and user-configured broker ids, generated broker ids start from reserved.broker.max.id + 1.</td><td>int</td><td>-1</td><td></td><td>high</td></tr>
+<tr>
+<td>compression.type</td><td>Specify the final compression type for a given 
topic. This configuration accepts the standard compression codecs ('gzip', 
'snappy', 'lz4'). It additionally accepts 'uncompressed' which is equivalent to 
no compression; and 'producer' which means retain the original compression 
codec set by the 
producer.</td><td>string</td><td>producer</td><td></td><td>high</td></tr>
+<tr>
+<td>delete.topic.enable</td><td>Enables topic deletion. Deleting topics through the admin tool will have no effect if this config is turned off</td><td>boolean</td><td>false</td><td></td><td>high</td></tr>
+<tr>
+<td>host.name</td><td>DEPRECATED: only used when `listeners` is not set. Use 
`listeners` instead. 
+Hostname of the broker. If this is set, it will only bind to this address. If this 
is not set, it will bind to all 
interfaces</td><td>string</td><td>""</td><td></td><td>high</td></tr>
+<tr>
+<td>leader.imbalance.check.interval.seconds</td><td>The frequency with which 
the partition rebalance check is triggered by the 
controller</td><td>long</td><td>300</td><td></td><td>high</td></tr>
+<tr>
+<td>leader.imbalance.per.broker.percentage</td><td>The ratio of leader 
imbalance allowed per broker. The controller would trigger a leader balance if 
it goes above this value per broker. The value is specified in 
percentage.</td><td>int</td><td>10</td><td></td><td>high</td></tr>
+<tr>
+<td>listeners</td><td>Listener List - Comma-separated list of URIs we will 
listen on and the listener names. If the listener name is not a security 
protocol, listener.security.protocol.map must also be set.
+ Specify hostname as 0.0.0.0 to bind to all interfaces.
+ Leave hostname empty to bind to default interface.
+ Examples of legal listener lists:
+ PLAINTEXT://myhost:9092,SSL://:9091
+ CLIENT://0.0.0.0:9092,REPLICATION://localhost:9093
+</td><td>string</td><td>null</td><td></td><td>high</td></tr>
+<tr>
+<td>log.dir</td><td>The directory in which the log data is kept (supplemental 
for log.dirs 
property)</td><td>string</td><td>/tmp/kafka-logs</td><td></td><td>high</td></tr>
+<tr>
+<td>log.dirs</td><td>The directories in which the log data is kept. If not 
set, the value in log.dir is 
used</td><td>string</td><td>null</td><td></td><td>high</td></tr>
+<tr>
+<td>log.flush.interval.messages</td><td>The number of messages accumulated on 
a log partition before messages are flushed to disk 
</td><td>long</td><td>9223372036854775807</td><td>[1,...]</td><td>high</td></tr>
+<tr>
+<td>log.flush.interval.ms</td><td>The maximum time in ms that a message in any 
topic is kept in memory before flushed to disk. If not set, the value in 
log.flush.scheduler.interval.ms is 
used</td><td>long</td><td>null</td><td></td><td>high</td></tr>
+<tr>
+<td>log.flush.offset.checkpoint.interval.ms</td><td>The frequency with which 
we update the persistent record of the last flush which acts as the log 
recovery point</td><td>int</td><td>60000</td><td>[0,...]</td><td>high</td></tr>
+<tr>
+<td>log.flush.scheduler.interval.ms</td><td>The frequency in ms that the log 
flusher checks whether any log needs to be flushed to 
disk</td><td>long</td><td>9223372036854775807</td><td></td><td>high</td></tr>
+<tr>
+<td>log.retention.bytes</td><td>The maximum size of the log before deleting 
it</td><td>long</td><td>-1</td><td></td><td>high</td></tr>
+<tr>
+<td>log.retention.hours</td><td>The number of hours to keep a log file before 
deleting it (in hours), tertiary to log.retention.ms 
property</td><td>int</td><td>168</td><td></td><td>high</td></tr>
+<tr>
+<td>log.retention.minutes</td><td>The number of minutes to keep a log file 
before deleting it (in minutes), secondary to log.retention.ms property. If not 
set, the value in log.retention.hours is 
used</td><td>int</td><td>null</td><td></td><td>high</td></tr>
+<tr>
+<td>log.retention.ms</td><td>The number of milliseconds to keep a log file 
before deleting it (in milliseconds). If not set, the value in 
log.retention.minutes is 
used</td><td>long</td><td>null</td><td></td><td>high</td></tr>
+<tr>
+<td>log.roll.hours</td><td>The maximum time before a new log segment is rolled 
out (in hours), secondary to log.roll.ms 
property</td><td>int</td><td>168</td><td>[1,...]</td><td>high</td></tr>
+<tr>
+<td>log.roll.jitter.hours</td><td>The maximum jitter to subtract from 
logRollTimeMillis (in hours), secondary to log.roll.jitter.ms 
property</td><td>int</td><td>0</td><td>[0,...]</td><td>high</td></tr>
+<tr>
+<td>log.roll.jitter.ms</td><td>The maximum jitter to subtract from 
logRollTimeMillis (in milliseconds). If not set, the value in 
log.roll.jitter.hours is 
used</td><td>long</td><td>null</td><td></td><td>high</td></tr>
+<tr>
+<td>log.roll.ms</td><td>The maximum time before a new log segment is rolled 
out (in milliseconds). If not set, the value in log.roll.hours is 
used</td><td>long</td><td>null</td><td></td><td>high</td></tr>
+<tr>
+<td>log.segment.bytes</td><td>The maximum size of a single log 
file</td><td>int</td><td>1073741824</td><td>[14,...]</td><td>high</td></tr>
+<tr>
+<td>log.segment.delete.delay.ms</td><td>The amount of time to wait before 
deleting a file from the 
filesystem</td><td>long</td><td>60000</td><td>[0,...]</td><td>high</td></tr>
+<tr>
+<td>message.max.bytes</td><td>The maximum size of a message that the server can 
receive</td><td>int</td><td>1000012</td><td>[0,...]</td><td>high</td></tr>
+<tr>
+<td>min.insync.replicas</td><td>When a producer sets acks to "all" (or "-1"), 
min.insync.replicas specifies the minimum number of replicas that must 
acknowledge a write for the write to be considered successful. If this minimum 
cannot be met, then the producer will raise an exception (either 
NotEnoughReplicas or NotEnoughReplicasAfterAppend).<br>When used together, 
min.insync.replicas and acks allow you to enforce greater durability 
guarantees. A typical scenario would be to create a topic with a replication 
factor of 3, set min.insync.replicas to 2, and produce with acks of "all". This 
will ensure that the producer raises an exception if a majority of replicas do 
not receive a 
write.</td><td>int</td><td>1</td><td>[1,...]</td><td>high</td></tr>
+<tr>
+<td>num.io.threads</td><td>The number of I/O threads that the server uses for 
carrying out network 
requests</td><td>int</td><td>8</td><td>[1,...]</td><td>high</td></tr>
+<tr>
+<td>num.network.threads</td><td>The number of network threads that the server 
uses for handling network 
requests</td><td>int</td><td>3</td><td>[1,...]</td><td>high</td></tr>
+<tr>
+<td>num.recovery.threads.per.data.dir</td><td>The number of threads per data 
directory to be used for log recovery at startup and flushing at 
shutdown</td><td>int</td><td>1</td><td>[1,...]</td><td>high</td></tr>
+<tr>
+<td>num.replica.fetchers</td><td>Number of fetcher threads used to replicate 
messages from a source broker. Increasing this value can increase the degree of 
I/O parallelism in the follower 
broker.</td><td>int</td><td>1</td><td></td><td>high</td></tr>
+<tr>
+<td>offset.metadata.max.bytes</td><td>The maximum size for a metadata entry 
associated with an offset 
commit</td><td>int</td><td>4096</td><td></td><td>high</td></tr>
+<tr>
+<td>offsets.commit.required.acks</td><td>The required acks before the commit 
can be accepted. In general, the default (-1) should not be 
overridden</td><td>short</td><td>-1</td><td></td><td>high</td></tr>
+<tr>
+<td>offsets.commit.timeout.ms</td><td>Offset commit will be delayed until all 
replicas for the offsets topic receive the commit or this timeout is reached. 
This is similar to the producer request 
timeout.</td><td>int</td><td>5000</td><td>[1,...]</td><td>high</td></tr>
+<tr>
+<td>offsets.load.buffer.size</td><td>Batch size for reading from the offsets 
segments when loading offsets into the 
cache.</td><td>int</td><td>5242880</td><td>[1,...]</td><td>high</td></tr>
+<tr>
+<td>offsets.retention.check.interval.ms</td><td>Frequency at which to check 
for stale 
offsets</td><td>long</td><td>600000</td><td>[1,...]</td><td>high</td></tr>
+<tr>
+<td>offsets.retention.minutes</td><td>Log retention window in minutes for 
offsets topic</td><td>int</td><td>1440</td><td>[1,...]</td><td>high</td></tr>
+<tr>
+<td>offsets.topic.compression.codec</td><td>Compression codec for the offsets 
topic - compression may be used to achieve "atomic" 
commits</td><td>int</td><td>0</td><td></td><td>high</td></tr>
+<tr>
+<td>offsets.topic.num.partitions</td><td>The number of partitions for the 
offset commit topic (should not change after 
deployment)</td><td>int</td><td>50</td><td>[1,...]</td><td>high</td></tr>
+<tr>
+<td>offsets.topic.replication.factor</td><td>The replication factor for the 
offsets topic (set higher to ensure availability). To ensure that the effective 
replication factor of the offsets topic is the configured value, the number of 
alive brokers has to be at least the replication factor at the time of the 
first request for the offsets topic. If not, either the offsets topic creation 
will fail or it will get a replication factor of min(alive brokers, configured 
replication 
factor)</td><td>short</td><td>3</td><td>[1,...]</td><td>high</td></tr>
+<tr>
+<td>offsets.topic.segment.bytes</td><td>The offsets topic segment bytes should 
be kept relatively small in order to facilitate faster log compaction and cache 
loads</td><td>int</td><td>104857600</td><td>[1,...]</td><td>high</td></tr>
+<tr>
+<td>port</td><td>DEPRECATED: only used when `listeners` is not set. Use 
`listeners` instead. 
+The port to listen and accept connections 
on</td><td>int</td><td>9092</td><td></td><td>high</td></tr>
+<tr>
+<td>queued.max.requests</td><td>The number of queued requests allowed before 
blocking the network 
threads</td><td>int</td><td>500</td><td>[1,...]</td><td>high</td></tr>
+<tr>
+<td>quota.consumer.default</td><td>DEPRECATED: Used only when dynamic default 
quotas are not configured for <user>, <client-id> or <user, client-id> in 
Zookeeper. Any consumer distinguished by clientId/consumer group will get 
throttled if it fetches more bytes than this value 
per-second</td><td>long</td><td>9223372036854775807</td><td>[1,...]</td><td>high</td></tr>
+<tr>
+<td>quota.producer.default</td><td>DEPRECATED: Used only when dynamic default 
quotas are not configured for <user>, <client-id> or <user, client-id> in 
Zookeeper. Any producer distinguished by clientId will get throttled if it 
produces more bytes than this value 
per-second</td><td>long</td><td>9223372036854775807</td><td>[1,...]</td><td>high</td></tr>
+<tr>
+<td>replica.fetch.min.bytes</td><td>Minimum bytes expected for each fetch 
response. If not enough bytes, wait up to 
replica.fetch.wait.max.ms</td><td>int</td><td>1</td><td></td><td>high</td></tr>
+<tr>
+<td>replica.fetch.wait.max.ms</td><td>The maximum wait time for each fetcher request issued by follower replicas. This value should always be less than replica.lag.time.max.ms to prevent frequent shrinking of the ISR for low-throughput topics</td><td>int</td><td>500</td><td></td><td>high</td></tr>
+<tr>
+<td>replica.high.watermark.checkpoint.interval.ms</td><td>The frequency with 
which the high watermark is saved out to 
disk</td><td>long</td><td>5000</td><td></td><td>high</td></tr>
+<tr>
+<td>replica.lag.time.max.ms</td><td>If a follower hasn't sent any fetch 
requests or hasn't consumed up to the leader's log end offset for at least this time, the leader will remove the follower from the ISR</td><td>long</td><td>10000</td><td></td><td>high</td></tr>
+<tr>
+<td>replica.socket.receive.buffer.bytes</td><td>The socket receive buffer for 
network requests</td><td>int</td><td>65536</td><td></td><td>high</td></tr>
+<tr>
+<td>replica.socket.timeout.ms</td><td>The socket timeout for network requests. 
Its value should be at least 
replica.fetch.wait.max.ms</td><td>int</td><td>30000</td><td></td><td>high</td></tr>
+<tr>
+<td>request.timeout.ms</td><td>The configuration controls the maximum amount 
of time the client will wait for the response of a request. If the response is 
not received before the timeout elapses the client will resend the request if 
necessary or fail the request if retries are 
exhausted.</td><td>int</td><td>30000</td><td></td><td>high</td></tr>
+<tr>
+<td>socket.receive.buffer.bytes</td><td>The SO_RCVBUF buffer of the socket 
server sockets. If the value is -1, the OS default will be 
used.</td><td>int</td><td>102400</td><td></td><td>high</td></tr>
+<tr>
+<td>socket.request.max.bytes</td><td>The maximum number of bytes in a socket 
request</td><td>int</td><td>104857600</td><td>[1,...]</td><td>high</td></tr>
+<tr>
+<td>socket.send.buffer.bytes</td><td>The SO_SNDBUF buffer of the socket server 
sockets. If the value is -1, the OS default will be 
used.</td><td>int</td><td>102400</td><td></td><td>high</td></tr>
+<tr>
+<td>unclean.leader.election.enable</td><td>Indicates whether to enable 
replicas not in the ISR set to be elected as leader as a last resort, even 
though doing so may result in data 
loss</td><td>boolean</td><td>true</td><td></td><td>high</td></tr>
+<tr>
+<td>zookeeper.connection.timeout.ms</td><td>The max time that the client waits 
to establish a connection to zookeeper. If not set, the value in 
zookeeper.session.timeout.ms is 
used</td><td>int</td><td>null</td><td></td><td>high</td></tr>
+<tr>
+<td>zookeeper.session.timeout.ms</td><td>Zookeeper session 
timeout</td><td>int</td><td>6000</td><td></td><td>high</td></tr>
+<tr>
+<td>zookeeper.set.acl</td><td>Set client to use secure 
ACLs</td><td>boolean</td><td>false</td><td></td><td>high</td></tr>
+<tr>
+<td>broker.id.generation.enable</td><td>Enable automatic broker id generation 
on the server. When enabled the value configured for reserved.broker.max.id 
should be 
reviewed.</td><td>boolean</td><td>true</td><td></td><td>medium</td></tr>
+<tr>
+<td>broker.rack</td><td>Rack of the broker. This will be used in rack aware 
replication assignment for fault tolerance. Examples: `RACK1`, 
`us-east-1d`</td><td>string</td><td>null</td><td></td><td>medium</td></tr>
+<tr>
+<td>connections.max.idle.ms</td><td>Idle connections timeout: the server 
socket processor threads close connections that have been idle for longer than 
this</td><td>long</td><td>600000</td><td></td><td>medium</td></tr>
+<tr>
+<td>controlled.shutdown.enable</td><td>Enable controlled shutdown of the 
server</td><td>boolean</td><td>true</td><td></td><td>medium</td></tr>
+<tr>
+<td>controlled.shutdown.max.retries</td><td>Controlled shutdown can fail for 
multiple reasons. This determines the number of retries when such failure 
happens</td><td>int</td><td>3</td><td></td><td>medium</td></tr>
+<tr>
+<td>controlled.shutdown.retry.backoff.ms</td><td>Before each retry, the system 
needs time to recover from the state that caused the previous failure 
(Controller fail over, replica lag etc). This config determines the amount of 
time to wait before 
retrying.</td><td>long</td><td>5000</td><td></td><td>medium</td></tr>
+<tr>
+<td>controller.socket.timeout.ms</td><td>The socket timeout for 
controller-to-broker 
channels</td><td>int</td><td>30000</td><td></td><td>medium</td></tr>
+<tr>
+<td>default.replication.factor</td><td>The default replication factor for 
automatically created 
topics</td><td>int</td><td>1</td><td></td><td>medium</td></tr>
+<tr>
+<td>fetch.purgatory.purge.interval.requests</td><td>The purge interval (in 
number of requests) of the fetch request 
purgatory</td><td>int</td><td>1000</td><td></td><td>medium</td></tr>
+<tr>
+<td>group.max.session.timeout.ms</td><td>The maximum allowed session timeout 
for registered consumers. Longer timeouts give consumers more time to process 
messages in between heartbeats at the cost of a longer time to detect 
failures.</td><td>int</td><td>300000</td><td></td><td>medium</td></tr>
+<tr>
+<td>group.min.session.timeout.ms</td><td>The minimum allowed session timeout 
for registered consumers. Shorter timeouts result in quicker failure detection 
at the cost of more frequent consumer heartbeating, which can overwhelm broker 
resources.</td><td>int</td><td>6000</td><td></td><td>medium</td></tr>
+<tr>
+<td>inter.broker.listener.name</td><td>Name of listener used for communication 
between brokers. If this is unset, the listener name is defined by 
security.inter.broker.protocol. It is an error to set this and 
security.inter.broker.protocol properties at the same 
time.</td><td>string</td><td>null</td><td></td><td>medium</td></tr>
+<tr>
+<td>inter.broker.protocol.version</td><td>Specify which version of the 
inter-broker protocol will be used.
+ This is typically bumped after all brokers have been upgraded to a new version.
+ Examples of valid values are: 0.8.0, 0.8.1, 0.8.1.1, 0.8.2, 0.8.2.0, 
0.8.2.1, 0.9.0.0, 0.9.0.1. Check ApiVersion for the full 
list.</td><td>string</td><td>0.10.2-IV0</td><td></td><td>medium</td></tr>
+<tr>
+<td>log.cleaner.backoff.ms</td><td>The amount of time to sleep when there are 
no logs to 
clean</td><td>long</td><td>15000</td><td>[0,...]</td><td>medium</td></tr>
+<tr>
+<td>log.cleaner.dedupe.buffer.size</td><td>The total memory used for log 
deduplication across all cleaner 
threads</td><td>long</td><td>134217728</td><td></td><td>medium</td></tr>
+<tr>
+<td>log.cleaner.delete.retention.ms</td><td>How long are delete records 
retained?</td><td>long</td><td>86400000</td><td></td><td>medium</td></tr>
+<tr>
+<td>log.cleaner.enable</td><td>Enable the log cleaner process to run on the 
server. Should be enabled if using any topics with a cleanup.policy=compact 
including the internal offsets topic. If disabled, those topics will not be 
compacted and will continually grow in 
size.</td><td>boolean</td><td>true</td><td></td><td>medium</td></tr>
+<tr>
+<td>log.cleaner.io.buffer.load.factor</td><td>Log cleaner dedupe buffer load 
factor. The percentage full the dedupe buffer can become. A higher value will 
allow more log to be cleaned at once but will lead to more hash 
collisions</td><td>double</td><td>0.9</td><td></td><td>medium</td></tr>
+<tr>
+<td>log.cleaner.io.buffer.size</td><td>The total memory used for log cleaner 
I/O buffers across all cleaner 
threads</td><td>int</td><td>524288</td><td>[0,...]</td><td>medium</td></tr>
+<tr>
+<td>log.cleaner.io.max.bytes.per.second</td><td>The log cleaner will be 
throttled so that the sum of its read and write i/o will be less than this 
value on 
average</td><td>double</td><td>1.7976931348623157E308</td><td></td><td>medium</td></tr>
+<tr>
+<td>log.cleaner.min.cleanable.ratio</td><td>The minimum ratio of dirty log to 
total log for a log to be eligible for 
cleaning</td><td>double</td><td>0.5</td><td></td><td>medium</td></tr>
+<tr>
+<td>log.cleaner.min.compaction.lag.ms</td><td>The minimum time a message will 
remain uncompacted in the log. Only applicable for logs that are being 
compacted.</td><td>long</td><td>0</td><td></td><td>medium</td></tr>
+<tr>
+<td>log.cleaner.threads</td><td>The number of background threads to use for 
log cleaning</td><td>int</td><td>1</td><td>[0,...]</td><td>medium</td></tr>
+<tr>
+<td>log.cleanup.policy</td><td>The default cleanup policy for segments beyond 
the retention window. A comma separated list of valid policies. Valid policies 
are: "delete" and "compact"</td><td>list</td><td>delete</td><td>[compact, 
delete]</td><td>medium</td></tr>
+<tr>
+<td>log.index.interval.bytes</td><td>The interval with which we add an entry 
to the offset 
index</td><td>int</td><td>4096</td><td>[0,...]</td><td>medium</td></tr>
+<tr>
+<td>log.index.size.max.bytes</td><td>The maximum size in bytes of the offset 
index</td><td>int</td><td>10485760</td><td>[4,...]</td><td>medium</td></tr>
+<tr>
+<td>log.message.format.version</td><td>Specify the message format version the 
broker will use to append messages to the logs. The value should be a valid 
ApiVersion. Some examples are: 0.8.2, 0.9.0.0, 0.10.0, check ApiVersion for 
more details. By setting a particular message format version, the user is 
certifying that all the existing messages on disk are smaller than or equal to the 
specified version. Setting this value incorrectly will cause consumers with 
older versions to break as they will receive messages with a format that they 
don't 
understand.</td><td>string</td><td>0.10.2-IV0</td><td></td><td>medium</td></tr>
+<tr>
+<td>log.message.timestamp.difference.max.ms</td><td>The maximum difference 
allowed between the timestamp when a broker receives a message and the 
timestamp specified in the message. If log.message.timestamp.type=CreateTime, a 
message will be rejected if the difference in timestamp exceeds this threshold. 
This configuration is ignored if 
log.message.timestamp.type=LogAppendTime.</td><td>long</td><td>9223372036854775807</td><td>[0,...]</td><td>medium</td></tr>
+<tr>
+<td>log.message.timestamp.type</td><td>Define whether the timestamp in the 
message is message create time or log append time. The value should be either 
`CreateTime` or 
`LogAppendTime`</td><td>string</td><td>CreateTime</td><td>[CreateTime, 
LogAppendTime]</td><td>medium</td></tr>
+<tr>
+<td>log.preallocate</td><td>Should the file be preallocated when creating a 
new segment? If you are using Kafka on Windows, you probably need to set it to 
true.</td><td>boolean</td><td>false</td><td></td><td>medium</td></tr>
+<tr>
+<td>log.retention.check.interval.ms</td><td>The frequency in milliseconds that 
the log cleaner checks whether any log is eligible for 
deletion</td><td>long</td><td>300000</td><td>[1,...]</td><td>medium</td></tr>
+<tr>
+<td>max.connections.per.ip</td><td>The maximum number of connections we allow 
from each IP 
address</td><td>int</td><td>2147483647</td><td>[1,...]</td><td>medium</td></tr>
+<tr>
+<td>max.connections.per.ip.overrides</td><td>Per-IP or hostname overrides to 
the default maximum number of 
connections</td><td>string</td><td>""</td><td></td><td>medium</td></tr>
+<tr>
+<td>num.partitions</td><td>The default number of log partitions per 
topic</td><td>int</td><td>1</td><td>[1,...]</td><td>medium</td></tr>
+<tr>
+<td>principal.builder.class</td><td>The fully qualified name of a class that 
implements the PrincipalBuilder interface, which is currently used to build the 
Principal for connections with the SSL 
SecurityProtocol.</td><td>class</td><td>org.apache.kafka.common.security.auth.DefaultPrincipalBuilder</td><td></td><td>medium</td></tr>
+<tr>
+<td>producer.purgatory.purge.interval.requests</td><td>The purge interval (in 
number of requests) of the producer request 
purgatory</td><td>int</td><td>1000</td><td></td><td>medium</td></tr>
+<tr>
+<td>replica.fetch.backoff.ms</td><td>The amount of time to sleep when a fetch 
partition error 
occurs.</td><td>int</td><td>1000</td><td>[0,...]</td><td>medium</td></tr>
+<tr>
+<td>replica.fetch.max.bytes</td><td>The number of bytes of messages to attempt 
to fetch for each partition. This is not an absolute maximum; if the first 
message in the first non-empty partition of the fetch is larger than this 
value, the message will still be returned to ensure that progress can be made. 
The maximum message size accepted by the broker is defined via 
<code>message.max.bytes</code> (broker config) or 
<code>max.message.bytes</code> (topic 
config).</td><td>int</td><td>1048576</td><td>[0,...]</td><td>medium</td></tr>
+<tr>
+<td>replica.fetch.response.max.bytes</td><td>Maximum bytes expected for the 
entire fetch response. This is not an absolute maximum; if the first message in 
the first non-empty partition of the fetch is larger than this value, the 
message will still be returned to ensure that progress can be made. The maximum 
message size accepted by the broker is defined via 
<code>message.max.bytes</code> (broker config) or 
<code>max.message.bytes</code> (topic 
config).</td><td>int</td><td>10485760</td><td>[0,...]</td><td>medium</td></tr>
+<tr>
+<td>reserved.broker.max.id</td><td>Max number that can be used for a 
broker.id</td><td>int</td><td>1000</td><td>[0,...]</td><td>medium</td></tr>
+<tr>
+<td>sasl.enabled.mechanisms</td><td>The list of SASL mechanisms enabled in the 
Kafka server. The list may contain any mechanism for which a security provider 
is available. Only GSSAPI is enabled by 
default.</td><td>list</td><td>GSSAPI</td><td></td><td>medium</td></tr>
+<tr>
+<td>sasl.kerberos.kinit.cmd</td><td>Kerberos kinit command 
path.</td><td>string</td><td>/usr/bin/kinit</td><td></td><td>medium</td></tr>
+<tr>
+<td>sasl.kerberos.min.time.before.relogin</td><td>Login thread sleep time 
between refresh 
attempts.</td><td>long</td><td>60000</td><td></td><td>medium</td></tr>
+<tr>
+<td>sasl.kerberos.principal.to.local.rules</td><td>A list of rules for mapping 
from principal names to short names (typically operating system usernames). The 
rules are evaluated in order and the first rule that matches a principal name 
is used to map it to a short name. Any later rules in the list are ignored. By 
default, principal names of the form {username}/{hostname}@{REALM} are mapped 
to {username}. For more details on the format please see <a 
href="#security_authz"> security authorization and 
acls</a>.</td><td>list</td><td>DEFAULT</td><td></td><td>medium</td></tr>
+<tr>
+<td>sasl.kerberos.service.name</td><td>The Kerberos principal name that Kafka 
runs as. This can be defined either in Kafka's JAAS config or in Kafka's 
config.</td><td>string</td><td>null</td><td></td><td>medium</td></tr>
+<tr>
+<td>sasl.kerberos.ticket.renew.jitter</td><td>Percentage of random jitter 
added to the renewal 
time.</td><td>double</td><td>0.05</td><td></td><td>medium</td></tr>
+<tr>
+<td>sasl.kerberos.ticket.renew.window.factor</td><td>Login thread will sleep 
until the specified window factor of time from last refresh to ticket's expiry 
has been reached, at which time it will try to renew the 
ticket.</td><td>double</td><td>0.8</td><td></td><td>medium</td></tr>
+<tr>
+<td>sasl.mechanism.inter.broker.protocol</td><td>SASL mechanism used for 
inter-broker communication. Default is 
GSSAPI.</td><td>string</td><td>GSSAPI</td><td></td><td>medium</td></tr>
+<tr>
+<td>security.inter.broker.protocol</td><td>Security protocol used to 
communicate between brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, 
SASL_SSL. It is an error to set this and inter.broker.listener.name properties 
at the same 
time.</td><td>string</td><td>PLAINTEXT</td><td></td><td>medium</td></tr>
+<tr>
+<td>ssl.cipher.suites</td><td>A list of cipher suites. This is a named 
combination of authentication, encryption, MAC and key exchange algorithm used 
to negotiate the security settings for a network connection using TLS or SSL 
network protocol. By default all the available cipher suites are 
supported.</td><td>list</td><td>null</td><td></td><td>medium</td></tr>
+<tr>
+<td>ssl.client.auth</td><td>Configures the Kafka broker to request client 
authentication. The following settings are common: <ul> 
<li><code>ssl.client.auth=required</code> If set to required, client 
authentication is required. <li><code>ssl.client.auth=requested</code> This 
means client authentication is optional. Unlike required, if this option is set 
the client can choose not to provide authentication information about itself. 
<li><code>ssl.client.auth=none</code> This means client authentication is not 
needed.</ul></td><td>string</td><td>none</td><td>[required, requested, 
none]</td><td>medium</td></tr>
+<tr>
+<td>ssl.enabled.protocols</td><td>The list of protocols enabled for SSL 
connections.</td><td>list</td><td>TLSv1.2,TLSv1.1,TLSv1</td><td></td><td>medium</td></tr>
+<tr>
+<td>ssl.key.password</td><td>The password of the private key in the key store 
file. This is optional for 
client.</td><td>password</td><td>null</td><td></td><td>medium</td></tr>
+<tr>
+<td>ssl.keymanager.algorithm</td><td>The algorithm used by key manager factory 
for SSL connections. Default value is the key manager factory algorithm 
configured for the Java Virtual 
Machine.</td><td>string</td><td>SunX509</td><td></td><td>medium</td></tr>
+<tr>
+<td>ssl.keystore.location</td><td>The location of the key store file. This is 
optional for client and can be used for two-way authentication for 
client.</td><td>string</td><td>null</td><td></td><td>medium</td></tr>
+<tr>
+<td>ssl.keystore.password</td><td>The store password for the key store file. 
This is optional for client and only needed if ssl.keystore.location is 
configured. </td><td>password</td><td>null</td><td></td><td>medium</td></tr>
+<tr>
+<td>ssl.keystore.type</td><td>The file format of the key store file. This is 
optional for 
client.</td><td>string</td><td>JKS</td><td></td><td>medium</td></tr>
+<tr>
+<td>ssl.protocol</td><td>The SSL protocol used to generate the SSLContext. 
Default setting is TLS, which is fine for most cases. Allowed values in recent 
JVMs are TLS, TLSv1.1 and TLSv1.2. SSL, SSLv2 and SSLv3 may be supported in 
older JVMs, but their usage is discouraged due to known security 
vulnerabilities.</td><td>string</td><td>TLS</td><td></td><td>medium</td></tr>
+<tr>
+<td>ssl.provider</td><td>The name of the security provider used for SSL 
connections. Default value is the default security provider of the 
JVM.</td><td>string</td><td>null</td><td></td><td>medium</td></tr>
+<tr>
+<td>ssl.trustmanager.algorithm</td><td>The algorithm used by trust manager 
factory for SSL connections. Default value is the trust manager factory 
algorithm configured for the Java Virtual 
Machine.</td><td>string</td><td>PKIX</td><td></td><td>medium</td></tr>
+<tr>
+<td>ssl.truststore.location</td><td>The location of the trust store file. 
</td><td>string</td><td>null</td><td></td><td>medium</td></tr>
+<tr>
+<td>ssl.truststore.password</td><td>The password for the trust store file. 
</td><td>password</td><td>null</td><td></td><td>medium</td></tr>
+<tr>
+<td>ssl.truststore.type</td><td>The file format of the trust store 
file.</td><td>string</td><td>JKS</td><td></td><td>medium</td></tr>
+<tr>
+<td>authorizer.class.name</td><td>The authorizer class that should be used for 
authorization</td><td>string</td><td>""</td><td></td><td>low</td></tr>
+<tr>
+<td>create.topic.policy.class.name</td><td>The create topic policy class that 
should be used for validation. The class should implement the 
<code>org.apache.kafka.server.policy.CreateTopicPolicy</code> 
interface.</td><td>class</td><td>null</td><td></td><td>low</td></tr>
+<tr>
+<td>listener.security.protocol.map</td><td>Map between listener names and 
security protocols. This must be defined for the same security protocol to be 
usable in more than one port or IP. For example, we can separate internal and 
external traffic even if SSL is required for both. Concretely, we could define 
listeners with names INTERNAL and EXTERNAL and this property as: 
`INTERNAL:SSL,EXTERNAL:SSL`. As shown, key and value are separated by a colon 
and map entries are separated by commas. Each listener name should only appear 
once in the 
map.</td><td>string</td><td>SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,TRACE:TRACE,SASL_SSL:SASL_SSL,PLAINTEXT:PLAINTEXT</td><td></td><td>low</td></tr>
+<tr>
+<td>metric.reporters</td><td>A list of classes to use as metrics reporters. 
Implementing the <code>MetricReporter</code> interface allows plugging in 
classes that will be notified of new metric creation. The JmxReporter is always 
included to register JMX 
statistics.</td><td>list</td><td>""</td><td></td><td>low</td></tr>
+<tr>
+<td>metrics.num.samples</td><td>The number of samples maintained to compute 
metrics.</td><td>int</td><td>2</td><td>[1,...]</td><td>low</td></tr>
+<tr>
+<td>metrics.recording.level</td><td>The highest recording level for 
metrics.</td><td>string</td><td>INFO</td><td></td><td>low</td></tr>
+<tr>
+<td>metrics.sample.window.ms</td><td>The window of time a metrics sample is 
computed over.</td><td>long</td><td>30000</td><td>[1,...]</td><td>low</td></tr>
+<tr>
+<td>quota.window.num</td><td>The number of samples to retain in memory for 
client quotas</td><td>int</td><td>11</td><td>[1,...]</td><td>low</td></tr>
+<tr>
+<td>quota.window.size.seconds</td><td>The time span of each sample for client 
quotas</td><td>int</td><td>1</td><td>[1,...]</td><td>low</td></tr>
+<tr>
+<td>replication.quota.window.num</td><td>The number of samples to retain in 
memory for replication 
quotas</td><td>int</td><td>11</td><td>[1,...]</td><td>low</td></tr>
+<tr>
+<td>replication.quota.window.size.seconds</td><td>The time span of each sample 
for replication 
quotas</td><td>int</td><td>1</td><td>[1,...]</td><td>low</td></tr>
+<tr>
+<td>ssl.endpoint.identification.algorithm</td><td>The endpoint identification 
algorithm to validate server hostname using server certificate. 
</td><td>string</td><td>null</td><td></td><td>low</td></tr>
+<tr>
+<td>ssl.secure.random.implementation</td><td>The SecureRandom PRNG 
implementation to use for SSL cryptography operations. 
</td><td>string</td><td>null</td><td></td><td>low</td></tr>
+<tr>
+<td>zookeeper.sync.time.ms</td><td>How far a ZK follower can be behind a ZK 
leader</td><td>int</td><td>2000</td><td></td><td>low</td></tr>
+</tbody></table>
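
The broker settings documented in the table above are normally supplied through 
the broker's properties file (commonly server.properties). Purely as an 
illustrative sketch -- the property names come from the table, but the values 
shown here are hypothetical choices, not the documented defaults:

    # Hypothetical server.properties fragment: illustrative values only,
    # not the defaults listed in the table above.
    broker.rack=us-east-1d
    default.replication.factor=3
    log.cleanup.policy=delete
    log.cleaner.enable=true
    unclean.leader.election.enable=false
    zookeeper.session.timeout.ms=6000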
