http://git-wip-us.apache.org/repos/asf/kafka-site/blob/a7c3675d/0102/generated/producer_config.html
----------------------------------------------------------------------
diff --git a/0102/generated/producer_config.html 
b/0102/generated/producer_config.html
new file mode 100644
index 0000000..2279747
--- /dev/null
+++ b/0102/generated/producer_config.html
@@ -0,0 +1,112 @@
+<table class="data-table"><tbody>
+<tr>
+<th>Name</th>
+<th>Description</th>
+<th>Type</th>
+<th>Default</th>
+<th>Valid Values</th>
+<th>Importance</th>
+</tr>
+<tr>
+<td>bootstrap.servers</td><td>A list of host/port pairs to use for 
establishing the initial connection to the Kafka cluster. The client will make 
use of all servers irrespective of which servers are specified here for 
bootstrapping&mdash;this list only impacts the initial hosts used to discover 
the full set of servers. This list should be in the form 
<code>host1:port1,host2:port2,...</code>. Since these servers are just used for 
the initial connection to discover the full cluster membership (which may 
change dynamically), this list need not contain the full set of servers (you 
may want more than one, though, in case a server is 
down).</td><td>list</td><td></td><td></td><td>high</td></tr>
+<tr>
+<td>key.serializer</td><td>Serializer class for keys that implements the <code>Serializer</code> interface.</td><td>class</td><td></td><td></td><td>high</td></tr>
+<tr>
+<td>value.serializer</td><td>Serializer class for values that implements the <code>Serializer</code> interface.</td><td>class</td><td></td><td></td><td>high</td></tr>
+<tr>
+<td>acks</td><td>The number of acknowledgments the producer requires the 
leader to have received before considering a request complete. This controls 
the  durability of records that are sent. The following settings are allowed:  
<ul> <li><code>acks=0</code> If set to zero then the producer will not wait for 
any acknowledgment from the server at all. The record will be immediately added 
to the socket buffer and considered sent. No guarantee can be made that the 
server has received the record in this case, and the <code>retries</code> 
configuration will not take effect (as the client won't generally know of any 
failures). The offset given back for each record will always be set to -1. 
<li><code>acks=1</code> This will mean the leader will write the record to its 
local log but will respond without awaiting full acknowledgement from all 
followers. In this case should the leader fail immediately after acknowledging 
the record but before the followers have replicated it then the record will be lost. <li><code>acks=all</code> This means the leader will wait for the 
full set of in-sync replicas to acknowledge the record. This guarantees that 
the record will not be lost as long as at least one in-sync replica remains 
alive. This is the strongest available guarantee. This is equivalent to the 
acks=-1 setting.</ul></td><td>string</td><td>1</td><td>[all, -1, 0, 
1]</td><td>high</td></tr>
+<tr>
+<td>buffer.memory</td><td>The total bytes of memory the producer can use to 
buffer records waiting to be sent to the server. If records are sent faster 
than they can be delivered to the server the producer will block for 
<code>max.block.ms</code> after which it will throw an exception.<p>This 
setting should correspond roughly to the total memory the producer will use, 
but is not a hard bound since not all memory the producer uses is used for 
buffering. Some additional memory will be used for compression (if compression 
is enabled) as well as for maintaining in-flight 
requests.</td><td>long</td><td>33554432</td><td>[0,...]</td><td>high</td></tr>
+<tr>
+<td>compression.type</td><td>The compression type for all data generated by 
the producer. The default is none (i.e. no compression). Valid  values are 
<code>none</code>, <code>gzip</code>, <code>snappy</code>, or <code>lz4</code>. 
Compression is of full batches of data, so the efficacy of batching will also 
impact the compression ratio (more batching means better 
compression).</td><td>string</td><td>none</td><td></td><td>high</td></tr>
+<tr>
+<td>retries</td><td>Setting a value greater than zero will cause the client to 
resend any record whose send fails with a potentially transient error. Note 
that this retry is no different than if the client resent the record upon 
receiving the error. Allowing retries without setting 
<code>max.in.flight.requests.per.connection</code> to 1 will potentially change 
the ordering of records because if two batches are sent to a single partition, 
and the first fails and is retried but the second succeeds, then the records in 
the second batch may appear 
first.</td><td>int</td><td>0</td><td>[0,...,2147483647]</td><td>high</td></tr>
+<tr>
+<td>ssl.key.password</td><td>The password of the private key in the key store file. This is optional for the client.</td><td>password</td><td>null</td><td></td><td>high</td></tr>
+<tr>
+<td>ssl.keystore.location</td><td>The location of the key store file. This is optional for the client and can be used for two-way authentication for the client.</td><td>string</td><td>null</td><td></td><td>high</td></tr>
+<tr>
+<td>ssl.keystore.password</td><td>The store password for the key store file. This is optional for the client and only needed if ssl.keystore.location is configured.</td><td>password</td><td>null</td><td></td><td>high</td></tr>
+<tr>
+<td>ssl.truststore.location</td><td>The location of the trust store file. 
</td><td>string</td><td>null</td><td></td><td>high</td></tr>
+<tr>
+<td>ssl.truststore.password</td><td>The password for the trust store file. 
</td><td>password</td><td>null</td><td></td><td>high</td></tr>
+<tr>
+<td>batch.size</td><td>The producer will attempt to batch records together 
into fewer requests whenever multiple records are being sent to the same 
partition. This helps performance on both the client and the server. This 
configuration controls the default batch size in bytes. <p>No attempt will be 
made to batch records larger than this size. <p>Requests sent to brokers will 
contain multiple batches, one for each partition with data available to be 
sent. <p>A small batch size will make batching less common and may reduce 
throughput (a batch size of zero will disable batching entirely). A very large 
batch size may use memory a bit more wastefully as we will always allocate a 
buffer of the specified batch size in anticipation of additional 
records.</td><td>int</td><td>16384</td><td>[0,...]</td><td>medium</td></tr>
+<tr>
+<td>client.id</td><td>An id string to pass to the server when making requests. 
The purpose of this is to be able to track the source of requests beyond just 
ip/port by allowing a logical application name to be included in server-side 
request logging.</td><td>string</td><td>""</td><td></td><td>medium</td></tr>
+<tr>
+<td>connections.max.idle.ms</td><td>Close idle connections after the number of 
milliseconds specified by this 
config.</td><td>long</td><td>540000</td><td></td><td>medium</td></tr>
+<tr>
+<td>linger.ms</td><td>The producer groups together any records that arrive in 
between request transmissions into a single batched request. Normally this 
occurs only under load when records arrive faster than they can be sent out. 
However in some circumstances the client may want to reduce the number of 
requests even under moderate load. This setting accomplishes this by adding a 
small amount of artificial delay&mdash;that is, rather than immediately sending 
out a record the producer will wait for up to the given delay to allow other 
records to be sent so that the sends can be batched together. This can be 
thought of as analogous to Nagle's algorithm in TCP. This setting gives the 
upper bound on the delay for batching: once we get <code>batch.size</code> 
worth of records for a partition it will be sent immediately regardless of this 
setting; however, if we have fewer than this many bytes accumulated for this partition we will 'linger' for the specified time waiting for more records to show up. This setting defaults to 0 (i.e. no delay). Setting <code>linger.ms=5</code>, for example, would have the effect of reducing the number of requests sent but would add up to 5ms of latency to records sent in the absence of load.</td><td>long</td><td>0</td><td>[0,...]</td><td>medium</td></tr>
+<tr>
+<td>max.block.ms</td><td>The configuration controls how long <code>KafkaProducer.send()</code> and <code>KafkaProducer.partitionsFor()</code> will block. These methods can be blocked either because the buffer is full or because metadata is unavailable. Blocking in the user-supplied serializers or partitioner will not be counted against this timeout.</td><td>long</td><td>60000</td><td>[0,...]</td><td>medium</td></tr>
+<tr>
+<td>max.request.size</td><td>The maximum size of a request in bytes. This is 
also effectively a cap on the maximum record size. Note that the server has its 
own cap on record size which may be different from this. This setting will 
limit the number of record batches the producer will send in a single request 
to avoid sending huge 
requests.</td><td>int</td><td>1048576</td><td>[0,...]</td><td>medium</td></tr>
+<tr>
+<td>partitioner.class</td><td>Partitioner class that implements the 
<code>Partitioner</code> 
interface.</td><td>class</td><td>org.apache.kafka.clients.producer.internals.DefaultPartitioner</td><td></td><td>medium</td></tr>
+<tr>
+<td>receive.buffer.bytes</td><td>The size of the TCP receive buffer 
(SO_RCVBUF) to use when reading data. If the value is -1, the OS default will 
be used.</td><td>int</td><td>32768</td><td>[-1,...]</td><td>medium</td></tr>
+<tr>
+<td>request.timeout.ms</td><td>The configuration controls the maximum amount 
of time the client will wait for the response of a request. If the response is 
not received before the timeout elapses the client will resend the request if 
necessary or fail the request if retries are exhausted. This should be larger 
than replica.lag.time.max.ms (a broker configuration) to reduce the possibility 
of message duplication due to unnecessary producer 
retries.</td><td>int</td><td>30000</td><td>[0,...]</td><td>medium</td></tr>
+<tr>
+<td>sasl.jaas.config</td><td>JAAS login context parameters for SASL connections in the format used by JAAS configuration files. The JAAS configuration file format is described <a href="http://docs.oracle.com/javase/8/docs/technotes/guides/security/jgss/tutorials/LoginConfigFile.html">here</a>. The format for the value is: '&lt;loginModuleClass&gt; &lt;controlFlag&gt; (&lt;optionName&gt;=&lt;optionValue&gt;)*;'</td><td>password</td><td>null</td><td></td><td>medium</td></tr>
+<tr>
+<td>sasl.kerberos.service.name</td><td>The Kerberos principal name that Kafka 
runs as. This can be defined either in Kafka's JAAS config or in Kafka's 
config.</td><td>string</td><td>null</td><td></td><td>medium</td></tr>
+<tr>
+<td>sasl.mechanism</td><td>SASL mechanism used for client connections. This 
may be any mechanism for which a security provider is available. GSSAPI is the 
default 
mechanism.</td><td>string</td><td>GSSAPI</td><td></td><td>medium</td></tr>
+<tr>
+<td>security.protocol</td><td>Protocol used to communicate with brokers. Valid 
values are: PLAINTEXT, SSL, SASL_PLAINTEXT, 
SASL_SSL.</td><td>string</td><td>PLAINTEXT</td><td></td><td>medium</td></tr>
+<tr>
+<td>send.buffer.bytes</td><td>The size of the TCP send buffer (SO_SNDBUF) to 
use when sending data. If the value is -1, the OS default will be 
used.</td><td>int</td><td>131072</td><td>[-1,...]</td><td>medium</td></tr>
+<tr>
+<td>ssl.enabled.protocols</td><td>The list of protocols enabled for SSL 
connections.</td><td>list</td><td>TLSv1.2,TLSv1.1,TLSv1</td><td></td><td>medium</td></tr>
+<tr>
+<td>ssl.keystore.type</td><td>The file format of the key store file. This is optional for the client.</td><td>string</td><td>JKS</td><td></td><td>medium</td></tr>
+<tr>
+<td>ssl.protocol</td><td>The SSL protocol used to generate the SSLContext. 
Default setting is TLS, which is fine for most cases. Allowed values in recent 
JVMs are TLS, TLSv1.1 and TLSv1.2. SSL, SSLv2 and SSLv3 may be supported in 
older JVMs, but their usage is discouraged due to known security 
vulnerabilities.</td><td>string</td><td>TLS</td><td></td><td>medium</td></tr>
+<tr>
+<td>ssl.provider</td><td>The name of the security provider used for SSL 
connections. Default value is the default security provider of the 
JVM.</td><td>string</td><td>null</td><td></td><td>medium</td></tr>
+<tr>
+<td>ssl.truststore.type</td><td>The file format of the trust store 
file.</td><td>string</td><td>JKS</td><td></td><td>medium</td></tr>
+<tr>
+<td>timeout.ms</td><td>The configuration controls the maximum amount of time 
the server will wait for acknowledgments from followers to meet the 
acknowledgment requirements the producer has specified with the 
<code>acks</code> configuration. If the requested number of acknowledgments are 
not met when the timeout elapses an error will be returned. This timeout is 
measured on the server side and does not include the network latency of the 
request.</td><td>int</td><td>30000</td><td>[0,...]</td><td>medium</td></tr>
+<tr>
+<td>block.on.buffer.full</td><td>When our memory buffer is exhausted we must 
either stop accepting new records (block) or throw errors. By default this 
setting is false and the producer will no longer throw a BufferExhaustedException 
but instead will use the <code>max.block.ms</code> value to block, after which 
it will throw a TimeoutException. Setting this property to true will set the 
<code>max.block.ms</code> to Long.MAX_VALUE. <em>Also if this property is set 
to true, parameter <code>metadata.fetch.timeout.ms</code> is no longer 
honored.</em><p>This parameter is deprecated and will be removed in a future 
release. Parameter <code>max.block.ms</code> should be used 
instead.</td><td>boolean</td><td>false</td><td></td><td>low</td></tr>
+<tr>
+<td>interceptor.classes</td><td>A list of classes to use as interceptors. 
Implementing the <code>ProducerInterceptor</code> interface allows you to 
intercept (and possibly mutate) the records received by the producer before 
they are published to the Kafka cluster. By default, there are no 
interceptors.</td><td>list</td><td>null</td><td></td><td>low</td></tr>
+<tr>
+<td>max.in.flight.requests.per.connection</td><td>The maximum number of 
unacknowledged requests the client will send on a single connection before 
blocking. Note that if this setting is set to be greater than 1 and there are 
failed sends, there is a risk of message re-ordering due to retries (i.e., if 
retries are 
enabled).</td><td>int</td><td>5</td><td>[1,...]</td><td>low</td></tr>
+<tr>
+<td>metadata.fetch.timeout.ms</td><td>The first time data is sent to a topic 
we must fetch metadata about that topic to know which servers host the topic's 
partitions. This config specifies the maximum time, in milliseconds, for this 
fetch to succeed before throwing an exception back to the 
client.</td><td>long</td><td>60000</td><td>[0,...]</td><td>low</td></tr>
+<tr>
+<td>metadata.max.age.ms</td><td>The period of time in milliseconds after which 
we force a refresh of metadata even if we haven't seen any partition leadership 
changes to proactively discover any new brokers or 
partitions.</td><td>long</td><td>300000</td><td>[0,...]</td><td>low</td></tr>
+<tr>
+<td>metric.reporters</td><td>A list of classes to use as metrics reporters. 
Implementing the <code>MetricsReporter</code> interface allows plugging in 
classes that will be notified of new metric creation. The JmxReporter is always 
included to register JMX 
statistics.</td><td>list</td><td>""</td><td></td><td>low</td></tr>
+<tr>
+<td>metrics.num.samples</td><td>The number of samples maintained to compute 
metrics.</td><td>int</td><td>2</td><td>[1,...]</td><td>low</td></tr>
+<tr>
+<td>metrics.sample.window.ms</td><td>The window of time a metrics sample is 
computed over.</td><td>long</td><td>30000</td><td>[0,...]</td><td>low</td></tr>
+<tr>
+<td>reconnect.backoff.ms</td><td>The amount of time to wait before attempting 
to reconnect to a given host. This avoids repeatedly connecting to a host in a 
tight loop. This backoff applies to all requests sent by the client to the 
broker.</td><td>long</td><td>50</td><td>[0,...]</td><td>low</td></tr>
+<tr>
+<td>retry.backoff.ms</td><td>The amount of time to wait before attempting to 
retry a failed request to a given topic partition. This avoids repeatedly 
sending requests in a tight loop under some failure 
scenarios.</td><td>long</td><td>100</td><td>[0,...]</td><td>low</td></tr>
+<tr>
+<td>sasl.kerberos.kinit.cmd</td><td>Kerberos kinit command 
path.</td><td>string</td><td>/usr/bin/kinit</td><td></td><td>low</td></tr>
+<tr>
+<td>sasl.kerberos.min.time.before.relogin</td><td>Login thread sleep time 
between refresh 
attempts.</td><td>long</td><td>60000</td><td></td><td>low</td></tr>
+<tr>
+<td>sasl.kerberos.ticket.renew.jitter</td><td>Percentage of random jitter 
added to the renewal 
time.</td><td>double</td><td>0.05</td><td></td><td>low</td></tr>
+<tr>
+<td>sasl.kerberos.ticket.renew.window.factor</td><td>Login thread will sleep 
until the specified window factor of time from last refresh to ticket's expiry 
has been reached, at which time it will try to renew the 
ticket.</td><td>double</td><td>0.8</td><td></td><td>low</td></tr>
+<tr>
+<td>ssl.cipher.suites</td><td>A list of cipher suites. This is a named 
combination of authentication, encryption, MAC and key exchange algorithm used 
to negotiate the security settings for a network connection using TLS or SSL 
network protocol. By default all the available cipher suites are 
supported.</td><td>list</td><td>null</td><td></td><td>low</td></tr>
+<tr>
+<td>ssl.endpoint.identification.algorithm</td><td>The endpoint identification algorithm used to validate the server hostname using the server certificate.</td><td>string</td><td>null</td><td></td><td>low</td></tr>
+<tr>
+<td>ssl.keymanager.algorithm</td><td>The algorithm used by key manager factory 
for SSL connections. Default value is the key manager factory algorithm 
configured for the Java Virtual 
Machine.</td><td>string</td><td>SunX509</td><td></td><td>low</td></tr>
+<tr>
+<td>ssl.secure.random.implementation</td><td>The SecureRandom PRNG 
implementation to use for SSL cryptography operations. 
</td><td>string</td><td>null</td><td></td><td>low</td></tr>
+<tr>
+<td>ssl.trustmanager.algorithm</td><td>The algorithm used by trust manager 
factory for SSL connections. Default value is the trust manager factory 
algorithm configured for the Java Virtual 
Machine.</td><td>string</td><td>PKIX</td><td></td><td>low</td></tr>
+</tbody></table>
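
As a quick illustration of how the settings above fit together, here is a minimal sketch of a producer wired up through the Java client's ProducerConfig constants. The broker addresses and topic name are placeholders, and the particular values are only examples, not recommendations.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class ProducerConfigExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        // bootstrap.servers: only the initial hosts used to discover the full cluster
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "host1:9092,host2:9092");
        // key.serializer / value.serializer: required, they have no defaults
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // acks=all waits for the full in-sync replica set (equivalent to acks=-1)
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        // retries > 0 resends on transient errors; keep
        // max.in.flight.requests.per.connection at 1 if ordering matters
        props.put(ProducerConfig.RETRIES_CONFIG, 3);
        props.put(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, 1);
        // linger.ms trades a little latency for better batching
        props.put(ProducerConfig.LINGER_MS_CONFIG, 5);

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("my-topic", "key", "value"));
        }
    }
}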

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/a7c3675d/0102/generated/protocol_api_keys.html
----------------------------------------------------------------------
diff --git a/0102/generated/protocol_api_keys.html 
b/0102/generated/protocol_api_keys.html
new file mode 100644
index 0000000..b5b743c
--- /dev/null
+++ b/0102/generated/protocol_api_keys.html
@@ -0,0 +1,47 @@
+<table class="data-table"><tbody>
+<tr><th>Name</th>
+<th>Key</th>
+</tr><tr>
+<td>Produce</td><td>0</td></tr>
+<tr>
+<td>Fetch</td><td>1</td></tr>
+<tr>
+<td>Offsets</td><td>2</td></tr>
+<tr>
+<td>Metadata</td><td>3</td></tr>
+<tr>
+<td>LeaderAndIsr</td><td>4</td></tr>
+<tr>
+<td>StopReplica</td><td>5</td></tr>
+<tr>
+<td>UpdateMetadata</td><td>6</td></tr>
+<tr>
+<td>ControlledShutdown</td><td>7</td></tr>
+<tr>
+<td>OffsetCommit</td><td>8</td></tr>
+<tr>
+<td>OffsetFetch</td><td>9</td></tr>
+<tr>
+<td>GroupCoordinator</td><td>10</td></tr>
+<tr>
+<td>JoinGroup</td><td>11</td></tr>
+<tr>
+<td>Heartbeat</td><td>12</td></tr>
+<tr>
+<td>LeaveGroup</td><td>13</td></tr>
+<tr>
+<td>SyncGroup</td><td>14</td></tr>
+<tr>
+<td>DescribeGroups</td><td>15</td></tr>
+<tr>
+<td>ListGroups</td><td>16</td></tr>
+<tr>
+<td>SaslHandshake</td><td>17</td></tr>
+<tr>
+<td>ApiVersions</td><td>18</td></tr>
+<tr>
+<td>CreateTopics</td><td>19</td></tr>
+<tr>
+<td>DeleteTopics</td><td>20</td></tr>
+</table>
+
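
Each of these keys is carried in the request header that prefixes every request on the wire (api_key INT16, api_version INT16, correlation_id INT32, client_id nullable string). What follows is only a sketch of encoding that header by hand for the ApiVersions request (key 18); real clients rely on the client library's own request serialization rather than building buffers manually.

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class RequestHeaderSketch {
    // Encodes a request header: api_key, api_version, correlation_id, client_id.
    static ByteBuffer encodeHeader(short apiKey, short apiVersion, int correlationId, String clientId) {
        byte[] id = clientId.getBytes(StandardCharsets.UTF_8);
        ByteBuffer buf = ByteBuffer.allocate(2 + 2 + 4 + 2 + id.length);
        buf.putShort(apiKey);            // e.g. 18 = ApiVersions, from the table above
        buf.putShort(apiVersion);        // version of the request schema
        buf.putInt(correlationId);       // echoed back in the response header
        buf.putShort((short) id.length); // client_id length prefix (-1 would mean null)
        buf.put(id);
        buf.flip();
        return buf;
    }

    public static void main(String[] args) {
        ByteBuffer header = encodeHeader((short) 18, (short) 0, 1, "example-client");
        System.out.println("header bytes: " + header.remaining());
    }
}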

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/a7c3675d/0102/generated/protocol_errors.html
----------------------------------------------------------------------
diff --git a/0102/generated/protocol_errors.html 
b/0102/generated/protocol_errors.html
new file mode 100644
index 0000000..bd81d6d
--- /dev/null
+++ b/0102/generated/protocol_errors.html
@@ -0,0 +1,54 @@
+<table class="data-table"><tbody>
+<tr><th>Error</th>
+<th>Code</th>
+<th>Retriable</th>
+<th>Description</th>
+</tr>
+<tr><td>UNKNOWN</td><td>-1</td><td>False</td><td>The server experienced an 
unexpected error when processing the request.</td></tr>
+<tr><td>NONE</td><td>0</td><td>False</td><td></td></tr>
+<tr><td>OFFSET_OUT_OF_RANGE</td><td>1</td><td>False</td><td>The requested 
offset is not within the range of offsets maintained by the server.</td></tr>
+<tr><td>CORRUPT_MESSAGE</td><td>2</td><td>True</td><td>This message has failed 
its CRC checksum, exceeds the valid size, or is otherwise corrupt.</td></tr>
+<tr><td>UNKNOWN_TOPIC_OR_PARTITION</td><td>3</td><td>True</td><td>This server 
does not host this topic-partition.</td></tr>
+<tr><td>INVALID_FETCH_SIZE</td><td>4</td><td>False</td><td>The requested fetch 
size is invalid.</td></tr>
+<tr><td>LEADER_NOT_AVAILABLE</td><td>5</td><td>True</td><td>There is no leader 
for this topic-partition as we are in the middle of a leadership 
election.</td></tr>
+<tr><td>NOT_LEADER_FOR_PARTITION</td><td>6</td><td>True</td><td>This server is 
not the leader for that topic-partition.</td></tr>
+<tr><td>REQUEST_TIMED_OUT</td><td>7</td><td>True</td><td>The request timed 
out.</td></tr>
+<tr><td>BROKER_NOT_AVAILABLE</td><td>8</td><td>False</td><td>The broker is not 
available.</td></tr>
+<tr><td>REPLICA_NOT_AVAILABLE</td><td>9</td><td>False</td><td>The replica is 
not available for the requested topic-partition.</td></tr>
+<tr><td>MESSAGE_TOO_LARGE</td><td>10</td><td>False</td><td>The request 
included a message larger than the max message size the server will 
accept.</td></tr>
+<tr><td>STALE_CONTROLLER_EPOCH</td><td>11</td><td>False</td><td>The controller 
moved to another broker.</td></tr>
+<tr><td>OFFSET_METADATA_TOO_LARGE</td><td>12</td><td>False</td><td>The 
metadata field of the offset request was too large.</td></tr>
+<tr><td>NETWORK_EXCEPTION</td><td>13</td><td>True</td><td>The server 
disconnected before a response was received.</td></tr>
+<tr><td>GROUP_LOAD_IN_PROGRESS</td><td>14</td><td>True</td><td>The coordinator 
is loading and hence can't process requests for this group.</td></tr>
+<tr><td>GROUP_COORDINATOR_NOT_AVAILABLE</td><td>15</td><td>True</td><td>The 
group coordinator is not available.</td></tr>
+<tr><td>NOT_COORDINATOR_FOR_GROUP</td><td>16</td><td>True</td><td>This is not 
the correct coordinator for this group.</td></tr>
+<tr><td>INVALID_TOPIC_EXCEPTION</td><td>17</td><td>False</td><td>The request 
attempted to perform an operation on an invalid topic.</td></tr>
+<tr><td>RECORD_LIST_TOO_LARGE</td><td>18</td><td>False</td><td>The request 
included a message batch larger than the configured segment size on the 
server.</td></tr>
+<tr><td>NOT_ENOUGH_REPLICAS</td><td>19</td><td>True</td><td>Messages are 
rejected since there are fewer in-sync replicas than required.</td></tr>
+<tr><td>NOT_ENOUGH_REPLICAS_AFTER_APPEND</td><td>20</td><td>True</td><td>Messages
 are written to the log, but to fewer in-sync replicas than required.</td></tr>
+<tr><td>INVALID_REQUIRED_ACKS</td><td>21</td><td>False</td><td>Produce request 
specified an invalid value for required acks.</td></tr>
+<tr><td>ILLEGAL_GENERATION</td><td>22</td><td>False</td><td>Specified group 
generation id is not valid.</td></tr>
+<tr><td>INCONSISTENT_GROUP_PROTOCOL</td><td>23</td><td>False</td><td>The group 
member's supported protocols are incompatible with those of existing 
members.</td></tr>
+<tr><td>INVALID_GROUP_ID</td><td>24</td><td>False</td><td>The configured 
groupId is invalid.</td></tr>
+<tr><td>UNKNOWN_MEMBER_ID</td><td>25</td><td>False</td><td>The coordinator is 
not aware of this member.</td></tr>
+<tr><td>INVALID_SESSION_TIMEOUT</td><td>26</td><td>False</td><td>The session 
timeout is not within the range allowed by the broker (as configured by 
group.min.session.timeout.ms and group.max.session.timeout.ms).</td></tr>
+<tr><td>REBALANCE_IN_PROGRESS</td><td>27</td><td>False</td><td>The group is 
rebalancing, so a rejoin is needed.</td></tr>
+<tr><td>INVALID_COMMIT_OFFSET_SIZE</td><td>28</td><td>False</td><td>The 
committing offset data size is not valid.</td></tr>
+<tr><td>TOPIC_AUTHORIZATION_FAILED</td><td>29</td><td>False</td><td>Not 
authorized to access topics: [Topic authorization failed.]</td></tr>
+<tr><td>GROUP_AUTHORIZATION_FAILED</td><td>30</td><td>False</td><td>Not 
authorized to access group: Group authorization failed.</td></tr>
+<tr><td>CLUSTER_AUTHORIZATION_FAILED</td><td>31</td><td>False</td><td>Cluster 
authorization failed.</td></tr>
+<tr><td>INVALID_TIMESTAMP</td><td>32</td><td>False</td><td>The timestamp of 
the message is out of acceptable range.</td></tr>
+<tr><td>UNSUPPORTED_SASL_MECHANISM</td><td>33</td><td>False</td><td>The broker 
does not support the requested SASL mechanism.</td></tr>
+<tr><td>ILLEGAL_SASL_STATE</td><td>34</td><td>False</td><td>Request is not 
valid given the current SASL state.</td></tr>
+<tr><td>UNSUPPORTED_VERSION</td><td>35</td><td>False</td><td>The version of 
API is not supported.</td></tr>
+<tr><td>TOPIC_ALREADY_EXISTS</td><td>36</td><td>False</td><td>Topic with this 
name already exists.</td></tr>
+<tr><td>INVALID_PARTITIONS</td><td>37</td><td>False</td><td>Number of 
partitions is invalid.</td></tr>
+<tr><td>INVALID_REPLICATION_FACTOR</td><td>38</td><td>False</td><td>Replication-factor
 is invalid.</td></tr>
+<tr><td>INVALID_REPLICA_ASSIGNMENT</td><td>39</td><td>False</td><td>Replica 
assignment is invalid.</td></tr>
+<tr><td>INVALID_CONFIG</td><td>40</td><td>False</td><td>Configuration is 
invalid.</td></tr>
+<tr><td>NOT_CONTROLLER</td><td>41</td><td>True</td><td>This is not the correct 
controller for this cluster.</td></tr>
+<tr><td>INVALID_REQUEST</td><td>42</td><td>False</td><td>This most likely 
occurs because of a request being malformed by the client library or the 
message was sent to an incompatible broker. See the broker logs for more 
details.</td></tr>
+<tr><td>UNSUPPORTED_FOR_MESSAGE_FORMAT</td><td>43</td><td>False</td><td>The 
message format version on the broker does not support the request.</td></tr>
+<tr><td>POLICY_VIOLATION</td><td>44</td><td>False</td><td>Request parameters 
do not satisfy the configured policy.</td></tr>
+</table>
+
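
On the Java clients these codes surface as exceptions, and the Retriable column corresponds roughly to subclasses of org.apache.kafka.common.errors.RetriableException. Below is a minimal sketch of telling the two apart in a producer send callback; the topic, key and value are placeholders.

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.errors.RetriableException;

public class ErrorHandlingSketch {
    static void sendWithLogging(KafkaProducer<String, String> producer) {
        ProducerRecord<String, String> record = new ProducerRecord<>("my-topic", "key", "value");
        producer.send(record, (metadata, exception) -> {
            if (exception == null) {
                System.out.println("acked at offset " + metadata.offset());
            } else if (exception instanceof RetriableException) {
                // e.g. NOT_LEADER_FOR_PARTITION, REQUEST_TIMED_OUT: the producer's own
                // retries (the retries config) may already cover these
                System.err.println("transient failure: " + exception);
            } else {
                // e.g. MESSAGE_TOO_LARGE, TOPIC_AUTHORIZATION_FAILED: retrying will not help
                System.err.println("fatal failure: " + exception);
            }
        });
    }
}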
