Repository: kafka-site
Updated Branches:
  refs/heads/asf-site 0f672e80e -> 98c07f419


Fix a few typos in the generated configs


Project: http://git-wip-us.apache.org/repos/asf/kafka-site/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka-site/commit/98c07f41
Tree: http://git-wip-us.apache.org/repos/asf/kafka-site/tree/98c07f41
Diff: http://git-wip-us.apache.org/repos/asf/kafka-site/diff/98c07f41

Branch: refs/heads/asf-site
Commit: 98c07f419fdb34bfb919626192103a7b37675786
Parents: 0f672e8
Author: Ismael Juma <[email protected]>
Authored: Wed Jun 28 11:25:15 2017 +0100
Committer: Ismael Juma <[email protected]>
Committed: Wed Jun 28 11:25:15 2017 +0100

----------------------------------------------------------------------
 0110/generated/consumer_config.html | 2 +-
 0110/generated/kafka_config.html    | 2 +-
 0110/generated/topic_config.html    | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/kafka-site/blob/98c07f41/0110/generated/consumer_config.html
----------------------------------------------------------------------
diff --git a/0110/generated/consumer_config.html 
b/0110/generated/consumer_config.html
index 20cbce8..6a7fa2a 100644
--- a/0110/generated/consumer_config.html
+++ b/0110/generated/consumer_config.html
@@ -42,7 +42,7 @@
 <tr>
 <td>exclude.internal.topics</td><td>Whether records from internal topics (such 
as offsets) should be exposed to the consumer. If set to <code>true</code> the 
only way to receive records from an internal topic is subscribing to 
it.</td><td>boolean</td><td>true</td><td></td><td>medium</td></tr>
 <tr>
-<td>fetch.max.bytes</td><td>The maximum amount of data the server should 
return for a fetch request. Records are fetched in batches by the consumer, and 
if the first record batch in the first non-empty partition of the fetch is 
larger than this value, the record batch will still be returned to ensure that 
the consumer can make progress. As such, this is not a absolute maximum.The 
maximum record batch size accepted by the broker is defined via 
<code>message.max.bytes</code> (broker config) or 
<code>max.message.bytes</code> (topic config). Note that the consumer performs 
multiple fetches in 
parallel.</td><td>int</td><td>52428800</td><td>[0,...]</td><td>medium</td></tr>
+<td>fetch.max.bytes</td><td>The maximum amount of data the server should 
return for a fetch request. Records are fetched in batches by the consumer, and 
if the first record batch in the first non-empty partition of the fetch is 
larger than this value, the record batch will still be returned to ensure that 
the consumer can make progress. As such, this is not an absolute maximum. The 
maximum record batch size accepted by the broker is defined via 
<code>message.max.bytes</code> (broker config) or 
<code>max.message.bytes</code> (topic config). Note that the consumer performs 
multiple fetches in 
parallel.</td><td>int</td><td>52428800</td><td>[0,...]</td><td>medium</td></tr>
 <tr>
 <td>isolation.level</td><td><p>Controls how to read messages written 
transactionally. If set to <code>read_committed</code>, consumer.poll() will 
only return transactional messages which have been committed. If set to 
<code>read_uncommitted</code> (the default), consumer.poll() will return all 
messages, even transactional messages which have been aborted. 
Non-transactional messages will be returned unconditionally in either mode.</p> 
<p>Messages will always be returned in offset order. Hence, in  
<code>read_committed</code> mode, consumer.poll() will only return messages up 
to the last stable offset (LSO), which is the one less than the offset of the 
first open transaction. In particular any messages appearing after messages 
belonging to ongoing transactions will be withheld until the relevant 
transaction has been completed. As a result, <code>read_committed</code> 
consumers will not be able to read up to the high watermark when there are in 
 flight transactions.</p><p> Further, when in <code>read_committed</code> mode, 
the seekToEnd method will return the 
LSO.</p></td><td>string</td><td>read_uncommitted</td><td>[read_committed, 
read_uncommitted]</td><td>medium</td></tr>
 <tr>

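For context, the two consumer settings touched in this hunk could appear in a
consumer configuration file roughly as follows (a sketch; the values shown are
the defaults from the table above, not recommendations):

```properties
# Upper bound on data returned per fetch request (default 52428800 bytes).
# Not an absolute maximum: the first record batch in the first non-empty
# partition is returned even if it exceeds this value, so the consumer can
# always make progress.
fetch.max.bytes=52428800

# With read_committed, consumer.poll() only returns transactional messages
# that have been committed, up to the last stable offset (LSO).
isolation.level=read_committed
```
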
http://git-wip-us.apache.org/repos/asf/kafka-site/blob/98c07f41/0110/generated/kafka_config.html
----------------------------------------------------------------------
diff --git a/0110/generated/kafka_config.html b/0110/generated/kafka_config.html
index 8ea1021..a6be430 100644
--- a/0110/generated/kafka_config.html
+++ b/0110/generated/kafka_config.html
@@ -79,7 +79,7 @@ hostname of broker. If this is set, it will only bind to this 
address. If this i
 <tr>
 <td>log.segment.delete.delay.ms</td><td>The amount of time to wait before 
deleting a file from the 
filesystem</td><td>long</td><td>60000</td><td>[0,...]</td><td>high</td></tr>
 <tr>
-<td>message.max.bytes</td><td><p>The largest record batch size allowed by 
Kafka. If this is increased and there are consumers older than 0.10.2, the 
consumers' fetch size must also be increased so that the they can fetch record 
batches this large.</p><p>In the latest message format version, records are 
always grouped into batches for efficiency. In previous message format 
versions, uncompressed records are not grouped into batches and this limit only 
applies to asingle record in that case.</p><p>This can be set per topic with 
the topic level <code>max.message.bytes</code> 
config.</p></td><td>int</td><td>1000012</td><td>[0,...]</td><td>high</td></tr>
+<td>message.max.bytes</td><td><p>The largest record batch size allowed by 
Kafka. If this is increased and there are consumers older than 0.10.2, the 
consumers' fetch size must also be increased so that they can fetch record 
batches this large.</p><p>In the latest message format version, records are 
always grouped into batches for efficiency. In previous message format 
versions, uncompressed records are not grouped into batches and this limit only 
applies to a single record in that case.</p><p>This can be set per topic with 
the topic level <code>max.message.bytes</code> 
config.</p></td><td>int</td><td>1000012</td><td>[0,...]</td><td>high</td></tr>
 <tr>
 <td>min.insync.replicas</td><td>When a producer sets acks to "all" (or "-1"), 
min.insync.replicas specifies the minimum number of replicas that must 
acknowledge a write for the write to be considered successful. If this minimum 
cannot be met, then the producer will raise an exception (either 
NotEnoughReplicas or NotEnoughReplicasAfterAppend).<br>When used together, 
min.insync.replicas and acks allow you to enforce greater durability 
guarantees. A typical scenario would be to create a topic with a replication 
factor of 3, set min.insync.replicas to 2, and produce with acks of "all". This 
will ensure that the producer raises an exception if a majority of replicas do 
not receive a 
write.</td><td>int</td><td>1</td><td>[1,...]</td><td>high</td></tr>
 <tr>

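For reference, the broker settings described above might be combined in
server.properties along these lines (illustrative values following the
replication-factor-3 scenario from the min.insync.replicas description):

```properties
# Largest record batch size the broker accepts (default 1000012 bytes).
# Consumers older than 0.10.2 must also raise their fetch size to match
# if this is increased.
message.max.bytes=1000012

# With acks=all on the producer, at least this many replicas must
# acknowledge a write before it is considered successful; e.g. a topic
# with replication factor 3 and min.insync.replicas=2 tolerates one
# replica being down without failing produces.
min.insync.replicas=2
```
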
http://git-wip-us.apache.org/repos/asf/kafka-site/blob/98c07f41/0110/generated/topic_config.html
----------------------------------------------------------------------
diff --git a/0110/generated/topic_config.html b/0110/generated/topic_config.html
index ebf03a1..38587fc 100644
--- a/0110/generated/topic_config.html
+++ b/0110/generated/topic_config.html
@@ -27,7 +27,7 @@
 <tr>
 <td>leader.replication.throttled.replicas</td><td>A list of replicas for which 
log replication should be throttled on the leader side. The list should 
describe a set of replicas in the form 
[PartitionId]:[BrokerId],[PartitionId]:[BrokerId]:... or alternatively the 
wildcard '*' can be used to throttle all replicas for this 
topic.</td><td>list</td><td>""</td><td>kafka.server.ThrottledReplicaListValidator$@52feb982</td><td>leader.replication.throttled.replicas</td><td>medium</td></tr>
 <tr>
-<td>max.message.bytes</td><td><p>The largest record batch size allowed by 
Kafka. If this is increased and there are consumers older than 0.10.2, the 
consumers' fetch size must also be increased so that the they can fetch record 
batches this large.</p><p>In the latest message format version, records are 
always grouped into batches for efficiency. In previous message format 
versions, uncompressed records are not grouped into batches and this limit only 
applies to asingle record in that 
case.</p></td><td>int</td><td>1000012</td><td>[0,...]</td><td>message.max.bytes</td><td>medium</td></tr>
+<td>max.message.bytes</td><td><p>The largest record batch size allowed by 
Kafka. If this is increased and there are consumers older than 0.10.2, the 
consumers' fetch size must also be increased so that they can fetch record 
batches this large.</p><p>In the latest message format version, records are 
always grouped into batches for efficiency. In previous message format 
versions, uncompressed records are not grouped into batches and this limit only 
applies to a single record in that 
case.</p></td><td>int</td><td>1000012</td><td>[0,...]</td><td>message.max.bytes</td><td>medium</td></tr>
 <tr>
 <td>message.format.version</td><td>Specify the message format version the 
broker will use to append messages to the logs. The value should be a valid 
ApiVersion. Some examples are: 0.8.2, 0.9.0.0, 0.10.0, check ApiVersion for 
more details. By setting a particular message format version, the user is 
certifying that all the existing messages on disk are smaller or equal than the 
specified version. Setting this value incorrectly will cause consumers with 
older versions to break as they will receive messages with a format that they 
don't 
understand.</td><td>string</td><td>0.11.0-IV2</td><td></td><td>log.message.format.version</td><td>medium</td></tr>
 <tr>

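The topic-level overrides in this table can be supplied at topic creation; a
sketch using the kafka-topics.sh tool shipped with the broker (the topic name
and sizing here are hypothetical, and the exact flags should be checked against
the tool's --help for your version):

```
# Create a topic with a per-topic record batch limit and a pinned message
# format version (topic-level overrides of message.max.bytes and
# log.message.format.version).
bin/kafka-topics.sh --zookeeper localhost:2181 --create \
  --topic my-topic --partitions 3 --replication-factor 3 \
  --config max.message.bytes=1000012 \
  --config message.format.version=0.11.0-IV2
```
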