Repository: kafka
Updated Branches:
  refs/heads/0.10.0 f4dc90e9e -> addbaefa6


MINOR: Fix order of compression algorithms in upgrade note

Author: Ismael Juma <[email protected]>

Reviewers: Guozhang Wang <[email protected]>, Jun Rao <[email protected]>

Closes #1373 from ijuma/fix-producer-buffer-size-upgrade-note

(cherry picked from commit 84d17bdf220292dc9950566afe1de34b64be4746)
Signed-off-by: Ismael Juma <[email protected]>


Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/addbaefa
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/addbaefa
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/addbaefa

Branch: refs/heads/0.10.0
Commit: addbaefa6a254013853f05c938996919d5097da8
Parents: f4dc90e
Author: Ismael Juma <[email protected]>
Authored: Thu May 12 01:38:50 2016 +0100
Committer: Ismael Juma <[email protected]>
Committed: Thu May 12 01:39:06 2016 +0100

----------------------------------------------------------------------
 docs/upgrade.html | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/kafka/blob/addbaefa/docs/upgrade.html
----------------------------------------------------------------------
diff --git a/docs/upgrade.html b/docs/upgrade.html
index 3c98540..3e07ef8 100644
--- a/docs/upgrade.html
+++ b/docs/upgrade.html
@@ -91,7 +91,7 @@ work with 0.10.0.x brokers. Therefore, 0.9.0.0 clients should be upgraded to 0.9
 
 <ul>
     <li> Starting from Kafka 0.10.0.0, a new client library named <b>Kafka Streams</b> is available for stream processing on data stored in Kafka topics. This new client library only works with 0.10.x and upward versioned brokers due to message format changes mentioned above. For more information please read <a href="#streams_overview">this section</a>.</li>
-    <li> If compression with snappy or gzip is enabled, the new producer will use the compression scheme's default buffer size (this is already the case for LZ4) instead of 1 KB in order to improve the compression ratio. Note that the default buffer sizes for snappy, gzip and LZ4 are 0.5 KB, 32 KB and 64KB respectively. For the snappy case, a producer with 5000 partitions will require an additional 155 MB of JVM heap.</li>
+    <li> If compression with snappy or gzip is enabled, the new producer will use the compression scheme's default buffer size (this is already the case for LZ4) instead of 1 KB in order to improve the compression ratio. Note that the default buffer sizes for gzip, snappy and LZ4 are 0.5 KB, 32 KB and 64KB respectively. For the snappy case, a producer with 5000 partitions will require an additional 155 MB of JVM heap.</li>
     <li> The default value of the configuration parameter <code>receive.buffer.bytes</code> is now 64K for the new consumer.</li>
     <li> The new consumer now exposes the configuration parameter <code>exclude.internal.topics</code> to restrict internal topics (such as the consumer offsets topic) from accidentally being included in regular expression subscriptions. By default, it is enabled.</li>
     <li> The old Scala producer has been deprecated. Users should migrate their code to the Java producer included in the kafka-clients JAR as soon as possible. </li>
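As a rough sanity check (not part of the commit itself), the upgrade note's 155 MB figure for snappy follows from 5000 partitions each moving from the old fixed 1 KB buffer to snappy's 32 KB default; this sketch assumes the decimal convention 1 MB = 1000 KB, which is what makes the total come out to exactly 155:

```python
# Back-of-envelope check of the upgrade note's snappy heap estimate.
# All figures are taken from the note above; 1 MB = 1000 KB is assumed,
# since 5000 * 31 KB = 155,000 KB only equals 155 MB decimally.
OLD_BUFFER_KB = 1        # buffer size the producer previously used per partition
SNAPPY_DEFAULT_KB = 32   # snappy's default buffer size, per the note
PARTITIONS = 5000

extra_kb = (SNAPPY_DEFAULT_KB - OLD_BUFFER_KB) * PARTITIONS
extra_mb = extra_kb / 1000
print(extra_mb)  # 155.0
```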
