Repository: kafka
Updated Branches:
  refs/heads/trunk b0a4a57c5 -> d9ddc109f


KAFKA-3043: Replace request.required.acks with acks in docs.

In Kafka 0.9, the producer configuration request.required.acks=-1 was
replaced by acks=all, but the old config name remained in the docs.

Author: Sasaki Toru <[email protected]>

Reviewers: Gwen Shapira

Closes #716 from sasakitoa/acks_doc


Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/d9ddc109
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/d9ddc109
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/d9ddc109

Branch: refs/heads/trunk
Commit: d9ddc109fd360d5deb2894ad73cb9e12e2f8edbf
Parents: b0a4a57
Author: Sasaki Toru <[email protected]>
Authored: Sat Dec 26 23:14:31 2015 -0800
Committer: Gwen Shapira <[email protected]>
Committed: Sat Dec 26 23:14:31 2015 -0800

----------------------------------------------------------------------
 core/src/main/scala/kafka/server/KafkaConfig.scala | 2 +-
 docs/configuration.html                            | 4 ++--
 docs/design.html                                   | 6 +++---
 3 files changed, 6 insertions(+), 6 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/kafka/blob/d9ddc109/core/src/main/scala/kafka/server/KafkaConfig.scala
----------------------------------------------------------------------
diff --git a/core/src/main/scala/kafka/server/KafkaConfig.scala b/core/src/main/scala/kafka/server/KafkaConfig.scala
index 9d3150a..856742f 100755
--- a/core/src/main/scala/kafka/server/KafkaConfig.scala
+++ b/core/src/main/scala/kafka/server/KafkaConfig.scala
@@ -414,7 +414,7 @@ object KafkaConfig {
   val LogPreAllocateEnableDoc = "Should pre allocate file when create new segment? If you are using Kafka on Windows, you probably need to set it to true."
   val NumRecoveryThreadsPerDataDirDoc = "The number of threads per data directory to be used for log recovery at startup and flushing at shutdown"
   val AutoCreateTopicsEnableDoc = "Enable auto creation of topic on the server"
-  val MinInSyncReplicasDoc = "define the minimum number of replicas in ISR needed to satisfy a produce request with required.acks=-1 (or all)"
+  val MinInSyncReplicasDoc = "define the minimum number of replicas in ISR needed to satisfy a produce request with acks=all (or -1)"
   /** ********* Replication configuration ***********/
   val ControllerSocketTimeoutMsDoc = "The socket timeout for controller-to-broker channels"
   val ControllerMessageQueueSizeDoc = "The buffer size for controller-to-broker-channels"

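The corrected doc string refers to the producer-side `acks` setting. As a minimal sketch of what that looks like from the producer's point of view (plain `java.util.Properties`; the broker address is an illustrative placeholder, only the `acks` key comes from this patch):

```java
import java.util.Properties;

public class AcksConfig {
    // Assemble producer properties; "acks" is the new-producer name for
    // the old request.required.acks setting ("all" replaces -1).
    static Properties producerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder address
        props.put("acks", "all"); // wait for all in-sync replicas (was request.required.acks=-1)
        return props;
    }

    public static void main(String[] args) {
        System.out.println(producerProps().getProperty("acks"));
    }
}
```

Passing these properties to a real producer client would make each send wait for acknowledgement from the full in-sync replica set.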
http://git-wip-us.apache.org/repos/asf/kafka/blob/d9ddc109/docs/configuration.html
----------------------------------------------------------------------
diff --git a/docs/configuration.html b/docs/configuration.html
index 358431e..4b88f26 100644
--- a/docs/configuration.html
+++ b/docs/configuration.html
@@ -106,8 +106,8 @@ The following are the topic-level configurations. The server's default configura
       <td>min.insync.replicas</td>
       <td>1</td>
       <td>min.insync.replicas</td>
-      <td>When a producer sets request.required.acks to -1, min.insync.replicas specifies the minimum number of replicas that must acknowledge a write for the write to be considered successful. If this minimum cannot be met, then the producer will raise an exception (either NotEnoughReplicas or NotEnoughReplicasAfterAppend).
-      When used together, min.insync.replicas and request.required.acks allow you to enforce greater durability guarantees. A typical scenario would be to create a topic with a replication factor of 3, set min.insync.replicas to 2, and produce with request.required.acks of -1. This will ensure that the producer raises an exception if a majority of replicas do not receive a write.</td>
+      <td>When a producer sets acks to "all", min.insync.replicas specifies the minimum number of replicas that must acknowledge a write for the write to be considered successful. If this minimum cannot be met, then the producer will raise an exception (either NotEnoughReplicas or NotEnoughReplicasAfterAppend).
+      When used together, min.insync.replicas and acks allow you to enforce greater durability guarantees. A typical scenario would be to create a topic with a replication factor of 3, set min.insync.replicas to 2, and produce with acks of "all". This will ensure that the producer raises an exception if a majority of replicas do not receive a write.</td>
     </tr>
     <tr>
      <td>retention.bytes</td>

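The rule the updated table row describes can be sketched as a toy check (the helper name is hypothetical; the numbers match the "replication factor 3, min.insync.replicas=2, acks=all" scenario in the doc text):

```java
public class MinIsrCheck {
    // Hypothetical helper mirroring the documented rule: with acks=all, a
    // produce request fails (NotEnoughReplicas) when the current ISR is
    // smaller than the topic's min.insync.replicas.
    static boolean writeAccepted(int currentIsrSize, int minInsyncReplicas) {
        return currentIsrSize >= minInsyncReplicas;
    }

    public static void main(String[] args) {
        int minIsr = 2; // topic with replication factor 3, min.insync.replicas=2
        System.out.println(writeAccepted(3, minIsr)); // all replicas in sync -> true
        System.out.println(writeAccepted(2, minIsr)); // one replica down -> still true
        System.out.println(writeAccepted(1, minIsr)); // two replicas down -> false
    }
}
```

With these settings a write is only acknowledged once a majority of the three replicas have it, which is exactly the durability guarantee the doc describes.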
http://git-wip-us.apache.org/repos/asf/kafka/blob/d9ddc109/docs/design.html
----------------------------------------------------------------------
diff --git a/docs/design.html b/docs/design.html
index b2cd7ab..10e8f9d 100644
--- a/docs/design.html
+++ b/docs/design.html
@@ -200,7 +200,7 @@ We refer to nodes satisfying these two conditions as being "in sync" to avoid th
 <p>
 In distributed systems terminology we only attempt to handle a "fail/recover" model of failures where nodes suddenly cease working and then later recover (perhaps without knowing that they have died). Kafka does not handle so-called "Byzantine" failures in which nodes produce arbitrary or malicious responses (perhaps due to bugs or foul play).
 <p>
-A message is considered "committed" when all in sync replicas for that partition have applied it to their log. Only committed messages are ever given out to the consumer. This means that the consumer need not worry about potentially seeing a message that could be lost if the leader fails. Producers, on the other hand, have the option of either waiting for the message to be committed or not, depending on their preference for tradeoff between latency and durability. This preference is controlled by the request.required.acks setting that the producer uses.
+A message is considered "committed" when all in sync replicas for that partition have applied it to their log. Only committed messages are ever given out to the consumer. This means that the consumer need not worry about potentially seeing a message that could be lost if the leader fails. Producers, on the other hand, have the option of either waiting for the message to be committed or not, depending on their preference for tradeoff between latency and durability. This preference is controlled by the acks setting that the producer uses.
 <p>
 The guarantee that Kafka offers is that a committed message will not be lost, as long as there is at least one in sync replica alive, at all times.
 <p>
@@ -248,12 +248,12 @@ This dilemma is not specific to Kafka. It exists in any quorum-based scheme. For
 <h4><a id="design_ha" href="#design_ha">Availability and Durability Guarantees</a></h4>
 
 When writing to Kafka, producers can choose whether they wait for the message to be acknowledged by 0,1 or all (-1) replicas.
-Note that "acknowledgement by all replicas" does not guarantee that the full set of assigned replicas have received the message. By default, when request.required.acks=-1, acknowledgement happens as soon as all the current in-sync replicas have received the message. For example, if a topic is configured with only two replicas and one fails (i.e., only one in sync replica remains), then writes that specify request.required.acks=-1 will succeed. However, these writes could be lost if the remaining replica also fails.
+Note that "acknowledgement by all replicas" does not guarantee that the full set of assigned replicas have received the message. By default, when acks=all, acknowledgement happens as soon as all the current in-sync replicas have received the message. For example, if a topic is configured with only two replicas and one fails (i.e., only one in sync replica remains), then writes that specify acks=all will succeed. However, these writes could be lost if the remaining replica also fails.
 
 Although this ensures maximum availability of the partition, this behavior may be undesirable to some users who prefer durability over availability. Therefore, we provide two topic-level configurations that can be used to prefer message durability over availability:
 <ol>
     <li> Disable unclean leader election - if all replicas become unavailable, then the partition will remain unavailable until the most recent leader becomes available again. This effectively prefers unavailability over the risk of message loss. See the previous section on Unclean Leader Election for clarification. </li>
-     <li> Specify a minimum ISR size - the partition will only accept writes if the size of the ISR is above a certain minimum, in order to prevent the loss of messages that were written to just a single replica, which subsequently becomes unavailable. This setting only takes effect if the producer uses required.acks=-1 and guarantees that the message will be acknowledged by at least this many in-sync replicas.
+     <li> Specify a minimum ISR size - the partition will only accept writes if the size of the ISR is above a certain minimum, in order to prevent the loss of messages that were written to just a single replica, which subsequently becomes unavailable. This setting only takes effect if the producer uses acks=all and guarantees that the message will be acknowledged by at least this many in-sync replicas.
This setting offers a trade-off between consistency and availability. A higher setting for minimum ISR size guarantees better consistency since the message is guaranteed to be written to more replicas which reduces the probability that it will be lost. However, it reduces availability since the partition will be unavailable for writes if the number of in-sync replicas drops below the minimum threshold. </li>
 </ol>
 

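The "acknowledgement by all replicas means all *current* in-sync replicas" caveat from the design doc can be modelled in a few lines (broker names and the helper are illustrative, not Kafka APIs):

```java
import java.util.Set;

public class IsrAckModel {
    // Toy model of the documented rule: with acks=all, a write is
    // acknowledged once every member of the *current* ISR has it, which
    // may be fewer replicas than the full assigned set.
    static boolean acknowledged(Set<String> currentIsr, Set<String> replicasWithMessage) {
        return replicasWithMessage.containsAll(currentIsr);
    }

    public static void main(String[] args) {
        Set<String> assigned = Set.of("broker-1", "broker-2"); // replication factor 2
        Set<String> isr = Set.of("broker-1");  // broker-2 failed, ISR shrank to one
        Set<String> got = Set.of("broker-1");  // only broker-1 has the write
        // acks=all still succeeds, though only one of two assigned replicas has the message
        System.out.println(acknowledged(isr, got));    // true
        System.out.println(got.containsAll(assigned)); // false
    }
}
```

This is why the doc pairs acks=all with min.insync.replicas: the minimum ISR size bounds how far "all replicas" can shrink before writes are rejected.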