Repository: kafka
Updated Branches:
  refs/heads/trunk 9b0ddc555 -> b609645dc


trivial change to 0.9.0 docs to fix outdated ConsumerMetadataRequest


Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/b609645d
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/b609645d
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/b609645d

Branch: refs/heads/trunk
Commit: b609645dc4813de7ca5c6366e940831f6e7c1177
Parents: 9b0ddc5
Author: Jun Rao <[email protected]>
Authored: Fri Nov 20 13:26:40 2015 -0800
Committer: Jun Rao <[email protected]>
Committed: Fri Nov 20 13:26:40 2015 -0800

----------------------------------------------------------------------
 docs/implementation.html | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/kafka/blob/b609645d/docs/implementation.html
----------------------------------------------------------------------
diff --git a/docs/implementation.html b/docs/implementation.html
index 0b603d4..9ae7d4e 100644
--- a/docs/implementation.html
+++ b/docs/implementation.html
@@ -243,7 +243,7 @@ Note that two kinds of corruption must be handled: truncation in which an unwrit
 <h3><a id="distributionimpl" href="#distributionimpl">5.6 Distribution</a></h3>
 <h4><a id="impl_offsettracking" href="#impl_offsettracking">Consumer Offset Tracking</a></h4>
 <p>
-The high-level consumer tracks the maximum offset it has consumed in each partition and periodically commits its offset vector so that it can resume from those offsets in the event of a restart. Kafka provides the option to store all the offsets for a given consumer group in a designated broker (for that group) called the <i>offset manager</i>. i.e., any consumer instance in that consumer group should send its offset commits and fetches to that offset manager (broker). The high-level consumer handles this automatically. If you use the simple consumer you will need to manage offsets manually. This is currently unsupported in the Java simple consumer which can only commit or fetch offsets in ZooKeeper. If you use the Scala simple consumer you can discover the offset manager and explicitly commit or fetch offsets to the offset manager. A consumer can look up its offset manager by issuing a ConsumerMetadataRequest to any Kafka broker and reading the ConsumerMetadataResponse which will contain the offset manager. The consumer can then proceed to commit or fetch offsets from the offsets manager broker. In case the offset manager moves, the consumer will need to rediscover the offset manager. If you wish to manage your offsets manually, you can take a look at these <a href="https://cwiki.apache.org/confluence/display/KAFKA/Committing+and+fetching+consumer+offsets+in+Kafka">code samples that explain how to issue OffsetCommitRequest and OffsetFetchRequest</a>.
+The high-level consumer tracks the maximum offset it has consumed in each partition and periodically commits its offset vector so that it can resume from those offsets in the event of a restart. Kafka provides the option to store all the offsets for a given consumer group in a designated broker (for that group) called the <i>offset manager</i>. i.e., any consumer instance in that consumer group should send its offset commits and fetches to that offset manager (broker). The high-level consumer handles this automatically. If you use the simple consumer you will need to manage offsets manually. This is currently unsupported in the Java simple consumer which can only commit or fetch offsets in ZooKeeper. If you use the Scala simple consumer you can discover the offset manager and explicitly commit or fetch offsets to the offset manager. A consumer can look up its offset manager by issuing a GroupCoordinatorRequest to any Kafka broker and reading the GroupCoordinatorResponse which will contain the offset manager. The consumer can then proceed to commit or fetch offsets from the offsets manager broker. In case the offset manager moves, the consumer will need to rediscover the offset manager. If you wish to manage your offsets manually, you can take a look at these <a href="https://cwiki.apache.org/confluence/display/KAFKA/Committing+and+fetching+consumer+offsets+in+Kafka">code samples that explain how to issue OffsetCommitRequest and OffsetFetchRequest</a>.
 </p>
 
 <p>

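For context on the renamed request: discovering the offset manager means sending a GroupCoordinatorRequest (the Kafka protocol API formerly named ConsumerMetadataRequest, API key 10) to any broker. Below is a minimal sketch of constructing that request's wire bytes by hand, assuming the version-0 request header layout (int32 size prefix, int16 API key, int16 API version, int32 correlation id, client-id string); the function and identifier names here are illustrative, not part of any Kafka client library.

```python
import struct

def encode_string(s):
    # Kafka protocol "string": int16 length followed by UTF-8 bytes.
    b = s.encode("utf-8")
    return struct.pack(">h", len(b)) + b

def group_coordinator_request(group_id, correlation_id=1, client_id="doc-example"):
    # GroupCoordinatorRequest, API key 10, version 0 (assumed 0.9.0 wire format).
    api_key, api_version = 10, 0
    header = struct.pack(">hhi", api_key, api_version, correlation_id) + encode_string(client_id)
    body = encode_string(group_id)  # request body is just the group id
    payload = header + body
    # Every request is framed with an int32 size field (payload length, excluding itself).
    return struct.pack(">i", len(payload)) + payload

msg = group_coordinator_request("my-group")
```

The broker's GroupCoordinatorResponse names the host and port of the offset manager for that group, after which the consumer sends its OffsetCommitRequest and OffsetFetchRequest messages to that broker directly.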