Github user hmcl commented on a diff in the pull request:
https://github.com/apache/storm/pull/2380#discussion_r146996683
--- Diff: docs/storm-kafka-client.md ---
@@ -298,25 +298,44 @@ Currently the Kafka spout has has the following default values, which have been
* max.uncommitted.offsets = 10000000
<br/>
-# Messaging reliability modes
+# Processing Guarantees
-In some cases you may not need or want the spout to guarantee at-least-once processing of messages. The spout also supports at-most-once and any-times modes. At-most-once guarantees that any tuple emitted to the topology will never be reemitted. Any-times makes no guarantees, but may reduce the overhead of committing offsets to Kafka in cases where you truly don't care how many times a message is processed.
+The `KafkaSpoutConfig.ProcessingGuarantee` enum parameter controls when the tuple with the `ConsumerRecord` for an offset is marked
--- End diff ---
Well, this is tricky because Storm does not process offsets; Storm processes tuples. More precisely, it processes tuples that contain `ConsumerRecord`s. The offset is just one part of the `ConsumerRecord`, which also contains the key, value, etc. We commit the offset, but by committing the offset we are effectively marking the tuple as processed, because even if the tuple fails it won't be retried (processed again).
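
For context, a minimal usage sketch of how a user would select the guarantee once this change lands, assuming the `Builder.setProcessingGuarantee` method and the enum constant names proposed in this PR (`AT_LEAST_ONCE`, `AT_MOST_ONCE`, `NO_GUARANTEE`); the broker address and topic name below are placeholders:

```java
import org.apache.storm.kafka.spout.KafkaSpout;
import org.apache.storm.kafka.spout.KafkaSpoutConfig;
import org.apache.storm.topology.TopologyBuilder;

public class ProcessingGuaranteeExample {
    public static void main(String[] args) {
        // Assumed builder API and enum names from this PR; placeholder broker/topic.
        KafkaSpoutConfig<String, String> spoutConfig = KafkaSpoutConfig
            .builder("kafka-broker:9092", "my-topic")
            // At-most-once: the offset is committed before the tuple is processed,
            // so a failed tuple is never re-emitted (never "processed" again).
            .setProcessingGuarantee(KafkaSpoutConfig.ProcessingGuarantee.AT_MOST_ONCE)
            .build();

        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("kafka-spout", new KafkaSpout<>(spoutConfig), 1);
    }
}
```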
---