JimGalasyn commented on a change in pull request #8621:
URL: https://github.com/apache/kafka/pull/8621#discussion_r422309799



##########
File path: docs/streams/core-concepts.html
##########
@@ -206,17 +206,26 @@ <h2><a id="streams_processing_guarantee" href="#streams_processing_guarantee">Pr
         to the stream processing pipeline, known as the <a href="http://lambda-architecture.net/">Lambda Architecture</a>.
         Prior to 0.11.0.0, Kafka only provides at-least-once delivery guarantees and hence any stream processing systems that leverage it as the backend storage could not guarantee end-to-end exactly-once semantics.
         In fact, even for those stream processing systems that claim to support exactly-once processing, as long as they are reading from / writing to Kafka as the source / sink, their applications cannot actually guarantee that
-        no duplicates will be generated throughout the pipeline.
+        no duplicates will be generated throughout the pipeline.<br />
 
         Since the 0.11.0.0 release, Kafka has added support to allow its producers to send messages to different topic partitions in a <a href="https://kafka.apache.org/documentation/#semantics">transactional and idempotent manner</a>,
         and Kafka Streams has hence added the end-to-end exactly-once processing semantics by leveraging these features.
         More specifically, it guarantees that for any record read from the source Kafka topics, its processing results will be reflected exactly once in the output Kafka topic as well as in the state stores for stateful operations.
         Note the key difference between Kafka Streams end-to-end exactly-once guarantee with other stream processing frameworks' claimed guarantees is that Kafka Streams tightly integrates with the underlying Kafka storage system and ensure that
         commits on the input topic offsets, updates on the state stores, and writes to the output topics will be completed atomically instead of treating Kafka as an external system that may have side-effects.
-        To read more details on how this is done inside Kafka Streams, readers are recommended to read <a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-129%3A+Streams+Exactly-Once+Semantics">KIP-129</a>.
-
-        In order to achieve exactly-once semantics when running Kafka Streams applications, users can simply set the <code>processing.guarantee</code> config value to <b>exactly_once</b> (default value is <b>at_least_once</b>).
-        More details can be found in the <a href="/{{version}}/documentation#streamsconfigs"><b>Kafka Streams Configs</b></a> section.
+        To read more details on how this is done inside Kafka Streams, readers are recommended to read <a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-129%3A+Streams+Exactly-Once+Semantics">KIP-129</a>.<br />
+
+        As of the 2.6.0 release, Kafka Streams supports an improved implementation of exactly-once processing called "exactly-once beta"
+        (requires broker version 2.5.0 or newer).
+        This implementation is more efficient (i.e., less client and broker resource utilization; like client threads, used network connections etc.)

Review comment:
       ```suggestion
           This implementation is more efficient, because it reduces client and broker resource utilization, like client threads and used network connections.
       ```
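
   For context, the config change the diff describes boils down to setting <code>processing.guarantee</code>. A minimal sketch using plain `java.util.Properties` (the `application.id` and `bootstrap.servers` values are hypothetical placeholders; a real application would typically use the `StreamsConfig` constants from the kafka-streams client library instead of raw strings):

   ```java
   import java.util.Properties;

   public class EosConfigSketch {
       // Builds the minimal Streams config described above.
       public static Properties streamsProps(String guarantee) {
           Properties props = new Properties();
           props.put("application.id", "my-eos-app");        // hypothetical application id
           props.put("bootstrap.servers", "localhost:9092"); // hypothetical broker address
           // "processing.guarantee" defaults to "at_least_once"; set it to
           // "exactly_once" (or "exactly_once_beta" with brokers 2.5.0+) for EOS.
           props.put("processing.guarantee", guarantee);
           return props;
       }

       public static void main(String[] args) {
           System.out.println(streamsProps("exactly_once").getProperty("processing.guarantee"));
       }
   }
   ```

   The same `Properties` object would then be passed to the `KafkaStreams` constructor; no topology changes are needed to switch guarantees.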




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

