Repository: flink
Updated Branches:
  refs/heads/release-1.4 62bf00189 -> e100861f8


[hotfix][docs] Improve Kafka exactly-once docs


Project: http://git-wip-us.apache.org/repos/asf/flink/repo
Commit: http://git-wip-us.apache.org/repos/asf/flink/commit/e100861f
Tree: http://git-wip-us.apache.org/repos/asf/flink/tree/e100861f
Diff: http://git-wip-us.apache.org/repos/asf/flink/diff/e100861f

Branch: refs/heads/release-1.4
Commit: e100861f84fe60ec6bb8172bb5a3cc453640fdb3
Parents: 62bf001
Author: Piotr Nowojski <piotr.nowoj...@gmail.com>
Authored: Thu Nov 23 13:08:43 2017 +0100
Committer: Aljoscha Krettek <aljoscha.kret...@gmail.com>
Committed: Thu Nov 23 15:02:49 2017 +0100

----------------------------------------------------------------------
 docs/dev/connectors/kafka.md | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/flink/blob/e100861f/docs/dev/connectors/kafka.md
----------------------------------------------------------------------
diff --git a/docs/dev/connectors/kafka.md b/docs/dev/connectors/kafka.md
index ad4cc2f..5376d5b 100644
--- a/docs/dev/connectors/kafka.md
+++ b/docs/dev/connectors/kafka.md
@@ -538,7 +538,10 @@ chosen by passing appropriate `semantic` parameter to the `FlinkKafkaProducer011`
  be duplicated.
  * `Semantic.AT_LEAST_ONCE` (default setting): similar to `setFlushOnCheckpoint(true)` in
  `FlinkKafkaProducer010`. This guarantees that no records will be lost (although they can be duplicated).
- * `Semantic.EXACTLY_ONCE`: uses Kafka transactions to provide exactly-once semantic.
+ * `Semantic.EXACTLY_ONCE`: uses Kafka transactions to provide exactly-once semantics.
+ Whenever you write to Kafka using transactions, remember to set the desired `isolation.level`
+ (`read_committed` or `read_uncommitted`; the latter is the default) for any application
+ consuming records from Kafka.
 
 <div class="alert alert-warning">
  <strong>Attention:</strong> Depending on your Kafka configuration, even after Kafka acknowledges
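For a downstream application that should only see records from committed transactions, the consumer-side setting the diff describes can be sketched with plain `java.util.Properties`. This is a minimal illustration: the broker address, group id, and the commented-out `FlinkKafkaConsumer011` usage are assumptions for context, not part of this commit.

```java
import java.util.Properties;

public class IsolationLevelExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.setProperty("group.id", "my-consumer-group");       // assumed group id
        // Read only records from committed transactions.
        // The Kafka default is "read_uncommitted", which would also
        // surface records from aborted or still-open transactions.
        props.setProperty("isolation.level", "read_committed");
        // These properties would then be passed to the consumer, e.g.:
        // new FlinkKafkaConsumer011<>("topic", new SimpleStringSchema(), props)
        System.out.println(props.getProperty("isolation.level"));
    }
}
```

Note that this is purely a consumer-side concern: the `Semantic.EXACTLY_ONCE` producer controls when transactions commit, but each consuming application decides for itself whether to wait for those commits.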
