[ https://issues.apache.org/jira/browse/FLINK-6996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16073283#comment-16073283 ]

ASF GitHub Bot commented on FLINK-6996:
---------------------------------------

Github user tzulitai commented on a diff in the pull request:

    https://github.com/apache/flink/pull/4206#discussion_r125411974
  
    --- Diff: 
flink-connectors/flink-connector-kafka-base/src/test/java/org/apache/flink/streaming/connectors/kafka/KafkaTestEnvironment.java
 ---
    @@ -80,6 +82,12 @@ public void createTestTopic(String topic, int numberOfPartitions, int replicatio
     
        public abstract <T> FlinkKafkaConsumerBase<T> getConsumer(List<String> 
topics, KeyedDeserializationSchema<T> readSchema, Properties props);
     
    +   public abstract <K, V> Collection<ConsumerRecord<K, V>> 
getAllRecordsFromTopic(
    +           Properties properties,
    +           String topic,
    +           int partition,
    +           long timeout);
    --- End diff ---
    
    nit: the indentation pattern is inconsistent with the other abstract method 
declarations here.
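
For context, here is a hedged sketch of how a concrete KafkaTestEnvironment subclass might implement the new getAllRecordsFromTopic method shown in the diff above: assign a plain KafkaConsumer to the given partition, seek to its beginning, and keep polling until no further records arrive within the timeout. This is an illustration only, not the code from the pull request; the class name KafkaTestEnvironmentImplSketch is hypothetical.

    import java.util.ArrayList;
    import java.util.Collection;
    import java.util.Collections;
    import java.util.List;
    import java.util.Properties;

    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;

    // Hypothetical helper class; not part of the pull request under review.
    public class KafkaTestEnvironmentImplSketch {

        public <K, V> Collection<ConsumerRecord<K, V>> getAllRecordsFromTopic(
                Properties properties,
                String topic,
                int partition,
                long timeout) {
            List<ConsumerRecord<K, V>> result = new ArrayList<>();

            try (KafkaConsumer<K, V> consumer = new KafkaConsumer<>(properties)) {
                TopicPartition topicPartition = new TopicPartition(topic, partition);
                consumer.assign(Collections.singletonList(topicPartition));
                // Start from offset 0 so the full history of the partition is read.
                consumer.seek(topicPartition, 0);

                while (true) {
                    // Stop once a poll returns nothing within the given timeout.
                    ConsumerRecords<K, V> records = consumer.poll(timeout);
                    if (records.isEmpty()) {
                        break;
                    }
                    for (ConsumerRecord<K, V> record : records) {
                        result.add(record);
                    }
                }
            }

            return result;
        }
    }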


> FlinkKafkaProducer010 doesn't guarantee at-least-once semantic
> --------------------------------------------------------------
>
>                 Key: FLINK-6996
>                 URL: https://issues.apache.org/jira/browse/FLINK-6996
>             Project: Flink
>          Issue Type: Bug
>          Components: Kafka Connector
>    Affects Versions: 1.2.0, 1.3.0, 1.2.1, 1.3.1
>            Reporter: Piotr Nowojski
>            Assignee: Piotr Nowojski
>
> FlinkKafkaProducer010 doesn't implement the CheckpointedFunction interface. This 
> means that when it is used as a "regular sink function" (option a from [the java 
> doc|https://ci.apache.org/projects/flink/flink-docs-master/api/java/org/apache/flink/streaming/connectors/kafka/FlinkKafkaProducer010.html]),
> it will not flush pending data on "snapshotState" as it is supposed to.
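
For readers unfamiliar with the mechanism the report refers to: a sink that guarantees at-least-once must ensure that every record handed to it before a checkpoint is durably written before that checkpoint completes, which Flink exposes through the CheckpointedFunction interface and its snapshotState callback. The following is a minimal, hedged sketch of that pattern, not the actual FlinkKafkaProducer010 fix, and the class name FlushingKafkaSink is hypothetical: send records asynchronously in invoke and block in snapshotState until the Kafka producer has flushed everything.

    import java.util.Properties;

    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.runtime.state.FunctionInitializationContext;
    import org.apache.flink.runtime.state.FunctionSnapshotContext;
    import org.apache.flink.streaming.api.checkpoint.CheckpointedFunction;
    import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    // Hypothetical sink used only to illustrate the at-least-once pattern.
    public class FlushingKafkaSink extends RichSinkFunction<String> implements CheckpointedFunction {

        private final String topic;
        private final Properties producerConfig;
        private transient KafkaProducer<byte[], byte[]> producer;

        public FlushingKafkaSink(String topic, Properties producerConfig) {
            this.topic = topic;
            this.producerConfig = producerConfig;
        }

        @Override
        public void open(Configuration parameters) {
            producer = new KafkaProducer<>(producerConfig);
        }

        @Override
        public void invoke(String value) {
            // Asynchronous send: the record may still sit in the producer's buffer.
            producer.send(new ProducerRecord<byte[], byte[]>(topic, value.getBytes()));
        }

        @Override
        public void snapshotState(FunctionSnapshotContext context) {
            // Block until all buffered records have been acknowledged by the broker.
            // Without this flush, records emitted before the checkpoint could be lost.
            producer.flush();
        }

        @Override
        public void initializeState(FunctionInitializationContext context) {
            // No managed state is needed for this sketch.
        }

        @Override
        public void close() {
            if (producer != null) {
                producer.close();
            }
        }
    }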



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
