SuXingLee commented on a change in pull request #50: [BAHIR-202] Improve KuduSink throughput by using async FlushMode
URL: https://github.com/apache/bahir-flink/pull/50#discussion_r268609055
 
 

 ##########
 File path: flink-connector-kudu/src/main/java/org/apache/flink/streaming/connectors/kudu/KuduSink.java
 ##########
 @@ -79,9 +86,12 @@ public KuduSink(String kuduMasters, KuduTableInfo tableInfo, KuduSerialization<O
 
     @Override
     public void open(Configuration parameters) throws IOException {
-        if (connector != null) return;
-        connector = new KuduConnector(kuduMasters, tableInfo, consistency, writeMode);
-        serializer.withSchema(tableInfo.getSchema());
+        if (this.connector != null) return;
+        FlushMode flushMode = ((StreamingRuntimeContext) getRuntimeContext()).isCheckpointingEnabled() ?
 
 Review comment:
   - ```AUTO_FLUSH_BACKGROUND``` doesn't support strong consistency.
   - ```MANUAL_FLUSH``` supports strong consistency by waiting for one flush() and join() to complete before triggering another flush. But in this mode, flushing data depends only on checkpoint triggers. Is that suitable for a Flink sink? Should we add another strategy (count, size)? We should do more in another PR.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
