echauchot commented on code in PR #19586:
URL: https://github.com/apache/flink/pull/19586#discussion_r860674923


##########
flink-connectors/flink-connector-cassandra/src/main/java/org/apache/flink/streaming/connectors/cassandra/CassandraSinkBase.java:
##########
@@ -72,6 +74,16 @@
         ClosureCleaner.clean(builder, 
ExecutionConfig.ClosureCleanerLevel.RECURSIVE, true);
     }
 
+    /**
+     * Set writes to be synchronous (block until writes are completed).
+     *
+     * @param timeout Maximum number of seconds to wait for write completion
+     */
+    public void setSynchronousWrites(int timeout) {

Review Comment:
   To give some more details about the race condition I mentioned: when I upgraded the versions, `CassandraConnectorITCase#testCassandraBatchPojoFormat` started to fail, claiming that no records were written. So the problem was not in `CassandraSinkBase` but in `CassandraPojoOutputFormat`: the Cassandra session was closed before the asynchronous writes had completed, leading to a Cassandra exception saying that the session was already closed. That is why I added the synchronous-write option to `CassandraPojoOutputFormat`, so that `sink.writeRecord(pojo)` becomes a blocking call and `sink.close()` is not invoked until the write has actually finished. I then generalized this option to all sinks for coherence. In short, there was no problem with `CassandraSinkBase` and its subclasses; the `flush` behavior works just fine.
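   To illustrate the race in isolation (not the actual connector code — `SyncWriteSketch` and all names below are hypothetical, and the async backend is simulated with a `CompletableFuture` instead of a real Cassandra session), a minimal sketch of how a synchronous-write mode blocks on each write future before `close()` can run:

   ```java
   import java.util.concurrent.CompletableFuture;
   import java.util.concurrent.Executors;
   import java.util.concurrent.ScheduledExecutorService;
   import java.util.concurrent.TimeUnit;
   import java.util.concurrent.atomic.AtomicInteger;

   /** Toy model: an async "write" completing later, and a sink whose
    *  synchronous mode blocks on the write future before close(). */
   public class SyncWriteSketch {
       private final ScheduledExecutorService backend = Executors.newScheduledThreadPool(1);
       private final AtomicInteger written = new AtomicInteger();
       private final boolean synchronousWrites;
       private final int timeoutSeconds;

       SyncWriteSketch(boolean synchronousWrites, int timeoutSeconds) {
           this.synchronousWrites = synchronousWrites;
           this.timeoutSeconds = timeoutSeconds;
       }

       void writeRecord(String record) throws Exception {
           // Simulate an asynchronous write that completes 100 ms later.
           CompletableFuture<Void> future = new CompletableFuture<>();
           backend.schedule(
                   () -> {
                       written.incrementAndGet();
                       future.complete(null);
                   },
                   100,
                   TimeUnit.MILLISECONDS);
           if (synchronousWrites) {
               // Blocking call: close() cannot run before this write is done.
               future.get(timeoutSeconds, TimeUnit.SECONDS);
           }
           // Without the block above, close() may shut the backend down
           // before the scheduled write ever executes.
       }

       /** Models closing the session immediately; returns completed writes. */
       int close() {
           backend.shutdownNow();
           return written.get();
       }

       public static void main(String[] args) throws Exception {
           SyncWriteSketch sink = new SyncWriteSketch(true, 10);
           sink.writeRecord("pojo");
           System.out.println("records written before close: " + sink.close());
       }
   }
   ```

   With `synchronousWrites` set to `false`, the same sequence can report zero completed writes, which is the symptom the failing test showed.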



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
