[ https://issues.apache.org/jira/browse/FLINK-9308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16466209#comment-16466209 ]
Stephan Ewen commented on FLINK-9308:
-------------------------------------

The rate at which Kafka can handle offset commits should not really be the biggest concern here. Flink checkpoints do not need to commit offsets to Kafka; that commit is optional.

> The method enableCheckpointing with low values like 10 are forming DoS on
> Kafka Clusters
> ----------------------------------------------------------------------------------------
>
>                 Key: FLINK-9308
>                 URL: https://issues.apache.org/jira/browse/FLINK-9308
>             Project: Flink
>          Issue Type: Bug
>            Reporter: Seweryn Habdank-Wojewodzki
>            Assignee: vinoyang
>            Priority: Major
>
> Hi,
> The docs about checkpoints in Flink contain this example:
> {code}
> StreamExecutionEnvironment env =
>     StreamExecutionEnvironment.getExecutionEnvironment();
> // start a checkpoint every 1000 ms
> env.enableCheckpointing(1000);
> {code}
> Nice. There is one catch. enableCheckpointing(parameter /* in [ms] */),
> when used with e.g. 1 or 10, will kill the Kafka server by continuous
> commits of offsets.
> Every creative developer who wants to defend the software against message
> duplication in case of a crash will decrease this parameter to the minimum.
> He will protect his app, but on the Kafka broker/server side he will cause
> a DoS.
> Can you have a look at limiting the minimum value in the case of the Kafka
> streaming environment?
> I am not sure if 100 ms as a minimum is enough, but 1000 ms as a minimum
> would be nice.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
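A sketch of the point Stephan makes: Flink tracks Kafka offsets in its own checkpointed state, and committing them back to Kafka on checkpoint completion can be switched off via `FlinkKafkaConsumerBase#setCommitOffsetsOnCheckpoints`; a floor between checkpoints can also be enforced with `CheckpointConfig#setMinPauseBetweenCheckpoints`. The broker address, group id, and topic below are hypothetical placeholders, and the exact consumer class name varies by Flink/Kafka connector version.

```java
import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

public class CheckpointWithoutKafkaCommits {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // Checkpoint frequently, but guarantee a minimum pause between
        // consecutive checkpoints so a very small interval cannot flood
        // the system (values here are illustrative, not recommendations).
        env.enableCheckpointing(1000);
        env.getCheckpointConfig().setMinPauseBetweenCheckpoints(500);

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // hypothetical broker
        props.setProperty("group.id", "example-group");           // hypothetical group

        FlinkKafkaConsumer<String> consumer = new FlinkKafkaConsumer<>(
                "example-topic", new SimpleStringSchema(), props); // hypothetical topic

        // Offsets live in Flink's checkpointed state either way; writing
        // them back to Kafka on each completed checkpoint is optional, so
        // disabling it means a short checkpoint interval does not become a
        // commit storm on the brokers.
        consumer.setCommitOffsetsOnCheckpoints(false);

        env.addSource(consumer).print();
        env.execute("checkpointing-without-kafka-commits");
    }
}
```

With commits disabled, the offsets visible to Kafka's own tooling (e.g. consumer-group lag monitoring) will no longer advance; that is the trade-off of keeping offset tracking purely inside Flink's checkpoints.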