[ https://issues.apache.org/jira/browse/FLINK-4975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15647949#comment-15647949 ]
ASF GitHub Bot commented on FLINK-4975:
---------------------------------------

Github user tillrohrmann commented on a diff in the pull request:

    https://github.com/apache/flink/pull/2754#discussion_r87019713

    --- Diff: flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/io/BarrierBuffer.java ---
    @@ -254,8 +357,20 @@ public void cleanup() throws IOException {
             for (BufferSpiller.SpilledBufferOrEventSequence seq : queuedBuffered) {
                 seq.cleanup();
             }
    +        queuedBuffered.clear();
         }
    -
    +
    +    private void beginNewAlignment(long checkpointId, int channelIndex) throws IOException {
    +        currentCheckpointId = checkpointId;
    +        onBarrier(channelIndex);
    +
    +        startOfAlignmentTimestamp = System.nanoTime();
    +
    +        if (LOG.isDebugEnabled()) {
    +            LOG.debug("Starting stream alignment for checkpoint " + checkpointId);
    --- End diff --

    Does not make a difference since it's guarded, but IntelliJ warns because the logging statement does not use a parameterized debug message with `{}`.

> Add a limit for how much data may be buffered during checkpoint alignment
> --------------------------------------------------------------------------
>
>                 Key: FLINK-4975
>                 URL: https://issues.apache.org/jira/browse/FLINK-4975
>             Project: Flink
>          Issue Type: Improvement
>          Components: State Backends, Checkpointing
>    Affects Versions: 1.1.3
>            Reporter: Stephan Ewen
>            Assignee: Stephan Ewen
>             Fix For: 1.2.0, 1.1.4
>
>
> During checkpoint alignment, data may be buffered/spilled.
> We should introduce an upper limit for the spilled data volume. After exceeding that limit, the checkpoint alignment should abort and the checkpoint be canceled.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
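
For context, the parameterized logging form the review comment refers to would look like the sketch below. This is a minimal standalone example, not Flink code: the class name `AlignmentLoggingExample` and the simplified method signature are illustrative; only the SLF4J API (which Flink's `LOG` field is based on) is real.

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public class AlignmentLoggingExample {

        private static final Logger LOG = LoggerFactory.getLogger(AlignmentLoggingExample.class);

        void beginNewAlignment(long checkpointId) {
            // With a parameterized message the argument is only formatted into
            // the final string when DEBUG is actually enabled, so no string
            // concatenation happens on the hot path when DEBUG is off.
            LOG.debug("Starting stream alignment for checkpoint {}.", checkpointId);
        }
    }

As the review comment notes, the guarded concatenation in the diff is functionally equivalent; the parameterized form merely avoids the IDE warning and the need for the explicit `isDebugEnabled()` guard.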
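
To illustrate the improvement the issue asks for, here is a minimal, self-contained sketch of an alignment size limit. All names (`AlignmentLimitSketch`, `maxBufferedBytes`, `onBytesBuffered`, `abortAlignment`) are hypothetical and do not reflect the actual BarrierBuffer implementation; the sketch only shows the check-and-abort idea described in the issue text.

    /**
     * Hypothetical sketch of limiting the bytes buffered during checkpoint
     * alignment. Once the limit is exceeded, the alignment is aborted so the
     * checkpoint can be canceled and normal processing resumes.
     */
    public class AlignmentLimitSketch {

        private final long maxBufferedBytes;
        private long numBytesBuffered;
        private boolean aligning;

        public AlignmentLimitSketch(long maxBufferedBytes) {
            this.maxBufferedBytes = maxBufferedBytes;
        }

        /** Called when alignment starts for a new checkpoint barrier. */
        public void beginAlignment() {
            aligning = true;
            numBytesBuffered = 0L;
        }

        /** Called for every buffer spilled/queued while channels are blocked. */
        public void onBytesBuffered(long bytes) {
            if (!aligning) {
                return;
            }
            numBytesBuffered += bytes;
            if (maxBufferedBytes > 0 && numBytesBuffered > maxBufferedBytes) {
                abortAlignment();
            }
        }

        /** Aborts the alignment; a real implementation would also cancel the checkpoint. */
        private void abortAlignment() {
            aligning = false;
            numBytesBuffered = 0L;
        }
    }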