[ https://issues.apache.org/jira/browse/FLINK-21543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17294247#comment-17294247 ]
xiaogang zhou commented on FLINK-21543:
---------------------------------------

[~yunta] Level compaction triggers too many compactions on every snapshot, so it is not applicable to our job: the compactions consume all the CPU resources. Our access pattern is one write followed by one read, so we do not need to merge files. FIFO compaction is mentioned in the Flink configuration, and normally it works fine. But when recovering from a checkpoint I hit this issue; I have attached the RocksDB log. I think FIFO is the only workable option in our case. Can you please review?

> when using FIFO compaction, I found sst being deleted on the first checkpoint
> -----------------------------------------------------------------------------
>
>                 Key: FLINK-21543
>                 URL: https://issues.apache.org/jira/browse/FLINK-21543
>             Project: Flink
>          Issue Type: Bug
>      Components: Runtime / State Backends
>            Reporter: xiaogang zhou
>            Priority: Major
>     Attachments: LOG (2)
>
>
> 2021/03/01-18:51:01.202049 7f59042fc700 (Original Log Time 2021/03/01-18:51:01.200883) [/compaction/compaction_picker_fifo.cc:107] [_timer_state/processing_user-timers] FIFO compaction: picking file 1710 with creation time 0 for deletion
>
> The configuration is like:
>
> currentOptions.setCompactionStyle(getCompactionStyle());
> currentOptions.setLevel0FileNumCompactionTrigger(8);
> // currentOptions.setMaxTableFilesSizeFIFO(MemorySize.parse("2gb").getBytes());
> CompactionOptionsFIFO compactionOptionsFIFO = new CompactionOptionsFIFO();
> compactionOptionsFIFO.setMaxTableFilesSize(MemorySize.parse("8gb").getBytes());
> compactionOptionsFIFO.setAllowCompaction(true);
>
> The RocksDB version is:
>
> <dependency>
>     <groupId>io.github.myasuka</groupId>
>     <artifactId>frocksdbjni</artifactId>
>     <version>6.10.2-ververica-3.0</version>
> </dependency>
>
> I think the problem is caused by the table properties being lost on snapshot. Can anyone suggest how I can work around this problem?

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
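
For reference, the configuration fragment quoted in the issue can be sketched as a self-contained RocksJava setup. This is only an illustrative reconstruction under the assumption that the reporter's `currentOptions` is a `ColumnFamilyOptions`; the class and method names below come from the standard `org.rocksdb` API, not from the reporter's exact code, and the 8 GB constant replaces Flink's `MemorySize.parse("8gb")` helper:

```java
import org.rocksdb.ColumnFamilyOptions;
import org.rocksdb.CompactionOptionsFIFO;
import org.rocksdb.CompactionStyle;

public class FifoCompactionConfig {

    // Builds column-family options matching the snippet in the issue.
    public static ColumnFamilyOptions build() {
        CompactionOptionsFIFO fifo = new CompactionOptionsFIFO()
                // FIFO drops the "oldest" SST files once the total size
                // exceeds this cap. A file whose creation time is 0 (as in
                // the attached log, after restoring from a checkpoint)
                // looks oldest and is picked for deletion first.
                .setMaxTableFilesSize(8L * 1024 * 1024 * 1024) // 8 GB
                .setAllowCompaction(true);

        return new ColumnFamilyOptions()
                .setCompactionStyle(CompactionStyle.FIFO)
                .setLevel0FileNumCompactionTrigger(8)
                .setCompactionOptionsFIFO(fifo);
    }
}
```

This matches the behavior visible in the attached log: with FIFO compaction the deletion decision is based on each SST file's recorded creation time, so files restored with a creation time of 0 are candidates for immediate removal regardless of how recently they were written.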