[ https://issues.apache.org/jira/browse/HBASE-23350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17055038#comment-17055038 ]
Viraj Jasani commented on HBASE-23350:
--------------------------------------

[~abhinaba.sarkar] [~ram_krish] I was just preparing the branch-1 backport for this patch, and I have one question: why are we passing -1 as totalCompactedFilesSize here?

{code:java}
  public StoreFileWriter createWriterInTmp(long maxKeyCount, Compression.Algorithm compression,
      boolean isCompaction, boolean includeMVCCReadpoint, boolean includesTag,
      boolean shouldDropBehind) throws IOException {
    return createWriterInTmp(maxKeyCount, compression, isCompaction, includeMVCCReadpoint,
        includesTag, shouldDropBehind, -1);
  }
{code}

Is it to make this condition true by default?

{code:java}
    if (cacheCompactedBlocksOnWrite
        && totalCompactedFilesSize <= cacheConf.getCacheCompactedBlocksOnWriteThreshold()) {
{code}

By default, cacheConf.getCacheCompactedBlocksOnWriteThreshold() is Long.MAX_VALUE, so if we pass totalCompactedFilesSize as -1, the above condition is always true (the default behavior of createWriterInTmp). Is that the only purpose?


> Make compaction files cacheonWrite configurable based on threshold
> ------------------------------------------------------------------
>
>                 Key: HBASE-23350
>                 URL: https://issues.apache.org/jira/browse/HBASE-23350
>             Project: HBase
>          Issue Type: Sub-task
>          Components: Compaction
>            Reporter: ramkrishna.s.vasudevan
>            Assignee: Abhinaba Sarkar
>            Priority: Major
>             Fix For: 3.0.0, 2.3.0
>
>
> As per the comment from [~javaman_chen] in the parent JIRA
> https://issues.apache.org/jira/browse/HBASE-23066?focusedCommentId=16937361&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16937361
> this is to introduce a config to identify whether the resulting compacted file's
> blocks should be added to the cache - while writing.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
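
To make the reasoning in the comment above concrete, here is a minimal, self-contained sketch of the threshold check. The -1 sentinel and the Long.MAX_VALUE default mirror what the comment describes; shouldCacheCompactedBlocks is a hypothetical helper written only for illustration, not the actual HBase method.

{code:java}
// Sketch only: illustrates why passing -1 with a Long.MAX_VALUE threshold
// makes the cache-on-write size check a no-op by default.
public class CacheOnWriteThresholdSketch {

  // Default threshold as noted in the comment above: effectively "no limit".
  private static final long DEFAULT_THRESHOLD = Long.MAX_VALUE;

  // Hypothetical helper mirroring the quoted condition, not an HBase API.
  static boolean shouldCacheCompactedBlocks(boolean cacheCompactedBlocksOnWrite,
      long totalCompactedFilesSize, long threshold) {
    // With totalCompactedFilesSize = -1 and threshold = Long.MAX_VALUE,
    // the size comparison can never fail, so only the boolean flag matters.
    return cacheCompactedBlocksOnWrite && totalCompactedFilesSize <= threshold;
  }

  public static void main(String[] args) {
    // The -1 sentinel from createWriterInTmp: the condition reduces to the flag alone.
    System.out.println(shouldCacheCompactedBlocks(true, -1, DEFAULT_THRESHOLD));              // true
    // A real compacted size above a configured threshold disables cache-on-write.
    System.out.println(shouldCacheCompactedBlocks(true, 10L * 1024 * 1024, 5L * 1024 * 1024)); // false
  }
}
{code}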