vinayakphegde commented on code in PR #5866:
URL: https://github.com/apache/hbase/pull/5866#discussion_r1601338343


##########
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java:
##########
@@ -2203,6 +2204,21 @@ public Optional<Boolean> shouldCacheFile(HFileInfo hFileInfo, Configuration conf
     return Optional.of(!fullyCachedFiles.containsKey(fileName));
   }
 
+  @Override
+  public Optional<Boolean> shouldCacheBlock(BlockCacheKey key, Configuration conf) {
+    try {
+      DataTieringManager dataTieringManager = DataTieringManager.getInstance();
+      if (dataTieringManager != null && !dataTieringManager.isHotData(key, conf)) {

Review Comment:
   I agree: it will increase the size of each BlockCacheKey by 8 bytes. Assuming roughly 10 million BlockCacheKeys per region server on average (I think that is the figure @wchevreuil mentioned the other day), that is 10,000,000 × 8 bytes = 80 MB, or approximately 76 MiB of additional heap.
   
   However, I believe the proposed solution is not correct. The `shouldCacheBlock` function should not itself process information related to data tiering; that responsibility should be delegated to the `DataTieringManager`. Additionally, we should not assume that the DataTieringType of the block is TIME_RANGE, as it could be a different type in the future.
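   To illustrate, here is a rough sketch of the delegation I have in mind. The classes below are simplified stand-ins, not the real HBase types: the point is only that the cache asks the manager for a decision and never inspects tiering details itself, whatever the DataTieringType.

   ```java
   import java.util.Optional;

   public class TieringDelegationSketch {

     // Stand-in for org.apache.hadoop.hbase.io.hfile.BlockCacheKey.
     static class BlockCacheKey {
       final String hfileName;
       BlockCacheKey(String hfileName) { this.hfileName = hfileName; }
     }

     // Stand-in for DataTieringManager: it owns the tiering policy.
     // The "cold-" prefix check is a placeholder policy for this sketch only.
     static class DataTieringManager {
       private static final DataTieringManager INSTANCE = new DataTieringManager();
       static DataTieringManager getInstance() { return INSTANCE; }

       // The manager decides cacheability; callers never see tiering internals.
       boolean shouldCacheBlock(BlockCacheKey key) {
         return !key.hfileName.startsWith("cold-");
       }
     }

     // Stand-in for BucketCache.shouldCacheBlock: pure delegation.
     static Optional<Boolean> shouldCacheBlock(BlockCacheKey key) {
       DataTieringManager manager = DataTieringManager.getInstance();
       if (manager == null) {
         return Optional.empty(); // no tiering configured; leave it to the caller
       }
       return Optional.of(manager.shouldCacheBlock(key));
     }

     public static void main(String[] args) {
       System.out.println(shouldCacheBlock(new BlockCacheKey("hot-file")).get());
       System.out.println(shouldCacheBlock(new BlockCacheKey("cold-file")).get());
     }
   }
   ```

   With this shape, adding a new DataTieringType only touches the manager, and `shouldCacheBlock` in the cache stays unchanged.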
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
