jhungund commented on code in PR #5856:
URL: https://github.com/apache/hbase/pull/5856#discussion_r1583471499


##########
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestDataTieringManager.java:
##########
@@ -402,6 +406,67 @@ public void testBlockEvictionsHotBlocks() throws Exception {
     validateBlocks(bucketCache.getBackingMap().keySet(), 2, 2, 0);
   }
 
+  @Test
+  public void testFeatureKeyDisabled() throws Exception {
+    DataTieringManager.resetForTestingOnly();
+    defaultConf.setBoolean(DataTieringManager.GLOBAL_DATA_TIERING_ENABLED_KEY, false);
+    try {
+      assertFalse(DataTieringManager.instantiate(defaultConf, testOnlineRegions));
+      // Verify that the DataTieringManager instance is not instantiated in the
+      // instantiate call above.
+      assertNull(DataTieringManager.getInstance());
+
+      // Also validate that data temperature is not honoured.
+      long capacitySize = 40 * 1024;
+      int writeThreads = 3;
+      int writerQLen = 64;
+      int[] bucketSizes = new int[] { 8 * 1024 + 1024 };
+
+      // Setup: Create a bucket cache with lower capacity
+      BucketCache bucketCache = new BucketCache("file:" + testDir + "/bucket.cache", capacitySize,
+        8192, bucketSizes, writeThreads, writerQLen, testDir + "/bucket.persistence",
+        DEFAULT_ERROR_TOLERATION_DURATION, defaultConf);
+
+      // Create three Cache keys with two hot data blocks and one cold data block.
+      // hStoreFiles.get(0) is a hot data file and hStoreFiles.get(3) is a cold data file.
+      Set<BlockCacheKey> cacheKeys = new HashSet<>();
+      cacheKeys.add(new BlockCacheKey(hStoreFiles.get(0).getPath(), 0, true, BlockType.DATA));
+      cacheKeys.add(new BlockCacheKey(hStoreFiles.get(0).getPath(), 8192, true, BlockType.DATA));
+      cacheKeys.add(new BlockCacheKey(hStoreFiles.get(3).getPath(), 0, true, BlockType.DATA));
+
+      // Create dummy data to be cached and fill the cache completely.
+      CacheTestUtils.HFileBlockPair[] blocks = CacheTestUtils.generateHFileBlocks(8192, 3);
+
+      int blocksIter = 0;
+      for (BlockCacheKey key : cacheKeys) {
+        bucketCache.cacheBlock(key, blocks[blocksIter++].getBlock());
+        // Ensure that the block is persisted to the file.
+        Waiter.waitFor(defaultConf, 10000, 100,
+          () -> (bucketCache.getBackingMap().containsKey(key)));
+      }
+
+      // Verify that the bucket cache contains 3 blocks.
+      assertEquals(3, bucketCache.getBackingMap().keySet().size());
+
+      // Add an additional hot block, which should evict the only cold block.
+      BlockCacheKey newKey =
+        new BlockCacheKey(hStoreFiles.get(2).getPath(), 0, true, BlockType.DATA);
+      CacheTestUtils.HFileBlockPair[] newBlock = CacheTestUtils.generateHFileBlocks(8192, 1);
+
+      bucketCache.cacheBlock(newKey, newBlock[0].getBlock());
+      Waiter.waitFor(defaultConf, 10000, 100,
+        () -> (bucketCache.getBackingMap().containsKey(newKey)));
+
+      // Verify that the bucket still contains the 2 cold blocks and one newly added hot block.

Review Comment:
   Currently, in this test, which blocks remain in the cache is not deterministic: any block may get evicted, leading to intermittent test failures.
   @wchevreuil, @vinayakphegde, any idea how to enforce the eviction of hot blocks and retain the cold blocks in the cache, so we can validate that data tiering is not effective? If not, we may have to rely solely on the absence of the DataTieringManager instance.
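   To illustrate the determinism concern with a self-contained sketch (plain Java, using an access-ordered `LinkedHashMap` as a hypothetical stand-in for the cache; this is not the BucketCache eviction logic, whose victim selection is what makes the test flaky): with a strictly deterministic policy such as LRU, one can assert exactly which block survives, but with a non-deterministic policy only aggregate properties like the final size are safe to assert.

   ```java
   import java.util.LinkedHashMap;
   import java.util.Map;

   public class EvictionSketch {
     public static void main(String[] args) {
       final int capacity = 3;
       // Access-ordered LRU map standing in for a 3-block cache
       // (hypothetical; BucketCache does not use plain LRU like this).
       Map<String, byte[]> cache = new LinkedHashMap<String, byte[]>(16, 0.75f, true) {
         @Override
         protected boolean removeEldestEntry(Map.Entry<String, byte[]> eldest) {
           return size() > capacity;
         }
       };
       cache.put("hot-0", new byte[8192]);
       cache.put("hot-1", new byte[8192]);
       cache.put("cold-0", new byte[8192]);
       cache.put("hot-2", new byte[8192]); // forces exactly one eviction (of hot-0, the LRU entry)

       // Deterministic assertions: the size is always safe to check; the
       // identity of the survivors is only safe because LRU is deterministic.
       System.out.println("size=" + cache.size());
       System.out.println("coldRetained=" + cache.containsKey("cold-0"));
     }
   }
   ```

   Under LRU the cold block is retained and a hot block is evicted, so the sketch prints `size=3` and `coldRetained=true`; with a policy that picks victims non-deterministically, only the size check would hold, which is the reviewer's point.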



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
