[ 
https://issues.apache.org/jira/browse/HBASE-27437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17621116#comment-17621116
 ] 

Duo Zhang commented on HBASE-27437:
-----------------------------------

OK, the code is for testing performance...

{noformat}
  @Test
  public void testAutoCalcFixedOverHead() {
    Class[] classList = new Class[] { HFileContext.class, HRegion.class, BlockCacheKey.class,
      HFileBlock.class, HStore.class, LruBlockCache.class, StoreContext.class };
    for (Class cl : classList) {
      // do estimate in advance to ensure class is loaded
      ClassSize.estimateBase(cl, false);

      long startTime = EnvironmentEdgeManager.currentTime();
      ClassSize.estimateBase(cl, false);
      long endTime = EnvironmentEdgeManager.currentTime();
      assertTrue(endTime - startTime < 5);
    }
  }
{noformat}

This test was introduced in HBASE-24659 to make sure the calculation overhead is 
small. But on a slow machine it is possible that we fail, since the 5 ms assertion 
is a bit strict, especially if a GC happens in between...
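One way to make such a timing assertion more GC-tolerant is to time several runs and assert on the best one, so a single GC pause cannot fail the test. A minimal sketch below; note that `workload()` is a hypothetical stand-in for `ClassSize.estimateBase(cl, false)`, and the 100 ms threshold is an illustrative relaxed value, not a proposal:

```java
public class BestOfNTiming {

  // Hypothetical stand-in for ClassSize.estimateBase(cl, false);
  // any pure in-memory computation fits here.
  static long workload() {
    long sum = 0;
    for (int i = 0; i < 100_000; i++) {
      sum += i;
    }
    return sum;
  }

  // Time the workload n times and return the fastest run in milliseconds.
  // Taking the minimum filters out runs disturbed by a GC pause.
  static long bestOfN(int n) {
    long bestMs = Long.MAX_VALUE;
    for (int i = 0; i < n; i++) {
      long start = System.nanoTime();
      workload();
      long elapsedMs = (System.nanoTime() - start) / 1_000_000;
      bestMs = Math.min(bestMs, elapsedMs);
    }
    return bestMs;
  }

  public static void main(String[] args) {
    // Warm-up call, mirroring the "estimate in advance" call in the test.
    workload();
    long bestMs = bestOfN(5);
    // Assert on the best run with a relaxed threshold (illustrative value).
    System.out.println(bestMs < 100);
  }
}
```

With this pattern the test only fails if every one of the N runs is slow, which points at a real regression rather than an unlucky GC pause.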

[~niuyulin] [~stack] WDYT? Can we relax the threshold?

Thanks.

> TestHeapSize is flaky
> ---------------------
>
>                 Key: HBASE-27437
>                 URL: https://issues.apache.org/jira/browse/HBASE-27437
>             Project: HBase
>          Issue Type: Bug
>          Components: test
>            Reporter: Duo Zhang
>            Priority: Major
>
> I believe it is just in-memory computation, so it is weird that it can be 
> flaky.
> Need to dig more.


