[ https://issues.apache.org/jira/browse/HBASE-10201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14189517#comment-14189517 ]

zhangduo commented on HBASE-10201:
----------------------------------

Yes, I ran it without the patch first, and the result is

{quote}
Results :

Failed tests:
  IntegrationTestIngestWithACL>IntegrationTestBase.setUp:122->setUpCluster:64->IntegrationTestIngest.setUpCluster:88->IntegrationTestIngest.initTable:93 Failed to initialize LoadTestTool expected:<0> but was:<1>

Tests in error:
  IntegrationTestMTTR.testRestartRsHoldingTable:261->run:305 ? Execution org.apa...

Tests run: 20, Failures: 1, Errors: 1, Skipped: 1
{quote}

The result with the patch is
{quote}
Results :

Failed tests:
  IntegrationTestIngestWithVisibilityLabels>IntegrationTestIngest.testIngest:104->IntegrationTestIngest.runIngestTest:166 Update failed with error code 1
  IntegrationTestIngestWithACL>IntegrationTestBase.setUp:122->setUpCluster:64->IntegrationTestIngest.setUpCluster:88->IntegrationTestIngest.initTable:93 Failed to initialize LoadTestTool expected:<0> but was:<1>
  IntegrationTestIngestWithTags>IntegrationTestIngest.testIngest:104->IntegrationTestIngest.runIngestTest:174 Verification failed with error code 1

Tests in error:
  IntegrationTestMTTR.testRestartRsHoldingTable:261->run:305 ? Execution org.apa...
  IntegrationTestMTTR.testMoveRegion:271->run:305 ? Execution org.apache.hadoop....

Tests run: 20, Failures: 3, Errors: 2, Skipped: 1
{quote}

IntegrationTestMTTR.testMoveRegion passes if I run it separately, with the other methods in the same class commented out, using the command "mvn clean test-compile failsafe:integration-test -Dit.test=IntegrationTestMTTR -DfailIfNoTests=false".
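
If the maven-failsafe-plugin version wired into the build supports method-level selection with the Class#method form of -Dit.test (my assumption, I have not checked the plugin version here), commenting out the other methods would not be needed:

{code}
# run only testMoveRegion; assumes the failsafe plugin in this build accepts
# the Class#method form of -Dit.test
mvn clean test-compile failsafe:integration-test \
    -Dit.test=IntegrationTestMTTR#testMoveRegion -DfailIfNoTests=false
{code}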

Now I'm debugging IntegrationTestIngestWithVisibilityLabels, but the log is flooded with
{quote}
java.io.IOException: Compression algorithm 'lz4' previously failed test.
        at org.apache.hadoop.hbase.util.CompressionTest.testCompression(CompressionTest.java:90)
        at org.apache.hadoop.hbase.regionserver.HRegion.checkCompressionCodecs(HRegion.java:4936)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4923)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4896)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4868)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4824)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4775)
        at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:276)
        at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:103)
        at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:103)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
java.io.IOException: Compression algorithm 'snappy' previously failed test.
        at org.apache.hadoop.hbase.util.CompressionTest.testCompression(CompressionTest.java:90)
        at org.apache.hadoop.hbase.regionserver.HRegion.checkCompressionCodecs(HRegion.java:4936)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4923)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4896)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4868)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4824)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4775)
        at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:276)
        at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:103)
        at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:103)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
{quote}
and it is hard to find useful information in the output.

I have compiled the hadoop native libs, but I do not know where to place them when running the tests...
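
What I plan to try, assuming the forked test JVMs inherit LD_LIBRARY_PATH and that this is where the native codec loading looks, is exporting the location of the compiled libs before launching the tests, and sanity-checking the codecs with CompressionTest first (the native-lib path below is only a placeholder for wherever my build actually put libhadoop.so and libsnappy.so):

{code}
# placeholder path: point this at the directory holding the compiled
# libhadoop.so / libsnappy.so; assumes the forked test JVMs inherit the variable
export LD_LIBRARY_PATH=/path/to/hadoop/lib/native:$LD_LIBRARY_PATH

# quick sanity check that the codecs load before re-running the ITs
# (assumes a local file: path is acceptable to CompressionTest)
hbase org.apache.hadoop.hbase.util.CompressionTest file:///tmp/compression-test snappy
hbase org.apache.hadoop.hbase.util.CompressionTest file:///tmp/compression-test lz4
{code}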

Or is there a way to disable compression when running the integration tests? I think the results would not change, since the patch has nothing to do with compression...
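
If forcing compression off is the easier route, what I have in mind is something along these lines, assuming the LoadTestTool that IntegrationTestIngest drives exposes a -compression option on this branch (the table name, write spec and key count below are arbitrary examples):

{code}
# hypothetical standalone run with compression forced to NONE; the -tn, -write
# and -num_keys values are arbitrary, and -compression is assumed to exist here
hbase org.apache.hadoop.hbase.util.LoadTestTool -tn cluster_test \
    -compression NONE -write 10:100:20 -num_keys 100000
{code}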

Thanks.

> Port 'Make flush decisions per column family' to trunk
> ------------------------------------------------------
>
>                 Key: HBASE-10201
>                 URL: https://issues.apache.org/jira/browse/HBASE-10201
>             Project: HBase
>          Issue Type: Improvement
>          Components: wal
>            Reporter: Ted Yu
>            Assignee: zhangduo
>            Priority: Critical
>             Fix For: 2.0.0, 0.99.2
>
>         Attachments: 3149-trunk-v1.txt, HBASE-10201-0.98.patch, 
> HBASE-10201-0.98_1.patch, HBASE-10201-0.98_2.patch, HBASE-10201-0.99.patch, 
> HBASE-10201.patch, HBASE-10201_1.patch, HBASE-10201_2.patch, 
> HBASE-10201_3.patch
>
>
> Currently the flush decision is made using the aggregate size of all column 
> families. When large and small column families co-exist, this causes many 
> small flushes of the smaller CF. We need to make per-CF flush decisions.



