[ https://issues.apache.org/jira/browse/HBASE-25972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17799356#comment-17799356 ]

Kadir Ozdemir commented on HBASE-25972:
---------------------------------------

I added the sequentialDelete and randomDelete tests to PerformanceEvaluation 
and did some performance testing on a local HBase as follows.
 # Enabled the dual file compaction by adding the following to hbase-site.xml.
{code:java}
<property>
    <name>hbase.hstore.defaultengine.enable.dualfilewriter</name>
    <value>true</value>
</property> {code}

 # Created a table with one column family and inserted 1000000 rows, each with 
20 columns of 32-byte values.
{code:java}
bin/hbase pe --nomapred --rows=1000000 --table=T1 --columns=20 --valueSize=32 
sequentialWrite 1 {code}

 # Set KEEP_DELETED_CELLS to true so that deleted cells are kept after major 
compaction. This simulates a minor compaction, where deleted cells are not removed.
{code:java}
alter "T1", {NAME =>"info0", KEEP_DELETED_CELLS => TRUE} {code}

 # Inserted 500000 delete family markers at random row keys, which deleted 
around 32% of the inserted rows.
{code:java}
bin/hbase pe --nomapred --rows=500000 --table=T1 randomDelete 1 {code}

 # Flushed and major compacted T1.
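The flush and major compaction commands are not shown in the original comment; from the HBase shell the standard commands would be:
{code:java}
flush 'T1'
major_compact 'T1' {code}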
 # Scanned 10000 rows. 
{code:java}
bin/hbase pe --nomapred --rows=1000000 --table=T1 scan 1  {code}

The above scan took 7040ms.

Stopped the local HBase, disabled the dual file compaction, restarted the local 
HBase, ran major compaction, and repeated the scan test. This time the scan 
took 8539ms.
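For reference, disabling the dual file compaction here means setting the same property back to false in hbase-site.xml:
{code:java}
<property>
    <name>hbase.hstore.defaultengine.enable.dualfilewriter</name>
    <value>false</value>
</property> {code}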

Then deleted all the rows using 
{code:java}
bin/hbase pe --nomapred --rows=1000000 --table=T1 sequentialDelete 1 {code}
Scanned the table within the HBase shell. It took 6.9936 seconds.
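The exact shell command is not given above; one way to time a full table scan from the shell is:
{code:java}
scan 'T1'
# or, to get just the row count and the elapsed time:
count 'T1' {code}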

Stopped the local HBase, enabled the dual file compaction, restarted the local 
HBase, and ran major compaction. Then scanned the table within the HBase shell 
again. This time it took 0.5660 seconds.

The above tests confirm the expected performance gain from the dual file 
compaction.

> Dual File Compaction
> --------------------
>
>                 Key: HBASE-25972
>                 URL: https://issues.apache.org/jira/browse/HBASE-25972
>             Project: HBase
>          Issue Type: Improvement
>            Reporter: Kadir Ozdemir
>            Assignee: Kadir Ozdemir
>            Priority: Major
>
> HBase stores tables row by row in its files, HFiles. An HFile is composed of 
> blocks. The number of rows stored in a block depends on the row sizes. The 
> number of rows per block gets lower when rows get larger on disk due to 
> multiple row versions, since HBase stores all row versions sequentially in the 
> same HFile after compaction. However, applications (e.g., Phoenix) mostly 
> query the most recent row versions.
> The default compactor in HBase compacts HFiles into one file. This Jira 
> introduces a new store file writer, DualFileWriter, which writes the cells 
> retained by compaction into two files. One of these files will include the 
> live cells; this file will be called the live-version file. The other file 
> will include the rest of the cells, that is, the historical versions; this 
> file will be called the historical-version file. DualFileWriter will work 
> with the default compactor.
> The historical files will not be read by scans that request only the latest 
> row versions. This eliminates scanning unnecessary cell versions in compacted 
> files and is thus expected to improve the performance of these scans.
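
To make the live/historical split described above concrete, below is a rough, self-contained Java sketch of the routing idea. The DualFileSplitSketch class, ToyCell record, and split method are made-up names for illustration and are not the actual DualFileWriter code; the sketch only captures the rule that the newest version of each row/column goes to the live-version file while older versions go to the historical-version file, and it deliberately ignores delete markers and all other real-world concerns.
{code:java}
// Toy, dependency-free illustration of splitting compacted cells into a
// "live" set (newest version per row/column) and a "historical" set
// (all older versions). Hypothetical names; not the HBase implementation.
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class DualFileSplitSketch {

    /** A toy stand-in for an HBase Cell: row, column, timestamp, value. */
    record ToyCell(String row, String column, long ts, String value) {}

    /** Holds the two output sets that would back the two files. */
    static class SplitResult {
        final List<ToyCell> live = new ArrayList<>();        // live-version file
        final List<ToyCell> historical = new ArrayList<>();  // historical-version file
    }

    /**
     * Input is assumed to be in HFile order: grouped by row and column with
     * the newest timestamp first. The first cell seen for each row/column is
     * routed to the live set; every later (older) version is historical.
     */
    static SplitResult split(List<ToyCell> compactedCells) {
        SplitResult out = new SplitResult();
        Set<String> seenColumns = new HashSet<>();
        for (ToyCell c : compactedCells) {
            String key = c.row() + "/" + c.column();
            if (seenColumns.add(key)) {
                out.live.add(c);          // newest version of this row/column
            } else {
                out.historical.add(c);    // an older version
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<ToyCell> cells = List.of(
            new ToyCell("row1", "info0:c0", 30, "v3"),   // newest -> live
            new ToyCell("row1", "info0:c0", 20, "v2"),   // older  -> historical
            new ToyCell("row1", "info0:c0", 10, "v1"),   // older  -> historical
            new ToyCell("row2", "info0:c0", 40, "v1"));  // newest -> live
        SplitResult r = split(cells);
        System.out.println("live:       " + r.live);
        System.out.println("historical: " + r.historical);
    }
} {code}
A scan that asks only for the latest versions would then open just the live-version file, which is what makes the scan times in the tests above drop once the feature is enabled.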



