[ https://issues.apache.org/jira/browse/HBASE-3480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12987418#action_12987418 ]

ryan rawson commented on HBASE-3480:
------------------------------------

I changed the code to use LZF pure java compression from this library:  
https://github.com/ning/compress
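For reference, a round trip through that library looks roughly like this (a
minimal sketch, not the actual patch; the fake 1 MB payload and the printed
line, which mirrors the 'Serialized size' log format the pipeline below greps
for, are mine):

import java.io.IOException;

import com.ning.compress.lzf.LZFDecoder;
import com.ning.compress.lzf.LZFEncoder;

public class LzfRoundTrip {
  public static void main(String[] args) throws IOException {
    // Stand-in for the bytes produced by Result.write*; 1 MB of zeroes here.
    byte[] serialized = new byte[1 << 20];

    long start = System.nanoTime();
    byte[] compressed = LZFEncoder.encode(serialized, serialized.length);
    long elapsed = System.nanoTime() - start;

    // Same shape as the log line the measurements below are grepped from.
    System.out.println("Serialized size: " + compressed.length
        + " in " + elapsed + " ns compressed: true");

    // Client side: decode back to the original bytes.
    byte[] restored = LZFDecoder.decode(compressed);
    assert restored.length == serialized.length;
  }
}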

The average result size is smaller now:
grep 'Serialized size' * | perl -ne '/Serialized size: (\d+?) in (\d+?) ns compressed: (true|false)/; print $1, " ", $2, " ", $3, "\n" if $1 > 10000;' | cut -f1 -d' ' | perl -ne '$sum += $_; $count++; END {print $sum/$count, "\n"}'
277735.361445783

But the times still aren't so great:
1775773 106297860 true
1620568 68043741 true
1334129 98508585 true
1408999 78860459 true
1264817 60595079 true
622714 28482354 true
511205 23480742 true

The 'true' means the response was compressed.  The first column is the 
response size from HRS -> client, in bytes; the second column is the time it 
took to serialize, in nanoseconds, including both the Result.write* code and 
the compression.

We may be better off with a C-based compression algorithm, which would be 
feasible via this mechanism (rough buffer flow sketched after the list):
- we will only compress responses whose size we already know, and only if the 
size > $THRESHOLD
- we can serialize the Result/Result[] into a DirectByteBuffer
- we can then compress that DirectByteBuffer into a new one
- we can then use NIO to reply directly from this DBB.
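Roughly like this, assuming a hypothetical JNI binding nativeCompress() to a C
LZF codec (nothing here is real code from a patch; it just shows how the
direct buffers would move):

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

public class DirectReplySketch {
  static final int THRESHOLD = 10000;  // only bother compressing large replies

  // Hypothetical JNI hook into a C compressor; compresses src into dst and
  // returns the compressed length.
  static native int nativeCompress(ByteBuffer src, ByteBuffer dst);

  static void reply(ByteBuffer serialized, SocketChannel channel)
      throws IOException {
    ByteBuffer out = serialized;
    if (serialized.remaining() > THRESHOLD) {
      // A little headroom in case the data is incompressible.
      ByteBuffer compressed = ByteBuffer.allocateDirect(
          serialized.remaining() + serialized.remaining() / 16 + 64);
      compressed.limit(nativeCompress(serialized, compressed));
      out = compressed;
    }
    // NIO writes straight out of the direct buffer, no on-heap copy.
    while (out.hasRemaining()) {
      channel.write(out);
    }
  }
}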

I think we could dig up some benchmarks for the Java vs. C implementations of 
LZF and figure out whether this would be worthwhile; a trivial starting point 
for the Java side follows.
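Something like this would give us a pure-java number to compare against
published C figures (assumes the ning library above; the payload here is
synthetic, so real Result bytes would compress differently):

import java.io.IOException;
import java.util.Arrays;

import com.ning.compress.lzf.LZFEncoder;

public class LzfMicroBench {
  public static void main(String[] args) throws IOException {
    byte[] input = new byte[2 * 1024 * 1024];  // ~2 MB, like a big reply
    Arrays.fill(input, (byte) 'x');

    // Warm up the JIT before timing anything.
    for (int i = 0; i < 50; i++) {
      LZFEncoder.encode(input, input.length);
    }

    int reps = 100;
    long start = System.nanoTime();
    for (int i = 0; i < reps; i++) {
      LZFEncoder.encode(input, input.length);
    }
    System.out.println("avg ns per 2 MB encode: "
        + (System.nanoTime() - start) / reps);
  }
}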

> Reduce the size of Result serialization
> ---------------------------------------
>
>                 Key: HBASE-3480
>                 URL: https://issues.apache.org/jira/browse/HBASE-3480
>             Project: HBase
>          Issue Type: Bug
>    Affects Versions: 0.90.0
>            Reporter: ryan rawson
>         Attachments: HBASE-3480.txt
>
>
> When faced with a gigabit ethernet network connection, things are actually 
> pretty slow.  For example, take a 2 MB reply at a 120 MB/sec line rate: that 
> works out to about 16 ms to transfer the data across a GigE link.  
> This is a pretty significant amount of time.
> So this JIRA is about reducing the size of the Result[] serialization.  By 
> exploiting rowkey, family, and qualifier duplication, I created a simple 
> encoding scheme that uses a dictionary instead of literal strings.  
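> A minimal sketch of the idea (not the attached patch, just an illustration 
> of writing each repeated component as a small integer index after its first 
> literal appearance):
>
> import java.io.DataOutputStream;
> import java.io.IOException;
> import java.util.HashMap;
> import java.util.Map;
>
> public class DictionaryWriter {
>   private final Map<String, Integer> dict = new HashMap<String, Integer>();
>
>   // Write one rowkey/family/qualifier: the first sighting emits -1 plus the
>   // literal bytes and assigns the next index; repeats emit only the index.
>   void write(DataOutputStream out, byte[] component) throws IOException {
>     String key = new String(component, "ISO-8859-1");  // byte-preserving key
>     Integer idx = dict.get(key);
>     if (idx == null) {
>       dict.put(key, dict.size());
>       out.writeInt(-1);
>       out.writeInt(component.length);
>       out.write(component);
>     } else {
>       out.writeInt(idx);
>     }
>   }
> }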
> In my testing, I am seeing some success with the sizes.  The average 
> serialized size is about half of what it was before, but the time to 
> serialize on the regionserver side is way up, by a factor of 10x.  That 
> might be due to the simplistic first implementation, however.
> Here is the post-change size:
> grep 'Serialized size' * | perl -ne '/Serialized size: (\d+?) in (\d+?) ns/; print $1, " ", $2, "\n" if $1 > 10000;' | cut -f1 -d' ' | perl -ne '$sum += $_; $count++; END {print $sum/$count, "\n"}'
> 377047.1125
> Here is the pre-change size:
> grep 'Serialized size' * | perl -ne '/Serialized size: (\d+?) in (\d+?) ns/; print $1, " ", $2, "\n" if $1 > 10000;' | cut -f1 -d' ' | perl -ne '$sum += $_; $count++; END {print $sum/$count, "\n"}'
> 601078.505882353
> That brings the average down to about 63% of the previous size, a reduction 
> of roughly 37%.
> But times are not so good.  Here are some samples from the old 
> implementation, as (size in bytes) (time in ns):
> 3874599 10685836
> 5582725 11525888
> So that is about 11 ms to serialize 3.9-5.6 MB of data.
> In the new implementation:
> 1898788 118504672
> 1630058 91133003
> This is 91-118 ms for serialized sizes of 1.6-1.9 MB.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
