[ https://issues.apache.org/jira/browse/HBASE-3514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12998810#comment-12998810 ]

Matteo Bertozzi commented on HBASE-3514:
----------------------------------------

@ryan
The errors are caused by a block size greater than 1.5GB.
I don't know if this patch is a good idea with blocks this large.
Maybe a BufferedOutputStream would be better (rough sketch below).
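
Roughly what I have in mind (just a sketch; the field and method names are illustrative, not the actual HFile.Writer ones): write through a fixed-size BufferedOutputStream in front of the compressing stream, so memory stays bounded even for very large blocks instead of holding the whole block in a ByteArrayOutputStream.

import java.io.BufferedOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.OutputStream;

class BufferedBlockSketch {
  private DataOutputStream blockOut;

  // compressingStream would be the compressor wrapped around the FSDataOutputStream
  void startBlock(OutputStream compressingStream) {
    blockOut = new DataOutputStream(
        new BufferedOutputStream(compressingStream, 64 * 1024));
  }

  void append(byte[] key, byte[] value) throws IOException {
    blockOut.writeInt(key.length);
    blockOut.writeInt(value.length);
    blockOut.write(key);
    blockOut.write(value);
  }

  void finishBlock() throws IOException {
    blockOut.flush();  // push any buffered bytes into the compressing stream
  }
}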

> Speedup HFile.Writer append
> ---------------------------
>
>                 Key: HBASE-3514
>                 URL: https://issues.apache.org/jira/browse/HBASE-3514
>             Project: HBase
>          Issue Type: Improvement
>          Components: io
>    Affects Versions: 0.90.0
>            Reporter: Matteo Bertozzi
>            Priority: Minor
>         Attachments: HBASE-3514-append-0.90.patch, HBASE-3514-append.patch, 
> HBASE-3514-metaBlock-bsearch.patch
>
>
> Remove the double write when the block cache is specified by using only the 
> ByteArrayDataStream; the baos is flushed through the compress stream on 
> finishBlock (see the sketch after the benchmark numbers below).
> On my machines, HFilePerformanceEvaluation SequentialWriteBenchmark drops 
> from about 4000ms to about 2500ms.
> Before the patch:
> Running SequentialWriteBenchmark for 1000000 rows took 4247ms.
> Running SequentialWriteBenchmark for 1000000 rows took 4512ms.
> Running SequentialWriteBenchmark for 1000000 rows took 4498ms.
> With the patch:
> Running SequentialWriteBenchmark for 1000000 rows took 2697ms.
> Running SequentialWriteBenchmark for 1000000 rows took 2770ms.
> Running SequentialWriteBenchmark for 1000000 rows took 2721ms.
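
For reference, a simplified sketch of the single-write idea in the patch description above (the names here are illustrative, not the actual patch code): the block is appended once into the baos, and on finishBlock the buffered bytes are pushed through the compressing stream in one go, so nothing is written twice.

import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.OutputStream;

class SingleWriteBlockSketch {
  private ByteArrayOutputStream baos;
  private DataOutputStream baosDos;

  void startBlock() {
    baos = new ByteArrayOutputStream();
    baosDos = new DataOutputStream(baos);
  }

  void append(byte[] key, byte[] value) throws IOException {
    baosDos.writeInt(key.length);
    baosDos.writeInt(value.length);
    baosDos.write(key);
    baosDos.write(value);
  }

  // compressingStream is the compressor wrapped around the FSDataOutputStream
  byte[] finishBlock(OutputStream compressingStream) throws IOException {
    baosDos.flush();
    byte[] uncompressed = baos.toByteArray();
    compressingStream.write(uncompressed);  // single pass over the block data
    compressingStream.flush();
    return uncompressed;                    // the same bytes can feed the block cache
  }
}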

-- 
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
