[ https://issues.apache.org/jira/browse/HBASE-3514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13001340#comment-13001340 ]
Matteo Bertozzi commented on HBASE-3514:
----------------------------------------

@ryan I haven't looked at the tests in depth yet, but can't we just replace MAX_VALUE with DEFAULT_BLOCKSIZE?

{code}
Index: src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java
===================================================================
--- src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java (revision 1076127)
+++ src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java (working copy)
@@ -547,7 +547,7 @@
       HColumnDescriptor.DEFAULT_COMPRESSION,
       HColumnDescriptor.DEFAULT_IN_MEMORY,
       HColumnDescriptor.DEFAULT_BLOCKCACHE,
-      Integer.MAX_VALUE, HColumnDescriptor.DEFAULT_TTL,
+      HColumnDescriptor.DEFAULT_BLOCKSIZE, HColumnDescriptor.DEFAULT_TTL,
       HColumnDescriptor.DEFAULT_BLOOMFILTER,
       HColumnDescriptor.DEFAULT_REPLICATION_SCOPE);
   desc.addFamily(hcd);
@@ -574,7 +574,7 @@
       HColumnDescriptor.DEFAULT_COMPRESSION,
       HColumnDescriptor.DEFAULT_IN_MEMORY,
       HColumnDescriptor.DEFAULT_BLOCKCACHE,
-      Integer.MAX_VALUE, HColumnDescriptor.DEFAULT_TTL,
+      HColumnDescriptor.DEFAULT_BLOCKSIZE, HColumnDescriptor.DEFAULT_TTL,
       HColumnDescriptor.DEFAULT_BLOOMFILTER,
       HColumnDescriptor.DEFAULT_REPLICATION_SCOPE);
   desc.addFamily(hcd);
{code}

> Speedup HFile.Writer append
> ---------------------------
>
>                 Key: HBASE-3514
>                 URL: https://issues.apache.org/jira/browse/HBASE-3514
>             Project: HBase
>          Issue Type: Improvement
>          Components: io
>    Affects Versions: 0.90.0
>            Reporter: Matteo Bertozzi
>            Priority: Minor
>         Attachments: HBASE-3514-append-0.90-v2.patch,
>                      HBASE-3514-append-0.90-v2b.patch, HBASE-3514-append-0.90-v3.patch,
>                      HBASE-3514-append-0.90.patch, HBASE-3514-append-trunk-v2.patch,
>                      HBASE-3514-append-trunk-v2b.patch, HBASE-3514-append-trunk-v3.patch,
>                      HBASE-3514-append.patch, HBASE-3514-metaBlock-bsearch.patch
>
> Remove double writes when block cache is specified, by using only the
> ByteArrayDataStream. baos is flushed with the compress stream on finishBlock.
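To illustrate why the diff matters: a minimal sketch (not HBase code; `BlockSizeSketch` and `countBlocks` are hypothetical names) of how the block size passed to the column descriptor drives block boundaries. Assuming `HColumnDescriptor.DEFAULT_BLOCKSIZE` is 64KB, using `Integer.MAX_VALUE` means the test table writes a single giant block and never exercises block-boundary code paths:

```java
// Sketch only: models a writer that finishes a block once the buffered
// bytes reach blockSize, to show the effect of the two constants.
public class BlockSizeSketch {

    // Count blocks emitted for `rows` rows of `rowSize` bytes each.
    static int countBlocks(int rows, int rowSize, int blockSize) {
        int blocks = 0, buffered = 0;
        for (int i = 0; i < rows; i++) {
            buffered += rowSize;
            if (buffered >= blockSize) { // block is full, finish it
                blocks++;
                buffered = 0;
            }
        }
        return blocks + (buffered > 0 ? 1 : 0); // trailing partial block
    }

    public static void main(String[] args) {
        int defaultBlockSize = 64 * 1024; // assumed DEFAULT_BLOCKSIZE value
        // 100k rows of 100 bytes: many blocks with the default size...
        System.out.println(countBlocks(100_000, 100, defaultBlockSize));
        // ...but exactly one block with Integer.MAX_VALUE.
        System.out.println(countBlocks(100_000, 100, Integer.MAX_VALUE));
    }
}
```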
> On my machines HFilePerformanceEvaluation SequentialWriteBenchmark goes from
> roughly 4400ms down to roughly 2700ms:
> Running SequentialWriteBenchmark for 1000000 rows took 4247ms.
> Running SequentialWriteBenchmark for 1000000 rows took 4512ms.
> Running SequentialWriteBenchmark for 1000000 rows took 4498ms.
> Running SequentialWriteBenchmark for 1000000 rows took 2697ms.
> Running SequentialWriteBenchmark for 1000000 rows took 2770ms.
> Running SequentialWriteBenchmark for 1000000 rows took 2721ms.

--
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira
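The optimization the issue describes can be sketched as follows. This is not the actual HFile.Writer code (`SingleBufferBlockWriter`, `append`, and `finishBlock` are illustrative names): instead of writing each key/value twice, once to the compressing stream and once to a byte buffer kept for the block cache, `append` writes only into the in-memory buffer, and `finishBlock` pushes the whole buffer through the compressor in one pass:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.util.zip.DeflaterOutputStream;

// Sketch of the single-buffer append technique: one write per KV into the
// buffer, one compression pass per block at finishBlock time.
public class SingleBufferBlockWriter {
    private final ByteArrayOutputStream baos = new ByteArrayOutputStream();
    private final OutputStream out; // backing file stream

    public SingleBufferBlockWriter(OutputStream out) {
        this.out = out;
    }

    // append: a single write, into the in-memory buffer only
    public void append(byte[] keyValue) {
        baos.write(keyValue, 0, keyValue.length);
    }

    // finishBlock: compress the buffered bytes in one pass; the uncompressed
    // contents remain available to hand to the block cache, so no second
    // write was needed during append
    public byte[] finishBlock() throws IOException {
        byte[] uncompressed = baos.toByteArray();
        DeflaterOutputStream compress = new DeflaterOutputStream(out);
        compress.write(uncompressed);
        compress.finish(); // flush compressor output to the file stream
        baos.reset();      // ready for the next block
        return uncompressed; // caller may cache this block
    }
}
```

The win measured above comes from halving the per-append work: the hot `append` path touches only the byte buffer, and the compressor sees each block once.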