[ https://issues.apache.org/jira/browse/HBASE-5074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13223567#comment-13223567 ]
Mikhail Bautin commented on HBASE-5074:
---------------------------------------

@Dhruba: could you please rerun the failed tests locally, as well as check the test reports?

org.apache.hadoop.hbase.coprocessor.TestMasterObserver
org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat
org.apache.hadoop.hbase.mapred.TestTableMapReduce
org.apache.hadoop.hbase.mapreduce.TestImportTsv

> support checksums in HBase block cache
> --------------------------------------
>
>                 Key: HBASE-5074
>                 URL: https://issues.apache.org/jira/browse/HBASE-5074
>             Project: HBase
>          Issue Type: Improvement
>          Components: regionserver
>            Reporter: dhruba borthakur
>            Assignee: dhruba borthakur
>             Fix For: 0.94.0
>
>         Attachments: D1521.1.patch, D1521.1.patch, D1521.10.patch,
> D1521.10.patch, D1521.10.patch, D1521.10.patch, D1521.10.patch,
> D1521.11.patch, D1521.11.patch, D1521.12.patch, D1521.12.patch,
> D1521.13.patch, D1521.13.patch, D1521.2.patch, D1521.2.patch, D1521.3.patch,
> D1521.3.patch, D1521.4.patch, D1521.4.patch, D1521.5.patch, D1521.5.patch,
> D1521.6.patch, D1521.6.patch, D1521.7.patch, D1521.7.patch, D1521.8.patch,
> D1521.8.patch, D1521.9.patch, D1521.9.patch
>
> The current implementation of HDFS stores the data in one block file and the
> metadata (checksum) in another block file. This means that every read into the
> HBase block cache actually consumes two disk iops: one to the data file and
> one to the checksum file. This is a major problem for scaling HBase, because
> HBase is usually bottlenecked on the number of random disk iops that the
> storage hardware offers.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators:
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira
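
The idea behind the improvement described above — keeping the checksum inline with the data so a single read fetches both — can be illustrated with a minimal sketch. This is not the actual HBASE-5074 patch; the class and method names here are hypothetical, and it uses plain CRC32 from the JDK purely to show why an inline checksum eliminates the second iop against a separate checksum file:

```java
import java.nio.ByteBuffer;
import java.util.zip.CRC32;

// Hypothetical sketch: a block that carries its own CRC32 checksum inline,
// so one disk read returns both the payload and the bytes needed to verify it.
public class InlineChecksumSketch {

    // Append a 4-byte CRC32 of `data`, producing a single self-verifying block.
    static byte[] writeBlock(byte[] data) {
        CRC32 crc = new CRC32();
        crc.update(data, 0, data.length);
        ByteBuffer block = ByteBuffer.allocate(data.length + 4);
        block.put(data);
        block.putInt((int) crc.getValue());
        return block.array();
    }

    // Verify using only the block itself -- no second read to a checksum file.
    static boolean verifyBlock(byte[] block) {
        int dataLen = block.length - 4;
        CRC32 crc = new CRC32();
        crc.update(block, 0, dataLen);
        int stored = ByteBuffer.wrap(block, dataLen, 4).getInt();
        return (int) crc.getValue() == stored;
    }

    public static void main(String[] args) {
        byte[] block = writeBlock("hbase block payload".getBytes());
        System.out.println(verifyBlock(block));  // intact block verifies
        block[0] ^= 0x1;                         // flip one payload bit
        System.out.println(verifyBlock(block));  // corruption is detected
    }
}
```

In the real feature, verification like this happens in the region server when a block is read into the block cache, letting HBase open HDFS streams with checksum verification disabled and cut the random-read iop count roughly in half.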