[ https://issues.apache.org/jira/browse/HBASE-5074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13217927#comment-13217927 ]

stack commented on HBASE-5074:
------------------------------

Hey Ted.  Comment was not for you, it was for the patch author.

bq. The exception about org.apache.hadoop.util.PureJavaCrc32C not found should 
be normal - it was WARN.

The above makes no sense.  You have WARN and 'normal' in the same sentence.

If you look at the log, it says:

1. 2012-02-27 23:34:20,930 INFO org.apache.hadoop.hbase.util.ChecksumType: org.apache.hadoop.util.PureJavaCrc32 not available.
2. 2012-02-27 23:34:20,930 INFO org.apache.hadoop.hbase.util.ChecksumType: Checksum using java.util.zip.CRC32
3. It spews a thread dump saying AGAIN that org.apache.hadoop.util.PureJavaCrc32C is not available.

That is going to confuse users.
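
For what it is worth, the fallback those first two log lines describe can be reproduced with a reflection probe along the lines of the sketch below. This is only a hedged illustration (the class and method names here are invented, not the actual ChecksumType code); the point is that a missing class can be noted once at INFO without spewing a thread dump:

{code:java}
import java.util.zip.CRC32;
import java.util.zip.Checksum;

public class ChecksumFallbackSketch {
  /** Returns Hadoop's pure-Java CRC32 if it is on the classpath, else the JDK CRC32. */
  public static Checksum newCrc32() {
    try {
      // Probe for the optional Hadoop class by name.
      Class<?> clazz = Class.forName("org.apache.hadoop.util.PureJavaCrc32");
      return (Checksum) clazz.getDeclaredConstructor().newInstance();
    } catch (Exception e) {
      // Class not available: log the fallback quietly, no stack trace.
      System.out.println("INFO: org.apache.hadoop.util.PureJavaCrc32 not available.");
      System.out.println("INFO: Checksum using java.util.zip.CRC32");
      return new CRC32();
    }
  }
}
{code}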

bq. Metrics should be collected on the cluster to see the difference.

Go easy on telling folks what they should do.  It tends to piss them off.


> support checksums in HBase block cache
> --------------------------------------
>
>                 Key: HBASE-5074
>                 URL: https://issues.apache.org/jira/browse/HBASE-5074
>             Project: HBase
>          Issue Type: Improvement
>          Components: regionserver
>            Reporter: dhruba borthakur
>            Assignee: dhruba borthakur
>         Attachments: D1521.1.patch, D1521.1.patch, D1521.10.patch, 
> D1521.10.patch, D1521.10.patch, D1521.10.patch, D1521.10.patch, 
> D1521.2.patch, D1521.2.patch, D1521.3.patch, D1521.3.patch, D1521.4.patch, 
> D1521.4.patch, D1521.5.patch, D1521.5.patch, D1521.6.patch, D1521.6.patch, 
> D1521.7.patch, D1521.7.patch, D1521.8.patch, D1521.8.patch, D1521.9.patch, 
> D1521.9.patch
>
>
> The current implementation of HDFS stores the data in one block file and the 
> metadata (checksum) in another block file. This means that every read into the 
> HBase block cache actually consumes two disk iops, one to the data file and 
> one to the checksum file. This is a major problem for scaling HBase, because 
> HBase is usually bottlenecked on the number of random disk iops that the 
> storage hardware offers.
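
Below is a minimal sketch of the inline-checksum idea the description points at: store the checksum in the same block as the data it covers, so a verified read costs one seek into the data file instead of a second iop against a separate metadata file. The "payload followed by a 4-byte CRC32" layout and the class name are illustrative assumptions, not the actual HFile block format:

{code:java}
import java.nio.ByteBuffer;
import java.util.zip.CRC32;

public class InlineChecksumSketch {
  /** Appends a CRC32 of the payload so data and checksum travel in one file. */
  public static byte[] withChecksum(byte[] payload) {
    CRC32 crc = new CRC32();
    crc.update(payload, 0, payload.length);
    ByteBuffer buf = ByteBuffer.allocate(payload.length + 4);
    buf.put(payload).putInt((int) crc.getValue());
    return buf.array();
  }

  /** Recomputes the CRC over the payload and compares it to the stored trailer. */
  public static boolean verify(byte[] block) {
    int payloadLen = block.length - 4;
    CRC32 crc = new CRC32();
    crc.update(block, 0, payloadLen);
    int stored = ByteBuffer.wrap(block, payloadLen, 4).getInt();
    return (int) crc.getValue() == stored;
  }
}
{code}

A read that passes verify() never has to touch a separate checksum file, which is exactly the random-iop saving the description is after.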
