[ https://issues.apache.org/jira/browse/HDFS-2699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13171951#comment-13171951 ]
dhruba borthakur commented on HDFS-2699:
----------------------------------------

Thanks for your comments, Scott, Andrew, Todd and Allen.

Scott: most of our HBase production clusters have io.bytes.per.checksum set to 4096 (instead of 512).

Allen: one can put CRCs on a logging device, e.g. bookkeeper perhaps? But at the end of the day, each random io from an HDFS file will still consume two disk iops (one on the HDFS block storage and one from the logging device), is it not? Won't it be optimal to inline crc and data?

If we decide to implement inline crc, can we make HDFS support two different data formats and not do any automatic data format upgrade for existing data? Pre-existing data can remain in the older format while newly created files will have data in the new inline-data-and-crc format. What do people think about this idea?

> Store data and checksums together in block file
> -----------------------------------------------
>
>                 Key: HDFS-2699
>                 URL: https://issues.apache.org/jira/browse/HDFS-2699
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: dhruba borthakur
>            Assignee: dhruba borthakur
>
> The current implementation of HDFS stores the data in one block file and the
> metadata (checksum) in another block file. This means that every read from
> HDFS actually consumes two disk iops, one to the data file and one to the
> checksum file. This is a major problem for scaling HBase, because HBase is
> usually bottlenecked on the number of random disk iops that the
> storage-hardware offers.
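The proposed inline-data-and-crc layout can be sketched as follows: each io.bytes.per.checksum-sized chunk is stored as a 4-byte CRC32 immediately followed by the chunk bytes, so a random read fetches data and checksum in one disk iop instead of two. This is only a minimal illustration under assumed details; the class, method names, and exact on-disk layout are hypothetical and are not HDFS code.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.zip.CRC32;

// Hypothetical sketch of an inline-data-and-crc block layout:
// [4-byte CRC32][chunk bytes] repeated per io.bytes.per.checksum chunk.
public class InlineCrcSketch {
    static final int BYTES_PER_CHECKSUM = 4096; // matches the 4096 mentioned above

    // Serialize data into the inline format: each chunk is preceded by its CRC32.
    static byte[] writeInline(byte[] data) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        for (int off = 0; off < data.length; off += BYTES_PER_CHECKSUM) {
            int len = Math.min(BYTES_PER_CHECKSUM, data.length - off);
            CRC32 crc = new CRC32();
            crc.update(data, off, len);
            out.writeInt((int) crc.getValue()); // checksum stored inline, just before the chunk
            out.write(data, off, len);
        }
        return bos.toByteArray();
    }

    // Read the inline format back, verifying each chunk against its inline checksum.
    static byte[] readInline(byte[] blockFile) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(blockFile));
        ByteArrayOutputStream data = new ByteArrayOutputStream();
        byte[] chunk = new byte[BYTES_PER_CHECKSUM];
        while (in.available() > 0) {
            int stored = in.readInt();
            int len = Math.min(BYTES_PER_CHECKSUM, in.available());
            in.readFully(chunk, 0, len);
            CRC32 crc = new CRC32();
            crc.update(chunk, 0, len);
            if ((int) crc.getValue() != stored) {
                throw new IOException("checksum mismatch");
            }
            data.write(chunk, 0, len);
        }
        return data.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] payload = new byte[10000];
        for (int i = 0; i < payload.length; i++) payload[i] = (byte) (i % 251);
        byte[] block = writeInline(payload);
        byte[] back = readInline(block);
        System.out.println(java.util.Arrays.equals(payload, back)); // prints true
        // 3 chunks (4096 + 4096 + 1808 bytes), each with a 4-byte inline CRC
        System.out.println(block.length == payload.length + 3 * 4); // prints true
    }
}
```

With this layout, a random read seeks once and gets both the data and its checksum sequentially, whereas the current two-file scheme costs one seek into the block file and another into the metadata file.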