[
https://issues.apache.org/jira/browse/HADOOP-1134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12485051
]
Konstantin Shvachko commented on HADOOP-1134:
---------------------------------------------
Sorry for joining the discussion at such a late stage. There's been a lot of
progress here. I've got some fresh thoughts :-)
Currently the name-node does not know anything about .crc files. IMO we should
keep it that way or at least minimize the impact.
Accordingly, I would not support the idea of implementing the getChecksumAuthority()
method on the name-node: if we do, the name-node will act as a client to itself
and to its data-nodes.
Instead, I propose implementing a separate crcConverter: a client that works
much like the fsck client we used to have in the past. We can make it
MR-distributed if we want to speed things up.
The crcConverter (see the sketch after this list):
- reads a set of files and checks each block, determining which crc block is
valid, basically implementing Sameer's getChecksumAuthority() but on the
client;
- then asks each data-node to takeChecksumAuthority() over the block by either
copying its crc from another node or generating it locally;
- and then removes the .crc file, which will also lead to the removal of the
old crc blocks.
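A minimal sketch of that loop in Java follows. Every name in it
(DatanodeCrcProtocol, takeChecksumAuthority(), pickValidCrcSource(), and so
on) is a hypothetical placeholder taken from this discussion, not an existing
Hadoop interface:

{code}
import java.io.IOException;
import java.util.List;

/**
 * Sketch of the proposed crcConverter client. All names are hypothetical,
 * taken from this discussion rather than from any existing API.
 */
public abstract class CrcConverter {

  /** Hypothetical per-datanode RPC from the proposal. */
  public interface DatanodeCrcProtocol {
    /**
     * Copy the crc for blockId from crcSource, or regenerate it from local
     * block data when crcSource is null. A no-op if the block-level crc
     * already exists, which is what makes re-runs safe.
     */
    void takeChecksumAuthority(String blockId, String crcSource)
        throws IOException;
  }

  /**
   * Client-side equivalent of Sameer's getChecksumAuthority(): inspect the
   * .crc replicas and return the location of one that matches the block
   * data, or null if none is valid and the crc must be regenerated.
   */
  protected abstract String pickValidCrcSource(String blockId)
      throws IOException;

  protected abstract List<String> blocksOf(String path) throws IOException;

  protected abstract DatanodeCrcProtocol datanodeFor(String blockId)
      throws IOException;

  /** Remove the old .crc file; its blocks are then reclaimed as usual. */
  protected abstract void deleteCrcFile(String path) throws IOException;

  /** Convert one file. Safe to re-run if a previous run failed part-way. */
  public void convert(String path) throws IOException {
    for (String blockId : blocksOf(path)) {
      String crcSource = pickValidCrcSource(blockId);
      datanodeFor(blockId).takeChecksumAuthority(blockId, crcSource);
    }
    deleteCrcFile(path);
  }
}
{code}

An MR-distributed version would just run convert() from map tasks over a list
of file names.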
In case of failure the crcConverter simply restarts as if nothing had been
done before. For files that have already been converted, the converter will
not find their .crc files and will ask the data-nodes to takeChecksumAuthority()
over the corresponding blocks by generating the checksums locally. The
data-nodes will see that the corresponding block-level crcs have already been
generated and will do nothing.
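To make that no-op concrete, here is a rough sketch of the data-node side of
the same hypothetical call; again, the names are placeholders from this
thread, not real Hadoop code:

{code}
import java.io.File;
import java.io.IOException;

/**
 * Sketch of the data-node side of the hypothetical takeChecksumAuthority().
 * The early return is what makes converter restarts harmless.
 */
public abstract class BlockCrcUpgrade {

  /** Local file holding the block-level crc, once generated. */
  protected abstract File crcFileFor(String blockId);

  /** Copy the crc from crcSource, or compute it from local block data. */
  protected abstract void fetchOrGenerateCrc(String blockId, String crcSource)
      throws IOException;

  public void takeChecksumAuthority(String blockId, String crcSource)
      throws IOException {
    if (crcFileFor(blockId).exists()) {
      return; // already converted: do nothing, the call is idempotent
    }
    fetchOrGenerateCrc(blockId, crcSource);
  }
}
{code}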
With respect to the name-node: during crc conversion it should be started with
the -upgrade option, and the converter should wait until everything is
upgraded, replicated, and stabilized on the cluster. Then it can enter manual
safe mode and do the conversion.
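Since the ordering matters here, a small sketch of how a driver might sequence
it; the four hooks are placeholders that would map onto dfsadmin-style queries
on a real cluster:

{code}
/**
 * Sketch of the conversion driver's ordering. Assumes the name-node has
 * already been started with -upgrade; all four hooks are placeholders.
 */
public abstract class ConversionDriver {

  protected abstract boolean upgradeFinished() throws Exception;
  protected abstract boolean fullyReplicated() throws Exception;
  protected abstract void enterManualSafeMode() throws Exception;
  protected abstract void runCrcConverter() throws Exception;

  public void run() throws Exception {
    // Wait until everything is upgraded, replicated and stabilized.
    while (!upgradeFinished() || !fullyReplicated()) {
      Thread.sleep(10000L);
    }
    // Freeze the namespace for the duration of the conversion.
    enterManualSafeMode();
    runCrcConverter();
  }
}
{code}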
I am sure a lot of details are missing from my proposal, but it seems simpler
to me because it requires fewer changes on the name- and data-nodes.
> Block level CRCs in HDFS
> ------------------------
>
> Key: HADOOP-1134
> URL: https://issues.apache.org/jira/browse/HADOOP-1134
> Project: Hadoop
> Issue Type: New Feature
> Components: dfs
> Reporter: Raghu Angadi
> Assigned To: Raghu Angadi
>
> Currently CRCs are handled at the FileSystem level and are transparent to core
> HDFS. See the recent improvement HADOOP-928 (which can add checksums to a given
> filesystem) for more about it. Though this has served us well, there are a few
> disadvantages:
> 1) This doubles the namespace in HDFS (or other filesystem implementations). In
> many cases, it nearly doubles the number of blocks. Taking the namenode out of
> CRCs would nearly double namespace performance, both in terms of CPU and
> memory.
> 2) Since CRCs are transparent to HDFS, it cannot actively detect corrupted
> blocks. With block-level CRCs, the datanode can periodically verify the
> checksums and report corruptions to the namenode so that new replicas can be
> created.
> We propose to have CRCs maintained for all HDFS data in much the same way as
> in GFS. I will update the jira with detailed requirements and design. This
> will include the same guarantees provided by the current implementation and
> will include an upgrade of the current data.
>
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.