[ https://issues.apache.org/jira/browse/HDFS-1403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12913908#action_12913908 ]

sam rash commented on HDFS-1403:
--------------------------------

Can you elaborate?

Also, this truncate option will have to work on open files. I think
-list-corruptfiles only works on closed ones. We have to handle the missing
last block problem (the main reason I filed this).
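
(For context, a hedged aside: the closed-file enumeration behind the corrupt-file
listing is also exposed through the client API as FileSystem#listCorruptFileBlocks
on DistributedFileSystem. A minimal sketch of that enumeration follows; the class
name and root path are illustrative only, and it assumes fs.defaultFS points at an
HDFS cluster.)

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;

// Sketch: list files that currently have corrupt blocks, roughly the same
// report that fsck's corrupt-file listing produces. Requires an HDFS
// FileSystem (DistributedFileSystem); the base FileSystem rejects the call.
public class ListCorruptFiles {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    RemoteIterator<Path> it = fs.listCorruptFileBlocks(new Path("/"));
    while (it.hasNext()) {
      System.out.println("corrupt block(s) in: " + it.next());
    }
    fs.close();
  }
}
{code}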


> add -truncate option to fsck
> ----------------------------
>
>                 Key: HDFS-1403
>                 URL: https://issues.apache.org/jira/browse/HDFS-1403
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>          Components: hdfs client, name-node
>            Reporter: sam rash
>
> When running fsck, it would be useful to be able to tell HDFS to truncate any
> corrupt file to the last valid position in its last block. Then, when
> running hadoop fsck, an admin can clean up the filesystem.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
