[ https://issues.apache.org/jira/browse/HDFS-3107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150789#comment-14150789 ]
dhruba borthakur edited comment on HDFS-3107 at 9/27/14 9:56 PM:
-----------------------------------------------------------------

Hi Konstantin, is it possible to wait for a few more days before committing? From the comments, it feels that Aaron might want to do a detailed review of the design, especially since the design doc was added only 4 days ago and the 'truncate' feature is not trivial. Also, feel free to ignore my comment in case you feel strongly about committing it today.

was (Author: dhruba):
Hi Konstantin, is it possible to wait for a few more days before committing? From the comments, it feels that Aaron might want to do a detailed review of the design, especially since the design doc was added only 4 days ago and the 'truncate' feature is not trivial.

> HDFS truncate
> -------------
>
>                 Key: HDFS-3107
>                 URL: https://issues.apache.org/jira/browse/HDFS-3107
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>          Components: datanode, namenode
>            Reporter: Lei Chang
>            Assignee: Plamen Jeliazkov
>         Attachments: HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, HDFS_truncate.pdf, HDFS_truncate_semantics_Mar15.pdf, HDFS_truncate_semantics_Mar21.pdf, editsStored
>
>   Original Estimate: 1,344h
>  Remaining Estimate: 1,344h
>
> Systems with transaction support often need to undo changes made to the underlying storage when a transaction is aborted. Currently HDFS does not support truncate (a standard POSIX operation), which is the reverse operation of append; this forces upper-layer applications into ugly workarounds (such as keeping track of the discarded byte range per file in a separate metadata store and periodically running a vacuum process to rewrite compacted files) to overcome this limitation of HDFS.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
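To make the requested semantics concrete, here is a minimal sketch of the POSIX truncate operation the description refers to, run against a plain local file via Python's `os.truncate`. This is not HDFS code; the scratch file and the transaction framing are illustrative assumptions only.

```python
import os
import tempfile

# Local-filesystem sketch of the POSIX truncate semantics the issue asks
# HDFS to support: truncate is the inverse of append, discarding all bytes
# past a chosen length -- e.g. rolling back an aborted transaction's writes.
# (Hypothetical scratch file; nothing here touches HDFS itself.)
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"committed")   # 9 bytes from a committed transaction
    f.write(b"+aborted")    # 8 more bytes from a transaction that aborts

os.truncate(path, 9)        # undo the aborted append in place

with open(path, "rb") as f:
    data = f.read()         # only the committed prefix remains
os.remove(path)
```

With such an operation available in HDFS, the workarounds described above (tracking discarded byte ranges in a side metadata store, plus a periodic vacuum/rewrite pass) would collapse into a single in-place metadata update.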