[ https://issues.apache.org/jira/browse/HADOOP-865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12463394 ]
Tom White commented on HADOOP-865:
----------------------------------
Bryan,
The patch should be a simple fix for the problem. If you try
"hadoop dfs -rm filename" it should now work.
Note that -rmr doesn't work yet (I will create another patch for this).
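For anyone doing the same thing through the Java API rather than the shell, the
delete goes through FileSystem.delete() on the s3:// path. The snippet below is
only a rough sketch of that usage, not the contents of the patch; the bucket and
path are placeholders, not taken from this issue.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class S3Delete {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Placeholder bucket and path; S3 credentials come from the usual
        // fs.s3.awsAccessKeyId / fs.s3.awsSecretAccessKey settings.
        Path target = new Path("s3://mybucket/backups/targetfile");
        FileSystem fs = target.getFileSystem(conf);
        // Non-recursive delete, the programmatic counterpart of "hadoop dfs -rm".
        boolean deleted = fs.delete(target, false);
        System.out.println("deleted = " + deleted);
      }
    }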
Thanks,
Tom
> Files written to S3 but never closed can't be deleted
> -----------------------------------------------------
>
> Key: HADOOP-865
> URL: https://issues.apache.org/jira/browse/HADOOP-865
> Project: Hadoop
> Issue Type: Bug
> Components: fs
> Reporter: Bryan Pendleton
> Attachments: hadoop-865.patch
>
>
> I've been playing with the S3 integration. My first use of it is as a
> drop-in replacement for a backup job, streaming data offsite by piping the
> backup job's output to "hadoop dfs -put - targetfile".
> If enough errors occur posting to S3 (this happened easily last Thursday,
> during an S3 growth issue), the write can eventually fail. At that point,
> there are both blocks and a partial INode written into S3. Doing a
> "hadoop dfs -ls filename" shows the file, and it has a non-zero size.
> However, trying to "hadoop dfs -rm filename" on such a partially written
> file results in the response "rm: No such file or directory."
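For context, the streaming scenario Bryan describes maps to something like the
following when done through the Java API instead of piping into
"hadoop dfs -put -". This is a rough illustration only; the bucket and path are
placeholders, not taken from this issue.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IOUtils;

    public class StreamToS3 {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Path target = new Path("s3://mybucket/backups/targetfile"); // placeholder
        FileSystem fs = target.getFileSystem(conf);
        FSDataOutputStream out = fs.create(target);
        // Copy stdin to the S3 file; the final 'true' closes both streams.
        // If the write fails and close() never succeeds, blocks and a partial
        // INode can be left behind in S3, which is the state described above.
        IOUtils.copyBytes(System.in, out, conf, true);
      }
    }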