[ https://issues.apache.org/jira/browse/HADOOP-931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12467527 ]

Andrzej Bialecki commented on HADOOP-931:
------------------------------------------

While we're at it: Nutch users have often requested that DFS automatically 
close a partial file when the process writing it exits abruptly. Currently, 
partial files are deleted, which means they are lost even in cases where they 
would still have been usable.

> Make writes to S3FileSystem world visible only on completion
> ------------------------------------------------------------
>
>                 Key: HADOOP-931
>                 URL: https://issues.apache.org/jira/browse/HADOOP-931
>             Project: Hadoop
>          Issue Type: Bug
>          Components: fs
>            Reporter: Tom White
>
> Currently, files written to S3 are visible to other processes as soon as the 
> first block has been written. This differs from DFS, which only makes files 
> world visible after the stream writing to the file has been closed (see 
> FSNamesystem.completeFile).
> We could implement this by having a piece of inode metadata that indicates 
> the visibility of the file.
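
The inode-metadata idea in the quoted description could be sketched roughly as 
follows. This is a hypothetical illustration only: `VisibilityInodeTable` and 
its methods are invented names, not Hadoop APIs; `completeFile` merely mirrors 
the role of `FSNamesystem.completeFile` mentioned above.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of per-inode visibility metadata. A writer creates an
// invisible inode, appends blocks, and only on completeFile does the entry
// become visible to lookups from other processes.
class VisibilityInodeTable {
    private static final class Inode {
        boolean visible = false; // world-visible only after completeFile
        int blocks = 0;          // number of blocks written so far
    }

    private final Map<String, Inode> inodes = new HashMap<>();

    // Begin a write: the inode exists but is hidden from other readers.
    void create(String path) {
        inodes.put(path, new Inode());
    }

    // Record another written block; the file stays invisible throughout.
    void addBlock(String path) {
        inodes.get(path).blocks++;
    }

    // Analogous in spirit to FSNamesystem.completeFile: flip the flag so
    // the file becomes world-visible in a single step.
    void completeFile(String path) {
        inodes.get(path).visible = true;
    }

    // Readers only see inodes whose write has completed.
    boolean exists(String path) {
        Inode i = inodes.get(path);
        return i != null && i.visible;
    }
}
```

With this shape, a reader calling `exists` while blocks are still being 
written sees no file at all, which matches the DFS behaviour the issue asks 
for.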

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
