[
https://issues.apache.org/jira/browse/HADOOP-997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12474756
]
[EMAIL PROTECTED] commented on HADOOP-997:
------------------------------------------
+1
I ran the previously-failing test from above. It now runs to completion,
successfully copying gigabytes into S3.
On the patch, I see you've added the one thing I was going to suggest after
trying v2 yesterday and tripping over 'no such file or directory' myself:
outputting the problematic file path when emitting the 'No such file or
directory' message. Good stuff.
> Implement S3 retry mechanism for failed block transfers
> -------------------------------------------------------
>
> Key: HADOOP-997
> URL: https://issues.apache.org/jira/browse/HADOOP-997
> Project: Hadoop
> Issue Type: Improvement
> Components: fs
> Affects Versions: 0.11.0
> Reporter: Tom White
> Assigned To: Tom White
> Fix For: 0.12.0
>
> Attachments: HADOOP-997-v2.patch, HADOOP-997-v3.patch,
> HADOOP-997.patch
>
>
> HADOOP-882 improves S3FileSystem so that when certain communications problems
> with S3 occur the operation is retried. However, the retry mechanism cannot
> handle a block transfer failure, since blocks may be very large and we don't
> want to buffer them in memory. This improvement is to write a wrapper (using
> java.lang.reflect.Proxy if possible - see discussion in HADOOP-882) that can
> retry block transfers.
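The proxy-based retry wrapper described above can be sketched roughly as follows. This is a minimal illustration of the java.lang.reflect.Proxy approach, not the actual HADOOP-997 patch; the BlockStore interface, the withRetries helper, and the retry count are hypothetical names invented for the example. The key point is that the wrapper retries the whole operation by re-invoking the delegate, rather than buffering block data in memory.

```java
import java.io.IOException;
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Proxy;

// Hypothetical interface standing in for the S3 block-transfer operations.
interface BlockStore {
    void storeBlock(String blockId) throws IOException;
}

public class RetryProxyDemo {
    // Wrap any interface implementation so each call is retried on IOException
    // up to maxRetries extra times. Non-IOException failures are rethrown as-is.
    @SuppressWarnings("unchecked")
    static <T> T withRetries(Class<T> iface, T delegate, int maxRetries) {
        InvocationHandler handler = (proxy, method, args) -> {
            IOException last = null;
            for (int attempt = 0; attempt <= maxRetries; attempt++) {
                try {
                    return method.invoke(delegate, args);
                } catch (InvocationTargetException e) {
                    if (e.getCause() instanceof IOException) {
                        last = (IOException) e.getCause(); // transient: retry
                    } else {
                        throw e.getCause(); // not retriable: rethrow
                    }
                }
            }
            throw last; // retries exhausted
        };
        return (T) Proxy.newProxyInstance(
                iface.getClassLoader(), new Class<?>[] { iface }, handler);
    }

    public static void main(String[] args) throws Exception {
        final int[] calls = { 0 };
        // Flaky delegate: fails twice with an IOException, then succeeds.
        BlockStore flaky = blockId -> {
            if (++calls[0] < 3) throw new IOException("transient S3 error");
        };
        BlockStore retrying = withRetries(BlockStore.class, flaky, 5);
        retrying.storeBlock("block-0001"); // succeeds on the third attempt
        System.out.println("attempts=" + calls[0]); // prints "attempts=3"
    }
}
```

Because the proxy re-invokes the delegate for each attempt, the delegate can re-read the block from its source (e.g. a local file) on every retry, which is what avoids buffering large blocks in memory.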
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.