Increase the retry buffer size in jets3t.properties, and maybe up the number
of retries while you're at it.  If there is no template file included in
Hadoop's conf dir, you can find one on the JetS3t web site.  Make sure it's
for the same JetS3t version that your copy of Hadoop is using.
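For reference, here's roughly what those two settings look like.  The
property names below are from memory, so double-check them against the
template for the JetS3t version your Hadoop ships with:

    # jets3t.properties -- put it in Hadoop's conf dir so it ends up on the classpath.

    # Bytes of the upload stream buffered in memory so a failed transmission
    # can be retried; the default is small (around 128 KB), so raise it well
    # past the point where your uploads have been dying.
    s3service.stream-retry-buffer-size=4194304

    # Maximum number of times a failed request is retried (default is 5).
    httpclient.retry-max=10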

On Mon, Sep 1, 2008 at 1:32 PM, Ryan LeCompte <[EMAIL PROTECTED]> wrote:

> Hello,
>
> I'm trying to upload a fairly large file (18GB or so) to my AWS S3
> account via bin/hadoop fs -put ... s3://...
>
> It copies for a good 15 or 20 minutes, and then eventually errors out
> with a failed retry attempt (saying that it can't retry since it has
> already written a certain number of bytes, etc.; sorry, I don't have the
> original error message at the moment). Has anyone experienced anything
> similar? Can anyone suggest a workaround or a way to specify retries?
> Should I use another tool for uploading large files to s3?
>
> Thanks,
> Ryan
>
