The following error is thrown when running distcp to copy data from HDFS (Hadoop 15.5) to S3 storage. The problem started appearing after we applied a couple of bug fixes to Hadoop 15.5 that were resolved in later versions.
Any thoughts would be greatly appreciated.
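For reference, the copy is being invoked roughly as in the sketch below; the namenode host, bucket name and credential placeholders are illustrative, not the actual values used:

   # hypothetical invocation; substitute the real namenode, bucket and AWS keys
   bin/hadoop distcp \
       hdfs://namenode:9000/user/root/ImplicitFeedback/linkdb-test \
       s3://AWS_ACCESS_KEY_ID:AWS_SECRET_KEY@my-bucket/linkdb-test

   # alternatively, the credentials can be set in hadoop-site.xml via
   # fs.s3.awsAccessKeyId / fs.s3.awsSecretAccessKey, and the destination
   # URI then shortens to s3://my-bucket/linkdb-test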

With failures, global counters are inaccurate; consider running with -i
Copy failed: org.apache.hadoop.fs.s3.S3Exception: org.jets3t.service.S3ServiceException: S3 GET failed. XML Error Message: <?xml version="1.0" encoding="UTF-8"?><Error><Code>NoSuchKey</Code><Message>The specified key does not exist.</Message><Key>/user/root/ImplicitFeedback/linkdb-test</Key><RequestId>1249D2146A4A104E</RequestId><HostId>....</HostId></Error>
       at org.apache.hadoop.fs.s3.Jets3tFileSystemStore.get(Jets3tFileSystemStore.java:199)
       at org.apache.hadoop.fs.s3.Jets3tFileSystemStore.inodeExists(Jets3tFileSystemStore.java:169)
       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
       at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
       at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
       at java.lang.reflect.Method.invoke(Method.java:597)
       at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
       at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
       at $Proxy1.inodeExists(Unknown Source)
       at org.apache.hadoop.fs.s3.S3FileSystem.exists(S3FileSystem.java:127)
       at org.apache.hadoop.util.CopyFiles.setup(CopyFiles.java:675)
       at org.apache.hadoop.util.CopyFiles.copy(CopyFiles.java:475)
       at org.apache.hadoop.util.CopyFiles.run(CopyFiles.java:550)
       at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
       at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
       at org.apache.hadoop.util.CopyFiles.main(CopyFiles.java:563)

Regards & Thanks,
Ilayaraja
