[ https://issues.apache.org/jira/browse/HADOOP-15370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16699715#comment-16699715 ]

Sean Mackrory commented on HADOOP-15370:
----------------------------------------

+1 on this patch. But I am also seeing a pretty scary number of (presumably 
unrelated) failures that we really need to get under control. I ran all tests 
in parallel with 8 threads, both with and without this patch, HADOOP-15947, and 
HADOOP-15798. I tried with S3Guard disabled, with -Ds3guard, with -Ds3guard 
-Ddynamo, and finally with -Ds3guard -Ddynamo -Dauth (a sketch of the 
invocations follows the stack trace below). The bouncycastle issue always 
shows up regardless, so I'm fine to ignore that until Steve's fix goes in. I 
hit HADOOP-14927 with -Ds3guard against the local metadata store. And I hit 
the following issue once with S3Guard disabled, but I haven't been able to 
reproduce it:

{code}
[ERROR] Tests run: 6, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 172.151 s <<< FAILURE! - in org.apache.hadoop.fs.contract.s3a.ITestS3AContractDistCp
[ERROR] testUpdateDeepDirectoryStructureToRemote(org.apache.hadoop.fs.contract.s3a.ITestS3AContractDistCp)  Time elapsed: 24.869 s  <<< ERROR!
java.io.FileNotFoundException: Expected file: not found s3a://mackrory/fork-0002/test/ITestS3AContractDistCp/testUpdateDeepDirectoryStructureToRemote/remote/DELAY_LISTING_ME/outputDir/inputDir/file1 in s3a://mackrory/fork-0002/test/ITestS3AContractDistCp/testUpdateDeepDirectoryStructureToRemote/remote/DELAY_LISTING_ME/outputDir/inputDir
        at org.apache.hadoop.fs.contract.ContractTestUtils.verifyPathExists(ContractTestUtils.java:940)
        at org.apache.hadoop.fs.contract.ContractTestUtils.assertPathExists(ContractTestUtils.java:918)
        at org.apache.hadoop.fs.contract.ContractTestUtils.assertIsFile(ContractTestUtils.java:826)
        at org.apache.hadoop.fs.contract.ContractTestUtils.verifyFileContents(ContractTestUtils.java:235)
        at org.apache.hadoop.tools.contract.AbstractContractDistCpTest.distCpDeepDirectoryStructure(AbstractContractDistCpTest.java:499)
        at org.apache.hadoop.tools.contract.AbstractContractDistCpTest.testUpdateDeepDirectoryStructureToRemote(AbstractContractDistCpTest.java:223)
        ... 16 omitted by me
Caused by: java.io.FileNotFoundException: No such file or directory: s3a://mackrory/fork-0002/test/ITestS3AContractDistCp/testUpdateDeepDirectoryStructureToRemote/remote/DELAY_LISTING_ME/outputDir/inputDir/file1
        at org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:2280)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:2174)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:2112)
        at org.apache.hadoop.fs.contract.ContractTestUtils.verifyPathExists(ContractTestUtils.java:934)
        ... 21 more
{code}
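
For reference, the parallel runs mentioned above were along these lines, run 
from hadoop-tools/hadoop-aws. The -Dparallel-tests and -DtestsThreadCount 
options are the standard hadoop-aws integration-test flags; the exact commands 
below are my reconstruction, not a transcript of what I typed:

{code}
# S3Guard disabled
mvn clean verify -Dparallel-tests -DtestsThreadCount=8
# S3Guard with the local metadata store
mvn clean verify -Dparallel-tests -DtestsThreadCount=8 -Ds3guard
# S3Guard with DynamoDB
mvn clean verify -Dparallel-tests -DtestsThreadCount=8 -Ds3guard -Ddynamo
# S3Guard with DynamoDB in authoritative mode
mvn clean verify -Dparallel-tests -DtestsThreadCount=8 -Ds3guard -Ddynamo -Dauth
{code}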

All the really scary results are enumerated here: 
https://gist.github.com/mackrorysd/81eb96e91af1d05f59db9871da60b178

That's just too much variation for us to commit with confidence. We need to 
figure out some central way to track these failures and make a concerted 
effort to get the failure count down.

> S3A log message on rm s3a://bucket/ not intuitive
> -------------------------------------------------
>
>                 Key: HADOOP-15370
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15370
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.1.0
>            Reporter: Steve Loughran
>            Assignee: Gabor Bota
>            Priority: Trivial
>         Attachments: HADOOP-15370.001.patch
>
>
> When you try to delete the root of a bucket from the command line, e.g. {{hadoop 
> fs -rm -r -skipTrash s3a://hwdev-steve-new/}}, the output isn't that useful:
> {code}
> 2018-04-06 16:35:23,048 [main] INFO  s3a.S3AFileSystem 
> (S3AFileSystem.java:rejectRootDirectoryDelete(1837)) - s3a delete the 
> hwdev-steve-new root directory of true
> rm: `s3a://hwdev-steve-new/': Input/output error
> 2018-04-06 16:35:23,050 [pool-2-thread-1] DEBUG s3a.S3AFileSystem
> {code}
> The single log message doesn't parse as a sentence, and the error message 
> raised is lost by the FS -rm CLI command (why?).
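
As an aside (not part of the JIRA text quoted above): the INFO line comes from 
S3AFileSystem.rejectRootDirectoryDelete, and the patch reworks its wording. The 
standalone class, signature, and message below are purely illustrative 
assumptions about the kind of readable message we want, not the attached patch:

{code}
// Hypothetical sketch only: a root-delete rejection message that reads as a
// sentence, unlike "s3a delete the <bucket> root directory of true".
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class RootDeleteMessageSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(RootDeleteMessageSketch.class);

  /**
   * Log a human-readable explanation when a delete of the bucket root
   * is rejected.
   *
   * @param bucket    bucket name, e.g. "hwdev-steve-new"
   * @param recursive whether -r was passed to the delete
   * @return false, signalling that nothing was deleted
   */
  static boolean rejectRootDirectoryDelete(String bucket, boolean recursive) {
    LOG.info("S3A: refusing to delete the root directory of bucket {} "
        + "(recursive={}): deleting a bucket root is not supported",
        bucket, recursive);
    return false;
  }
}
{code}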


