[jira] [Commented] (HADOOP-15385) Many tests are failing in hadoop-distcp project in branch-2.8
[ https://issues.apache.org/jira/browse/HADOOP-15385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445334#comment-16445334 ] Takanobu Asanuma commented on HADOOP-15385:

Thanks for reporting it, [~shahrs87], and thanks for working on the release of 2.9.1, [~Sammi]. I looked into this issue a little. It seems MAPREDUCE-6909 broke the tests: they succeeded with {{9f9d554edfd83cbd2249c780124a75feebc52ef3}} and failed with {{71f49406f291038ef5772f216001c9e5abb14c8d}} in branch-2.8. I'd like to ping [~jlowe] and [~ajisakaa].

> Many tests are failing in hadoop-distcp project in branch-2.8
> -
>
> Key: HADOOP-15385
> URL: https://issues.apache.org/jira/browse/HADOOP-15385
> Project: Hadoop Common
> Issue Type: Bug
> Components: tools/distcp
> Affects Versions: 2.8.3
> Reporter: Rushabh S Shah
> Priority: Blocker
>
> Many tests are failing in hadoop-distcp project in branch-2.8
> Below are the failing tests.
> {noformat}
> Failed tests:
> TestDistCpViewFs.testUpdateGlobTargetMissingSingleLevel:326->checkResult:428 expected:<4> but was:<5>
> TestDistCpViewFs.testGlobTargetMissingMultiLevel:346->checkResult:428 expected:<4> but was:<5>
> TestDistCpViewFs.testGlobTargetMissingSingleLevel:306->checkResult:428 expected:<2> but was:<3>
> TestDistCpViewFs.testUpdateGlobTargetMissingMultiLevel:367->checkResult:428 expected:<6> but was:<8>
> TestIntegration.testUpdateGlobTargetMissingSingleLevel:431->checkResult:577 expected:<4> but was:<5>
> TestIntegration.testGlobTargetMissingMultiLevel:454->checkResult:577 expected:<4> but was:<5>
> TestIntegration.testGlobTargetMissingSingleLevel:408->checkResult:577 expected:<2> but was:<3>
> TestIntegration.testUpdateGlobTargetMissingMultiLevel:478->checkResult:577 expected:<6> but was:<8>
> TestIntegration.testUpdateGlobTargetMissingSingleLevel:431->checkResult:577 expected:<4> but was:<5>
> TestIntegration.testGlobTargetMissingMultiLevel:454->checkResult:577 expected:<4> but was:<5>
> TestIntegration.testGlobTargetMissingSingleLevel:408->checkResult:577 expected:<2> but was:<3>
> TestIntegration.testUpdateGlobTargetMissingMultiLevel:478->checkResult:577 expected:<6> but was:<8>
> TestIntegration.testUpdateGlobTargetMissingSingleLevel:431->checkResult:577 expected:<4> but was:<5>
> TestIntegration.testGlobTargetMissingMultiLevel:454->checkResult:577 expected:<4> but was:<5>
> TestIntegration.testGlobTargetMissingSingleLevel:408->checkResult:577 expected:<2> but was:<3>
> TestIntegration.testUpdateGlobTargetMissingMultiLevel:478->checkResult:577 expected:<6> but was:<8>
> Tests run: 258, Failures: 16, Errors: 0, Skipped: 0
> {noformat}
> {noformat}
> rushabhs$ pwd
> /Users/rushabhs/hadoop/apacheHadoop/hadoop/hadoop-tools/hadoop-distcp
> rushabhs$ git branch
>   branch-2
>   branch-2.7
> * branch-2.8
>   branch-2.9
>   branch-3.0
> rushabhs$ git log --oneline | head -n3
> c4ea1c8bb73 HADOOP-14970. MiniHadoopClusterManager doesn't respect lack of format option. Contributed by Erik Krogen
> 1548205a845 YARN-8147. TestClientRMService#testGetApplications sporadically fails. Contributed by Jason Lowe
> c01b425ba31 YARN-8120. JVM can crash with SIGSEGV when exiting due to custom leveldb logger. Contributed by Jason Lowe.
> {noformat}

--
This message was sent by Atlassian JIRA (v7.6.3#76005)
-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-11957) if an IOException error is thrown in DomainSocket.close we go into infinite loop.
[ https://issues.apache.org/jira/browse/HADOOP-11957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445298#comment-16445298 ] genericqa commented on HADOOP-11957:

(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 36s | Docker mode activated. |
|| Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| trunk Compile Tests ||
| +1 | mvninstall | 30m 9s | trunk passed |
| +1 | compile | 40m 33s | trunk passed |
| +1 | checkstyle | 1m 3s | trunk passed |
| +1 | mvnsite | 1m 50s | trunk passed |
| +1 | shadedclient | 14m 27s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 1m 35s | trunk passed |
| +1 | javadoc | 1m 7s | trunk passed |
|| Patch Compile Tests ||
| +1 | mvninstall | 1m 7s | the patch passed |
| +1 | compile | 31m 51s | the patch passed |
| +1 | javac | 31m 51s | the patch passed |
| -0 | checkstyle | 0m 51s | hadoop-common-project/hadoop-common: The patch generated 1 new + 90 unchanged - 0 fixed = 91 total (was 90) |
| +1 | mvnsite | 1m 21s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 10m 52s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 1m 39s | the patch passed |
| +1 | javadoc | 0m 54s | the patch passed |
|| Other Tests ||
| -1 | unit | 12m 38s | hadoop-common in the patch failed. |
| +1 | asflicense | 0m 35s | The patch does not generate ASF License warnings. |
| | | 152m 1s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.net.unix.TestDomainSocket |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HADOOP-11957 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12732085/HADOOP-11957.001.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 59bd3f446a53 3.13.0-137-generic #186-Ubuntu SMP Mon Dec 4 19:09:19 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / da5bcf5 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/14507/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt |
| unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/14507/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/14507/testReport/ |
| Max. process+thread count | 1467 (vs. ulimit of 1) |
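The issue title describes DomainSocket.close() going into an infinite loop when an IOException is thrown. A minimal sketch of that failure shape, and one way to bound it, follows; the names here are hypothetical illustrations, not the actual DomainSocket code:

```java
import java.io.IOException;

class CloseRetrySketch {
    @FunctionalInterface
    interface RawChannel { void closeOnce() throws IOException; }

    // Buggy shape (the pattern the issue title describes): retry close()
    // until it succeeds -- which is never, if closeOnce() always throws.
    //
    //   while (true) { try { ch.closeOnce(); return; } catch (IOException e) { } }

    // Safer shape: bound the retries and surface the last failure to the
    // caller instead of spinning forever.
    static void closeBounded(RawChannel ch, int maxAttempts) throws IOException {
        IOException last = null;
        for (int i = 0; i < maxAttempts; i++) {
            try { ch.closeOnce(); return; } catch (IOException e) { last = e; }
        }
        throw last != null ? last : new IOException("maxAttempts must be positive");
    }
}
```

The bounded version turns a hang into a visible exception, which is generally the preferable failure mode for cleanup paths.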
[jira] [Commented] (HADOOP-14388) Don't set the key password if there is a problem reading SSL configuration
[ https://issues.apache.org/jira/browse/HADOOP-14388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445275#comment-16445275 ] genericqa commented on HADOOP-14388:

(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 16s | Docker mode activated. |
|| Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|| trunk Compile Tests ||
| +1 | mvninstall | 25m 46s | trunk passed |
| +1 | compile | 27m 11s | trunk passed |
| +1 | checkstyle | 0m 52s | trunk passed |
| +1 | mvnsite | 1m 9s | trunk passed |
| +1 | shadedclient | 12m 18s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 1m 32s | trunk passed |
| +1 | javadoc | 0m 56s | trunk passed |
|| Patch Compile Tests ||
| +1 | mvninstall | 0m 45s | the patch passed |
| +1 | compile | 26m 8s | the patch passed |
| +1 | javac | 26m 8s | the patch passed |
| +1 | checkstyle | 0m 50s | the patch passed |
| +1 | mvnsite | 1m 7s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 10m 11s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 1m 38s | the patch passed |
| +1 | javadoc | 0m 56s | the patch passed |
|| Other Tests ||
| +1 | unit | 8m 12s | hadoop-common in the patch passed. |
| +1 | asflicense | 0m 38s | The patch does not generate ASF License warnings. |
| | | 120m 6s | |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HADOOP-14388 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12866612/HADOOP-14388.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux c13804639e32 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / da5bcf5 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/14508/testReport/ |
| Max. process+thread count | 1523 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/14508/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.
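The issue title states the intended behavior: don't set the key password if there is a problem reading the SSL configuration. A hedged sketch of that guard, using a hypothetical lookup helper and property name rather than the actual Hadoop SSL factory code:

```java
import java.util.Map;
import java.util.Optional;

class SslConfigSketch {
    // Return the key password only when the SSL config actually loaded and
    // contains the entry. Callers skip setKeyPassword() entirely on empty,
    // instead of pushing a default/empty password into the key manager.
    // (The property name below is illustrative.)
    static Optional<String> keyPassword(Map<String, String> sslConf, String key) {
        if (sslConf == null) {
            return Optional.empty(); // SSL configuration could not be read
        }
        return Optional.ofNullable(sslConf.get(key));
    }
}
```

The point of the shape: a failed config read degrades to "no password set" rather than "wrong password set", which fails later in a much more confusing place.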
[jira] [Commented] (HADOOP-15385) Many tests are failing in hadoop-distcp project in branch-2.8
[ https://issues.apache.org/jira/browse/HADOOP-15385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445219#comment-16445219 ] Rushabh S Shah commented on HADOOP-15385:

{quote}Are you going to work on this JIRA?{quote}
Hi Sammi, I am _not_ planning to work on this soon; I am occupied with some other things. Maybe we can ping whoever introduced these failures and ask them to take a look? These failures are not seen in trunk or branch-3.1; I haven't checked branch-3.0.

bq. Otherwise, I might consider leaving it to the next release. Your thoughts?

Since the junits are failing, I think there is a hidden bug somewhere, and distcp is too important a tool to ignore the failures. If there is a bug, it could result in data loss/corruption. I would like to hear other community members' viewpoints.
[jira] [Commented] (HADOOP-15385) Many tests are failing in hadoop-distcp project in branch-2.8
[ https://issues.apache.org/jira/browse/HADOOP-15385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445109#comment-16445109 ] SammiChen commented on HADOOP-15385:

Hi [~shahrs87], thanks for pinging me. Are you going to work on this JIRA? I fully agree that we should resolve the issue before the release. On the other hand, some customers are waiting eagerly to try the enhanced features in 2.9, so if we can resolve the issue in a short time window, that would be great. Otherwise, I might consider leaving it to the next release. Your thoughts?
[jira] [Commented] (HADOOP-14756) S3Guard: expose capability query in MetadataStore and add tests of authoritative mode
[ https://issues.apache.org/jira/browse/HADOOP-14756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445077#comment-16445077 ] Aaron Fabbri commented on HADOOP-14756:

Thanks [~gabor.bota], this looks great. Sorry for the confusion on {{allowMissing()}}. I was thinking that it still returned true for TestLocalMetadataStore, but that is no longer the case, so allowMissing() would be ok. I see you use isMetadataStoreAuthoritative() here, which is also ok because we don't need to run this test on stores that do not persist the authoritative directory flag (the other test case already covers it). +1 on the latest patch. I will do some testing and commit if I don't find any issues.

> S3Guard: expose capability query in MetadataStore and add tests of authoritative mode
> -
>
> Key: HADOOP-14756
> URL: https://issues.apache.org/jira/browse/HADOOP-14756
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 3.0.0-beta1
> Reporter: Steve Loughran
> Assignee: Gabor Bota
> Priority: Major
> Attachments: HADOOP-14756.001.patch, HADOOP-14756.002.patch, HADOOP-14756.003.patch
>
> {{MetadataStoreTestBase.testListChildren}} would be improved with the ability to query the features offered by the store, and the outcome of {{put()}}, to probe the correctness of the authoritative mode:
> # Add a predicate to the MetadataStore interface, {{supportsAuthoritativeDirectories()}} or similar
> # If #1 is true, assert that the directory is fully cached after changes
> # Add an "isNew" flag to MetadataStore.put(DirListingMetadata); use it to verify when changes are made
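The two additions outlined in the issue description (a capability predicate and an "isNew" result from put) can be sketched with tiny stand-in types; this is an illustration of the idea only, not the real S3Guard MetadataStore API:

```java
import java.util.HashMap;
import java.util.Map;

// Stand-in for the proposed capability query and put() outcome.
interface StoreSketch {
    /** Does this store persist the authoritative-directory flag? */
    boolean supportsAuthoritativeDirectories();

    /** @return true if the listing was not previously present ("isNew"). */
    boolean put(String path, String listing);
}

class InMemoryStoreSketch implements StoreSketch {
    private final Map<String, String> listings = new HashMap<>();

    public boolean supportsAuthoritativeDirectories() { return true; }

    public boolean put(String path, String listing) {
        // Map.put returns the previous value; null means the entry is new.
        return listings.put(path, listing) == null;
    }
}
```

With this shape, a shared test base can skip authoritative-mode assertions when the predicate is false, and use the put() result to assert exactly when the store's view actually changed.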
[jira] [Commented] (HADOOP-12953) New API for libhdfs to get FileSystem object as a proxy user
[ https://issues.apache.org/jira/browse/HADOOP-12953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445076#comment-16445076 ] genericqa commented on HADOOP-12953:

(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 25s | Docker mode activated. |
|| Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| trunk Compile Tests ||
| 0 | mvndep | 1m 59s | Maven dependency ordering for branch |
| +1 | mvninstall | 31m 2s | trunk passed |
| +1 | compile | 32m 44s | trunk passed |
| +1 | checkstyle | 3m 12s | trunk passed |
| +1 | mvnsite | 9m 42s | trunk passed |
| +1 | shadedclient | 23m 51s | branch has no errors when building and testing our client artifacts. |
| 0 | findbugs | 0m 0s | Skipped patched modules with no Java source: hadoop-hdfs-project/hadoop-hdfs-native-client |
| +1 | findbugs | 3m 27s | trunk passed |
| +1 | javadoc | 2m 12s | trunk passed |
|| Patch Compile Tests ||
| 0 | mvndep | 0m 16s | Maven dependency ordering for patch |
| +1 | mvninstall | 9m 1s | the patch passed |
| +1 | compile | 27m 41s | the patch passed |
| +1 | cc | 27m 41s | the patch passed |
| +1 | javac | 27m 41s | the patch passed |
| +1 | checkstyle | 3m 15s | the patch passed |
| +1 | mvnsite | 10m 5s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 10m 15s | patch has no errors when building and testing our client artifacts. |
| 0 | findbugs | 0m 0s | Skipped patched modules with no Java source: hadoop-hdfs-project/hadoop-hdfs-native-client |
| +1 | findbugs | 3m 40s | the patch passed |
| +1 | javadoc | 2m 13s | the patch passed |
|| Other Tests ||
| -1 | unit | 7m 42s | hadoop-common in the patch failed. |
| -1 | unit | 142m 10s | hadoop-hdfs in the patch failed. |
| +1 | unit | 20m 14s | hadoop-hdfs-native-client in the patch passed. |
| +1 | asflicense | 0m 53s | The patch does not generate ASF License warnings. |
| | | 335m 47s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.fs.shell.TestCopyPreserveFlag |
| | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
| | hadoop.hdfs.server.datanode.TestDirectoryScanner |
[jira] [Commented] (HADOOP-15390) Yarn RM logs flooded by DelegationTokenRenewer trying to renew KMS tokens
[ https://issues.apache.org/jira/browse/HADOOP-15390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16444943#comment-16444943 ] genericqa commented on HADOOP-15390:

(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 16s | Docker mode activated. |
|| Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| trunk Compile Tests ||
| 0 | mvndep | 1m 40s | Maven dependency ordering for branch |
| +1 | mvninstall | 26m 47s | trunk passed |
| +1 | compile | 29m 28s | trunk passed |
| +1 | checkstyle | 3m 18s | trunk passed |
| +1 | mvnsite | 2m 17s | trunk passed |
| +1 | shadedclient | 16m 37s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 3m 0s | trunk passed |
| +1 | javadoc | 1m 30s | trunk passed |
|| Patch Compile Tests ||
| 0 | mvndep | 0m 18s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 33s | the patch passed |
| +1 | compile | 30m 37s | the patch passed |
| +1 | javac | 30m 37s | the patch passed |
| +1 | checkstyle | 3m 35s | the patch passed |
| +1 | mvnsite | 2m 30s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 11m 6s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 3m 58s | the patch passed |
| +1 | javadoc | 1m 41s | the patch passed |
|| Other Tests ||
| +1 | unit | 9m 40s | hadoop-common in the patch passed. |
| -1 | unit | 66m 7s | hadoop-yarn-server-resourcemanager in the patch failed. |
| +1 | asflicense | 0m 38s | The patch does not generate ASF License warnings. |
| | | 213m 47s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.scheduler.capacity.TestIncreaseAllocationExpirer |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HADOOP-15390 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12919702/HADOOP-15390.02.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 4f6f02c0089b 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 7d06806 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| unit |
[jira] [Comment Edited] (HADOOP-15392) S3A Metrics in S3AInstrumentation Cause Memory Leaks
[ https://issues.apache.org/jira/browse/HADOOP-15392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16444891#comment-16444891 ] Sean Mackrory edited comment on HADOOP-15392 at 4/19/18 10:28 PM:
{quote}the behavior in the event no sinks are configured would not be to just leak memory forever, so I wonder if there's something else we're doing that's wrong that we can just fix.{quote}
In the hopes of finding an answer to this today, I compared this to the YARN ResourceManager cluster metrics class, and I'm not seeing any significant differences in how the MetricsRegistry is instantiated and referenced, or in how the raw data flows into it. If memory were being leaked further down in metrics2, I would expect it to have manifested in somebody's ResourceManager by now.
I also looked through detailed logs of what happens when filesystems are closed in all of the tests (some tests instantiate dozens of filesystems and then either close them or simply end), and the reference counting appears to function correctly in all of those cases.
Probably not directly related, but even without any changes I'm getting this one test failure, also in the metrics:
{code}ITestS3AMetrics.testMetricsRegister:42->Assert.assertNotNull:621->Assert.assertTrue:41->Assert.fail:88 No metrics under test fs for S3AMetrics1-mackrory{code}
> S3A Metrics in S3AInstrumentation Cause Memory Leaks
> ----------------------------------------------------
>
> Key: HADOOP-15392
> URL: https://issues.apache.org/jira/browse/HADOOP-15392
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 3.1.0
> Reporter: Voyta
> Priority: Major
>
> While using the HBase S3A Export Snapshot utility we started to experience memory leaks of the process after a version upgrade.
> Code analysis traced the cause to revision 6555af81a26b0b72ec3bee7034e01f5bd84b1564, which added the following static reference (singleton):
> private static MetricsSystem metricsSystem = null;
> When an application uses an S3AFileSystem instance that is not closed immediately, metrics accumulate in this instance and memory grows without any limit.
> Expectation:
> * It would be nice to have an option to disable metrics completely, as this is not needed for the Export Snapshot utility.
> * Usage of S3AFileSystem should not involve any static object that can grow indefinitely.
--
This message was sent by Atlassian JIRA (v7.6.3#76005)
-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
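The failure mode reported above — a static singleton that outlives the filesystem instances registering metrics with it — can be sketched in plain Java. This is a minimal, self-contained stand-in, not the actual S3AInstrumentation or metrics2 code: `StaticMetricsSystem` and `LeakyFileSystem` are hypothetical classes modeling the pattern, and the point is only that sources registered with a JVM-wide static structure stay strongly reachable until something explicitly unregisters them.

```java
import java.util.HashMap;
import java.util.Map;

// Stand-in for a metrics2-style singleton; NOT the real Hadoop API.
class StaticMetricsSystem {
    // The shape of the static reference at the heart of HADOOP-15392: it lives
    // for the whole JVM, so every registered source is strongly reachable.
    private static final Map<String, Object> SOURCES = new HashMap<>();

    static void register(String name, Object source) { SOURCES.put(name, source); }
    static void unregister(String name) { SOURCES.remove(name); }
    static int sourceCount() { return SOURCES.size(); }
}

// Stand-in for an S3AFileSystem-like object with per-instance metrics.
class LeakyFileSystem implements AutoCloseable {
    private final String metricsName;
    private final byte[] perInstanceState = new byte[1024]; // pinned while registered

    LeakyFileSystem(String metricsName) {
        this.metricsName = metricsName;
        StaticMetricsSystem.register(metricsName, this);
    }

    @Override
    public void close() {
        // Without this unregister, the static map (and perInstanceState)
        // grows without bound as instances are created and abandoned.
        StaticMetricsSystem.unregister(metricsName);
    }
}

class LeakDemo {
    // Creates n filesystems and never closes them: all n stay registered.
    static int leakWithoutClose(int n) {
        for (int i = 0; i < n; i++) {
            new LeakyFileSystem("S3AMetrics-" + i); // never closed -> leaked
        }
        return StaticMetricsSystem.sourceCount();
    }

    public static void main(String[] args) throws Exception {
        int leaked = leakWithoutClose(100);
        System.out.println("sources still registered: " + leaked);
        // Closing deterministically releases the static reference:
        try (LeakyFileSystem fs = new LeakyFileSystem("S3AMetrics-closed")) { }
        System.out.println("net change after open+close: "
                + (StaticMetricsSystem.sourceCount() - leaked));
    }
}
```

This also illustrates why the reference counting Sean describes matters: as long as every open is paired with a close that reaches the unregister path, the static structure stays bounded.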
[jira] [Commented] (HADOOP-15392) S3A Metrics in S3AInstrumentation Cause Memory Leaks
[ https://issues.apache.org/jira/browse/HADOOP-15392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16444891#comment-16444891 ] Sean Mackrory commented on HADOOP-15392:
{quote}the behavior in the event no sinks are configured would not be to just leak memory forever, so I wonder if there's something else we're doing that's wrong that we can just fix.{quote}
In the hopes of finding an answer to this today, I compared this to the YARN Resource Manager cluster class and I'm not seeing any significant differences in terms of how the MetricsRegistry is instantiated & referenced and how the raw data flows to it. If memory got leaked further down in metrics2, I would think it would have manifested in somebody's ResourceManager by now.
I also looked through detailed logs of what happens when closing filesystems in all of the tests (as some tests instantiate dozens of filesystems and then either close them or just end) and the ref counting appears to be functioning perfectly in all of those cases.
[jira] [Updated] (HADOOP-15400) Improve S3Guard documentation on Authoritative Mode implementation
[ https://issues.apache.org/jira/browse/HADOOP-15400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aaron Fabbri updated HADOOP-15400: -- Summary: Improve S3Guard documentation on Authoritative Mode implementation (was: Improve S3Guard documentation on Authoritative Mode implemenation) > Improve S3Guard documentation on Authoritative Mode implementation > -- > > Key: HADOOP-15400 > URL: https://issues.apache.org/jira/browse/HADOOP-15400 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/s3 >Affects Versions: 3.0.1 >Reporter: Aaron Fabbri >Assignee: Gabor Bota >Priority: Minor > > Part of the design of S3Guard is support for skipping the call to S3 > listObjects and serving directory listings out of the MetadataStore under > certain circumstances. This feature is called "authoritative" mode. I've > talked to many people about this feature and it seems to be universally > confusing. > I suggest we improve / add a section to the s3guard.md site docs elaborating > on what Authoritative Mode is. > It is *not* treating the MetadataStore (e.g. dynamodb) as the source of truth > in general. > It *is* the ability to short-circuit S3 list objects and serve listings from > the MetadataStore in some circumstances: > For S3A to skip S3's list objects on some *path*, and serve it directly from > the MetadataStore, the following things must all be true: > # The MetadataStore implementation persists the bit > {{DirListingMetadata.isAuthorititative}} set when calling > {{MetadataStore#put(DirListingMetadata)}} > # The S3A client is configured to allow metadatastore to be authoritative > source of a directory listing (fs.s3a.metadatastore.authoritative=true). > # The MetadataStore has a full listing for *path* stored in it. This only > happens if the FS client (s3a) explicitly has stored a full directory listing > with {{DirListingMetadata.isAuthorititative=true}} before the said listing > request happens. 
> Note that #1 only currently happens in LocalMetadataStore. Adding support to DynamoDBMetadataStore is covered in HADOOP-14154.
> Also, the multiple uses of the word "authoritative" are confusing. Two meanings are used:
> 1. In the FS client configuration fs.s3a.metadatastore.authoritative
> - Behavior of S3A code (not MetadataStore)
> - "S3A is allowed to skip S3.list() when it has a full listing from MetadataStore"
> 2. MetadataStore
> When storing a dir listing, can set a bit isAuthoritative
> 1 : "full contents of directory"
> 0 : "may not be full listing"
> Note that a MetadataStore *MAY* persist this bit (not *MUST*).
> We should probably rename {{DirListingMetadata.isAuthorititative}} to {{.fullListing}}, or at least put a comment where it is used to clarify its meaning.
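The three conditions in the description can be condensed into a small sketch. This is illustrative only, not the real S3A code path: `MetadataStoreSketch`, `DirListing`, and `canSkipS3Listing` are hypothetical stand-ins for the real classes, modeling condition 1 (the store MAY persist the isAuthoritative bit), condition 2 (the fs.s3a.metadatastore.authoritative client setting), and condition 3 (a full listing cached for the exact path).

```java
import java.util.HashMap;
import java.util.Map;

// Stand-in for DirListingMetadata: the bit means "full contents of directory".
class DirListing {
    final boolean isAuthoritative;
    DirListing(boolean isAuthoritative) { this.isAuthoritative = isAuthoritative; }
}

// Stand-in for a MetadataStore that may or may not persist the bit.
class MetadataStoreSketch {
    private final boolean persistsAuthBit; // condition 1: MAY, not MUST
    private final Map<String, DirListing> listings = new HashMap<>();

    MetadataStoreSketch(boolean persistsAuthBit) { this.persistsAuthBit = persistsAuthBit; }

    void put(String path, DirListing listing) {
        // A store that does not persist the bit (e.g. DynamoDBMetadataStore
        // before HADOOP-14154) downgrades every listing to "maybe partial".
        listings.put(path, persistsAuthBit ? listing : new DirListing(false));
    }

    DirListing get(String path) { return listings.get(path); }
}

class S3AClientSketch {
    // authoritativeConfigured models fs.s3a.metadatastore.authoritative (condition 2).
    static boolean canSkipS3Listing(boolean authoritativeConfigured,
                                    MetadataStoreSketch store, String path) {
        DirListing cached = store.get(path);
        // Condition 3: a full listing for this exact path must already be cached.
        return authoritativeConfigured && cached != null && cached.isAuthoritative;
    }
}
```

If any one of the three conditions fails, the client must fall back to S3 listObjects, which is why the mode is a short-circuit rather than a general "MetadataStore is the source of truth" guarantee.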
[jira] [Assigned] (HADOOP-15400) Improve S3Guard documentation on Authoritative Mode implemenation
[ https://issues.apache.org/jira/browse/HADOOP-15400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aaron Fabbri reassigned HADOOP-15400:
Assignee: Gabor Bota (was: Aaron Fabbri)
[jira] [Commented] (HADOOP-15400) Improve S3Guard documentation on Authoritative Mode implemenation
[ https://issues.apache.org/jira/browse/HADOOP-15400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16444812#comment-16444812 ] Aaron Fabbri commented on HADOOP-15400:
Assigning to you [~gabor.bota]. You have my permission to copy any / all of the text in the description here if it helps.
[jira] [Assigned] (HADOOP-15400) Improve S3Guard documentation on Authoritative Mode implemenation
[ https://issues.apache.org/jira/browse/HADOOP-15400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aaron Fabbri reassigned HADOOP-15400:
Assignee: Aaron Fabbri
[jira] [Created] (HADOOP-15400) Improve S3Guard documentation on Authoritative Mode implemenation
Aaron Fabbri created HADOOP-15400:
Summary: Improve S3Guard documentation on Authoritative Mode implemenation
Key: HADOOP-15400
URL: https://issues.apache.org/jira/browse/HADOOP-15400
Project: Hadoop Common
Issue Type: Improvement
Components: fs/s3
Affects Versions: 3.0.1
Reporter: Aaron Fabbri
[jira] [Updated] (HADOOP-15385) Many tests are failing in hadoop-distcp project in branch-2.8
[ https://issues.apache.org/jira/browse/HADOOP-15385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rushabh S Shah updated HADOOP-15385: Target Version/s: 2.9.1, 2.8.4 (was: 2.8.4) > Many tests are failing in hadoop-distcp project in branch-2.8 > - > > Key: HADOOP-15385 > URL: https://issues.apache.org/jira/browse/HADOOP-15385 > Project: Hadoop Common > Issue Type: Bug > Components: tools/distcp >Affects Versions: 2.8.3 >Reporter: Rushabh S Shah >Priority: Blocker > > Many tests are failing in hadoop-distcp project in branch-2.8 > Below are the failing tests. > {noformat} > Failed tests: > > TestDistCpViewFs.testUpdateGlobTargetMissingSingleLevel:326->checkResult:428 > expected:<4> but was:<5> > TestDistCpViewFs.testGlobTargetMissingMultiLevel:346->checkResult:428 > expected:<4> but was:<5> > TestDistCpViewFs.testGlobTargetMissingSingleLevel:306->checkResult:428 > expected:<2> but was:<3> > TestDistCpViewFs.testUpdateGlobTargetMissingMultiLevel:367->checkResult:428 > expected:<6> but was:<8> > TestIntegration.testUpdateGlobTargetMissingSingleLevel:431->checkResult:577 > expected:<4> but was:<5> > TestIntegration.testGlobTargetMissingMultiLevel:454->checkResult:577 > expected:<4> but was:<5> > TestIntegration.testGlobTargetMissingSingleLevel:408->checkResult:577 > expected:<2> but was:<3> > TestIntegration.testUpdateGlobTargetMissingMultiLevel:478->checkResult:577 > expected:<6> but was:<8> > TestIntegration.testUpdateGlobTargetMissingSingleLevel:431->checkResult:577 > expected:<4> but was:<5> > TestIntegration.testGlobTargetMissingMultiLevel:454->checkResult:577 > expected:<4> but was:<5> > TestIntegration.testGlobTargetMissingSingleLevel:408->checkResult:577 > expected:<2> but was:<3> > TestIntegration.testUpdateGlobTargetMissingMultiLevel:478->checkResult:577 > expected:<6> but was:<8> > TestIntegration.testUpdateGlobTargetMissingSingleLevel:431->checkResult:577 > expected:<4> but was:<5> > 
TestIntegration.testGlobTargetMissingMultiLevel:454->checkResult:577 > expected:<4> but was:<5>
> TestIntegration.testGlobTargetMissingSingleLevel:408->checkResult:577 > expected:<2> but was:<3>
> TestIntegration.testUpdateGlobTargetMissingMultiLevel:478->checkResult:577 > expected:<6> but was:<8>
> Tests run: 258, Failures: 16, Errors: 0, Skipped: 0
> {noformat}
> {noformat}
> rushabhs$ pwd
> /Users/rushabhs/hadoop/apacheHadoop/hadoop/hadoop-tools/hadoop-distcp
> rushabhs$ git branch
> branch-2
> branch-2.7
> * branch-2.8
> branch-2.9
> branch-3.0
> rushabhs$ git log --oneline | head -n3
> c4ea1c8bb73 HADOOP-14970. MiniHadoopClusterManager doesn't respect lack of format option. Contributed by Erik Krogen
> 1548205a845 YARN-8147. TestClientRMService#testGetApplications sporadically fails. Contributed by Jason Lowe
> c01b425ba31 YARN-8120. JVM can crash with SIGSEGV when exiting due to custom leveldb logger. Contributed by Jason Lowe.
> {noformat}
[jira] [Commented] (HADOOP-15385) Many tests are failing in hadoop-distcp project in branch-2.8
[ https://issues.apache.org/jira/browse/HADOOP-15385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16444808#comment-16444808 ] Rushabh S Shah commented on HADOOP-15385:
Added blocker for 2.9.1 also. Please remove if you feel it's not a blocker.
[jira] [Commented] (HADOOP-15385) Many tests are failing in hadoop-distcp project in branch-2.8
[ https://issues.apache.org/jira/browse/HADOOP-15385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16444805#comment-16444805 ] Rushabh S Shah commented on HADOOP-15385: - These tests are failing in branch-2.9 also. {noformat} [INFO] Results: [INFO] [ERROR] Failures: [ERROR] TestDistCpViewFs.testGlobTargetMissingMultiLevel:346->checkResult:428 expected:<4> but was:<5> [ERROR] TestDistCpViewFs.testGlobTargetMissingSingleLevel:306->checkResult:428 expected:<2> but was:<3> [ERROR] TestDistCpViewFs.testUpdateGlobTargetMissingMultiLevel:367->checkResult:428 expected:<6> but was:<8> [ERROR] TestDistCpViewFs.testUpdateGlobTargetMissingSingleLevel:326->checkResult:428 expected:<4> but was:<5> [ERROR] TestIntegration.testGlobTargetMissingMultiLevel:454->checkResult:577 expected:<4> but was:<5> [ERROR] TestIntegration.testGlobTargetMissingMultiLevel:454->checkResult:577 expected:<4> but was:<5> [ERROR] TestIntegration.testGlobTargetMissingMultiLevel:454->checkResult:577 expected:<4> but was:<5> [ERROR] TestIntegration.testGlobTargetMissingSingleLevel:408->checkResult:577 expected:<2> but was:<3> [ERROR] TestIntegration.testGlobTargetMissingSingleLevel:408->checkResult:577 expected:<2> but was:<3> [ERROR] TestIntegration.testGlobTargetMissingSingleLevel:408->checkResult:577 expected:<2> but was:<3> [ERROR] TestIntegration.testUpdateGlobTargetMissingMultiLevel:478->checkResult:577 expected:<6> but was:<8> [ERROR] TestIntegration.testUpdateGlobTargetMissingMultiLevel:478->checkResult:577 expected:<6> but was:<8> [ERROR] TestIntegration.testUpdateGlobTargetMissingMultiLevel:478->checkResult:577 expected:<6> but was:<8> [ERROR] TestIntegration.testUpdateGlobTargetMissingSingleLevel:431->checkResult:577 expected:<4> but was:<5> [ERROR] TestIntegration.testUpdateGlobTargetMissingSingleLevel:431->checkResult:577 expected:<4> but was:<5> [ERROR] TestIntegration.testUpdateGlobTargetMissingSingleLevel:431->checkResult:577 expected:<4> but was:<5> [INFO] [ERROR] 
Tests run: 73, Failures: 16, Errors: 0, Skipped: 0
{noformat}
{noformat}
C02QD8LYG8WP-lm:hadoop-distcp rushabhs$ git branch
* branch-2.9.1
C02QD8LYG8WP-lm:hadoop-distcp rushabhs$ git status
On branch branch-2.9.1
Your branch is up-to-date with 'origin/branch-2.9.1'.
nothing to commit, working tree clean
{noformat}
Cc [~Sammi].
[jira] [Updated] (HADOOP-12953) New API for libhdfs to get FileSystem object as a proxy user
[ https://issues.apache.org/jira/browse/HADOOP-12953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HADOOP-12953:
Attachment: HADOOP-12953.004.patch
> New API for libhdfs to get FileSystem object as a proxy user
> ------------------------------------------------------------
>
> Key: HADOOP-12953
> URL: https://issues.apache.org/jira/browse/HADOOP-12953
> Project: Hadoop Common
> Issue Type: Improvement
> Components: fs
> Affects Versions: 2.7.2
> Reporter: Uday Kale
> Assignee: Uday Kale
> Priority: Major
> Attachments: HADOOP-12953.001.patch, HADOOP-12953.002.patch, HADOOP-12953.003.patch, HADOOP-12953.004.patch
>
> Secure impersonation in HDFS needs users to create proxy users and work with those. In libhdfs, the hdfsBuilder accepts a userName, but calls FileSystem.get() or FileSystem.newInstance() with the user name to connect as. Both of these interfaces use getBestUGI() to get the UGI for the given user. That is not appropriate for services whose end users do not access HDFS directly but go via the service: the service first authenticates them with LDAP, and the service owner then impersonates the end user to provide the underlying data.
> For such services that authenticate end users via LDAP, the end users are not authenticated by Kerberos, so their authentication details won't be in the Kerberos ticket cache. HADOOP_PROXY_USER is not a thread-safe way to get this either.
> Hence the need for a new libhdfs API to get the FileSystem object as a proxy user, following the 'secure impersonation' recommendations. This approach is secure since HDFS authenticates the service owner and then validates the service owner's right to impersonate the given user, as allowed by the hadoop.proxyusers.* parameters of the HDFS config.
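On the Java side, the "secure impersonation" flow the description refers to is the createProxyUser-plus-doAs pattern from org.apache.hadoop.security.UserGroupInformation. To keep the example self-contained and runnable without Hadoop on the classpath, the sketch below models that flow with hypothetical stand-in types (`ProxyUgi`, `FileSystemHandle` are not real Hadoop classes); only the shape of the flow is the point.

```java
import java.security.PrivilegedExceptionAction;

// Stand-in for a FileSystem bound to an effective user identity.
class FileSystemHandle {
    final String effectiveUser;
    FileSystemHandle(String effectiveUser) { this.effectiveUser = effectiveUser; }
}

// Stand-in modeling UserGroupInformation's proxy-user flow.
class ProxyUgi {
    private final String user;
    private ProxyUgi(String user) { this.user = user; }

    // Models UserGroupInformation.getLoginUser(): the Kerberos-authenticated
    // service owner.
    static ProxyUgi loginUser(String serviceOwner) { return new ProxyUgi(serviceOwner); }

    // Models UserGroupInformation.createProxyUser(endUser, realUser): the end
    // user is NOT Kerberos-authenticated; the NameNode later checks the
    // hadoop.proxyuser.* rules against the real (service owner) identity.
    static ProxyUgi createProxyUser(String endUser, ProxyUgi realUser) {
        return new ProxyUgi(endUser);
    }

    // Models ugi.doAs(...): the action runs with this identity, per call,
    // with no process-global state (unlike the HADOOP_PROXY_USER env var,
    // which is not thread-safe).
    <T> T doAs(PrivilegedExceptionAction<T> action) throws Exception {
        return action.run();
    }

    String userName() { return user; }
}

class ProxyFsDemo {
    static FileSystemHandle fileSystemAs(String endUser, String serviceOwner)
            throws Exception {
        ProxyUgi owner = ProxyUgi.loginUser(serviceOwner);
        ProxyUgi proxy = ProxyUgi.createProxyUser(endUser, owner);
        // In real code the action would be: () -> FileSystem.get(conf)
        return proxy.doAs(() -> new FileSystemHandle(proxy.userName()));
    }
}
```

Because the proxy identity is an argument to each call rather than process-global state, multiple threads can impersonate different end users concurrently, which is exactly what the HADOOP_PROXY_USER environment variable cannot provide.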
[jira] [Updated] (HADOOP-12953) New API for libhdfs to get FileSystem object as a proxy user
[ https://issues.apache.org/jira/browse/HADOOP-12953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HADOOP-12953: Attachment: (was: HADOOP-12953.004.patch)
[jira] [Commented] (HADOOP-12953) New API for libhdfs to get FileSystem object as a proxy user
[ https://issues.apache.org/jira/browse/HADOOP-12953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16444672#comment-16444672 ] Bharat Viswanadham commented on HADOOP-12953: - Thank you [~arpitagarwal] for the review. {quote}We probably need to add hdfsBuilderSetCreateProxyUser to hdfs.h, hdfs_shim, libhdfs_wrapper_defines.h etc. {quote} Added it in hdfs.h; this patch only takes care of the change in the hdfs C client. Further changes to libhdfs C++ can be handled in a new JIRA. {quote}Also it may be helpful to define a new method hdfsConnectAsProxyUser, similar to hdfsConnectAsUser. {quote} Since the old methods are deprecated, I did not add a similar method for the proxy user. {quote}Nitpick: single statement if/else blocks should still have curly braces. e.g. here: {quote} {code:java} if (bld->createProxyUser) methodToCall = "newInstanceAsProxyUser"; else methodToCall = "newInstance";{code} Addressed this.
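The review exchange above centers on which FileSystem factory method the builder's createProxyUser flag selects. Below is a minimal Java sketch of that selection logic, with hypothetical names mirroring the quoted snippet; the real code lives in libhdfs's C/JNI layer, not in a Java class like this.

```java
// Sketch of the builder flag discussed above: when createProxyUser is set,
// the connect path resolves to a proxy-user factory method instead of the
// plain newInstance(). Names mirror the quoted snippet; this is an
// illustration only, not the actual libhdfs code.
public class HdfsBuilderSketch {
    private boolean createProxyUser;
    private String userName;

    public HdfsBuilderSketch setCreateProxyUser(boolean create) {
        this.createProxyUser = create;
        return this;
    }

    public HdfsBuilderSketch setUserName(String userName) {
        this.userName = userName;
        return this;
    }

    /** Returns the name of the factory method the JNI layer would invoke. */
    public String methodToCall() {
        // Reviewer's nitpick applied: braces even around single statements.
        if (createProxyUser) {
            return "newInstanceAsProxyUser";
        } else {
            return "newInstance";
        }
    }

    public static void main(String[] args) {
        System.out.println(new HdfsBuilderSketch()
                .setUserName("endUser")
                .setCreateProxyUser(true)
                .methodToCall()); // prints newInstanceAsProxyUser
    }
}
```

The flag-on-a-builder design keeps the existing hdfsBuilderConnect entry point unchanged, which is why no hdfsConnectAsProxyUser convenience method was added alongside the deprecated hdfsConnectAsUser.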
[jira] [Updated] (HADOOP-12953) New API for libhdfs to get FileSystem object as a proxy user
[ https://issues.apache.org/jira/browse/HADOOP-12953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HADOOP-12953: Attachment: HADOOP-12953.004.patch
[jira] [Commented] (HADOOP-15390) Yarn RM logs flooded by DelegationTokenRenewer trying to renew KMS tokens
[ https://issues.apache.org/jira/browse/HADOOP-15390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16444663#comment-16444663 ] Xiao Chen commented on HADOOP-15390: Added links. mvninstall doesn't look related. Kicked a new run at https://builds.apache.org/job/PreCommit-HADOOP-Build/14505/ > Yarn RM logs flooded by DelegationTokenRenewer trying to renew KMS tokens > - > > Key: HADOOP-15390 > URL: https://issues.apache.org/jira/browse/HADOOP-15390 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.8.0 >Reporter: Xiao Chen >Assignee: Xiao Chen >Priority: Critical > Attachments: HADOOP-15390.01.patch, HADOOP-15390.02.patch > > > When looking at a recent issue with [~rkanter] and [~yufeigu], we found that > the RM log in a cluster was flooded by KMS token renewal errors below: > {noformat} > $ tail -9 hadoop-cmf-yarn-RESOURCEMANAGER.log > 2018-04-11 11:34:09,367 WARN > org.apache.hadoop.crypto.key.kms.KMSClientProvider$KMSTokenRenewer: > keyProvider null cannot renew dt. > 2018-04-11 11:34:09,367 INFO > org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer: > Renewed delegation-token= [Kind: kms-dt, Service: KMSIP:16000, Ident: > (kms-dt owner=user, renewer=yarn, realUser=, issueDate=1522192283334, > maxDate=1522797083334, sequenceNumber=15108613, masterKeyId=2674);exp=0; > apps=[]], for [] > 2018-04-11 11:34:09,367 INFO > org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer: > Renew Kind: kms-dt, Service: KMSIP:16000, Ident: (kms-dt owner=user, > renewer=yarn, realUser=, issueDate=1522192283334, maxDate=1522797083334, > sequenceNumber=15108613, masterKeyId=2674);exp=0; apps=[] in -1523446449367 > ms, appId = [] > ... > 2018-04-11 11:34:09,367 WARN > org.apache.hadoop.crypto.key.kms.KMSClientProvider$KMSTokenRenewer: > keyProvider null cannot renew dt. 
> 2018-04-11 11:34:09,367 INFO > org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer: > Renewed delegation-token= [Kind: kms-dt, Service: KMSIP:16000, Ident: > (kms-dt owner=user, renewer=yarn, realUser=, issueDate=1522192283334, > maxDate=1522797083334, sequenceNumber=15108613, masterKeyId=2674);exp=0; > apps=[]], for [] > 2018-04-11 11:34:09,367 INFO > org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer: > Renew Kind: kms-dt, Service: KMSIP:16000, Ident: (kms-dt owner=user, > renewer=yarn, realUser=, issueDate=1522192283334, maxDate=1522797083334, > sequenceNumber=15108613, masterKeyId=2674);exp=0; apps=[] in -1523446449367 > ms, appId = [] > {noformat} > Further inspection shows the KMS IP is from another cluster. The RM predates > HADOOP-14445, so it needs to read the key provider from config, and the config rightfully doesn't > have the other cluster's KMS configured. > Although HADOOP-14445 will make this a non-issue by creating the provider > from the token service, we should fix two things here: > - The KMS token renewer should throw instead of returning 0. Returning 0 when unable > to renew should be considered a bug in the renewer. > - Yarn RM's {{DelegationTokenRenewer}} service should validate the return value and > not go into this busy loop.
[jira] [Updated] (HADOOP-15390) Yarn RM logs flooded by DelegationTokenRenewer trying to renew KMS tokens
[ https://issues.apache.org/jira/browse/HADOOP-15390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HADOOP-15390: --- Affects Version/s: 2.8.0
[jira] [Updated] (HADOOP-15390) Yarn RM logs flooded by DelegationTokenRenewer trying to renew KMS tokens
[ https://issues.apache.org/jira/browse/HADOOP-15390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HADOOP-15390: --- Target Version/s: 2.8.4 (was: 3.1.1)
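The "in -1523446449367 ms" in the log above is the arithmetic symptom of the bug: a renewer that returns 0 on failure makes the next-renewal delay come out as 0 minus the current epoch time, a hugely negative number, so the renewal task re-fires immediately. A toy sketch of that arithmetic; the actual DelegationTokenRenewer scheduling is more involved, and this only illustrates the failure mode:

```java
public class RenewDelaySketch {
    /**
     * Mimics scheduling the next renewal: delay is the token's reported
     * expiration minus "now". A renewer that returns 0 on failure
     * (instead of throwing) yields a hugely negative delay, so the
     * renewal task runs again immediately -> busy loop, flooded logs.
     */
    public static long renewDelayMillis(long reportedExpiration, long now) {
        return reportedExpiration - now;
    }

    public static void main(String[] args) {
        long now = 1523446449367L; // epoch millis around 2018-04-11, as in the log
        long failedRenewal = 0L;   // buggy renewer returned 0 instead of throwing
        long delay = renewDelayMillis(failedRenewal, now);
        // negative delay => the scheduler fires the task again right away
        System.out.println("delay = " + delay + " ms");
    }
}
```

With now = 1523446449367 the computed delay is exactly the -1523446449367 ms seen in the RM log, which is why the proposed fix has the renewer throw on failure and has the RM validate the returned expiration.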
[jira] [Updated] (HADOOP-14154) Persist isAuthoritative bit in DynamoDBMetaStore (authoritative mode support)
[ https://issues.apache.org/jira/browse/HADOOP-14154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aaron Fabbri updated HADOOP-14154: -- Description: Add support for "authoritative mode" for DynamoDBMetadataStore. The missing feature is to persist the bit set in {{DirListingMetadata.isAuthoritative}}. This topic has been super confusing for folks, so I will also file a documentation JIRA to explain the design better. We may want to also rename the DirListingMetadata.isAuthoritative field to .isFullListing to eliminate the multiple uses and meanings of the word "authoritative". was: Currently {{DynamoDBMetaStore::listChildren}} does not populate the {{isAuthoritative}} flag when creating {{DirListingMetadata}}. This causes additional S3 lookups even when users have enabled {{fs.s3a.metadatastore.authoritative}}. > Persist isAuthoritative bit in DynamoDBMetaStore (authoritative mode support) > - > > Key: HADOOP-14154 > URL: https://issues.apache.org/jira/browse/HADOOP-14154 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 > Affects Versions: 3.0.0-beta1 > Reporter: Rajesh Balamohan > Assignee: Gabor Bota > Priority: Minor > Attachments: HADOOP-14154-HADOOP-13345.001.patch, HADOOP-14154-HADOOP-13345.002.patch
[jira] [Assigned] (HADOOP-14154) Persist isAuthoritative bit in DynamoDBMetaStore (authoritative mode support)
[ https://issues.apache.org/jira/browse/HADOOP-14154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aaron Fabbri reassigned HADOOP-14154: - Assignee: Gabor Bota
[jira] [Updated] (HADOOP-14154) Persist isAuthoritative bit in DynamoDBMetaStore (authoritative mode support)
[ https://issues.apache.org/jira/browse/HADOOP-14154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aaron Fabbri updated HADOOP-14154: -- Summary: Persist isAuthoritative bit in DynamoDBMetaStore (authoritative mode support) (was: Set isAuthoritative flag when creating DirListingMetadata in DynamoDBMetaStore)
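To make the renamed summary concrete: if the metadata store persists the authoritative bit alongside each cached directory listing, clients can trust the cache and skip the extra S3 listing; if the bit is dropped on put, every read falls back to S3 even with {{fs.s3a.metadatastore.authoritative}} enabled. A toy in-memory sketch of the distinction; these are hypothetical classes, not the actual DynamoDBMetadataStore:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy illustration of "persist isAuthoritative": a store that remembers a
// cached directory listing is complete lets callers skip the S3 round trip;
// a store that drops the bit forces an S3 listing on every read.
public class AuthoritativeBitSketch {
    public static class DirListing {
        public final List<String> entries;
        public final boolean isAuthoritative; // true = listing is complete
        public DirListing(List<String> entries, boolean isAuthoritative) {
            this.entries = entries;
            this.isAuthoritative = isAuthoritative;
        }
    }

    private final Map<String, DirListing> table = new HashMap<>();
    private final boolean persistsAuthoritativeBit;

    public AuthoritativeBitSketch(boolean persistsAuthoritativeBit) {
        this.persistsAuthoritativeBit = persistsAuthoritativeBit;
    }

    public void put(String path, DirListing listing) {
        // A store without the feature silently drops the bit here.
        boolean bit = persistsAuthoritativeBit && listing.isAuthoritative;
        table.put(path, new DirListing(listing.entries, bit));
    }

    /** True if a caller would still need to list the path against S3. */
    public boolean needsBackingStoreListing(String path) {
        DirListing cached = table.get(path);
        return cached == null || !cached.isAuthoritative;
    }

    public static void main(String[] args) {
        AuthoritativeBitSketch store = new AuthoritativeBitSketch(true);
        store.put("/data", new DirListing(Arrays.asList("a", "b"), true));
        System.out.println(store.needsBackingStoreListing("/data")); // prints false
    }
}
```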
[jira] [Commented] (HADOOP-11640) add user defined delimiter support to Configuration
[ https://issues.apache.org/jira/browse/HADOOP-11640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1641#comment-1641 ] Chris Douglas commented on HADOOP-11640: Understood. I don't think MAPREDUCE-7069 covers this. > add user defined delimiter support to Configuration > --- > > Key: HADOOP-11640 > URL: https://issues.apache.org/jira/browse/HADOOP-11640 > Project: Hadoop Common > Issue Type: Improvement > Affects Versions: 2.6.0 > Reporter: Xiaoshuang LU > Assignee: Xiaoshuang LU > Priority: Major > Labels: BB2015-05-TBR > Attachments: HADOOP-11640.patch > > > As mentioned by org.apache.hadoop.conf.Configuration.getStrings ("Get the comma delimited values of the name property as an array of Strings"), only comma-separated strings can be used. It would be much better if user-defined separators were supported.
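The proposal above amounts to a getStrings overload that splits on a user-supplied delimiter instead of the hardcoded comma. A minimal sketch of such an overload; the method name and signature are illustrative, since Configuration.getStrings today is comma-only per the javadoc quoted in the description:

```java
import java.util.Arrays;
import java.util.regex.Pattern;

// Sketch of the proposed user-defined delimiter support: split a raw
// property value on an arbitrary delimiter, trimming each token, instead
// of Configuration.getStrings()'s hardcoded comma. Hypothetical method.
public class DelimitedStrings {
    public static String[] getStrings(String rawValue, String delimiter) {
        if (rawValue == null || rawValue.isEmpty()) {
            return new String[0];
        }
        // Pattern.quote so regex metacharacters like "|" or "." are literal.
        return Arrays.stream(rawValue.split(Pattern.quote(delimiter)))
                     .map(String::trim)
                     .toArray(String[]::new);
    }

    public static void main(String[] args) {
        // prints [a, b, c]
        System.out.println(Arrays.toString(getStrings("a| b |c", "|")));
    }
}
```

Quoting the delimiter matters: a naive split("|") would treat "|" as regex alternation and split between every character.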
[jira] [Commented] (HADOOP-15382) Log kinit output in credential renewal thread
[ https://issues.apache.org/jira/browse/HADOOP-15382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16444391#comment-16444391 ] Wei-Chiu Chuang commented on HADOOP-15382: -- Looks good to me overall, thank you. Would you please use parameterized logging, i.e. LOG.debug("{}", output);? +1 after that. This may be useful when kinit is successful; it makes sense to me to log it at debug level. If kinit is not successful, Shell.execCommand() throws ExitCodeException with the stderr output in the exception message. > Log kinit output in credential renewal thread > - > > Key: HADOOP-15382 > URL: https://issues.apache.org/jira/browse/HADOOP-15382 > Project: Hadoop Common > Issue Type: Improvement > Components: security > Reporter: Wei-Chiu Chuang > Assignee: Gabor Bota > Priority: Minor > Attachments: HADOOP-15382.001.patch > > > We currently run the kinit command in a thread to renew Kerberos credentials periodically. {code:java} Shell.execCommand(cmd, "-R"); if (LOG.isDebugEnabled()) { LOG.debug("renewed ticket"); } {code} It seems useful to log the output of kinit too.
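On the parameterized-logging request: slf4j's LOG.debug("{}", output) defers message construction until debug is actually enabled, so no work is done on the common non-debug path. The same idea can be illustrated with the JDK's java.util.logging Supplier overload, used here only because it is self-contained; Hadoop itself logs via slf4j/commons-logging:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Why parameterized/lazy logging matters: the log message is only built
// when the level is enabled. java.util.logging's Supplier overload stands
// in for slf4j's LOG.debug("{}", output) for illustration.
public class LazyLogSketch {
    static int buildCount = 0;

    static String expensiveKinitOutput() {
        buildCount++; // track how often the message actually gets built
        return "hypothetical kinit -R output";
    }

    /** Logs once at FINE (debug) and reports how many times the message was built. */
    public static int logOnce(boolean debugEnabled) {
        buildCount = 0;
        Logger log = Logger.getLogger("credential-renewal-sketch");
        log.setLevel(debugEnabled ? Level.FINE : Level.INFO);
        log.fine(LazyLogSketch::expensiveKinitOutput); // supplier: invoked lazily
        return buildCount;
    }

    public static void main(String[] args) {
        System.out.println("built with debug off: " + logOnce(false)); // 0
        System.out.println("built with debug on:  " + logOnce(true));  // 1
    }
}
```

With eager string concatenation (log.fine("out: " + expensiveKinitOutput())) the message would be built even when debug is off, which is exactly what the review comment asks the patch to avoid.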
[jira] [Commented] (HADOOP-14756) S3Guard: expose capability query in MetadataStore and add tests of authoritative mode
[ https://issues.apache.org/jira/browse/HADOOP-14756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16444042#comment-16444042 ] genericqa commented on HADOOP-14756: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 23s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 59s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 36s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 14s{color} | {color:orange} hadoop-tools/hadoop-aws: The patch generated 1 new + 3 unchanged - 0 fixed = 4 total (was 3) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 25s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 41s{color} | {color:green} hadoop-aws in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 57m 53s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b | | JIRA Issue | HADOOP-14756 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12919799/HADOOP-14756.003.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 741c086130fe 3.13.0-137-generic #186-Ubuntu SMP Mon Dec 4 19:09:19 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 351e509 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_162 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/14504/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/14504/testReport/ | | Max. process+thread count | 289 (vs. ulimit of 1) | | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/14504/console | | Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org |
[jira] [Comment Edited] (HADOOP-14756) S3Guard: expose capability query in MetadataStore and add tests of authoritative mode
[ https://issues.apache.org/jira/browse/HADOOP-14756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16443981#comment-16443981 ] Gabor Bota edited comment on HADOOP-14756 at 4/19/18 12:27 PM: --- Thanks [~fabbri]! The things I modified in .003: * added "for debugging and testing only" to MetadataStore#getDiagnostics javadoc * moved the assertTrue(dirMeta.isAuthoritative()); to the end of the test - where it should be to test the listing of children elements. This, of course, broke the test for DynamoDBMetadataStore, so I've * changed the PERSISTS_AUTHORITATIVE_BIT to false in DynamoDBMetadataStore. * removed the assert for allowMissing() from the beginning of the test. I think this is a kind of check which could be easily misunderstood. The reason that I wanted to include this is that the javadoc for it is slightly misleading: "Tests assume that implementations will return recently set results", and I need recently set test result for my tests obviously - so I wanted to check that. (Test & verify ran on us-west-2 successfully for the patch.) was (Author: gabor.bota): Thanks [~fabbri]! The things I modified in .003: * added "for debugging and testing only" to MetadataStore#getDiagnostics javadoc * moved the assertTrue(dirMeta.isAuthoritative()); to the end of the test - where it should be to test the listing of children elements. This, of course, broke the test for DynamoDBMetadataStore, so I've * changed the PERSISTS_AUTHORITATIVE_BIT to false in DynamoDBMetadataStore. * removed the assert for allowMissing() from the beginning of the test. I think this is check which could be easily misunderstood. The reason that I wanted to include this is that the javadoc for it is slightly misleading: "Tests assume that implementations will return recently set results", and I need recently set test result for my tests obviously - so I wanted to check that. (Test & verify ran on us-west-2 successfully for the patch.) 
> S3Guard: expose capability query in MetadataStore and add tests of > authoritative mode > - > > Key: HADOOP-14756 > URL: https://issues.apache.org/jira/browse/HADOOP-14756 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 > Affects Versions: 3.0.0-beta1 > Reporter: Steve Loughran > Assignee: Gabor Bota > Priority: Major > Attachments: HADOOP-14756.001.patch, HADOOP-14756.002.patch, HADOOP-14756.003.patch > > > {{MetadataStoreTestBase.testListChildren}} would be improved by the ability > to query the features offered by the store, and the outcome of {{put()}}, to > probe the correctness of the authoritative mode: > # Add a predicate to the MetadataStore interface, > {{supportsAuthoritativeDirectories()}} or similar > # If #1 is true, assert that the directory is fully cached after changes > # Add an "isNew" flag to MetadataStore.put(DirListingMetadata); use it to verify > when changes are made
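The steps in the description can be sketched as a capability probe that the shared test base consults before asserting authoritative listings. Interface and method names below are illustrative, mirroring the JIRA's supportsAuthoritativeDirectories() suggestion rather than the committed API:

```java
// Sketch of the capability query: a test asks the store whether it persists
// the authoritative bit and only asserts fully-cached directory listings
// when the capability is present. Hypothetical interface, not Hadoop's.
public class CapabilityProbeSketch {
    public interface MetadataStore {
        boolean supportsAuthoritativeDirectories();
        boolean listingIsAuthoritative(String path);
    }

    /** Helper to build a fake store with fixed capabilities. */
    public static MetadataStore store(boolean supports, boolean authoritative) {
        return new MetadataStore() {
            public boolean supportsAuthoritativeDirectories() { return supports; }
            public boolean listingIsAuthoritative(String path) { return authoritative; }
        };
    }

    /** What a shared test should require of a store for a fully cached dir. */
    public static boolean listingMeetsContract(MetadataStore store, String path) {
        if (!store.supportsAuthoritativeDirectories()) {
            return true; // capability absent: nothing to assert for this store
        }
        return store.listingIsAuthoritative(path);
    }
}
```

This is why flipping PERSISTS_AUTHORITATIVE_BIT to false for DynamoDBMetadataStore (as the comments describe) makes the shared test pass again: the assertion is gated on the declared capability instead of being unconditional.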
[jira] [Updated] (HADOOP-14756) S3Guard: expose capability query in MetadataStore and add tests of authoritative mode
[ https://issues.apache.org/jira/browse/HADOOP-14756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gabor Bota updated HADOOP-14756: Status: Open (was: Patch Available)
[jira] [Commented] (HADOOP-14756) S3Guard: expose capability query in MetadataStore and add tests of authoritative mode
[ https://issues.apache.org/jira/browse/HADOOP-14756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16443981#comment-16443981 ] Gabor Bota commented on HADOOP-14756:
-
Thanks [~fabbri]! The things I modified in .003:
* added "for debugging and testing only" to the MetadataStore#getDiagnostics javadoc
* moved the assertTrue(dirMeta.isAuthoritative()); to the end of the test, where it should be, to test the listing of the children elements. This, of course, broke the test for DynamoDBMetadataStore, so I've
* changed the PERSISTS_AUTHORITATIVE_BIT to false in DynamoDBMetadataStore.
* removed the assert for allowMissing() from the beginning of the test. I think this is a check which could easily be misunderstood. The reason I wanted to include it is that its javadoc is slightly misleading: "Tests assume that implementations will return recently set results", and I obviously need recently set results for my tests, so I wanted to check that.

(Test & verify ran successfully on us-west-2 for the patch.)
[jira] [Updated] (HADOOP-14756) S3Guard: expose capability query in MetadataStore and add tests of authoritative mode
[ https://issues.apache.org/jira/browse/HADOOP-14756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gabor Bota updated HADOOP-14756: Status: Patch Available (was: Open)
[jira] [Updated] (HADOOP-14756) S3Guard: expose capability query in MetadataStore and add tests of authoritative mode
[ https://issues.apache.org/jira/browse/HADOOP-14756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gabor Bota updated HADOOP-14756: Attachment: HADOOP-14756.003.patch
[jira] [Commented] (HADOOP-15397) Failed to start the estimator of Resource Estimator Service
[ https://issues.apache.org/jira/browse/HADOOP-15397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16443807#comment-16443807 ] zhangbutao commented on HADOOP-15397:
-
[~motus] Can you review the small patch?
> Failed to start the estimator of Resource Estimator Service
> -
>
> Key: HADOOP-15397
> URL: https://issues.apache.org/jira/browse/HADOOP-15397
> Project: Hadoop Common
> Issue Type: Bug
> Components: tools
> Affects Versions: 2.9.0
> Reporter: zhangbutao
> Priority: Major
> Fix For: 2.9.0
> Attachments: HADOOP-15397-001.path, HADOOP-15397-branch-2.9.0.003.patch, HADOOP-15397.002.patch
>
> You would get the following log if you start the estimator using the script start-estimator.sh, and the estimator is not started:
> {code:java}
> starting resource estimator service
> starting estimator, logging to
> /hadoop/share/hadoop/tools/resourceestimator/bin/../../../../../logs/hadoop-resourceestimator.out
> /hadoop/share/hadoop/tools/resourceestimator/bin/estimator-daemon.sh: line 47: bin/estimator.sh: No such file or directory{code}
> Fix the bug in the script estimator-daemon.sh.
[jira] [Issue Comment Deleted] (HADOOP-15397) Failed to start the estimator of Resource Estimator Service
[ https://issues.apache.org/jira/browse/HADOOP-15397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zhangbutao updated HADOOP-15397: Comment: was deleted (was: @ Rui Li Can you review this samll patch ?)
[jira] [Commented] (HADOOP-15397) Failed to start the estimator of Resource Estimator Service
[ https://issues.apache.org/jira/browse/HADOOP-15397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16443803#comment-16443803 ] zhangbutao commented on HADOOP-15397:
-
@ Rui Li Can you review this small patch?
[jira] [Commented] (HADOOP-15205) maven release: missing source attachments for hadoop-mapreduce-client-core
[ https://issues.apache.org/jira/browse/HADOOP-15205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16443763#comment-16443763 ] SammiChen commented on HADOOP-15205:
-
I tried "mvn deploy -Psign -DskipTests -Dgpg.executable=gpg2 -Pdist,src,yarn-ui -Dtar" when uploading 2.9.1 RC0. It works. Thanks [~eddyxu] for providing the solution.
> maven release: missing source attachments for hadoop-mapreduce-client-core
> -
>
> Key: HADOOP-15205
> URL: https://issues.apache.org/jira/browse/HADOOP-15205
> Project: Hadoop Common
> Issue Type: Bug
> Affects Versions: 2.7.5, 3.0.0
> Reporter: Zoltan Haindrich
> Priority: Major
>
> I wanted to use the source attachment; however, that artifact has not been present on Maven Central since 2.7.5. It looks like the last release which had source attachments / javadocs was 2.7.4:
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-mapreduce-client-core/2.7.4/
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-mapreduce-client-core/2.7.5/
> This seems not to be limited to mapreduce, as the same change is present for yarn-common as well:
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-yarn-common/2.7.4/
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-yarn-common/2.7.5/
> and also hadoop-common:
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-common/2.7.4/
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-common/2.7.5/
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-common/3.0.0/