[jira] [Commented] (HADOOP-15836) Review of AccessControlList
[ https://issues.apache.org/jira/browse/HADOOP-15836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17045175#comment-17045175 ] Hadoop QA commented on HADOOP-15836: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 8s{color} | {color:red} HADOOP-15836 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HADOOP-15836 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12950230/HADOOP-15836.2.patch | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/16778/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. 
> Review of AccessControlList
> ---
>
> Key: HADOOP-15836
> URL: https://issues.apache.org/jira/browse/HADOOP-15836
> Project: Hadoop Common
> Issue Type: Improvement
> Components: common, security
> Affects Versions: 3.2.0
> Reporter: David Mollitor
> Assignee: David Mollitor
> Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HADOOP-15836.1.patch, HADOOP-15836.2.patch,
> assertEqualACLStrings.patch
>
> * Improve unit tests (expected / actual were backwards)
> * Unit test expected elements to be in order but the class's return
> Collections were unordered
> * Formatting cleanup
> * Removed superfluous white space
> * Remove use of LinkedList
> * Removed superfluous code
> * Use {{unmodifiable}} Collections where JavaDoc states that caller must not
> manipulate the data structure

--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
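The last bullet above (returning {{unmodifiable}} collections) can be illustrated with a minimal sketch. This is plain Python rather than the Hadoop Java class, and the `AccessList`/`get_users` names are invented for the example:

```python
# Sketch of the "unmodifiable collections" idea from the review notes:
# hand callers a read-only snapshot instead of the internal mutable set,
# analogous to Collections.unmodifiableSet(...) in Java.
class AccessList:
    def __init__(self, users):
        self._users = set(users)   # internal, mutable state

    def get_users(self):
        # frozenset cannot be mutated by the caller
        return frozenset(self._users)
```

A caller that tries `get_users().add(...)` gets an error instead of silently corrupting the object's internal state.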
[jira] [Commented] (HADOOP-12640) Code Review AccessControlList
[ https://issues.apache.org/jira/browse/HADOOP-12640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17045170#comment-17045170 ] Hadoop QA commented on HADOOP-12640: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 7s{color} | {color:red} HADOOP-12640 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HADOOP-12640 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12945046/HADOOP-12640.1.patch | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/16776/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > Code Review AccessControlList > - > > Key: HADOOP-12640 > URL: https://issues.apache.org/jira/browse/HADOOP-12640 > Project: Hadoop Common > Issue Type: Improvement > Components: security >Affects Versions: 3.2.0 >Reporter: David Mollitor >Assignee: David Mollitor >Priority: Minor > Attachments: AccessControlList.patch, AccessControlList.patch, > HADOOP-12640.1.patch > > > After some confusion of my own, in particular with > "mapreduce.job.acl-view-job," I have looked over the AccessControlList > implementation and cleaned it up and clarified a few points. > 1) I added tests to demonstrate the existing behavior of including an > asterisk in either the username or the group field, it overrides everything > and allows all access. 
> "user1,user2,user3 *" = all access
> "* group1,group2" = all access
> "* *" = all access
> "* " = all access
> " *" = all access
>
> 2) General clean-up and simplification
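The wildcard cases listed above can be sketched as follows. This is plain Python rather than the Java AccessControlList; the function name and parsing details are assumptions for illustration, showing only the documented rule that a "*" in either the user or the group field grants everyone access:

```python
# Hedged sketch of the wildcard rule: an ACL string is "<users> <groups>",
# each field comma-separated; a "*" entry in either field means "all access".
def allows_all(acl_string: str) -> bool:
    users_part, _, groups_part = acl_string.partition(" ")
    users = {u.strip() for u in users_part.split(",")}
    groups = {g.strip() for g in groups_part.split(",")}
    return "*" in users or "*" in groups
```

Note that this matches all five quoted cases, including "* " and " *", where one field is empty.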
[jira] [Commented] (HADOOP-16886) Add hadoop.http.idle_timeout.ms to core-default.xml
[ https://issues.apache.org/jira/browse/HADOOP-16886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17045140#comment-17045140 ] Hadoop QA commented on HADOOP-16886: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 42s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 35s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 51s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 59m 38s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 10s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 19m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 16m 44s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 10m 16s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 1m 0s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}114m 29s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.security.TestFixKerberosTicketOrder | | | hadoop.conf.TestCommonConfigurationFields | | | hadoop.security.TestRaceWhenRelogin | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.6 Server=19.03.6 Image:yetus/hadoop:c44943d1fc3 | | JIRA Issue | HADOOP-16886 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12994597/HADOOP-16886-001.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml | | uname | Linux 70752122f1fc 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 900430b | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_242 | | unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/16775/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/16775/testReport/ | | Max. process+thread count | 3256 (vs. ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/16775/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > Add hadoop.http.idle_timeout.ms to core-default.xml > --- > >
[jira] [Commented] (HADOOP-16882) Update jackson-databind to 2.10.2 in branch-3.1, branch-2.10
[ https://issues.apache.org/jira/browse/HADOOP-16882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17045124#comment-17045124 ] Hadoop QA commented on HADOOP-16882: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 20m 24s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} branch-3.1 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 44s{color} | {color:green} branch-3.1 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 14s{color} | {color:green} branch-3.1 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 21s{color} | {color:green} branch-3.1 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 40m 52s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 31s{color} | {color:green} branch-3.1 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 0m 35s{color} | {color:red} patch has errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 36s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 18s{color} | {color:green} hadoop-project in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 66m 19s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.6 Server=19.03.6 Image:yetus/hadoop:70a0ef5d4a6 | | JIRA Issue | HADOOP-16882 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12994596/HADOOP-16882.branch-3.1.v1.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml | | uname | Linux 839108b2dd72 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | branch-3.1 / 8aaa8d1 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_242 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/16773/testReport/ | | Max. process+thread count | 305 (vs. ulimit of 5500) | | modules | C: hadoop-project U: hadoop-project | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/16773/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > Update jackson-databind to 2.10.2 in branch-3.1, branch-2.10 > > > Key: HADOOP-16882 > URL: https://issues.apache.org/jira/browse/HADOOP-16882 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Wei-Chiu Chuang >Priority: Blocker > Labels: release-blocker > Attachments: HADOOP-1688
[jira] [Commented] (HADOOP-16882) Update jackson-databind to 2.10.2 in branch-3.1, branch-2.10
[ https://issues.apache.org/jira/browse/HADOOP-16882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17045116#comment-17045116 ] Hadoop QA commented on HADOOP-16882: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 28m 25s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} branch-3.1 Compile Tests {color} || | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 22s{color} | {color:red} root in branch-3.1 failed. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 19s{color} | {color:red} hadoop-project in branch-3.1 failed. {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 26s{color} | {color:red} hadoop-project in branch-3.1 failed. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 21m 3s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s{color} | {color:green} branch-3.1 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 11s{color} | {color:red} hadoop-project in the patch failed. 
{color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 11s{color} | {color:red} hadoop-project in the patch failed. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 11s{color} | {color:red} hadoop-project in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 13s{color} | {color:red} hadoop-project in the patch failed. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 0m 37s{color} | {color:red} patch has errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 11s{color} | {color:red} hadoop-project in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 52m 38s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.6 Server=19.03.6 Image:yetus/hadoop:70a0ef5d4a6 | | JIRA Issue | HADOOP-16882 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12994596/HADOOP-16882.branch-3.1.v1.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml | | uname | Linux b3e3f7f247c7 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | branch-3.1 / 8aaa8d1 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_242 | | mvninstall | https://builds.apache.org/job/PreCommit-HADOOP-Build/16774/artifact/out/branch-mvninstall-root.txt | | compile | https://builds.apache.org/job/PreCommit-HADOOP-Build/16774/artifact/out/branch-compile-hadoop-project.txt | | mvnsite | https://builds.apache.org/job/PreCommit-HADOOP-Build/16774/artifact/out/branch-mvnsite-hadoop-project.txt | | mvninstall | https://builds.apache.org/job/PreCommit-HADOOP-Build/16774/artifact/out/patch-mvninstall-hadoop-project.txt | | compile | https://builds.apache.org/job/PreCommit-HADOOP-Build/16774/artifact/out/patch-compile-hadoop-project.txt | | javac | https://builds.apache.org/job/PreCommit-HADOOP-Build/16774/artifact/out/patch-compile-hadoop-project.txt | | mvnsite | https://builds.apache.org/job/PreCommit-HADOOP-Bu
[jira] [Updated] (HADOOP-16886) Add hadoop.http.idle_timeout.ms to core-default.xml
[ https://issues.apache.org/jira/browse/HADOOP-16886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Lisheng Sun updated HADOOP-16886:
-
    Attachment: HADOOP-16886-001.patch
        Status: Patch Available  (was: Open)

> Add hadoop.http.idle_timeout.ms to core-default.xml
> ---
>
> Key: HADOOP-16886
> URL: https://issues.apache.org/jira/browse/HADOOP-16886
> Project: Hadoop Common
> Issue Type: Improvement
> Affects Versions: 3.1.2, 3.2.0, 3.0.4
> Reporter: Wei-Chiu Chuang
> Priority: Major
> Attachments: HADOOP-16886-001.patch
>
> HADOOP-15696 made the http server connection idle time configurable
> (hadoop.http.idle_timeout.ms).
> This configuration key is added to kms-default.xml and httpfs-default.xml, but
> we missed it in core-default.xml. We should add it there because NNs/JNs/DNs
> use it too.
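For context, a core-default.xml entry could look like the sketch below. The property name comes from the issue itself; the value and description text are assumptions for illustration, not taken from the attached patch:

```xml
<!-- Sketch of a core-default.xml entry; value and wording are assumed. -->
<property>
  <name>hadoop.http.idle_timeout.ms</name>
  <value>60000</value>
  <description>
    Idle timeout in milliseconds for connections to the Hadoop HTTP server,
    which NameNodes, JournalNodes, and DataNodes also use.
  </description>
</property>
```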
[jira] [Updated] (HADOOP-16882) Update jackson-databind to 2.10.2 in branch-3.1, branch-2.10
[ https://issues.apache.org/jira/browse/HADOOP-16882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Lisheng Sun updated HADOOP-16882:
-
    Attachment: HADOOP-16882.branch-3.1.v1.patch

> Update jackson-databind to 2.10.2 in branch-3.1, branch-2.10
>
> Key: HADOOP-16882
> URL: https://issues.apache.org/jira/browse/HADOOP-16882
> Project: Hadoop Common
> Issue Type: Improvement
> Reporter: Wei-Chiu Chuang
> Priority: Blocker
> Labels: release-blocker
> Attachments: HADOOP-16882.branch-2.9.v1.patch,
> HADOOP-16882.branch-3.1.v1.patch
>
> We updated jackson-databind multiple times, but those changes only made it into
> trunk and branch-3.2.
> Unless the dependency update is backward incompatible (which is not the case
> here), we should update it in all active branches.
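The backport itself amounts to bumping the managed jackson-databind version in the hadoop-project POM. A sketch follows; the `jackson2.version` property name is an assumption about how the branch manages this dependency, so check the actual branch POM before relying on it:

```xml
<!-- Sketch of the version bump in hadoop-project/pom.xml (property name assumed). -->
<properties>
  <jackson2.version>2.10.2</jackson2.version>
</properties>
```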
[jira] [Updated] (HADOOP-16882) Update jackson-databind to 2.10.2 in branch-3.1, branch-2.10
[ https://issues.apache.org/jira/browse/HADOOP-16882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Lisheng Sun updated HADOOP-16882:
-
    Attachment: HADOOP-16882.branch-2.9.v1.patch

> Update jackson-databind to 2.10.2 in branch-3.1, branch-2.10
>
> Key: HADOOP-16882
> URL: https://issues.apache.org/jira/browse/HADOOP-16882
> Project: Hadoop Common
> Issue Type: Improvement
> Reporter: Wei-Chiu Chuang
> Priority: Blocker
> Labels: release-blocker
> Attachments: HADOOP-16882.branch-2.9.v1.patch,
> HADOOP-16882.branch-3.1.v1.patch
>
> We updated jackson-databind multiple times, but those changes only made it into
> trunk and branch-3.2.
> Unless the dependency update is backward incompatible (which is not the case
> here), we should update it in all active branches.
[jira] [Updated] (HADOOP-16882) Update jackson-databind to 2.10.2 in branch-3.1, branch-2.10
[ https://issues.apache.org/jira/browse/HADOOP-16882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Lisheng Sun updated HADOOP-16882:
-
    Attachment: (was: HADOOP-1688.branch-3.1.v2.patch)

> Update jackson-databind to 2.10.2 in branch-3.1, branch-2.10
>
> Key: HADOOP-16882
> URL: https://issues.apache.org/jira/browse/HADOOP-16882
> Project: Hadoop Common
> Issue Type: Improvement
> Reporter: Wei-Chiu Chuang
> Priority: Blocker
> Labels: release-blocker
>
> We updated jackson-databind multiple times, but those changes only made it into
> trunk and branch-3.2.
> Unless the dependency update is backward incompatible (which is not the case
> here), we should update it in all active branches.
[jira] [Updated] (HADOOP-16882) Update jackson-databind to 2.10.2 in branch-3.1, branch-2.10
[ https://issues.apache.org/jira/browse/HADOOP-16882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Lisheng Sun updated HADOOP-16882:
-
    Attachment: HADOOP-1688.branch-3.1.v2.patch
        Status: Patch Available  (was: Open)

> Update jackson-databind to 2.10.2 in branch-3.1, branch-2.10
>
> Key: HADOOP-16882
> URL: https://issues.apache.org/jira/browse/HADOOP-16882
> Project: Hadoop Common
> Issue Type: Improvement
> Reporter: Wei-Chiu Chuang
> Priority: Blocker
> Labels: release-blocker
> Attachments: HADOOP-1688.branch-3.1.v2.patch
>
> We updated jackson-databind multiple times, but those changes only made it into
> trunk and branch-3.2.
> Unless the dependency update is backward incompatible (which is not the case
> here), we should update it in all active branches.
[GitHub] [hadoop] aajisaka commented on issue #1852: YARN-10152. Fix findbugs warnings in hadoop-yarn-applications-mawo-core module
aajisaka commented on issue #1852: YARN-10152. Fix findbugs warnings in hadoop-yarn-applications-mawo-core module URL: https://github.com/apache/hadoop/pull/1852#issuecomment-591197007 Hi @adamantal , would you review this? Thanks. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #1859: HADOOP-16885. Encryption zone file copy failure leaks temp file ._COP…
xiaoyuyao commented on a change in pull request #1859: HADOOP-16885. Encryption zone file copy failure leaks temp file ._COP…
URL: https://github.com/apache/hadoop/pull/1859#discussion_r384176957

## File path: hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
##
@@ -329,7 +327,12 @@ public FSDataInputStream open(Path f, final int bufferSize)
       public FSDataInputStream doCall(final Path p) throws IOException {
         final DFSInputStream dfsis = dfs.open(getPathName(p), bufferSize, verifyChecksum);
-        return dfs.createWrappedInputStream(dfsis);
+        try {
+          return dfs.createWrappedInputStream(dfsis);
+        } catch (IOException ex){

Review comment: Agree. The encrypted-file case here is that dfs.open succeeds and returns a valid DFSIS, but createWrappedInputStream throws when the user does not have permission to decrypt the EDEK. The fix ensures the file gets closed properly in this case.
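The shape of the fix discussed in this comment can be sketched in plain Python (names invented; this is not the DistributedFileSystem code): if the raw open succeeds but wrapping throws, close the already-opened underlying stream before re-raising, so the handle does not leak.

```python
# Hedged sketch of the close-on-wrap-failure pattern: open_raw() may
# succeed even when wrap() will fail (e.g. the caller cannot decrypt the
# EDEK), so the raw stream must be closed before the error propagates.
import io

def open_wrapped(open_raw, wrap):
    raw = open_raw()
    try:
        return wrap(raw)
    except Exception:
        raw.close()  # release the underlying stream on wrap failure
        raise
```

The same pattern applies wherever a resource is acquired and then handed to a wrapper that can throw.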
[GitHub] [hadoop] hadoop-yetus commented on issue #1859: HADOOP-16885. Encryption zone file copy failure leaks temp file ._COP…
hadoop-yetus commented on issue #1859: HADOOP-16885. Encryption zone file copy failure leaks temp file ._COP… URL: https://github.com/apache/hadoop/pull/1859#issuecomment-591100710 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 23m 53s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | ||| _ trunk Compile Tests _ | | +0 :ok: | mvndep | 1m 10s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 21m 25s | trunk passed | | +1 :green_heart: | compile | 18m 18s | trunk passed | | +1 :green_heart: | checkstyle | 2m 48s | trunk passed | | +1 :green_heart: | mvnsite | 2m 20s | trunk passed | | +1 :green_heart: | shadedclient | 21m 31s | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 1m 30s | trunk passed | | +0 :ok: | spotbugs | 2m 28s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 4m 33s | trunk passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 21s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 1m 36s | the patch passed | | +1 :green_heart: | compile | 18m 29s | the patch passed | | +1 :green_heart: | javac | 18m 29s | the patch passed | | +1 :green_heart: | checkstyle | 3m 1s | root: The patch generated 0 new + 37 unchanged - 1 fixed = 37 total (was 38) | | +1 :green_heart: | mvnsite | 2m 31s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. 
| | +1 :green_heart: | shadedclient | 15m 34s | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 1m 34s | the patch passed | | +1 :green_heart: | findbugs | 5m 7s | the patch passed | ||| _ Other Tests _ | | -1 :x: | unit | 9m 37s | hadoop-common in the patch failed. | | +1 :green_heart: | unit | 2m 12s | hadoop-hdfs-client in the patch passed. | | +1 :green_heart: | asflicense | 0m 46s | The patch does not generate ASF License warnings. | | | | 157m 48s | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.fs.shell.TestCopy | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.6 Server=19.03.6 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1859/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1859 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux cb0769523e41 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / dda00d3 | | Default Java | 1.8.0_242 | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1859/1/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1859/1/testReport/ | | Max. process+thread count | 3233 (vs. ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs-client U: . | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1859/1/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus commented on issue #1860: HDFS-15192. Leaking stream when access encrypted files hit exception …
hadoop-yetus commented on issue #1860: HDFS-15192. Leaking stream when access encrypted files hit exception … URL: https://github.com/apache/hadoop/pull/1860#issuecomment-591097886 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 35s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | ||| _ trunk Compile Tests _ | | +0 :ok: | mvndep | 1m 11s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 20m 17s | trunk passed | | +1 :green_heart: | compile | 3m 46s | trunk passed | | +1 :green_heart: | checkstyle | 0m 55s | trunk passed | | +1 :green_heart: | mvnsite | 1m 42s | trunk passed | | +1 :green_heart: | shadedclient | 17m 1s | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 1m 2s | trunk passed | | +0 :ok: | spotbugs | 0m 39s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 3m 39s | trunk passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 25s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 1m 30s | the patch passed | | +1 :green_heart: | compile | 3m 42s | the patch passed | | +1 :green_heart: | javac | 3m 42s | the patch passed | | -0 :warning: | checkstyle | 0m 54s | hadoop-hdfs-project: The patch generated 1 new + 35 unchanged - 0 fixed = 36 total (was 35) | | +1 :green_heart: | mvnsite | 1m 28s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. 
| | +1 :green_heart: | shadedclient | 13m 50s | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 1m 0s | the patch passed | | +1 :green_heart: | findbugs | 3m 47s | the patch passed | ||| _ Other Tests _ | | -1 :x: | unit | 40m 46s | hadoop-hdfs in the patch passed. | | -1 :x: | unit | 0m 47s | hadoop-hdfs-nfs in the patch failed. | | +1 :green_heart: | asflicense | 0m 41s | The patch does not generate ASF License warnings. | | | | 119m 52s | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy | | | hadoop.hdfs.server.namenode.TestParallelImageWrite | | | hadoop.hdfs.TestFileChecksumCompositeCrc | | | hadoop.hdfs.TestMaintenanceState | | | hadoop.hdfs.TestSmallBlock | | | hadoop.hdfs.server.namenode.TestCheckPointForSecurityTokens | | | hadoop.hdfs.TestDFSInputStreamBlockLocations | | | hadoop.hdfs.TestErasureCodingExerciseAPIs | | | hadoop.hdfs.TestDecommission | | | hadoop.hdfs.TestFileCreation | | | hadoop.hdfs.TestBatchedListDirectories | | | hadoop.hdfs.TestDFSStripedOutputStream | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure | | | hadoop.hdfs.TestLeaseRecovery | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.6 Server=19.03.6 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1860/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1860 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux c31a2e639384 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / dda00d3 | | Default Java | 1.8.0_242 | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1860/1/artifact/out/diff-checkstyle-hadoop-hdfs-project.txt | | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1860/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1860/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-nfs.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1860/1/testReport/ | | Max. process+thread count | 3954 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-nfs U: hadoop-hdfs-project | | Console output | https://builds.apache.org/job/hadoop-multibranch/j
[jira] [Commented] (HADOOP-14918) Remove the Local Dynamo DB test option
[ https://issues.apache.org/jira/browse/HADOOP-14918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17044933#comment-17044933 ] Hadoop QA commented on HADOOP-14918: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 25m 20s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 5 new or modified test files. {color} | || || || || {color:brown} branch-2.10 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 2m 8s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 53s{color} | {color:green} branch-2.10 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 11s{color} | {color:green} branch-2.10 passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 20s{color} | {color:green} branch-2.10 passed with JDK v1.8.0_242 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 56s{color} | {color:green} branch-2.10 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 8s{color} | {color:green} branch-2.10 passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-project {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 43s{color} | {color:green} 
branch-2.10 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s{color} | {color:green} branch-2.10 passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 7s{color} | {color:green} branch-2.10 passed with JDK v1.8.0_242 {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 18s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 59s{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 59s{color} | {color:green} root-jdk1.7.0_95 with JDK v1.7.0_95 generated 0 new + 1430 unchanged - 14 fixed = 1430 total (was 1444) {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 54s{color} | {color:green} the patch passed with JDK v1.8.0_242 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 54s{color} | {color:green} root-jdk1.8.0_242 with JDK v1.8.0_242 generated 0 new + 1334 unchanged - 12 fixed = 1334 total (was 1346) {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 1m 56s{color} | {color:orange} root: The patch generated 1 new + 6 unchanged - 0 fixed = 7 total (was 6) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-project {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 44s{color} | {color:green} the patch passed with JDK v1.8.0_242 {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 21s{color} | {color:green} hadoop-project in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 42s{colo
[jira] [Commented] (HADOOP-14776) clean up ITestS3AFileSystemContract
[ https://issues.apache.org/jira/browse/HADOOP-14776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17044926#comment-17044926 ] Hadoop QA commented on HADOOP-14776: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 29m 45s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 49s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 22s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 37s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 5s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 17m 5s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 19s{color} | {color:green} hadoop-aws in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 98m 37s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.6 Server=19.03.6 Image:yetus/hadoop:c44943d1fc3 | | JIRA Issue | HADOOP-14776 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12882007/HADOOP-14776.01.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux db8b4a881099 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / d68616b | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_242 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/16772/testReport/ | | Max. process+thread count | 424 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/16772/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > clean up ITestS3AFileSystemContract > --- > > Key: HADOOP-14776 > URL: https://issues.apache.org/jira/brows
[jira] [Commented] (HADOOP-16885) Encryption zone file copy failure leaks temp file ._COPYING_ and wrapped stream
[ https://issues.apache.org/jira/browse/HADOOP-16885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17044853#comment-17044853 ] Wei-Chiu Chuang commented on HADOOP-16885: -- I suspect HBASE-16062 is related. IIRC HBase spent a lot of effort dealing with exceptions when opening files. > Encryption zone file copy failure leaks temp file ._COPYING_ and wrapped > stream > --- > > Key: HADOOP-16885 > URL: https://issues.apache.org/jira/browse/HADOOP-16885 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.3.0 >Reporter: Xiaoyu Yao >Assignee: Xiaoyu Yao >Priority: Major > > Copying a file into an encryption zone on trunk with HADOOP-16490 left a leaking temp > file ._COPYING_ and a potentially unclosed wrapped stream. This ticket is > opened to track the fix for it.
[GitHub] [hadoop] jojochuang commented on a change in pull request #1859: HADOOP-16885. Encryption zone file copy failure leaks temp file ._COP…
jojochuang commented on a change in pull request #1859: HADOOP-16885. Encryption zone file copy failure leaks temp file ._COP… URL: https://github.com/apache/hadoop/pull/1859#discussion_r384118837 ## File path: hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java ## @@ -329,7 +327,12 @@ public FSDataInputStream open(Path f, final int bufferSize) public FSDataInputStream doCall(final Path p) throws IOException { final DFSInputStream dfsis = dfs.open(getPathName(p), bufferSize, verifyChecksum); -return dfs.createWrappedInputStream(dfsis); +try { + return dfs.createWrappedInputStream(dfsis); +} catch (IOException ex){ Review comment: we would get an IOException if the dfs client is closed or if it is unable to reach the NameNode for FsServerDefaults. I think it makes sense for an HDFS client to assume the file is closed if the open doesn't complete successfully. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
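The close-on-wrap-failure pattern under review can be sketched outside HDFS with plain java.io streams. Everything below is illustrative: `wrap` is a hypothetical stand-in for `createWrappedInputStream`, and the failure is simulated, since the real call needs a live DFS client.

```java
import java.io.BufferedInputStream;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.concurrent.atomic.AtomicBoolean;

public class WrapSafely {

    // Hypothetical stand-in for createWrappedInputStream: if building the
    // decorating stream fails, close the caller-owned inner stream before
    // rethrowing, so a failed open() does not leak a stream.
    static InputStream wrap(InputStream inner, boolean simulateFailure)
            throws IOException {
        try {
            if (simulateFailure) {
                throw new IOException("simulated wrap failure");
            }
            return new BufferedInputStream(inner);
        } catch (IOException ex) {
            inner.close(); // release the inner stream before propagating
            throw ex;
        }
    }

    // Input stream that records whether close() was called.
    static class TrackingInputStream extends ByteArrayInputStream {
        final AtomicBoolean closed = new AtomicBoolean(false);
        TrackingInputStream(byte[] buf) { super(buf); }
        @Override public void close() throws IOException {
            closed.set(true);
            super.close();
        }
    }

    public static void main(String[] args) {
        TrackingInputStream inner = new TrackingInputStream(new byte[]{42});
        try {
            wrap(inner, true);
        } catch (IOException expected) {
            // the wrap failed, but the inner stream was still released
        }
        System.out.println("inner closed: " + inner.closed.get());
    }
}
```

Without the catch block, the caller has no reference to the inner stream once `wrap` throws, so nothing could ever close it — exactly the leak the patch addresses.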
[GitHub] [hadoop] jojochuang commented on a change in pull request #1859: HADOOP-16885. Encryption zone file copy failure leaks temp file ._COP…
jojochuang commented on a change in pull request #1859: HADOOP-16885. Encryption zone file copy failure leaks temp file ._COP… URL: https://github.com/apache/hadoop/pull/1859#discussion_r384123762 ## File path: hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java ## @@ -697,6 +700,20 @@ public FSDataOutputStream next(final FileSystem fs, final Path p) }.resolve(this, absF); } + // Private helper to ensure the wrapped inner stream is closed safely + // upon IOException thrown during wrap. + // Assuming the caller owns the inner stream which needs to be closed upon + // wrap failure. + private HdfsDataOutputStream safelyCreateWrappedOutputStream( + DFSOutputStream dfsos) throws IOException { +try { + return dfs.createWrappedOutputStream(dfsos, statistics); Review comment: In fact, it looks like HBASE-16062 is related.
[GitHub] [hadoop] hadoop-yetus commented on issue #1820: HADOOP-16830. Add public IOStatistics API + S3A implementation
hadoop-yetus commented on issue #1820: HADOOP-16830. Add public IOStatistics API + S3A implementation URL: https://github.com/apache/hadoop/pull/1820#issuecomment-591062291 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 32s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 2s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 12 new or modified test files. | ||| _ trunk Compile Tests _ | | +0 :ok: | mvndep | 1m 8s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 21m 21s | trunk passed | | +1 :green_heart: | compile | 17m 58s | trunk passed | | +1 :green_heart: | checkstyle | 2m 40s | trunk passed | | +1 :green_heart: | mvnsite | 2m 15s | trunk passed | | +1 :green_heart: | shadedclient | 20m 11s | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 1m 43s | trunk passed | | +0 :ok: | spotbugs | 1m 11s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 3m 14s | trunk passed | | -0 :warning: | patch | 1m 36s | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 24s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 1m 24s | the patch passed | | +1 :green_heart: | compile | 16m 10s | the patch passed | | +1 :green_heart: | javac | 16m 10s | the patch passed | | -0 :warning: | checkstyle | 2m 43s | root: The patch generated 56 new + 96 unchanged - 19 fixed = 152 total (was 115) | | +1 :green_heart: | mvnsite | 2m 19s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. 
| | +1 :green_heart: | shadedclient | 16m 13s | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 1m 33s | the patch passed | | -1 :x: | findbugs | 1m 20s | hadoop-tools/hadoop-aws generated 13 new + 0 unchanged - 0 fixed = 13 total (was 0) | ||| _ Other Tests _ | | +1 :green_heart: | unit | 9m 31s | hadoop-common in the patch passed. | | -1 :x: | unit | 1m 35s | hadoop-aws in the patch passed. | | +1 :green_heart: | asflicense | 0m 54s | The patch does not generate ASF License warnings. | | | | 127m 51s | | | Reason | Tests | |---:|:--| | FindBugs | module:hadoop-tools/hadoop-aws | | | Increment of volatile field org.apache.hadoop.fs.s3a.S3AInstrumentation$InputStreamStatisticsImpl.policySetCount in org.apache.hadoop.fs.s3a.S3AInstrumentation$InputStreamStatisticsImpl.inputPolicySet(int) At S3AInstrumentation.java:in org.apache.hadoop.fs.s3a.S3AInstrumentation$InputStreamStatisticsImpl.inputPolicySet(int) At S3AInstrumentation.java:[line 818] | | | Increment of volatile field org.apache.hadoop.fs.s3a.S3AInstrumentation$InputStreamStatisticsImpl.readExceptions in org.apache.hadoop.fs.s3a.S3AInstrumentation$InputStreamStatisticsImpl.readException() At S3AInstrumentation.java:in org.apache.hadoop.fs.s3a.S3AInstrumentation$InputStreamStatisticsImpl.readException() At S3AInstrumentation.java:[line 755] | | | Increment of volatile field org.apache.hadoop.fs.s3a.S3AInstrumentation$InputStreamStatisticsImpl.readFullyOperations in org.apache.hadoop.fs.s3a.S3AInstrumentation$InputStreamStatisticsImpl.readFullyOperationStarted(long, long) At S3AInstrumentation.java:in org.apache.hadoop.fs.s3a.S3AInstrumentation$InputStreamStatisticsImpl.readFullyOperationStarted(long, long) At S3AInstrumentation.java:[line 788] | | | Increment of volatile field org.apache.hadoop.fs.s3a.S3AInstrumentation$InputStreamStatisticsImpl.readsIncomplete in 
org.apache.hadoop.fs.s3a.S3AInstrumentation$InputStreamStatisticsImpl.readOperationCompleted(int, int) At S3AInstrumentation.java:in org.apache.hadoop.fs.s3a.S3AInstrumentation$InputStreamStatisticsImpl.readOperationCompleted(int, int) At S3AInstrumentation.java:[line 799] | | | Increment of volatile field org.apache.hadoop.fs.s3a.S3AInstrumentation$InputStreamStatisticsImpl.readOperations in org.apache.hadoop.fs.s3a.S3AInstrumentation$InputStreamStatisticsImpl.readOperationStarted(long, long) At S3AInstrumentation.java:in org.apache.hadoop.fs.s3a.S3AInstrumentation$InputStreamStatisticsImpl.readOperationStarted(long, long) At S3AInstrumentation.java:[line 777] | | | Increment of volatile
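The FindBugs findings above (the VO_VOLATILE_INCREMENT pattern) flag a real hazard: `++` on a volatile field is a non-atomic read-modify-write, so concurrent increments can be lost. A minimal illustration of the usual fix, swapping the field for an `AtomicLong` — the names here are illustrative, not taken from S3AInstrumentation:

```java
import java.util.concurrent.atomic.AtomicLong;

public class VolatileIncrementDemo {

    // What FindBugs flags: ++ on a volatile field is read, add, write -
    // three steps that can interleave across threads and lose updates.
    static volatile long unsafeCount;

    // The usual fix: AtomicLong.incrementAndGet() is a single atomic step.
    static final AtomicLong safeCount = new AtomicLong();

    // Run `threads` workers, each incrementing both counters `perThread`
    // times; returns the atomic total, which is always exact.
    static long runTrial(int threads, int perThread) throws InterruptedException {
        unsafeCount = 0;
        safeCount.set(0);
        Thread[] workers = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            workers[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) {
                    unsafeCount++;               // may lose updates
                    safeCount.incrementAndGet(); // never does
                }
            });
            workers[i].start();
        }
        for (Thread t : workers) {
            t.join();
        }
        return safeCount.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("atomic count = " + runTrial(4, 100_000));
        // unsafeCount typically ends up below threads * perThread
        // under contention, while the AtomicLong total is always exact.
    }
}
```

Silencing the warning by keeping the volatile field would leave the statistics subtly wrong under concurrent reads, which is presumably why FindBugs treats it as a correctness bug rather than style.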
[jira] [Commented] (HADOOP-16885) Encryption zone file copy failure leaks temp file ._COPYING_ and wrapped stream
[ https://issues.apache.org/jira/browse/HADOOP-16885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17044805#comment-17044805 ] Xiaoyu Yao commented on HADOOP-16885: - cc: [~ste...@apache.org] and [~weichiu]
[jira] [Commented] (HADOOP-14661) S3A to support Requester Pays Buckets
[ https://issues.apache.org/jira/browse/HADOOP-14661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17044799#comment-17044799 ] Hadoop QA commented on HADOOP-14661: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 8s{color} | {color:red} HADOOP-14661 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HADOOP-14661 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12877218/HADOOP-14661.patch | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/16770/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > S3A to support Requester Pays Buckets > - > > Key: HADOOP-14661 > URL: https://issues.apache.org/jira/browse/HADOOP-14661 > Project: Hadoop Common > Issue Type: Sub-task > Components: common, util >Affects Versions: 3.0.0-alpha3 >Reporter: Mandus Momberg >Assignee: Mandus Momberg >Priority: Minor > Attachments: HADOOP-14661.patch > > Original Estimate: 2h > Remaining Estimate: 2h > > Amazon S3 has the ability to charge the requester for the cost of accessing > S3. This is called Requester Pays Buckets. > In order to access these buckets, each request needs to be signed with a > specific header. > http://docs.aws.amazon.com/AmazonS3/latest/dev/RequesterPaysBuckets.html
[GitHub] [hadoop] xiaoyuyao opened a new pull request #1860: HDFS-15192. Leaking stream when access encrypted files hit exception …
xiaoyuyao opened a new pull request #1860: HDFS-15192. Leaking stream when access encrypted files hit exception … URL: https://github.com/apache/hadoop/pull/1860 https://issues.apache.org/jira/browse/HDFS-15192 Fix the leaking stream by ensuring the inner stream is closed if the wrapping process hits an exception.
[GitHub] [hadoop] xiaoyuyao opened a new pull request #1859: HADOOP-16885. Encryption zone file copy failure leaks temp file ._COP…
xiaoyuyao opened a new pull request #1859: HADOOP-16885. Encryption zone file copy failure leaks temp file ._COP… URL: https://github.com/apache/hadoop/pull/1859 https://issues.apache.org/jira/browse/HADOOP-16885 Fix the leaking stream issue when accessing encrypted files hits an exception during create. Move the deleteOnExit call to ensure the file gets deleted cleanly.
[jira] [Commented] (HADOOP-14918) Remove the Local Dynamo DB test option
[ https://issues.apache.org/jira/browse/HADOOP-14918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17044773#comment-17044773 ] Jonathan Hung commented on HADOOP-14918: [~mackrorysd] [~gabor.bota] [~ste...@apache.org] can we pull this to branch-3.1? It applies cleanly. Also I attached [^HADOOP-14918-branch-2.10.001.patch] for branch-2. Could I get a review? Conflicts: * Add HADOOP_TMP_DIR to org.apache.hadoop.fs.s3a.Constants (from HADOOP-13786) * Remove MAGIC_COMMITTER_ENABLED functionality (from HADOOP-13786) * Remove changes from MetadataStoreTestBase (from HADOOP-9330) > Remove the Local Dynamo DB test option > -- > > Key: HADOOP-14918 > URL: https://issues.apache.org/jira/browse/HADOOP-14918 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.9.0, 3.0.0 >Reporter: Steve Loughran >Assignee: Gabor Bota >Priority: Major > Fix For: 3.2.0 > > Attachments: HADOOP-14918-001.patch, HADOOP-14918-002.patch, > HADOOP-14918-003.patch, HADOOP-14918-004.patch, > HADOOP-14918-branch-2.10.001.patch, HADOOP-14918.005.patch, > HADOOP-14918.006.patch > > > I'm going to propose cutting out the localdynamo test option for s3guard > * the local DDB JAR is unmaintained/lags the SDK We work with...eventually > there'll be differences in API. > * as the local dynamo DB is unshaded. it complicates classpath setup for the > build. Remove it and there's no need to worry about versions of anything > other than the shaded AWS > * it complicates test runs. Now we need to test for both localdynamo *and* > real dynamo > * but we can't ignore real dynamo, because that's the one which matters > While the local option promises to reduce test costs, really, it's just > adding complexity. If you are testing with s3guard, you need to have a real > table to test against., And with the exception of those people testing s3a > against non-AWS, consistent endpoints, everyone should be testing with > S3Guard. 
> -Straightforward to remove.-
[jira] [Updated] (HADOOP-14918) Remove the Local Dynamo DB test option
[ https://issues.apache.org/jira/browse/HADOOP-14918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Hung updated HADOOP-14918: --- Attachment: HADOOP-14918-branch-2.10.001.patch
[jira] [Updated] (HADOOP-14918) Remove the Local Dynamo DB test option
[ https://issues.apache.org/jira/browse/HADOOP-14918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Hung updated HADOOP-14918: --- Status: Patch Available (was: Reopened)
[jira] [Reopened] (HADOOP-14918) Remove the Local Dynamo DB test option
[ https://issues.apache.org/jira/browse/HADOOP-14918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Hung reopened HADOOP-14918:
[jira] [Updated] (HADOOP-16877) S3A FS deleteOnExit to skip the exists check
[ https://issues.apache.org/jira/browse/HADOOP-16877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-16877: Parent Issue: HADOOP-16829 (was: HADOOP-15620) > S3A FS deleteOnExit to skip the exists check > > > Key: HADOOP-16877 > URL: https://issues.apache.org/jira/browse/HADOOP-16877 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.2.1 >Reporter: Steve Loughran >Priority: Major > > S3A FS deleteOnExit is getting that 404 in because it looks for > file.exists() before adding; it should just queue for a delete. > Proposal: also have processDeleteOnExit() skip those checks; just call > delete().
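The change being proposed above can be sketched as follows. This is a toy stand-in, not the real S3AFileSystem: the method names deleteOnExit/processDeleteOnExit echo Hadoop's FileSystem API, but the in-memory store and everything else here are illustrative assumptions.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Sketch: deleteOnExit() just queues the path -- no exists() probe, so no
// HEAD request and no surprise 404 -- and processDeleteOnExit() calls the
// delete directly, treating "already gone" as success.
public class DeleteOnExitSketch {
  private final Set<String> deleteOnExitPaths = new HashSet<>();
  private final Set<String> store = new HashSet<>(); // stand-in for the object store

  public void create(String path) { store.add(path); }

  // Queue unconditionally instead of checking exists() first.
  public void deleteOnExit(String path) { deleteOnExitPaths.add(path); }

  // On close: just issue the delete; a missing path is not an error.
  public List<String> processDeleteOnExit() {
    List<String> deleted = new ArrayList<>();
    for (String p : deleteOnExitPaths) {
      if (store.remove(p)) { // real code: delete(path, true), ignoring FileNotFoundException
        deleted.add(p);
      }
    }
    deleteOnExitPaths.clear();
    return deleted;
  }
}
```

The point of the sketch is that queueing a nonexistent path is harmless, so the up-front existence probe buys nothing.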
[GitHub] [hadoop] steveloughran commented on issue #1823: HADOOP-16794 S3 Encryption keys not propagating correctly during copy operation
steveloughran commented on issue #1823: HADOOP-16794 S3 Encryption keys not propagating correctly during copy operation URL: https://github.com/apache/hadoop/pull/1823#issuecomment-591023245 Don't worry about the test failures, we have a patch for that which I should see about merging in https://issues.apache.org/jira/browse/HADOOP-16319 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-16203) ITestS3AContractGetFileStatusV1List may have consistency issues
[ https://issues.apache.org/jira/browse/HADOOP-16203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-16203: Parent Issue: HADOOP-16829 (was: HADOOP-15620) > ITestS3AContractGetFileStatusV1List may have consistency issues > --- > > Key: HADOOP-16203 > URL: https://issues.apache.org/jira/browse/HADOOP-16203 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3, test >Affects Versions: 3.3.0 >Reporter: Steve Loughran >Priority: Minor > > Seeing a failure in the listing tests which looks like it could suffer > from some consistency/concurrency issues: the path used is chosen from the > method name, but with two subclasses of the > {{AbstractContractGetFileStatusTest}} suite, the S3A tests could be > interfering.
[GitHub] [hadoop] ThomasMarquardt commented on a change in pull request #1842: HADOOP-16730 : ABFS: Add Authorizer Interface
ThomasMarquardt commented on a change in pull request #1842: HADOOP-16730 : ABFS: Add Authorizer Interface URL: https://github.com/apache/hadoop/pull/1842#discussion_r384072576 ## File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java ## @@ -550,6 +595,30 @@ public AbfsRestOperation checkAccess(String path, String rwx) return op; } + /** + * If configured for SAS AuthType, appends SAS token to queryBuilder + * @param path + * @param operation + * @param queryBuilder + * @throws SASTokenProviderException + */ + private void appendSASTokenToQuery(String path, String operation, AbfsUriQueryBuilder queryBuilder) throws SASTokenProviderException { +if (this.authType == AuthType.SAS) { + try { +LOG.trace("Fetch SAS token for {} on {}", operation, path); Review comment: Good to know, then we can resolve this comment. (I don't see an option to resolve comments.)
[jira] [Updated] (HADOOP-15347) S3ARetryPolicy to handle AWS 500 responses/error code TooBusyException with the throttle backoff policy
[ https://issues.apache.org/jira/browse/HADOOP-15347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-15347: Parent Issue: HADOOP-16829 (was: HADOOP-15620) > S3ARetryPolicy to handle AWS 500 responses/error code TooBusyException with > the throttle backoff policy > --- > > Key: HADOOP-15347 > URL: https://issues.apache.org/jira/browse/HADOOP-15347 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.1.0 >Reporter: Steve Loughran >Priority: Minor > > FLINK-9061 implies that some 500 responses are caused by server-side overload > of some form. > That means they should really have the throttle retry policy applied, not the > connectivity one
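The distinction the issue draws can be sketched as a classification step. The class and method names below are illustrative only, not the real S3ARetryPolicy API; the idea is just that overload signals (HTTP 500, the TooBusyException error code) map to the throttle/backoff policy rather than the connectivity one.

```java
// Sketch: treat server-side overload as throttling (exponential backoff),
// not as a connectivity failure. Everything here is a stand-in.
public class RetryPolicySketch {
  public enum Policy { THROTTLE_BACKOFF, CONNECTIVITY, FAIL }

  public static Policy classify(int statusCode, String errorCode) {
    if (statusCode == 500 || statusCode == 503
        || "TooBusyException".equals(errorCode)) {
      return Policy.THROTTLE_BACKOFF; // server overloaded: back off and retry
    }
    if (statusCode == 0) {
      return Policy.CONNECTIVITY;     // no HTTP response at all: network retry
    }
    return Policy.FAIL;               // everything else: don't retry blindly
  }
}
```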
[jira] [Updated] (HADOOP-14776) clean up ITestS3AFileSystemContract
[ https://issues.apache.org/jira/browse/HADOOP-14776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-14776: Parent Issue: HADOOP-16829 (was: HADOOP-15620) > clean up ITestS3AFileSystemContract > --- > > Key: HADOOP-14776 > URL: https://issues.apache.org/jira/browse/HADOOP-14776 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3, test >Affects Versions: 2.9.0 >Reporter: Steve Loughran >Priority: Minor > Attachments: HADOOP-14776.01.patch > > > With the move of {{FileSystemContractTest}} test to JUnit4, the bits of > {{ITestS3AFileSystemContract}} which override existing methods just to skip > them can be cleaned up: The subclasses could throw assume() so their skippage > gets noted.
[jira] [Updated] (HADOOP-14661) S3A to support Requester Pays Buckets
[ https://issues.apache.org/jira/browse/HADOOP-14661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-14661: Parent Issue: HADOOP-16829 (was: HADOOP-15620) > S3A to support Requester Pays Buckets > - > > Key: HADOOP-14661 > URL: https://issues.apache.org/jira/browse/HADOOP-14661 > Project: Hadoop Common > Issue Type: Sub-task > Components: common, util >Affects Versions: 3.0.0-alpha3 >Reporter: Mandus Momberg >Assignee: Mandus Momberg >Priority: Minor > Attachments: HADOOP-14661.patch > > Original Estimate: 2h > Remaining Estimate: 2h > > Amazon S3 has the ability to charge the requester for the cost of accessing > S3. This is called Requester Pays Buckets. > In order to access these buckets, each request needs to be signed with a > specific header. > http://docs.aws.amazon.com/AmazonS3/latest/dev/RequesterPaysBuckets.html
[jira] [Commented] (HADOOP-16806) AWS AssumedRoleCredentialProvider needs ExternalId add
[ https://issues.apache.org/jira/browse/HADOOP-16806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17044761#comment-17044761 ] Steve Loughran commented on HADOOP-16806: - Jon - cutoff for Hadoop 3.3 is the end of the week...have you got a patch we can look at? > AWS AssumedRoleCredentialProvider needs ExternalId add > -- > > Key: HADOOP-16806 > URL: https://issues.apache.org/jira/browse/HADOOP-16806 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.2.1 >Reporter: Jon Hartlaub >Priority: Minor > > AWS has added a security feature to the assume-role function in the form of > the "ExternalId" key in the AWS Java SDK > {{STSAssumeRoleSessionCredentialsProvider.Builder}} class. To support this > security feature, the hadoop aws {{AssumedRoleCredentialProvider}} needs a > patch to include this value from the configuration as well as an added > Constant to the {{org.apache.hadoop.fs.s3a.Constants}} file. > The ExternalId is not a required security feature, it is an augmentation of > the current assume role configuration. > Proposed: > * Get the assume-role ExternalId token from the configuration for the > configuration key {{fs.s3a.assumed.role.externalid}} > * Use the configured ExternalId value in the > {{STSAssumeRoleSessionCredentialsProvider.Builder}} > e.g. > {{if (StringUtils.isNotEmpty(externalId)) {}} > {{ builder.withExternalId(externalId); // include the token for > cross-account assume role}} > {{}}} > Tests: > * +Unit test+ which verifies the ExternalId state value of the > {{AssumedRoleCredentialProvider}} is consistent with the configured value - > either empty or populated > * Question: not sure about how to write the +integration test+ for this > feature. We have an account configured for this use-case that verifies this > feature but I don't have much context on the Hadoop project AWS S3 > integration tests, perhaps a pointer could help.
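The proposal above can be sketched end to end. The Builder class here is a minimal stand-in for the AWS SDK's STSAssumeRoleSessionCredentialsProvider.Builder (only the withExternalId pattern is mimicked), and the config key name is the one proposed in the ticket; nothing else is from a real API.

```java
import java.util.Map;

// Sketch: only pass ExternalId through to the builder when it is configured,
// since it is an optional augmentation of the assume-role setup.
public class ExternalIdSketch {
  // Minimal stand-in builder that just records whether an external id was set.
  public static class Builder {
    public String externalId;
    public Builder withExternalId(String id) { this.externalId = id; return this; }
  }

  public static Builder configure(Map<String, String> conf) {
    Builder builder = new Builder();
    // Key name taken from the proposal: fs.s3a.assumed.role.externalid
    String externalId = conf.getOrDefault("fs.s3a.assumed.role.externalid", "");
    if (!externalId.isEmpty()) {
      builder.withExternalId(externalId); // include the token for cross-account assume role
    }
    return builder;
  }
}
```

The unit test suggested in the ticket would assert exactly this: the provider's ExternalId state matches the configured value, whether empty or populated.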
[GitHub] [hadoop] steveloughran commented on a change in pull request #1842: HADOOP-16730 : ABFS: Add Authorizer Interface
steveloughran commented on a change in pull request #1842: HADOOP-16730 : ABFS: Add Authorizer Interface URL: https://github.com/apache/hadoop/pull/1842#discussion_r384066843 ## File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java ## @@ -550,6 +595,30 @@ public AbfsRestOperation checkAccess(String path, String rwx) return op; } + /** + * If configured for SAS AuthType, appends SAS token to queryBuilder + * @param path + * @param operation + * @param queryBuilder + * @throws SASTokenProviderException + */ + private void appendSASTokenToQuery(String path, String operation, AbfsUriQueryBuilder queryBuilder) throws SASTokenProviderException { +if (this.authType == AuthType.SAS) { + try { +LOG.trace("Fetch SAS token for {} on {}", operation, path); Review comment: log @ debug/trace is low cost as long as you aren't actually concatenating strings; the sole cost is to instantiate the "Fetch SAS token.." string to pass down a reference.
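The point about parameterized logging being cheap can be made observable. MiniLogger below is a toy stand-in for an SLF4J-style logger (not the real API): the `{}` format string and argument references are passed down, but the message is only built after the enabled check, so a disabled trace call costs one boolean test.

```java
// Sketch: formatting is deferred until after the level check, so
// LOG.trace("... {} ...", a, b) with tracing off never builds the message.
public class LogCostSketch {
  public static class MiniLogger {
    public boolean traceEnabled;
    public int formatCount; // how many times a message was actually built
    public MiniLogger(boolean traceEnabled) { this.traceEnabled = traceEnabled; }

    public void trace(String fmt, Object... args) {
      if (!traceEnabled) {
        return; // cost so far: one boolean check plus passing references
      }
      formatCount++;
      String msg = fmt;
      for (Object a : args) { // substitute {} placeholders one by one
        msg = msg.replaceFirst("\\{\\}", String.valueOf(a));
      }
      System.out.println(msg);
    }
  }
}
```

The expensive alternative would be `LOG.trace("Fetch SAS token for " + operation + " on " + path)`, which concatenates the string before the level check ever runs.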
[jira] [Commented] (HADOOP-16885) Encryption zone file copy failure leaks temp file ._COPYING_ and wrapped stream
[ https://issues.apache.org/jira/browse/HADOOP-16885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17044736#comment-17044736 ] Xiaoyu Yao commented on HADOOP-16885: - A similar issue exists with WebHdfsHandler#onCreate and RpcProgramNfs3#create; will open separate HDFS JIRAs for the fix. > Encryption zone file copy failure leaks temp file ._COPYING_ and wrapped > stream > --- > > Key: HADOOP-16885 > URL: https://issues.apache.org/jira/browse/HADOOP-16885 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.3.0 >Reporter: Xiaoyu Yao >Assignee: Xiaoyu Yao >Priority: Major > > Copying a file into an encryption zone on trunk with HADOOP-16490 leaves a > leaked temp file ._COPYING_ and potentially an unclosed wrapped stream. This > ticket is opened to track the fix.
[GitHub] [hadoop] ThomasMarquardt commented on a change in pull request #1842: HADOOP-16730 : ABFS: Add Authorizer Interface
ThomasMarquardt commented on a change in pull request #1842: HADOOP-16730 : ABFS: Add Authorizer Interface URL: https://github.com/apache/hadoop/pull/1842#discussion_r384055634 ## File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java ## @@ -578,35 +584,37 @@ public AccessTokenProvider getTokenProvider() throws TokenAccessProviderExceptio } } - public String getAbfsExternalAuthorizationClass() { -return this.abfsExternalAuthorizationClass; - } - - public AbfsAuthorizer getAbfsAuthorizer() throws IOException { -String authClassName = getAbfsExternalAuthorizationClass(); -AbfsAuthorizer authorizer = null; + public SASTokenProvider getSASTokenProvider() throws AzureBlobFileSystemException { +AuthType authType = getEnum(FS_AZURE_ACCOUNT_AUTH_TYPE_PROPERTY_NAME, AuthType.SharedKey); +if (authType != AuthType.SAS) { + throw new SASTokenProviderException(String.format( Review comment: Within reason, we should add a test case for the error handling when SASTokenProvider fails to load or fails to initialize. Not sure how difficult it is to test, so we could do this manually if needed.
[GitHub] [hadoop] ThomasMarquardt commented on a change in pull request #1842: HADOOP-16730 : ABFS: Add Authorizer Interface
ThomasMarquardt commented on a change in pull request #1842: HADOOP-16730 : ABFS: Add Authorizer Interface URL: https://github.com/apache/hadoop/pull/1842#discussion_r384049325 ## File path: hadoop-tools/hadoop-azure/src/site/markdown/abfs.md ## @@ -626,7 +626,7 @@ points for third-parties to integrate their authentication and authorization services into the ABFS client. * `CustomDelegationTokenManager` : adds ability to issue Hadoop Delegation Tokens. -* `AbfsAuthorizer` permits client-side authorization of file operations. +* `SASTokenProvider` permits client-side authorization of file operations. Review comment: Client-side makes it sound like the client authorizes itself, which is not the case. I suggest " `SASTokenProvider` allows for custom provision of Azure Storage Shared Access Signature (SAS) tokens." to keep with the pattern on line 630.
[GitHub] [hadoop] ThomasMarquardt commented on a change in pull request #1842: HADOOP-16730 : ABFS: Add Authorizer Interface
ThomasMarquardt commented on a change in pull request #1842: HADOOP-16730 : ABFS: Add Authorizer Interface URL: https://github.com/apache/hadoop/pull/1842#discussion_r384040926 ## File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsUriQueryBuilder.java ## @@ -59,6 +64,20 @@ public String toString() { throw new IllegalArgumentException("Query string param is not encode-able: " + entry.getKey() + "=" + entry.getValue()); } } +// append SAS Token +if (sasToken != null) { + sasToken = sasToken.startsWith(AbfsHttpConstants.QUESTION_MARK) + ? sasToken.substring(1) + : sasToken; Review comment: The interface should require that '?' be omitted from the SAS token. Allowing '?' means the plug-in will add it and then driver will remove it every time, which is wasteful.
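The normalization under discussion in the diff hunk above comes down to a plain string operation; the review argues the contract should forbid the leading '?' so this work never happens. A sketch (method name illustrative):

```java
// Sketch of the '?' normalization from the diff: accept a SAS token with or
// without a leading '?' and return it without one. The reviewer's point is
// that if the SASTokenProvider contract required the bare form, this
// per-request branch could be dropped entirely.
public class SasTokenSketch {
  public static String stripLeadingQuestionMark(String sasToken) {
    return sasToken.startsWith("?") ? sasToken.substring(1) : sasToken;
  }
}
```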
[GitHub] [hadoop] ThomasMarquardt commented on a change in pull request #1842: HADOOP-16730 : ABFS: Add Authorizer Interface
ThomasMarquardt commented on a change in pull request #1842: HADOOP-16730 : ABFS: Add Authorizer Interface URL: https://github.com/apache/hadoop/pull/1842#discussion_r384038589 ## File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java ## @@ -266,16 +295,23 @@ public AbfsRestOperation createPath(final String path, final boolean isFile, fin return op; } - public AbfsRestOperation renamePath(final String source, final String destination, final String continuation) + public AbfsRestOperation renamePath(String source, final String destination, final String continuation) throws AzureBlobFileSystemException { final List requestHeaders = createDefaultHeaders(); -final String encodedRenameSource = urlEncode(FORWARD_SLASH + this.getFileSystem() + source); +final AbfsUriQueryBuilder srcQueryBuilder = new AbfsUriQueryBuilder(); +appendSASTokenToQuery(source, SASTokenProvider.RENAME_SOURCE_OPERATION, srcQueryBuilder); +String sasToken = srcQueryBuilder.toString(); + +final String encodedRenameSource = +urlEncode(FORWARD_SLASH + this.getFileSystem() + source) + sasToken; +LOG.trace("Rename source queryparam added {}", encodedRenameSource); Review comment: We should only create the builder and append the sasToken when authType is AuthType.SAS. I don't think we should create throw-away objects. The driver should be lean and only allocate when necessary to reduce overall pressure on GC.
[GitHub] [hadoop] ThomasMarquardt commented on a change in pull request #1842: HADOOP-16730 : ABFS: Add Authorizer Interface
ThomasMarquardt commented on a change in pull request #1842: HADOOP-16730 : ABFS: Add Authorizer Interface URL: https://github.com/apache/hadoop/pull/1842#discussion_r384036257 ## File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java ## @@ -550,6 +595,30 @@ public AbfsRestOperation checkAccess(String path, String rwx) return op; } + /** + * If configured for SAS AuthType, appends SAS token to queryBuilder + * @param path + * @param operation + * @param queryBuilder + * @throws SASTokenProviderException + */ + private void appendSASTokenToQuery(String path, String operation, AbfsUriQueryBuilder queryBuilder) throws SASTokenProviderException { +if (this.authType == AuthType.SAS) { + try { +LOG.trace("Fetch SAS token for {} on {}", operation, path); Review comment: This is a hot path and we are calling LOG.trace twice (L608 and L612), is this expensive? Could you step thru in debugger to confirm this is a light weight Boolean check only, when the level is not TRACE?
[jira] [Created] (HADOOP-16886) Add hadoop.http.idle_timeout.ms to core-default.xml
Wei-Chiu Chuang created HADOOP-16886: Summary: Add hadoop.http.idle_timeout.ms to core-default.xml Key: HADOOP-16886 URL: https://issues.apache.org/jira/browse/HADOOP-16886 Project: Hadoop Common Issue Type: Improvement Affects Versions: 3.1.2, 3.2.0, 3.0.4 Reporter: Wei-Chiu Chuang HADOOP-15696 made the http server connection idle time configurable (hadoop.http.idle_timeout.ms). This configuration key was added to kms-default.xml and httpfs-default.xml, but we missed it in core-default.xml. We should add it there because NNs/JNs/DNs use it too.
[GitHub] [hadoop] ramtinb closed pull request #94: HDFS-10382 In WebHDFS numeric usernames do not work with DataNode
ramtinb closed pull request #94: HDFS-10382 In WebHDFS numeric usernames do not work with DataNode URL: https://github.com/apache/hadoop/pull/94
[jira] [Commented] (HADOOP-16885) Encryption zone file copy failure leaks temp file ._COPYING_ and wrapped stream
[ https://issues.apache.org/jira/browse/HADOOP-16885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17044689#comment-17044689 ] Xiaoyu Yao commented on HADOOP-16885: - Repro steps (thanks Olivér Dózsa):
1. kinit as hdfs.
2. Try to copy to an encrypted zone directory: hdfs dfs -cp /tmp/kms_text_file.txt /kms_test/encrypted_dirs/test_dir/kms_text_file.txt
3. Observe that user hdfs doesn't have permission to decrypt the EEK (as expected). On HDP 3.1.5.0-152, the following can be seen: Failed to close file: /kms_test/encrypted_dirs/test_dir/kms_text_file.txt._COPYING_ with inode: 18159 org.apache.hadoop.ipc.RemoteException(java.io.FileNotFoundException): File does not exist: /kms_test/encrypted_dirs/test_dir/kms_text_file.txt._COPYING_ (inode 18159) Holder DFSClient_NONMAPREDUCE_1857410465_1 does not have any open files. Execute hdfs dfs -ls /kms_test/encrypted_dirs/test_dir/ and observe there's *no* kms_text_file.txt._COPYING_ file present. On HDP 7.1.0.1000-7, no error message can be seen. Execute hdfs dfs -ls /kms_test/encrypted_dirs/test_dir/ and observe there's a kms_text_file.txt._COPYING_ file present.
4. kinit as user1 (kinit -k -t /home/hrt_qa/hadoopqa/keytabs/user1.headless.keytab user1).
5. Try to copy the file to the encrypted directory again: hdfs dfs -cp /tmp/kms_text_file.txt /kms_test/encrypted_dirs/test_dir/kms_text_file.txt The following happens: On HDP 3.1.5.0-152 it succeeds, no error message is shown. On HDP 7.1.0.1000-7 the operation fails with cp: Permission denied: user=user1, access=WRITE, inode="/kms_test/encrypted_dirs/test_dir/kms_text_file.txt._COPYING_":hdfs:hdfs:-rw-r--r--
Expected behavior: step 5 should succeed, and no file with the _COPYING_ suffix should be created when a user with no permission tries to copy to a restricted directory.
[jira] [Created] (HADOOP-16885) Encryption zone file copy failure leaks temp file ._COPYING_ and wrapped stream
Xiaoyu Yao created HADOOP-16885: --- Summary: Encryption zone file copy failure leaks temp file ._COPYING_ and wrapped stream Key: HADOOP-16885 URL: https://issues.apache.org/jira/browse/HADOOP-16885 Project: Hadoop Common Issue Type: Bug Reporter: Xiaoyu Yao Assignee: Xiaoyu Yao Copying a file into an encryption zone on trunk with HADOOP-16490 leaves a leaked temp file ._COPYING_ and potentially an unclosed wrapped stream. This ticket is opened to track the fix.
[jira] [Updated] (HADOOP-16885) Encryption zone file copy failure leaks temp file ._COPYING_ and wrapped stream
[ https://issues.apache.org/jira/browse/HADOOP-16885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaoyu Yao updated HADOOP-16885: Affects Version/s: 3.3.0
[jira] [Created] (HADOOP-16884) Encryption zone file copy failure leaks temp file ._COPYING_ and wrapped stream
Xiaoyu Yao created HADOOP-16884: --- Summary: Encryption zone file copy failure leaks temp file ._COPYING_ and wrapped stream Key: HADOOP-16884 URL: https://issues.apache.org/jira/browse/HADOOP-16884 Project: Hadoop Common Issue Type: Bug Reporter: Xiao Chen Assignee: Xiaoyu Yao Copying a file into an encryption zone on trunk with HADOOP-16490 leaves a leaked temp file ._COPYING_ and potentially an unclosed wrapped stream. This ticket is opened to track the fix.
[GitHub] [hadoop] hadoop-yetus commented on issue #1820: HADOOP-16830. Add public IOStatistics API + S3A implementation
hadoop-yetus commented on issue #1820: HADOOP-16830. Add public IOStatistics API + S3A implementation URL: https://github.com/apache/hadoop/pull/1820#issuecomment-590962815 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 31s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 2s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 12 new or modified test files. | ||| _ trunk Compile Tests _ | | +0 :ok: | mvndep | 1m 17s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 19m 22s | trunk passed | | +1 :green_heart: | compile | 17m 3s | trunk passed | | +1 :green_heart: | checkstyle | 2m 41s | trunk passed | | +1 :green_heart: | mvnsite | 2m 21s | trunk passed | | +1 :green_heart: | shadedclient | 20m 17s | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 1m 44s | trunk passed | | +0 :ok: | spotbugs | 1m 14s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 3m 17s | trunk passed | | -0 :warning: | patch | 1m 37s | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 25s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 1m 23s | the patch passed | | +1 :green_heart: | compile | 16m 51s | the patch passed | | +1 :green_heart: | javac | 16m 51s | the patch passed | | -0 :warning: | checkstyle | 2m 39s | root: The patch generated 45 new + 95 unchanged - 19 fixed = 140 total (was 114) | | +1 :green_heart: | mvnsite | 2m 18s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. 
| | +1 :green_heart: | shadedclient | 15m 0s | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 1m 39s | the patch passed | | -1 :x: | findbugs | 1m 19s | hadoop-tools/hadoop-aws generated 13 new + 0 unchanged - 0 fixed = 13 total (was 0) | ||| _ Other Tests _ | | +1 :green_heart: | unit | 9m 12s | hadoop-common in the patch passed. | | +1 :green_heart: | unit | 1m 38s | hadoop-aws in the patch passed. | | +1 :green_heart: | asflicense | 0m 53s | The patch does not generate ASF License warnings. | | | | 124m 22s | | | Reason | Tests | |---:|:--| | FindBugs | module:hadoop-tools/hadoop-aws | | | Increment of volatile field org.apache.hadoop.fs.s3a.S3AInstrumentation$InputStreamStatisticsImpl.policySetCount in org.apache.hadoop.fs.s3a.S3AInstrumentation$InputStreamStatisticsImpl.inputPolicySet(int) At S3AInstrumentation.java:in org.apache.hadoop.fs.s3a.S3AInstrumentation$InputStreamStatisticsImpl.inputPolicySet(int) At S3AInstrumentation.java:[line 818] | | | Increment of volatile field org.apache.hadoop.fs.s3a.S3AInstrumentation$InputStreamStatisticsImpl.readExceptions in org.apache.hadoop.fs.s3a.S3AInstrumentation$InputStreamStatisticsImpl.readException() At S3AInstrumentation.java:in org.apache.hadoop.fs.s3a.S3AInstrumentation$InputStreamStatisticsImpl.readException() At S3AInstrumentation.java:[line 755] | | | Increment of volatile field org.apache.hadoop.fs.s3a.S3AInstrumentation$InputStreamStatisticsImpl.readFullyOperations in org.apache.hadoop.fs.s3a.S3AInstrumentation$InputStreamStatisticsImpl.readFullyOperationStarted(long, long) At S3AInstrumentation.java:in org.apache.hadoop.fs.s3a.S3AInstrumentation$InputStreamStatisticsImpl.readFullyOperationStarted(long, long) At S3AInstrumentation.java:[line 788] | | | Increment of volatile field org.apache.hadoop.fs.s3a.S3AInstrumentation$InputStreamStatisticsImpl.readsIncomplete in 
org.apache.hadoop.fs.s3a.S3AInstrumentation$InputStreamStatisticsImpl.readOperationCompleted(int, int) At S3AInstrumentation.java:in org.apache.hadoop.fs.s3a.S3AInstrumentation$InputStreamStatisticsImpl.readOperationCompleted(int, int) At S3AInstrumentation.java:[line 799] | | | Increment of volatile field org.apache.hadoop.fs.s3a.S3AInstrumentation$InputStreamStatisticsImpl.readOperations in org.apache.hadoop.fs.s3a.S3AInstrumentation$InputStreamStatisticsImpl.readOperationStarted(long, long) At S3AInstrumentation.java:in org.apache.hadoop.fs.s3a.S3AInstrumentation$InputStreamStatisticsImpl.readOperationStarted(long, long) At S3AInstrumentation.java:[line 777] | | | Increment o
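The FindBugs warnings above all have the same root cause: incrementing a volatile field is a read-modify-write sequence, not an atomic operation, so two threads can read the same value and both write back value+1, losing an update. A minimal, self-contained sketch of the hazard and the usual fix (class and field names here are illustrative, not Hadoop's):

```java
import java.util.concurrent.atomic.AtomicLong;

public class VolatileIncrementDemo {
    // FindBugs flags this pattern: "unsafeCounter++" compiles to a
    // read, an add, and a write, with no mutual exclusion between them.
    static volatile long unsafeCounter = 0;

    // AtomicLong performs the increment as one atomic hardware operation.
    static final AtomicLong safeCounter = new AtomicLong();

    static void runThreads(int perThread) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < perThread; i++) {
                unsafeCounter++;               // may lose updates under contention
                safeCounter.incrementAndGet(); // never loses updates
            }
        };
        Thread t1 = new Thread(work), t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();
    }

    public static void main(String[] args) throws InterruptedException {
        runThreads(100_000);
        System.out.println("volatile counter: " + unsafeCounter + " (may be < 200000)");
        System.out.println("atomic counter:   " + safeCounter.get() + " (always 200000)");
    }
}
```

For best-effort statistics counters like the ones flagged here, the trade-off is between switching the fields to `AtomicLong` and accepting the small race (and suppressing the warning) when occasional lost increments are tolerable.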
[GitHub] [hadoop] steveloughran commented on a change in pull request #1842: HADOOP-16730 : ABFS: Add Authorizer Interface
steveloughran commented on a change in pull request #1842: HADOOP-16730 : ABFS: Add Authorizer Interface URL: https://github.com/apache/hadoop/pull/1842#discussion_r383997333 ## File path: hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/utils/SASGenerator.java ## @@ -30,11 +30,12 @@ import org.apache.hadoop.fs.azurebfs.services.AbfsUriQueryBuilder; /** - * Created by tmarq on 2/17/20. + * Test container SAS generator Review comment: nit: add a . to keep javadoc happy. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1823: HADOOP-16794 S3 Encryption keys not propagating correctly during copy operation
hadoop-yetus removed a comment on issue #1823: HADOOP-16794 S3 Encryption keys not propagating correctly during copy operation URL: https://github.com/apache/hadoop/pull/1823#issuecomment-587626465 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 37s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 5 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 19m 22s | trunk passed | | +1 :green_heart: | compile | 0m 36s | trunk passed | | +1 :green_heart: | checkstyle | 0m 27s | trunk passed | | +1 :green_heart: | mvnsite | 0m 40s | trunk passed | | +1 :green_heart: | shadedclient | 15m 9s | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 29s | trunk passed | | +0 :ok: | spotbugs | 1m 0s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 0m 56s | trunk passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 33s | the patch passed | | +1 :green_heart: | compile | 0m 28s | the patch passed | | +1 :green_heart: | javac | 0m 28s | the patch passed | | -0 :warning: | checkstyle | 0m 19s | hadoop-tools/hadoop-aws: The patch generated 15 new + 17 unchanged - 3 fixed = 32 total (was 20) | | +1 :green_heart: | mvnsite | 0m 32s | the patch passed | | -1 :x: | whitespace | 0m 0s | The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply | | +1 :green_heart: | shadedclient | 13m 39s | patch has no errors when building and testing our client artifacts. 
| | +1 :green_heart: | javadoc | 0m 26s | the patch passed | | +1 :green_heart: | findbugs | 1m 2s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 19s | hadoop-aws in the patch passed. | | +1 :green_heart: | asflicense | 0m 33s | The patch does not generate ASF License warnings. | | | | 58m 39s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.6 Server=19.03.6 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1823/6/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1823 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 5fb73514b598 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / a562942 | | Default Java | 1.8.0_242 | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1823/6/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt | | whitespace | https://builds.apache.org/job/hadoop-multibranch/job/PR-1823/6/artifact/out/whitespace-eol.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1823/6/testReport/ | | Max. process+thread count | 449 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1823/6/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1823: HADOOP-16794 S3 Encryption keys not propagating correctly during copy operation
hadoop-yetus removed a comment on issue #1823: HADOOP-16794 S3 Encryption keys not propagating correctly during copy operation URL: https://github.com/apache/hadoop/pull/1823#issuecomment-587567591 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 31s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 19m 0s | trunk passed | | +1 :green_heart: | compile | 0m 34s | trunk passed | | +1 :green_heart: | checkstyle | 0m 27s | trunk passed | | +1 :green_heart: | mvnsite | 0m 39s | trunk passed | | +1 :green_heart: | shadedclient | 15m 3s | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 28s | trunk passed | | +0 :ok: | spotbugs | 0m 59s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 0m 58s | trunk passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 33s | the patch passed | | +1 :green_heart: | compile | 0m 28s | the patch passed | | +1 :green_heart: | javac | 0m 28s | the patch passed | | -0 :warning: | checkstyle | 0m 19s | hadoop-tools/hadoop-aws: The patch generated 8 new + 10 unchanged - 3 fixed = 18 total (was 13) | | +1 :green_heart: | mvnsite | 0m 31s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 13m 57s | patch has no errors when building and testing our client artifacts. 
| | +1 :green_heart: | javadoc | 0m 25s | the patch passed | | +1 :green_heart: | findbugs | 1m 2s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 24s | hadoop-aws in the patch passed. | | +1 :green_heart: | asflicense | 0m 32s | The patch does not generate ASF License warnings. | | | | 58m 16s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.6 Server=19.03.6 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1823/5/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1823 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 077010daea5d 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / a562942 | | Default Java | 1.8.0_242 | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1823/5/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1823/5/testReport/ | | Max. process+thread count | 440 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1823/5/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] steveloughran commented on issue #1857: HADOOP-16878. Copy command in FileUtil to LOG at warn level if the source and destination is the same
steveloughran commented on issue #1857: HADOOP-16878. Copy command in FileUtil to LOG at warn level if the source and destination is the same URL: https://github.com/apache/hadoop/pull/1857#issuecomment-590909093 * findbugs is rejecting this as two different Path types are being referred to. * I'd like an exception to be raised, as this is clearly an error. Or log @ error and return -1, I guess...whichever looks best on the CLI * going to need a test, I'm afraid This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
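The pre-flight check being asked for above can be sketched in isolation. This is a hypothetical stand-in using java.nio.file.Path rather than Hadoop's org.apache.hadoop.fs.Path (mixing those two types is what findbugs appears to be objecting to); the real fix would compare fully qualified Hadoop paths and then either throw or log at error and return -1:

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class CopySanityCheck {
    // Fail fast when source and destination resolve to the same location,
    // rather than silently copying a file onto itself.
    static void checkNotSamePath(Path src, Path dst) {
        if (src.toAbsolutePath().normalize()
                .equals(dst.toAbsolutePath().normalize())) {
            throw new IllegalArgumentException(
                "Source and destination are the same: " + src);
        }
    }

    public static void main(String[] args) {
        checkNotSamePath(Paths.get("/tmp/a"), Paths.get("/tmp/b")); // ok
        try {
            // "/tmp/./a" normalizes to "/tmp/a", so this is rejected
            checkNotSamePath(Paths.get("/tmp/a"), Paths.get("/tmp/./a"));
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```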
[GitHub] [hadoop] hadoop-yetus commented on issue #1842: HADOOP-16730 : ABFS: Add Authorizer Interface
hadoop-yetus commented on issue #1842: HADOOP-16730 : ABFS: Add Authorizer Interface URL: https://github.com/apache/hadoop/pull/1842#issuecomment-590907288 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 16s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +0 :ok: | markdownlint | 0m 1s | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 6 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 21m 51s | trunk passed | | +1 :green_heart: | compile | 0m 29s | trunk passed | | +1 :green_heart: | checkstyle | 0m 20s | trunk passed | | +1 :green_heart: | mvnsite | 0m 30s | trunk passed | | +1 :green_heart: | shadedclient | 16m 19s | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 23s | trunk passed | | +0 :ok: | spotbugs | 0m 52s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 0m 49s | trunk passed | | -0 :warning: | patch | 1m 7s | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 27s | the patch passed | | +1 :green_heart: | compile | 0m 23s | the patch passed | | +1 :green_heart: | javac | 0m 23s | the patch passed | | -0 :warning: | checkstyle | 0m 15s | hadoop-tools/hadoop-azure: The patch generated 1 new + 8 unchanged - 1 fixed = 9 total (was 9) | | +1 :green_heart: | mvnsite | 0m 24s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. 
| | +1 :green_heart: | shadedclient | 15m 32s | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 20s | the patch passed | | +1 :green_heart: | findbugs | 0m 53s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 17s | hadoop-azure in the patch passed. | | +1 :green_heart: | asflicense | 0m 29s | The patch does not generate ASF License warnings. | | | | 63m 31s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.6 Server=19.03.6 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1842/12/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1842 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle markdownlint | | uname | Linux 9702b2d3081f 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / dda00d3 | | Default Java | 1.8.0_242 | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1842/12/artifact/out/diff-checkstyle-hadoop-tools_hadoop-azure.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1842/12/testReport/ | | Max. process+thread count | 308 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1842/12/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-16206) Migrate from Log4j1 to Log4j2
[ https://issues.apache.org/jira/browse/HADOOP-16206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17044518#comment-17044518 ] Steve Loughran commented on HADOOP-16206: - this should go into 3.4; it's too big a change for the 3.3 release, especially for those test suites which do get the log4j loggers and play with their log levels > Migrate from Log4j1 to Log4j2 > - > > Key: HADOOP-16206 > URL: https://issues.apache.org/jira/browse/HADOOP-16206 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.3.0 >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka >Priority: Major > Attachments: HADOOP-16206-wip.001.patch > > > This sub-task is to remove log4j1 dependency and add log4j2 dependency. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-16206) Migrate from Log4j1 to Log4j2
[ https://issues.apache.org/jira/browse/HADOOP-16206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-16206: Affects Version/s: 3.3.0 > Migrate from Log4j1 to Log4j2 > - > > Key: HADOOP-16206 > URL: https://issues.apache.org/jira/browse/HADOOP-16206 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.3.0 >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka >Priority: Major > Attachments: HADOOP-16206-wip.001.patch > > > This sub-task is to remove log4j1 dependency and add log4j2 dependency. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-16883) update jackon-databind version
[ https://issues.apache.org/jira/browse/HADOOP-16883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-16883: Component/s: build > update jackon-databind version > -- > > Key: HADOOP-16883 > URL: https://issues.apache.org/jira/browse/HADOOP-16883 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Reporter: angerszhu >Priority: Major > > according to > [CVE-2020-8840|https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8840], > maybe we should update jackson-databind to 2.9.10.3 or 2.10.x?* -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-16883) update jackon-databind version
[ https://issues.apache.org/jira/browse/HADOOP-16883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17044506#comment-17044506 ] Steve Loughran commented on HADOOP-16883: - moved to hadoop common. JAR updates need to go there so nobody is surprised by version updates. thanks > update jackon-databind version > -- > > Key: HADOOP-16883 > URL: https://issues.apache.org/jira/browse/HADOOP-16883 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Reporter: angerszhu >Priority: Major > > according to > [CVE-2020-8840|https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8840], > maybe we should update jackson-databind to 2.9.10.3 or 2.10.x?* -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Moved] (HADOOP-16883) update jackon-databind version
[ https://issues.apache.org/jira/browse/HADOOP-16883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran moved HDFS-15189 to HADOOP-16883: Key: HADOOP-16883 (was: HDFS-15189) Project: Hadoop Common (was: Hadoop HDFS) > update jackon-databind version > -- > > Key: HADOOP-16883 > URL: https://issues.apache.org/jira/browse/HADOOP-16883 > Project: Hadoop Common > Issue Type: Improvement >Reporter: angerszhu >Priority: Major > > according to > [CVE-2020-8840|https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8840], > maybe we should update jackson-databind to 2.9.10.3 or 2.10.x?* -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] steveloughran commented on issue #1840: HADOOP-16853. ITestS3GuardOutOfBandOperations failing on versioned S3 buckets
steveloughran commented on issue #1840: HADOOP-16853. ITestS3GuardOutOfBandOperations failing on versioned S3 buckets URL: https://github.com/apache/hadoop/pull/1840#issuecomment-590889046 thanks This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-16882) Update jackson-databind to 2.10.2 in branch-3.1, branch-2.10
[ https://issues.apache.org/jira/browse/HADOOP-16882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HADOOP-16882: - Target Version/s: 3.1.4, 2.10.1 > Update jackson-databind to 2.10.2 in branch-3.1, branch-2.10 > > > Key: HADOOP-16882 > URL: https://issues.apache.org/jira/browse/HADOOP-16882 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Wei-Chiu Chuang >Priority: Blocker > Labels: release-blocker > > We updated jackson-databind multiple times but those changes only made into > trunk and branch-3.2. > Unless the dependency update is backward incompatible (which is not in this > case), we should update them in all active branches -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-16882) Update jackson-databind to 2.10.2 in branch-3.1, branch-2.10
[ https://issues.apache.org/jira/browse/HADOOP-16882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17044459#comment-17044459 ] Wei-Chiu Chuang commented on HADOOP-16882: -- Label as a release blocker. We should review all dependencies that require an update. > Update jackson-databind to 2.10.2 in branch-3.1, branch-2.10 > > > Key: HADOOP-16882 > URL: https://issues.apache.org/jira/browse/HADOOP-16882 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Wei-Chiu Chuang >Priority: Blocker > Labels: release-blocker > > We updated jackson-databind multiple times but those changes only made into > trunk and branch-3.2. > Unless the dependency update is backward incompatible (which is not in this > case), we should update them in all active branches -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-16882) Update jackson-databind to 2.10.2 in branch-3.1, branch-2.10
[ https://issues.apache.org/jira/browse/HADOOP-16882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HADOOP-16882: - Labels: release-blocker (was: ) > Update jackson-databind to 2.10.2 in branch-3.1, branch-2.10 > > > Key: HADOOP-16882 > URL: https://issues.apache.org/jira/browse/HADOOP-16882 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Wei-Chiu Chuang >Priority: Blocker > Labels: release-blocker > > We updated jackson-databind multiple times but those changes only made into > trunk and branch-3.2. > Unless the dependency update is backward incompatible (which is not in this > case), we should update them in all active branches -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-16882) Update jackson-databind to 2.10.2 in branch-3.1, branch-2.10
[ https://issues.apache.org/jira/browse/HADOOP-16882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HADOOP-16882: - Priority: Blocker (was: Major) > Update jackson-databind to 2.10.2 in branch-3.1, branch-2.10 > > > Key: HADOOP-16882 > URL: https://issues.apache.org/jira/browse/HADOOP-16882 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Wei-Chiu Chuang >Priority: Blocker > > We updated jackson-databind multiple times but those changes only made into > trunk and branch-3.2. > Unless the dependency update is backward incompatible (which is not in this > case), we should update them in all active branches -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-16882) Update jackson-databind to 2.10.2 in branch-3.1, branch-2.10
Wei-Chiu Chuang created HADOOP-16882: Summary: Update jackson-databind to 2.10.2 in branch-3.1, branch-2.10 Key: HADOOP-16882 URL: https://issues.apache.org/jira/browse/HADOOP-16882 Project: Hadoop Common Issue Type: Improvement Reporter: Wei-Chiu Chuang We updated jackson-databind multiple times but those changes only made into trunk and branch-3.2. Unless the dependency update is backward incompatible (which is not in this case), we should update them in all active branches -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-16881) PseudoAuthenticator does not disconnect HttpURLConnection leading to CLOSE_WAIT cnxns
[ https://issues.apache.org/jira/browse/HADOOP-16881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17044362#comment-17044362 ] Bo soon Park commented on HADOOP-16881: --- Could you consider this one? {code:java} org.apache.hadoop.security.authentication.client.PseudoAuthenticator.authenticate() public void authenticate(URL url, AuthenticatedURL.Token token) throws IOException, AuthenticationException { conn.disconnect(); // Prevent CLOSE_WAIT }{code} > PseudoAuthenticator does not disconnect HttpURLConnection leading to > CLOSE_WAIT cnxns > - > > Key: HADOOP-16881 > URL: https://issues.apache.org/jira/browse/HADOOP-16881 > Project: Hadoop Common > Issue Type: Bug > Components: auth, security >Affects Versions: 3.3.0 >Reporter: Prabhu Joseph >Assignee: Prabhu Joseph >Priority: Major > > PseudoAuthenticator and KerberosAuthentication does not disconnect > HttpURLConnection leading to lot of CLOSE_WAIT connections. YARN-8414 issue > is observed due to this. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
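The fix suggested above — calling disconnect() on the HttpURLConnection — can be sketched in isolation. The helper below is illustrative, not the actual PseudoAuthenticator code; it shows the general pattern of draining the response and disconnecting in a finally block so the client side does not leave sockets lingering in CLOSE_WAIT. It runs against a local JDK HttpServer so it is self-contained:

```java
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;

public class DisconnectSketch {
    // Fetch a URL and always release the connection, even on failure.
    static int fetchResponseCode(URL url) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        try {
            int code = conn.getResponseCode();
            // Drain the body so the underlying socket can be closed cleanly.
            try (InputStream in = conn.getInputStream()) {
                while (in.read() != -1) { /* discard */ }
            }
            return code;
        } finally {
            conn.disconnect(); // without this, sockets can pile up in CLOSE_WAIT
        }
    }

    public static void main(String[] args) throws IOException {
        // Local server on an ephemeral port, so the demo needs no network.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/", (HttpExchange ex) -> {
            byte[] body = "ok".getBytes();
            ex.sendResponseHeaders(200, body.length);
            ex.getResponseBody().write(body);
            ex.close();
        });
        server.start();
        int port = server.getAddress().getPort();
        int code = fetchResponseCode(new URL("http://127.0.0.1:" + port + "/"));
        server.stop(0);
        System.out.println("HTTP " + code);
    }
}
```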
[GitHub] [hadoop] mukund-thakur commented on a change in pull request #1851: HADOOP-16858. S3Guard fsck: Add option to remove orphaned entries
mukund-thakur commented on a change in pull request #1851: HADOOP-16858. S3Guard fsck: Add option to remove orphaned entries URL: https://github.com/apache/hadoop/pull/1851#discussion_r383827548 ## File path: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/ITestS3GuardToolDynamoDB.java ## @@ -353,6 +353,28 @@ public void testCLIFsckCheckExclusive() throws Exception { "s3a://" + getFileSystem().getBucket())); } + @Test + public void testCLIFsckDDbFixOnlyFails() throws Exception { +describe("This test serves the purpose to run fsck with the correct " + +"parameters, so there will be no exception thrown."); +final int result = run(S3GuardTool.Fsck.NAME, +"-" + Fsck.FIX_FLAG, +"s3a://" + getFileSystem().getBucket()); +LOG.info("The return value of the run: {}", result); +assertEquals(ERROR, result); + } + + @Test + public void testCLIFsckDDbFixAndInternalSucceed() throws Exception { +describe("This test serves the purpose to run fsck with the correct " + +"parameters, so there will be no exception thrown."); +final int result = run(S3GuardTool.Fsck.NAME, +"-" + Fsck.FIX_FLAG, +"-" + Fsck.DDB_MS_CONSISTENCY_FLAG, +"s3a://" + getFileSystem().getBucket()); +LOG.info("The return value of the run: {}", result); Review comment: I am wondering why there is no assert here. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
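The missing assertion the reviewer points out would look something like the sketch below. run(), SUCCESS, and the argument strings are stand-ins for the real S3GuardTool API; the point is that a test should assert on the exit code rather than only logging it, so a regression fails the build instead of just changing a log line:

```java
public class FsckResultAssertDemo {
    static final int SUCCESS = 0;   // hypothetical exit codes mirroring the tool's
    static final int ERROR = -1;

    // Stand-in for S3GuardTool.run(...); the real call would execute fsck.
    static int run(String... args) {
        return SUCCESS;
    }

    public static void main(String[] args) {
        final int result = run("fsck", "-fix", "-ddbConsistency");
        // Log AND assert: the log line helps debugging, the assertion
        // is what actually protects against regressions.
        System.out.println("The return value of the run: " + result);
        if (result != SUCCESS) {
            throw new AssertionError("expected SUCCESS but was " + result);
        }
    }
}
```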
[GitHub] [hadoop] mukund-thakur commented on a change in pull request #1851: HADOOP-16858. S3Guard fsck: Add option to remove orphaned entries
mukund-thakur commented on a change in pull request #1851: HADOOP-16858. S3Guard fsck: Add option to remove orphaned entries URL: https://github.com/apache/hadoop/pull/1851#discussion_r383811083 ## File path: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/ITestS3GuardFsck.java ## @@ -599,6 +613,12 @@ private void checkForViolationInPairs(Path file, private void checkNoViolationInPairs(Path file2, List comparePairs, S3GuardFsck.Violation violation) { + Review comment: It would be nice if you can add some comments here. Thanks. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-16881) PseudoAuthenticator does not disconnect HttpURLConnection leading to CLOSE_WAIT cnxns
[ https://issues.apache.org/jira/browse/HADOOP-16881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prabhu Joseph updated HADOOP-16881: --- Description: PseudoAuthenticator and KerberosAuthentication does not disconnect HttpURLConnection leading to lot of CLOSE_WAIT connections. YARN-8414 issue is observed due to this. was: PseudoAuthenticator and KerberosAuthentication does not disconnect HttpURLConnection leading to lot of CLOSE_WAIT connections. > PseudoAuthenticator does not disconnect HttpURLConnection leading to > CLOSE_WAIT cnxns > - > > Key: HADOOP-16881 > URL: https://issues.apache.org/jira/browse/HADOOP-16881 > Project: Hadoop Common > Issue Type: Bug > Components: auth, security >Affects Versions: 3.3.0 >Reporter: Prabhu Joseph >Assignee: Prabhu Joseph >Priority: Major > > PseudoAuthenticator and KerberosAuthentication does not disconnect > HttpURLConnection leading to lot of CLOSE_WAIT connections. YARN-8414 issue > is observed due to this. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-16881) PseudoAuthenticator does not disconnect HttpURLConnection leading to CLOSE_WAIT cnxns
Prabhu Joseph created HADOOP-16881: -- Summary: PseudoAuthenticator does not disconnect HttpURLConnection leading to CLOSE_WAIT cnxns Key: HADOOP-16881 URL: https://issues.apache.org/jira/browse/HADOOP-16881 Project: Hadoop Common Issue Type: Bug Components: auth, security Affects Versions: 3.3.0 Reporter: Prabhu Joseph Assignee: Prabhu Joseph PseudoAuthenticator and KerberosAuthentication does not disconnect HttpURLConnection leading to lot of CLOSE_WAIT connections. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-16871) Upgrade Netty version to 4.1.45.Final to handle CVE-2019-20444,CVE-2019-16869
[ https://issues.apache.org/jira/browse/HADOOP-16871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17044290#comment-17044290 ] Aray Chenchu Sukesh commented on HADOOP-16871: -- [~weichiu], yes, I will contribute a patch soon > Upgrade Netty version to 4.1.45.Final to handle CVE-2019-20444,CVE-2019-16869 > - > > Key: HADOOP-16871 > URL: https://issues.apache.org/jira/browse/HADOOP-16871 > Project: Hadoop Common > Issue Type: Task >Affects Versions: 3.3.0 >Reporter: Aray Chenchu Sukesh >Assignee: Aray Chenchu Sukesh >Priority: Major > > [CVE-2019-20444|https://rnd-vulncenter.huawei.com/vuln/toViewOfficialDetail?cveId=CVE-2019-20444] > [CVE-2019-16869|https://rnd-vulncenter.huawei.com/vuln/toViewOfficialDetail?cveId=CVE-2019-16869] > We should upgrade the netty dependency to 4.1.45.Final version -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] mukund-thakur commented on a change in pull request #1851: HADOOP-16858. S3Guard fsck: Add option to remove orphaned entries
mukund-thakur commented on a change in pull request #1851: HADOOP-16858. S3Guard fsck: Add option to remove orphaned entries
URL: https://github.com/apache/hadoop/pull/1851#discussion_r383743260

## File path: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3GuardFsckViolationHandler.java
##

@@ -60,28 +66,51 @@ public void handle(S3GuardFsck.ComparePair comparePair) {
     sB.append(newLine)
         .append("On path: ").append(comparePair.getPath()).append(newLine);
-    handleComparePair(comparePair, sB);
+    handleComparePair(comparePair, sB, HandleMode.LOG);
     LOG.error(sB.toString());
   }

+  public void doFix(S3GuardFsck.ComparePair comparePair) throws IOException {
+    if (!comparePair.containsViolation()) {
+      LOG.debug("There is no violation in the compare pair: {}", comparePair);
+      return;
+    }
+
+    StringBuilder sB = new StringBuilder();
+    sB.append(newLine)
+        .append("On path: ").append(comparePair.getPath()).append(newLine);
+
+    handleComparePair(comparePair, sB, HandleMode.FIX);
+
+    LOG.info(sB.toString());
+  }
+
   /**
    * Create a new instance of the violation handler for all the violations
    * found in the compare pair and use it.
    *
    * @param comparePair the compare pair with violations
    * @param sB StringBuilder to append error strings from violations.
    */
-  protected static void handleComparePair(S3GuardFsck.ComparePair comparePair,
-      StringBuilder sB) {
+  protected void handleComparePair(S3GuardFsck.ComparePair comparePair,
+      StringBuilder sB, HandleMode handleMode) throws IOException {
     for (S3GuardFsck.Violation violation : comparePair.getViolations()) {
       try {
         ViolationHandler handler = violation.getHandler()
             .getDeclaredConstructor(S3GuardFsck.ComparePair.class)
             .newInstance(comparePair);
-        final String errorStr = handler.getError();
-        sB.append(errorStr);
+
+        if (handleMode == HandleMode.LOG) {

Review comment: Don't you think a switch statement would be a better option here?

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
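The reviewer's suggestion above can be sketched as a switch over the mode enum instead of an if/else chain. This is a simplified stand-in, not the code under review: the enum mirrors the `HandleMode` in the diff, but the method body and returned strings are hypothetical placeholders for the real handler logic.

```java
// Sketch of dispatching on a mode enum with a switch, as suggested in the
// review of PR #1851. The actions are placeholder strings.
public class HandleModeSketch {

    enum HandleMode { LOG, FIX }

    // Chooses the action for a violation based on the mode; the default
    // branch guards against future enum values being added unhandled.
    static String dispatch(HandleMode mode) {
        switch (mode) {
            case LOG:
                return "append error text for logging";
            case FIX:
                return "repair the metadata entry";
            default:
                throw new IllegalArgumentException("unknown mode: " + mode);
        }
    }
}
```

A switch makes the mode handling exhaustive and easier to extend than nested `if (handleMode == ...)` checks.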
[GitHub] [hadoop] amarnathkarthik commented on issue #1858: HDFS-15168: ABFS enhancement to translate AAD Object to Linux identities
amarnathkarthik commented on issue #1858: HDFS-15168: ABFS enhancement to translate AAD Object to Linux identities
URL: https://github.com/apache/hadoop/pull/1858#issuecomment-590739334

fs/azure
[GitHub] [hadoop] amarnathkarthik opened a new pull request #1858: HDFS-15168: ABFS enhancement to translate AAD Object to Linux identities
amarnathkarthik opened a new pull request #1858: HDFS-15168: ABFS enhancement to translate AAD Object to Linux identities
URL: https://github.com/apache/hadoop/pull/1858

ABFS driver enhancement - Allow customizable translation from AAD SPNs and security groups to Linux user and group.

Integration Test results - East US2: `Tests: 1255 Errors: 0 Failures: 0 Skipped: 390`
[GitHub] [hadoop] amarnathkarthik removed a comment on issue #1858: HDFS-15168: ABFS enhancement to translate AAD Object to Linux identities
amarnathkarthik removed a comment on issue #1858: HDFS-15168: ABFS enhancement to translate AAD Object to Linux identities
URL: https://github.com/apache/hadoop/pull/1858#issuecomment-590739334

fs/azure
[GitHub] [hadoop] amarnathkarthik closed pull request #1858: HDFS-15168: ABFS enhancement to translate AAD Object to Linux identities
amarnathkarthik closed pull request #1858: HDFS-15168: ABFS enhancement to translate AAD Object to Linux identities
URL: https://github.com/apache/hadoop/pull/1858
[GitHub] [hadoop] amarnathkarthik commented on issue #1858: HDFS-15168: ABFS enhancement to translate AAD Object to Linux identities
amarnathkarthik commented on issue #1858: HDFS-15168: ABFS enhancement to translate AAD Object to Linux identities
URL: https://github.com/apache/hadoop/pull/1858#issuecomment-590736506

Integration Test results (after the javadoc and style fixes) - East US2: `Tests: 1255 Errors: 0 Failures: 0 Skipped: 390`